Content from Before we Start
Overview
Questions
- How to find your way around RStudio?
- How to interact with R?
- How to manage your environment?
- How to install packages?
Objectives
- Install Linux Shell
- Install latest version of R.
- Install latest version of RStudio.
- Navigate the RStudio GUI.
- Install additional packages using the packages tab.
- Install additional packages using R code.
What is R? What is RStudio?
The term “R” is used to refer to both the programming language and the software that interprets the scripts written using it.
RStudio is currently a very popular way to not only write your R scripts but also to interact with the R software. To function correctly, RStudio needs R and therefore both need to be installed on your computer.
To make it easier to interact with R, we will use RStudio. RStudio is the most popular IDE (Integrated Development Environment) for R. An IDE is a piece of software that provides tools to make programming easier.
Why learn R?
R does not involve lots of pointing and clicking, and that’s a good thing
The learning curve might be steeper than with other software, but with R, the results of your analysis do not rely on remembering a succession of pointing and clicking, but instead on a series of written commands, and that’s a good thing! So, if you want to redo your analysis because you collected more data, you don’t have to remember which button you clicked in which order to obtain your results; you just have to run your script again.
Working with scripts makes the steps you used in your analysis clear, and the code you write can be inspected by someone else who can give you feedback and spot mistakes.
Working with scripts forces you to have a deeper understanding of what you are doing, and facilitates your learning and comprehension of the methods you use.
R code is great for reproducibility
Reproducibility is when someone else (including your future self) can obtain the same results from the same dataset when using the same analysis.
R integrates with other tools to generate manuscripts from your code. If you collect more data, or fix a mistake in your dataset, the figures and the statistical tests in your manuscript are updated automatically.
An increasing number of journals and funding agencies expect analyses to be reproducible, so knowing R will give you an edge with these requirements.
R is interdisciplinary and extensible
With 10,000+ packages that can be installed to extend its capabilities, R provides a framework that allows you to combine statistical approaches from many scientific disciplines to best suit the analytical framework you need to analyze your data. For instance, R has packages for image analysis, GIS, time series, population genetics, and a lot more.
R works on data of all shapes and sizes
The skills you learn with R scale easily with the size of your dataset. Whether your dataset has hundreds or millions of lines, it won’t make much difference to you.
R is designed for data analysis. It comes with special data structures and data types that make handling of missing data and statistical factors convenient.
R can connect to spreadsheets, databases, and many other data formats, on your computer or on the web.
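As a small illustration of those built-in data types (not part of the lesson's scripts), missing values and statistical factors work like this:
R
x <- c(1, 2, NA, 4)    # NA marks a missing observation
mean(x)                # returns NA: missing values propagate by default
mean(x, na.rm = TRUE)  # 2.333...: explicitly drop missing values
treatment <- factor(c("control", "drug", "control"))
levels(treatment)      # "control" "drug": a categorical factor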
R produces high-quality graphics
The plotting functionalities in R are endless, and allow you to adjust any aspect of your graph to convey most effectively the message from your data.
R has a large and welcoming community
Thousands of people use R daily. Many of them are willing to help you through mailing lists and websites such as Stack Overflow, or on the RStudio community. Questions which are backed up with short, reproducible code snippets are more likely to attract knowledgeable responses.
Not only is R free, but it is also open-source and cross-platform
Anyone can inspect the source code to see how R works. Because of this transparency, there is less chance for mistakes, and if you (or someone else) find some, you can report and fix bugs.
Because R is open source and is supported by a large community of developers and users, there is a very large selection of third-party add-on packages which are freely available to extend R’s native capabilities.
RStudio extends what R can do, and makes it easier to write R code and interact with R.
A tour of RStudio
Knowing your way around RStudio
Let’s start by learning about RStudio, which is an Integrated Development Environment (IDE) for working with R.
The RStudio IDE open-source product is free under the Affero General Public License (AGPL) v3. The RStudio IDE is also available with a commercial license and priority email support from RStudio, Inc.
We will use the RStudio IDE to write code, navigate the files on our computer, inspect the variables we create, and visualize the plots we generate. RStudio can also be used for other things (e.g., version control, developing packages, writing Shiny apps) that we will not cover during the workshop.
One of the advantages of using RStudio is that all the information you need to write code is available in a single window. Additionally, RStudio provides many shortcuts, autocompletion, and highlighting for the major file types you use while developing in R. RStudio makes typing easier and less error-prone.
Getting set up
It is good practice to keep a set of related data, analyses, and text self-contained in a single folder called the working directory. All of the scripts within this folder can then use relative paths to files. Relative paths indicate where inside the project a file is located (as opposed to absolute paths, which point to where a file is on a specific computer). Working this way makes it a lot easier to move your project around on your computer and share it with others without having to directly modify file paths in the individual scripts.
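For example, assuming your project contains a data/ folder with a hypothetical file survey.csv, the two styles look like this:
R
## Relative path: works for anyone who opens the project folder
surveys <- read.csv("data/survey.csv")

## Absolute path: only works on one specific computer
surveys <- read.csv("/home/alice/data-carpentry/data/survey.csv")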
RStudio provides a helpful set of tools to do this through its “Projects” interface, which not only creates a working directory for you but also remembers its location (allowing you to quickly navigate to it). The interface also (optionally) preserves custom settings and open files to make it easier to resume work after a break.
Create a new project
- Under the File menu, click on New Project, choose New Directory, then New Project.
- Enter a name for this new folder (or “directory”) and choose a convenient location for it. This will be your working directory for the rest of the day (e.g., ~/data-carpentry).
- Click on Create Project.
- Create a new file where we will type our scripts. Go to File > New File > R Script. Click the save icon on your toolbar and save your script as “script.R”.
The simplest way to open an RStudio project once it has been created is to navigate through your files to where the project was saved and double click on the .Rproj (blue cube) file. This will open RStudio and start your R session in the same directory as the .Rproj file. All your data, plots, and scripts will now be relative to the project directory. RStudio projects have the added benefit of allowing you to open multiple projects at the same time, each in its own project directory, so they do not interfere with each other.
The RStudio Interface
Let’s take a quick tour of RStudio.
RStudio is divided into four “panes”. The placement of these panes and their content can be customized (see menu, Tools -> Global Options -> Pane Layout).
The Default Layout is:
- Top Left - Source: your scripts and documents
- Bottom Left - Console: what R looks like without RStudio
- Top Right - Environment/History: look here to see what you have done
- Bottom Right - Files and more: see the contents of the project/working directory here, like your script.R file
Organizing your working directory
Using a consistent folder structure across your projects will help keep things organized and make it easy to find/file things in the future. This can be especially helpful when you have multiple projects. In general, you might create directories (folders) for scripts, data, and documents. Here are some examples of suggested directories:
- data/: Use this folder to store your raw data and intermediate datasets. For the sake of transparency and provenance, you should always keep a copy of your raw data accessible and do as much of your data cleanup and preprocessing programmatically (i.e., with scripts, rather than manually) as possible.
- data_output/: When you need to modify your raw data, it might be useful to store the modified versions of the datasets in a different folder.
- documents/: Used for outlines, drafts, and other text.
- fig_output/: This folder can store the graphics that are generated by your scripts.
- scripts/: A place to keep your R scripts for different analyses or plotting.
You may want additional directories or subdirectories depending on your project needs, but these should form the backbone of your working directory.
The working directory
The working directory is an important concept to understand. It is the place where R will look for and save files. When you write code for your project, your scripts should refer to files in relation to the root of your working directory and only to files within this structure.
Using RStudio projects makes this easy and ensures that your working directory is set up properly. If you need to check it, you can use getwd(). If for some reason your working directory is not the same as the location of your RStudio project, it is likely that you opened an R script or RMarkdown file instead of your .Rproj file. You should close out of RStudio and open the .Rproj file by double clicking on the blue cube! If you ever need to modify your working directory in a script, setwd('my/path') changes the working directory. This should be used with caution, since it makes analyses hard to share across devices and with other users.
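A short sketch of both functions (the printed path is illustrative):
R
getwd()                    # e.g., "/home/alice/data-carpentry"
setwd("~/data-carpentry")  # changes the working directory; use sparingly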
Downloading the data and getting set up
For this lesson we will use the following folders in our working directory: data/, data_output/, and fig_output/. Let’s write them all in lowercase to be consistent. We can create them using the RStudio interface by clicking on the “New Folder” button in the file pane (bottom right), or directly from R by typing at the console:
R
dir.create("data")
dir.create("data_output")
dir.create("fig_output")
Interacting with R
The basis of programming is that we write down instructions for the computer to follow, and then we tell the computer to follow those instructions. We write, or code, instructions in R because it is a common language that both the computer and we can understand. We call the instructions commands and we tell the computer to follow the instructions by executing (also called running) those commands.
There are two main ways of interacting with R: by using the console or by using script files (plain text files that contain your code). The console pane (in RStudio, the bottom left panel) is the place where commands written in the R language can be typed and executed immediately by the computer. It is also where the results will be shown for commands that have been executed. You can type commands directly into the console and press Enter to execute those commands, but they will be forgotten when you close the session.
Because we want our code and workflow to be reproducible, it is better to type the commands we want in the script editor and save the script. This way, there is a complete record of what we did, and anyone (including our future selves!) can easily replicate the results on their computer.
RStudio allows you to execute commands directly from the script editor by using the Ctrl + Enter shortcut (on Mac, Cmd + Return will work). The command on the current line in the script (indicated by the cursor) or all of the commands in selected text will be sent to the console and executed when you press Ctrl + Enter. If there is information in the console you do not need anymore, you can clear it with Ctrl + L. You can find other keyboard shortcuts in this RStudio cheatsheet about the RStudio IDE.
At some point in your analysis, you may want to check the content of a variable or the structure of an object without necessarily keeping a record of it in your script. You can type these commands and execute them directly in the console. RStudio provides the Ctrl + 1 and Ctrl + 2 shortcuts, which allow you to jump between the script and the console panes.
If R is ready to accept commands, the R console shows a > prompt. If R receives a command (by typing, copy-pasting, or sending it from the script editor using Ctrl + Enter), R will try to execute it and, when ready, will show the results and come back with a new > prompt to wait for new commands.
If R is still waiting for you to enter more text, the console will show a + prompt. It means that you haven’t finished entering a complete command. This is likely because you have not ‘closed’ a parenthesis or quotation, i.e. you don’t have the same number of left-parentheses as right-parentheses or the same number of opening and closing quotation marks. When this happens, and you thought you finished typing your command, click inside the console window and press Esc; this will cancel the incomplete command and return you to the > prompt. You can then proofread the command(s) you entered and correct the error.
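For example, an illustrative console transcript with an unclosed parenthesis:
R
> round(3.14159    # Enter pressed here: the command is incomplete
+ , digits = 2)    # R shows + until the parenthesis is closed
[1] 3.14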
Installing additional packages using the packages tab
In addition to the core R installation, there are in excess of 10,000 additional packages which can be used to extend the functionality of R. Many of these have been written by R users and made available in central repositories, like the one hosted at CRAN, for anyone to download and install into their own R environment. You should have already installed the packages ‘ggplot2’ and ‘dplyr’. If you have not, please do so now using these instructions.
You can see if you have a package installed by looking in the packages tab (on the lower-right by default). You can also type the command installed.packages() into the console and examine the output.
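For example, to check for one specific package from the console (a common idiom, not part of the lesson's scripts):
R
"ggplot2" %in% rownames(installed.packages())  # TRUE if the package is installed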
Additional packages can be installed from the ‘packages’ tab. On the packages tab, click the ‘Install’ icon and start typing the name of the package you want in the text box. As you type, packages matching your starting characters will be displayed in a drop-down list so that you can select them.
At the bottom of the Install Packages window is a check box labelled ‘Install dependencies’. This is ticked by default, which is usually what you want. Packages can (and do) make use of functionality built into other packages, so for the functionality contained in the package you are installing to work properly, there may be other packages which have to be installed with them. The ‘Install dependencies’ option makes sure that this happens.
Scroll down the packages tab to ‘tidyverse’, or type a few characters into the search box. The ‘tidyverse’ package is really a package of packages, including ‘ggplot2’ and ‘dplyr’, both of which require other packages to run correctly. All of these packages will be installed automatically. Depending on what packages have previously been installed in your R environment, the install of ‘tidyverse’ could be very quick or could take several minutes. As the install proceeds, messages relating to its progress will be written to the console. You will be able to see all of the packages which are actually being installed.
Because the install process accesses the CRAN repository, you will need an Internet connection to install packages.
It is also possible to install packages from other repositories, as well as from GitHub or the local file system, but we won’t be looking at these options in this lesson.
Installing additional packages using R code
If you were watching the console window when you started the install of ‘tidyverse’, you may have noticed that the line
R
install.packages("tidyverse")
was written to the console before the start of the installation messages.
You could also have installed the tidyverse packages by running this command directly at the R terminal.
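Note that installing and loading are separate steps; a typical pattern is:
R
install.packages("tidyverse")  # download and install: once per machine
library(tidyverse)             # load into your session: every new session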
Content from Using RMarkdown
Overview
Questions
- How do you write a lesson using R Markdown and sandpaper?
Objectives
- Explain how to use markdown with the new lesson template
- Demonstrate how to include pieces of code, figures, and nested challenge blocks
Introduction
This is a lesson created via The Carpentries Workbench. It is written in Pandoc-flavored Markdown for static files and R Markdown for dynamic files that can render code into output. Please refer to the Introduction to The Carpentries Workbench for full documentation.
What you need to know is that there are three sections required for a valid Carpentries lesson template:
- questions are displayed at the beginning of the episode to prime the learner for the content.
- objectives are the learning objectives for an episode displayed with the questions.
- keypoints are displayed at the end of the episode to reinforce the objectives.
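The output below is rendered from an R chunk that is not shown here; a paste() call such as the following would produce it (a reconstruction, not the original chunk):
R
paste("This", "new", "lesson", "looks", "good")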
OUTPUT
[1] "This new lesson looks good"
You can add a line with at least three colons and a solution tag.
Figures
You can also include figures generated from R Markdown:
R
pie(
c(Sky = 78, "Sunny side of pyramid" = 17, "Shady side of pyramid" = 5),
init.angle = 315,
col = c("deepskyblue", "yellow", "yellow3"),
border = FALSE
)
Or you can use standard markdown for static figures with the following syntax:
![optional caption that appears below the figure](figure url){alt='alt text for accessibility purposes'}
Math
One of our episodes contains \(\LaTeX\) equations when describing how to create dynamic reports with {knitr}, so we now use mathjax to describe this:
$\alpha = \dfrac{1}{(1 - \beta)^2}$
becomes: \(\alpha = \dfrac{1}{(1 - \beta)^2}\)
Cool, right?
Content from Submit a parallel job
Overview
Questions
- How do you get a high performance computing cluster to run a program?
Objectives
- Introduce a parallel R program
- Submit a parallel R program to a job scheduler on a cluster
Introduction
R
## This script describes two levels of parallelism:
## Top level: Distributed MPI runs several copies of this entire script.
## Instances differ by their comm.rank() designation.
## Inner level: The unix fork (copy-on-write) shared memory parallel execution
## of the mc.function() managed by parallel::mclapply()
## Further levels are possible: multithreading in compiled code and communicator
## splitting at the distributed MPI level.
suppressMessages(library(pbdMPI))
comm.print(sessionInfo())
## get node name
host = system("hostname", intern = TRUE)
mc.function = function(x) {
  Sys.sleep(1)  # replace with your function for mclapply cores here
  Sys.getpid()  # returns process id
}
## Compute how many cores per R session are on this node
local_ranks_query = "echo $OMPI_COMM_WORLD_LOCAL_SIZE"
ranks_on_my_node = as.numeric(system(local_ranks_query, intern = TRUE))
cores_on_my_node = parallel::detectCores()
cores_per_R = floor(cores_on_my_node/ranks_on_my_node)
cores_total = allreduce(cores_per_R) # adds up over ranks
## Run mclapply on allocated cores to demonstrate fork pids
my_pids = parallel::mclapply(1:cores_per_R, mc.function, mc.cores = cores_per_R)
my_pids = do.call(paste, my_pids) # combines results from mclapply
##
## Same cores are shared with OpenBLAS (see flexiblas package)
## or for other OpenMP enabled codes outside mclapply.
## If BLAS functions are called inside mclapply, they compete for the
## same cores: avoid or manage appropriately!!!
## Now report what happened and where
msg = paste0("Hello World from rank ", comm.rank(), " on host ", host,
" with ", cores_per_R, " cores allocated\n",
" (", ranks_on_my_node, " R sessions sharing ",
cores_on_my_node, " cores on this host node).\n",
" pid: ", my_pids, "\n")
comm.cat(msg, quiet = TRUE, all.rank = TRUE)
comm.cat("Total R sessions:", comm.size(), "Total cores:", cores_total, "\n",
quiet = TRUE)
comm.cat("\nNotes: cores on node obtained by: detectCores {parallel}\n",
" ranks (R sessions) per node: OMPI_COMM_WORLD_LOCAL_SIZE\n",
" pid to core map changes frequently during mclapply\n",
quiet = TRUE)
finalize()
Submit a job on a cluster
BASH
#!/bin/bash
#SBATCH -J hello
#SBATCH -A CSC489
#SBATCH -p batch
#SBATCH --nodes=4
#SBATCH --mem=0
#SBATCH -t 00:00:10
#SBATCH -e ./hello.e
#SBATCH -o ./hello.o
#SBATCH --open-mode=truncate
## above we request 4 nodes and all memory on the nodes
## assumes this repository was cloned in your home area
cd ~/R4HPC/code_1
pwd
## modules are specific to andes.olcf.ornl.gov
module load openblas/0.3.17-omp
module load flexiblas
flexiblas add OpenBLAS $OLCF_OPENBLAS_ROOT/lib/libopenblas.so
export LD_PRELOAD=$OLCF_FLEXIBLAS_ROOT/lib64/libflexiblas.so
module load r
echo -e "loaded R with FlexiBLAS"
module list
## above supplies your R code with FlexiBLAS-OpenBLAS on Andes
## but matrix computation is not used in the R illustration below
# An illustration of fine control of R scripts and cores on several nodes
# This runs 4 R sessions on each of 4 nodes (for a total of 16).
#
# Each of the 16 hello_world.R scripts will calculate how many cores are
# available per R session from environment variables and use that many
# in mclapply.
#
# NOTE: center policies may require different parameters
#
# runs 4 R sessions per node
mpirun --map-by ppr:4:node Rscript hello_balance.R
BASH
#!/bin/bash
#PBS -N hello
#PBS -A DD-21-42
#PBS -l select=4:mpiprocs=16
#PBS -l walltime=00:00:10
#PBS -q qprod
#PBS -e hello.e
#PBS -o hello.o
cat $BASH_SOURCE
cd ~/R4HPC/code_1
pwd
## module names can vary on different platforms
module load R
echo "loaded R"
## prevent warning when fork is used with MPI
export OMPI_MCA_mpi_warn_on_fork=0
export RDMAV_FORK_SAFE=1
# Fix for warnings from libfabric/1.12 on Karolina
module swap libfabric/1.12.1-GCCcore-10.3.0 libfabric/1.13.2-GCCcore-11.2.0
time mpirun --map-by ppr:4:node Rscript hello_balance.R
Content from Multicore
Overview
Questions
- Can parallelisation decrease time to solution for my program?
- What is machine learning?
Objectives
- Introduce machine learning, in particular the random forest algorithm
- Demonstrate serial and parallel implementations of the random forest algorithm
- Show that statistical machine learning models can be used to classify data after training on an existing dataset
Introduction
Serial Implementation
R
suppressMessages(library(randomForest))
data(LetterRecognition, package = "mlbench")
set.seed(seed = 123)
n = nrow(LetterRecognition)
n_test = floor(0.2 * n)
i_test = sample.int(n, n_test)
train = LetterRecognition[-i_test, ]
test = LetterRecognition[i_test, ]
rf.all = randomForest(lettr ~ ., train, ntree = 500, norm.votes = FALSE)
pred = predict(rf.all, test)
correct = sum(pred == test$lettr)
cat("Proportion Correct:", correct/(n_test), "\n")
Parallel Multicore Implementation
R
library(parallel) #<<
library(randomForest)
data(LetterRecognition, package = "mlbench")
set.seed(seed = 123, "L'Ecuyer-CMRG") #<<
n = nrow(LetterRecognition)
n_test = floor(0.2 * n)
i_test = sample.int(n, n_test)
train = LetterRecognition[-i_test, ]
test = LetterRecognition[i_test, ]
nc = as.numeric(commandArgs(TRUE)[2]) #<<
ntree = lapply(splitIndices(500, nc), length) #<<
rf = function(x, train) randomForest(lettr ~ ., train, ntree = x,  #<<
                                     norm.votes = FALSE)           #<<
rf.out = mclapply(ntree, rf, train = train, mc.cores = nc) #<<
rf.all = do.call(combine, rf.out) #<<
crows = splitIndices(nrow(test), nc) #<<
rfp = function(x) as.vector(predict(rf.all, test[x, ])) #<<
cpred = mclapply(crows, rfp, mc.cores = nc) #<<
pred = do.call(c, cpred) #<<
correct <- sum(pred == test$lettr)
cat("Proportion Correct:", correct/(n_test), "\n")
BASH
#!/bin/bash
#SBATCH -J rf
#SBATCH -A CSC143
#SBATCH -p batch
#SBATCH --nodes=1
#SBATCH -t 00:40:00
#SBATCH --mem=0
#SBATCH -e ./rf.e
#SBATCH -o ./rf.o
#SBATCH --open-mode=truncate
cd ~/R4HPC/code_2
pwd
## modules are specific to andes.olcf.ornl.gov
module load openblas/0.3.17-omp
module load flexiblas
flexiblas add OpenBLAS $OLCF_OPENBLAS_ROOT/lib/libopenblas.so
export LD_PRELOAD=$OLCF_FLEXIBLAS_ROOT/lib64/libflexiblas.so
module load r
echo -e "loaded R with FlexiBLAS"
module list
time Rscript rf_serial.r
time Rscript rf_mc.r --args 1
time Rscript rf_mc.r --args 2
time Rscript rf_mc.r --args 4
time Rscript rf_mc.r --args 8
time Rscript rf_mc.r --args 16
time Rscript rf_mc.r --args 32
time Rscript rf_mc.r --args 64
BASH
#!/bin/bash
#PBS -N rf
#PBS -l select=1:ncpus=128
#PBS -l walltime=00:05:00
#PBS -q qexp
#PBS -e rf.e
#PBS -o rf.o
cd ~/R4HPC/code_2
pwd
module load R
echo "loaded R"
time Rscript rf_serial.r
time Rscript rf_mc.r --args 1
time Rscript rf_mc.r --args 2
time Rscript rf_mc.r --args 4
time Rscript rf_mc.r --args 8
time Rscript rf_mc.r --args 16
time Rscript rf_mc.r --args 32
time Rscript rf_mc.r --args 64
time Rscript rf_mc.r --args 128
Content from Blas
Overview
Questions
- How much can parallel libraries improve time to solution for your program?
Objectives
- Introduce the Basic Linear Algebra Subroutines (BLAS)
- Show that BLAS routines are used from R for statistical calculations
- Demonstrate that parallelisation can improve time to solution
Introduction
R
library(flexiblas)
flexiblas_avail()
flexiblas_version()
flexiblas_current_backend()
flexiblas_list()
flexiblas_list_loaded()
getthreads = function() {
  flexiblas_get_num_threads()
}
setthreads = function(thr, label = "") {
  cat(label, "Setting", thr, "threads\n")
  flexiblas_set_num_threads(thr)
}
setback = function(backend, label = "") {
  cat(label, "Setting", backend, "backend\n")
  flexiblas_switch(flexiblas_load_backend(backend))
}
#' PT
#' A function to time one or more R expressions after setting the number of
#' threads available to the BLAS library.
#'
#' !!
#' DO NOT USE PT RECURSIVELY
#'
#' Use:
#' variable-for-result = PT(your-num-threads, a-quoted-text-comment, {
#' expression
#' expression
#' ...
#' expression-to-assign
#' })
PT = function(threads, text = "", expr) {
  setthreads(threads, label = text)
  print(system.time({result = {expr}}))
  result
}
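Following the usage pattern documented above, a hypothetical call that times a crossproduct with 4 threads (assuming a matrix x, as created in the benchmark script below) would be:
R
## Hypothetical example following the documented PT() usage
xtx = PT(4, "crossprod example", {
  crossprod(x)
})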
R
source("flexiblas_setup.R")
memuse::howbig(5e4, 2e3)
parallel::detectCores()
x = matrix(rnorm(1e8), nrow = 5e4, ncol = 2e3)
beta = rep(1, ncol(x))
err = rnorm(nrow(x))
y = x %*% beta + err
data = as.data.frame(cbind(y, x))
names(data) = c("y", paste0("x", 1:ncol(x)))
setback("OPENBLAS")
# qr --------------------------------------
for(i in 0:4) {
setthreads(2^i, "qr")
print(system.time((qr(x, LAPACK = TRUE))))
}
# prcomp --------------------------------------
for(i in 0:4) {
setthreads(2^i, "prcomp")
print(system.time((prcomp(x))))
}
# princomp --------------------------------------
for(i in 0:4) {
setthreads(2^i, "princomp")
print(system.time((princomp(x))))
}
# crossprod --------------------------------------
for(i in 0:5) {
setthreads(2^i, "crossprod")
print(system.time((crossprod(x))))
}
# %*% --------------------------------------------
for(i in 0:5) {
setthreads(2^i, "%*%")
print(system.time((t(x) %*% x)))
}
BASH
#!/bin/bash
#SBATCH -J flexiblas
#SBATCH -A CSC489
#SBATCH -p batch
#SBATCH --nodes=1
#SBATCH --mem=0
#SBATCH -t 00:15:00
#SBATCH -e ./flexiblas.e
#SBATCH -o ./flexiblas.o
#SBATCH --open-mode=truncate
## assumes this repository was cloned in your home area
cd ~/R4HPC/code_3
pwd
## modules are specific to andes.olcf.ornl.gov
module load openblas/0.3.17-omp
module load flexiblas
flexiblas add OpenBLAS $OLCF_OPENBLAS_ROOT/lib/libopenblas.so
export LD_PRELOAD=$OLCF_FLEXIBLAS_ROOT/lib64/libflexiblas.so
module load r
echo -e "loaded R with FlexiBLAS"
module list
Rscript flexiblas_bench.R
BASH
#!/bin/bash
#PBS -N fx
#PBS -l select=1:ncpus=128,walltime=00:50:00
#PBS -q qexp
#PBS -e fx.e
#PBS -o fx.o
cd ~/R4HPC/code_3
pwd
module load R
echo "loaded R"
time Rscript flexiblas_bench2.R
Content from MPI - Distributed Memory Parallelism
Overview
Questions
- How do you utilize more than one shared memory node?
Objectives
- Demonstrate how to submit a job on multiple nodes
- Demonstrate that a program with distributed memory parallelism can be run on a shared memory node
Introduction
Hello World!
R
suppressMessages(library(pbdMPI))
my_rank = comm.rank()
nranks = comm.size()
msg = paste0("Hello World! My name is Rank", my_rank,
". We are ", nranks, " identical siblings.")
cat(msg, "\n")
finalize()
BASH
#!/bin/bash
#SBATCH -J hello
#SBATCH -A CSC143
#SBATCH -p batch
#SBATCH --nodes=1
#SBATCH -t 00:40:00
#SBATCH --mem=0
#SBATCH -e ./hello.e
#SBATCH -o ./hello.o
#SBATCH --open-mode=truncate
cd ~/R4HPC/code_5
pwd
## modules are specific to andes.olcf.ornl.gov
module load openblas/0.3.17-omp
module load flexiblas
flexiblas add OpenBLAS $OLCF_OPENBLAS_ROOT/lib/libopenblas.so
export LD_PRELOAD=$OLCF_FLEXIBLAS_ROOT/lib64/libflexiblas.so
module load r
echo -e "loaded R with FlexiBLAS"
module list
mpirun --map-by ppr:32:node Rscript hello_world.R
BASH
#!/bin/bash
#PBS -N hello
#PBS -l select=1:ncpus=32
#PBS -l walltime=00:05:00
#PBS -q qexp
#PBS -e hello.e
#PBS -o hello.o
cd ~/R4HPC/code_5
pwd
module load R
echo "loaded R"
mpirun --map-by ppr:32:node Rscript hello_world.R
Content from pbdMPI - Parallel and Big Data interface to MPI
Overview
Questions
- What types of functions does MPI provide?
Objectives
- Demonstrate some of the functionality of the pbd bindings to the Message Passing Interface (MPI)
Introduction
Hello World in Serial
R
library( pbdMPI, quiet = TRUE )
text = paste( "Hello, world from", comm.rank() )
print( text )
finalize()
Rank
R
library( pbdMPI, quiet = TRUE )
my.rank <- comm.rank()
comm.print( my.rank, all.rank = TRUE )
finalize()
Hello World in Parallel
R
library( pbdMPI, quiet = TRUE )
print( "Hello, world print" )
comm.print( "Hello, world comm.print" )
comm.print( "Hello from all", all.rank = TRUE, quiet = TRUE )
finalize()
Map Reduce
R
library( pbdMPI , quiet = TRUE)
## Your "Map" code
n = comm.rank() + 1
## Now "Reduce" but give the result to all
all_sum = allreduce( n ) # Sum is default
text = paste( "Hello: n is", n, "sum is", all_sum )
comm.print( text, all.rank = TRUE )
finalize ()
Calculate Pi
R
### Compute pi by simulation
library( pbdMPI, quiet = TRUE )
comm.set.seed( seed = 1234567, diff = TRUE )
my.N = 1e7 %/% comm.size()
my.X = matrix( runif( my.N * 2 ), ncol = 2 )
my.r = sum( rowSums( my.X^2 ) <= 1 )
r = allreduce( my.r )
PI = 4*r / ( my.N * comm.size() )
comm.print( PI )
finalize()
Broadcast
R
library( pbdMPI, quiet = TRUE )
if ( comm.rank() == 0 ){
  x = matrix( 1:4, nrow = 2 )
} else {
  x = NULL
}
y = bcast( x )
comm.print( y, all.rank = TRUE )
comm.print( x, all.rank = TRUE )
finalize()
Gather
R
library( pbdMPI, quiet = TRUE )
comm.set.seed( seed = 1234567, diff = TRUE )
my_rank = comm.rank()
n = sample( 1:10, size = my_rank + 1 )
comm.print(n, all.rank = TRUE)
gt = gather(n)
obj_len = gather(length(n))
comm.cat("gathered unequal size objects. lengths =", unlist(obj_len), "\n")
comm.print( unlist( gt ), all.rank = TRUE )
finalize()
Gather Unequal
R
library( pbdMPI, quiet = TRUE )
comm.set.seed( seed = 1234567, diff = TRUE )
my_rank = comm.rank( )
n = sample( 1:10, size = my_rank + 1 )
comm.print( n, all.rank = TRUE )
gt = gather( n )
obj_len = gather( length( n ) )
comm.cat( "gathered unequal size objects. lengths =", obj_len, "\n" )
comm.print( unlist( gt ), all.rank = TRUE )
finalize( )
Gather Named
R
library( pbdMPI, quiet = TRUE )
comm.set.seed( seed = 1234567, diff = TRUE )
my_rank = comm.rank()
n = sample( 1:10, size = my_rank + 1 )
names(n) = paste0("a", 1:(my_rank + 1))
comm.print(n, all.rank = TRUE)
gt = gather( n )
comm.print( unlist( gt ), all.rank = TRUE )
finalize()
Chunk
R
library( pbdMPI, quiet = TRUE )
my.rank = comm.rank( )
k = comm.chunk( 10 )
comm.cat( my.rank, ":", k, "\n", all.rank = TRUE, quiet = TRUE)
k = comm.chunk( 10 , form = "vector")
comm.cat( my.rank, ":", k, "\n", all.rank = TRUE, quiet = TRUE)
k = comm.chunk( 10 , form = "vector", type = "equal")
comm.cat( my.rank, ":", k, "\n", all.rank = TRUE, quiet = TRUE)
finalize( )
Timing
R
library( pbdMPI, quiet = TRUE )
comm.set.seed( seed = 1234567, diff = T )
test = function( timed )
{
  ltime = system.time( timed )[ 3 ]
  mintime = allreduce( ltime, op='min' )
  maxtime = allreduce( ltime, op='max' )
  meantime = allreduce( ltime, op='sum' ) / comm.size()
  return( data.frame( min = mintime, mean = meantime, max = maxtime ) )
}
# generate 10,000,000 random normal values (total)
times = test( rnorm( 1e7/comm.size() ) ) # ~76 MiB of data
comm.print( times )
finalize()
Are there bindings to MPI_wtime()?
Covariance
R
library( pbdMPI, quiet = TRUE )
comm.set.seed( seed = 1234567, diff = TRUE )
## Generate 10 rows and 3 columns of data per process
my.X = matrix( rnorm(10*3), ncol = 3 )
## Compute mean
N = allreduce( nrow( my.X ), op = "sum" )
mu = allreduce( colSums( my.X ) / N, op = "sum" )
## Sweep out mean and compute crossproducts sum
my.X = sweep( my.X, STATS = mu, MARGIN = 2 )
Cov.X = allreduce( crossprod( my.X ), op = "sum" ) / ( N - 1 )
comm.print( Cov.X )
finalize()
Matrix reduction
R
library( pbdMPI, quiet = TRUE )
x <- matrix( 10*comm.rank() + (1:6), nrow = 2 )
comm.print( x, all.rank = TRUE )
z <- reduce( x ) # knows it's a matrix
comm.print( z, all.rank = TRUE )
finalize()
Ordinary Least Squares
R
### Least Squares Fit via Normal Equations (see lm.fit for a better way)
library( pbdMPI, quiet = TRUE )
comm.set.seed( seed = 12345, diff = TRUE )
## 10 rows and 3 columns of data per process
my.X = matrix( rnorm(10*3), ncol = 3 )
my.y = matrix( rnorm(10*1), ncol = 1 )
## Form the Normal Equations components
my.Xt = t( my.X )
XtX = allreduce( my.Xt %*% my.X, op = "sum" )
Xty = allreduce( my.Xt %*% my.y, op = "sum" )
## Everyone solve the Normal Equations
ols = solve( XtX, Xty )
comm.print( ols )
finalize()
QR Decomposition
R
library(cop, quiet = TRUE)
rank = comm.rank()
size = comm.size()
rows = 3
cols = 3
xb = matrix((1:(rows*cols*size))^2, ncol = cols) # a full matrix
xa = xb[(1:rows) + rank*rows, ] # split by row blocks
comm.print(xa, all.rank = TRUE)
comm.print(xb)
## compute usual QR from full matrix
rb = qr.R(qr(xb))
comm.print(rb)
## compute QR from gathered local QRs
rloc = qr.R(qr(xa)) # compute local QRs
rra = allgather(rloc) # gather them into a list
rra = do.call(rbind, rra) # rbind list elements
comm.print(rra) # print combined local QRs
ra = qr.R(qr(rra)) # QR the combined local QRs
comm.print(ra)
## use cop package to do it again via qr_allreduce
ra = qr_allreduce(xa)
comm.print(ra)
finalize()
Collective communication for a one dimensional domain decomposition
R
## Splits the world communicator into two sets of smaller communicators and
## demonstrates how a sum collective works
library(pbdMPI)
.pbd_env$SPMD.CT
comm_world = .pbd_env$SPMD.CT$comm # default communicator
my_rank = comm.rank(comm_world) # my default rank in world communicator
comm_new = 5L # new communicators can be 5 and up (0-4 are taken)
row_color = my_rank %/% 2L # set new partition colors and split accordingly
comm.split(comm_world, color = row_color, key = my_rank, newcomm = comm_new)
barrier()
my_newrank = comm.rank(comm_new)
comm.cat("comm_world:", comm_world, "comm_new", comm_new, "row_color:",
row_color, "my_rank:", my_rank, "my_newrank", my_newrank, "\n",
all.rank = TRUE)
x = my_rank + 1
comm.cat("x", x, "\n", all.rank = TRUE, comm = comm_world)
xa = allreduce(x, comm = comm_world)
xb = allreduce(x, comm = comm_new)
comm.cat("xa", xa, "xb", xb, "\n", all.rank = TRUE, comm = comm_world)
comm.free(comm_new)
finalize()
Collective communication for a two dimensional domain decomposition
R
## Run with:
## mpiexec -np 32 Rscript comm_split8x4.R
##
## Splits a 32-rank communicator into 4 row-communicators of size 8 and
## orthogonal to them 8 column communicators of size 4. Prints rank assignments,
## and demonstrates how sum collectives work in each set of communicators.
##
## Useful for row operations or column operations on tile-distributed matrices. But
## note there is package pbdDMAT that already has these operations powered by
## ScaLAPACK.
## It can also serve for any two levels of distributed parallelism that are
## nested.
##
library(pbdMPI)
ncol = 8
nrow = 4
if(comm.size() != ncol*nrow) stop("Error: Must run with -np 32")
## Get world communicator rank
comm_w = .pbd_env$SPMD.CT$comm # world communicator (normally assigned 0)
rank_w = comm.rank(comm_w) # world rank
## Split comm_w into ncol communicators of size nrow
comm_c = 5L # assign them a number
color_c = rank_w %/% nrow # ranks of same color are in same communicator
comm.split(comm_w, color = color_c, key = rank_w, newcomm = comm_c)
## Split comm_w into nrow communicators of size ncol
comm_r = 6L # assign them a number
color_r = rank_w %% nrow # make these orthogonal to the row communicators
comm.split(comm_w, color = color_r, key = rank_w, newcomm = comm_r)
## Print the resulting communicator colors and ranks
comm.cat(comm.rank(comm = comm_w),
paste0("(", color_r, ":", comm.rank(comm = comm_r), ")"),
paste0("(", color_c, ":", comm.rank(comm = comm_c), ")"),
"\n", all.rank = TRUE, quiet = TRUE, comm = comm_w)
## Print sums of rank numbers across each communicator to illustrate collectives
x = comm.rank(comm_w)
w = allreduce(x, op = "sum", comm = comm_w)
comm.cat(" ", w, all.rank = TRUE, quiet = TRUE)
comm.cat("\n", quiet = TRUE)
r = allreduce(x, op = "sum", comm = comm_r)
comm.cat(" ", r, all.rank = TRUE, quiet = TRUE)
comm.cat("\n", quiet = TRUE)
c = allreduce(x, op = "sum", comm = comm_c)
comm.cat(" ", c, all.rank = TRUE, quiet = TRUE)
comm.cat("\n", quiet = TRUE)
#comm.free(comm_c)
#comm.free(comm_r)
finalize()
Content from MPI - Distributed Memory Parallelism
Overview
Questions
- How do you utilize more than one shared memory node?
Objectives
- Demonstrate that distributed memory parallelism is useful for working with large data
- Demonstrate that distributed memory parallelism can lead to improved time to solution
Introduction
Distributed Memory Random Forest
Digit Recognition
R
suppressPackageStartupMessages(library(randomForest))
data(LetterRecognition, package = "mlbench")
library(pbdMPI, quiet = TRUE) #<<
comm.set.seed(seed = 7654321, diff = FALSE) #<<
n = nrow(LetterRecognition)
n_test = floor(0.2 * n)
i_test = sample.int(n, n_test)
train = LetterRecognition[-i_test, ]
test = LetterRecognition[i_test, ][comm.chunk(n_test, form = "vector"), ] #<<
comm.set.seed(seed = 1234, diff = TRUE) #<<
my.rf = randomForest(lettr ~ ., train, ntree = comm.chunk(500), norm.votes = FALSE) #<<
rf.all = allgather(my.rf) #<<
rf.all = do.call(combine, rf.all) #<<
pred = as.vector(predict(rf.all, test))
correct = allreduce(sum(pred == test$lettr)) #<<
comm.cat("Proportion Correct:", correct/(n_test), "\n")
finalize()
Diamond Classification
R
library(randomForest)
data(diamonds, package = "ggplot2")
library(pbdMPI) #<<
comm.set.seed(seed = 7654321, diff = FALSE) #<<
n = nrow(diamonds)
n_test = floor(0.5 * n)
i_test = sample.int(n, n_test)
train = diamonds[-i_test, ]
test = diamonds[i_test, ][comm.chunk(n_test, form = "vector"), ] #<<
comm.set.seed(seed = 1e6 * runif(1), diff = TRUE) #<<
my.rf = randomForest(price ~ ., train, ntree = comm.chunk(100), norm.votes = FALSE) #<<
rf.all = allgather(my.rf) #<<
rf.all = do.call(combine, rf.all) #<<
pred = as.vector(predict(rf.all, test))
sse = sum((pred - test$price)^2)
comm.cat("MSE =", reduce(sse)/n_test, "\n")
finalize() #<<
BASH
#!/bin/bash
#SBATCH -J rf
#SBATCH -A CSC143
#SBATCH -p batch
#SBATCH --nodes=1
#SBATCH -t 00:40:00
#SBATCH --mem=0
#SBATCH -e ./rf.e
#SBATCH -o ./rf.o
#SBATCH --open-mode=truncate
cd ~/R4HPC/code_5
pwd
## modules are specific to andes.olcf.ornl.gov
module load openblas/0.3.17-omp
module load flexiblas
flexiblas add OpenBLAS $OLCF_OPENBLAS_ROOT/lib/libopenblas.so
export LD_PRELOAD=$OLCF_FLEXIBLAS_ROOT/lib64/libflexiblas.so
module load r
echo -e "loaded R with FlexiBLAS"
module list
time Rscript ../code_2/rf_serial.R
time mpirun --map-by ppr:1:node Rscript rf_mpi.R
time mpirun --map-by ppr:2:node Rscript rf_mpi.R
time mpirun --map-by ppr:4:node Rscript rf_mpi.R
time mpirun --map-by ppr:8:node Rscript rf_mpi.R
time mpirun --map-by ppr:16:node Rscript rf_mpi.R
time mpirun --map-by ppr:32:node Rscript rf_mpi.R
BASH
#!/bin/bash
#PBS -N rf
#PBS -l select=1:ncpus=32
#PBS -l walltime=00:05:00
#PBS -q qexp
#PBS -e rf.e
#PBS -o rf.o
cd ~/R4HPC/code_5
pwd
module load R
echo "loaded R"
time Rscript ../code_2/rf_serial.R
time mpirun --map-by ppr:1:node Rscript rf_mpi.R
time mpirun --map-by ppr:2:node Rscript rf_mpi.R
time mpirun --map-by ppr:4:node Rscript rf_mpi.R
time mpirun --map-by ppr:8:node Rscript rf_mpi.R
time mpirun --map-by ppr:16:node Rscript rf_mpi.R
time mpirun --map-by ppr:32:node Rscript rf_mpi.R
Content from Parallel Randomized Singular Value Decomposition for Classification
Overview
Questions
- How well can an alternative parallel classification algorithm work?
Objectives
- Introduce the Randomized singular value decomposition (RSVD)
- Use the randomized singular value decomposition to classify digits
Introduction
What is the randomized singular value decomposition?
R
rsvd <- function(x, k=1, q=3, retu=TRUE, retvt=TRUE) {
n <- ncol(x)
if (class(x) == "matrix")
Omega <- matrix(runif(n*2L*k), nrow=n, ncol=2L*k)
else if (class(x) == "ddmatrix") #<<
Omega <- ddmatrix("runif", nrow=n, ncol=2L*k, bldim=x@bldim, ICTXT=x@ICTXT) #<<
Y <- x %*% Omega
Q <- qr.Q(qr(Y))
for (i in 1:q) {
Y <- crossprod(x, Q)
Q <- qr.Q(qr(Y))
Y <- x %*% Q
Q <- qr.Q(qr(Y))
}
B <- crossprod(Q, x)
if (!retu) nu <- 0
else nu <- min(nrow(B), ncol(B))
if (!retvt) nv <- 0
else nv <- min(nrow(B), ncol(B))
svd.B <- La.svd(x=B, nu=nu, nv=nv)
d <- svd.B$d
d <- d[1L:k]
# Produce u/vt as desired
if (retu) {
u <- svd.B$u
u <- Q %*% u
u <- u[, 1L:k, drop=FALSE]
}
if (retvt) vt <- svd.B$vt[1L:k, , drop=FALSE]
# wrangle return
if (retu) {
if (retvt) svd <- list(d=d, u=u, vt=vt)
else svd <- list(d=d, u=u)
} else {
if (retvt) svd <- list(d=d, vt=vt)
else svd <- list(d=d)
}
return( svd )
}
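As a single-process sanity check (illustrative, not part of the lesson scripts), the function can be compared against base svd() on an ordinary matrix:
R
set.seed(1)
x = matrix(rnorm(1000 * 50), nrow = 1000)
rsvd(x, k = 5)$d   # approximate top-5 singular values
svd(x)$d[1:5]      # exact values for comparison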
HDF5 and reading in data
R
suppressMessages(library(rhdf5))
suppressMessages(library(pbdMPI))
file = "/gpfs/alpine/world-shared/gen011/mnist/train.hdf5"
dat1 = "image"
dat2 = "label"
## get and broadcast dimensions to all processors
if (comm.rank() == 0) {
  h5f = H5Fopen(file, flags="H5F_ACC_RDONLY")
  h5d = H5Dopen(h5f, dat1)
  h5s = H5Dget_space(h5d)
  dims = H5Sget_simple_extent_dims(h5s)$size
  H5Dclose(h5d)
  H5Fclose(h5f)
} else dims = NA
dims = bcast(dims)
nlast = dims[length(dims)] # last dim moves slowest
my_ind = comm.chunk(nlast, form = "vector")
## parallel read of data columns
my_train = as.double(h5read(file, dat1, index = list(NULL, NULL, my_ind)))
my_train_lab = as.character(h5read(file, dat2, index = list(my_ind)))
H5close()
dim(my_train) = c(prod(dims[-length(dims)]), length(my_ind))
my_train = t(my_train) # row-major write and column-major read
my_train = rbind(my_train, my_train, my_train, my_train, my_train, my_train) # stack 6 copies of the local rows to enlarge the data
comm.cat("Local dim at rank", comm.rank(), ":", dim(my_train), "\n")
total_rows = allreduce(nrow(my_train))
comm.cat("Total dim :", total_rows, ncol(my_train), "\n")
## plot for debugging
# if(comm.rank() == 0) {
# ivals = sample(nrow(my_train), 36)
# library(ggplot2)
# image = rep(ivals, 28*28)
# lab = rep(my_train_lab[ivals], 28*28)
# image = factor(paste(image, lab, sep = ": "))
# col = rep(rep(1:28, 28), each = length(ivals))
# row = rep(rep(1:28, each = 28), each = length(ivals))
# im = data.frame(image = image, row = row, col = col,
# val = as.numeric(unlist(my_train[ivals, ])))
# print(ggplot(im, aes(row, col, fill = val)) + geom_tile() + facet_wrap(~ image))
# }
#barrier()
## remove finalize if sourced in another script
#finalize()
Using the Randomized Singular Value Decomposition for Classification
R
source("mnist_read_mpi.R") # reads blocks of rows
suppressMessages(library(pbdDMAT))
suppressMessages(library(pbdML))
init.grid()
## construct block-cyclic ddmatrix
bldim = c(allreduce(nrow(my_train), op = "max"), ncol(my_train))
gdim = c(allreduce(nrow(my_train), op = "sum"), ncol(my_train))
dmat_train = new("ddmatrix", Data = my_train, dim = gdim,
ldim = dim(my_train), bldim = bldim, ICTXT = 2)
cyclic_train = as.blockcyclic(dmat_train)
comm.print(comm.size())
t1 = as.numeric(Sys.time())
rsvd_train = rsvd(cyclic_train, k = 10, q = 3, retu = FALSE, retvt = TRUE)
t2 = as.numeric(Sys.time())
t1 = allreduce(t1, op = "min")
t2 = allreduce(t2, op = "max")
comm.cat("Time:", t2 - t1, "seconds\n")
comm.cat("dim(V):", dim(rsvd_train$vt), "\n")
comm.cat("rsvd top 10 singular values:", rsvd_train$d, "\n")
finalize()
BASH
#!/bin/bash
#SBATCH -J rsve
#SBATCH -A gen011
#SBATCH -p batch
#SBATCH --nodes=4
#SBATCH --mem=0
#SBATCH -t 00:00:10
#SBATCH -e ./rsve.e
#SBATCH -o ./rsve.o
#SBATCH --open-mode=truncate
## assumes this repository was cloned in your home area
cd ~/R4HPC/rsvd
pwd
## modules are specific to andes.olcf.ornl.gov
module load openblas/0.3.17-omp
module load flexiblas
flexiblas add OpenBLAS $OLCF_OPENBLAS_ROOT/lib/libopenblas.so
export LD_PRELOAD=$OLCF_FLEXIBLAS_ROOT/lib64/libflexiblas.so
export UCX_LOG_LEVEL=error # no UCX warn messages
module load r
echo -e "loaded R with FlexiBLAS"
module list
time mpirun --map-by ppr:1:node Rscript mnist_rsvd.R
time mpirun --map-by ppr:2:node Rscript mnist_rsvd.R
time mpirun --map-by ppr:4:node Rscript mnist_rsvd.R
time mpirun --map-by ppr:8:node Rscript mnist_rsvd.R
BASH
#!/bin/bash
#PBS -N rsvd
#PBS -l select=1:mpiprocs=64,walltime=00:10:00
#PBS -q qexp
#PBS -e rsvd.e
#PBS -o rsvd.o
cd ~/ROBUST2022/mpi
pwd
module load R
echo "loaded R"
## Fix for warnings from libfabric/1.12 bug
module swap libfabric/1.12.1-GCCcore-10.3.0 libfabric/1.13.2-GCCcore-11.2.0
export UCX_LOG_LEVEL=error
time mpirun --map-by ppr:1:node Rscript mnist_rsvd.R
time mpirun --map-by ppr:2:node Rscript mnist_rsvd.R
time mpirun --map-by ppr:4:node Rscript mnist_rsvd.R
time mpirun --map-by ppr:8:node Rscript mnist_rsvd.R
time mpirun --map-by ppr:16:node Rscript mnist_rsvd.R
time mpirun --map-by ppr:32:node Rscript mnist_rsvd.R
time mpirun --map-by ppr:64:node Rscript mnist_rsvd.R