Although today’s laptops handle many advanced and computationally intensive tasks, projects involving Big Data require significantly more resources. That need is met by HPC infrastructure: a network of computing clusters combined with immense memory. Access to these resources is remote, so job submission and data preview occur through an interface on any local machine from any permitted geolocation. The HPC infrastructure is a shared community space, so familiarize yourself with the usage policy to avoid disrupting your peers’ work.
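Remote access to an HPC system typically starts with an SSH key pair generated on your local machine. The sketch below shows one way to prepare such a key; the key filename, the username, and the cluster hostname (`hpc.example.edu`) are placeholders, not real addresses — substitute the values from your institution's documentation.

```shell
# Create the local .ssh directory if it does not exist yet.
mkdir -p "$HOME/.ssh"

# Generate an ed25519 key pair for cluster logins (runs entirely locally).
# -f sets the key file, -N "" leaves it without a passphrase (for illustration
# only; a passphrase is recommended in practice), -C adds a comment label.
ssh-keygen -t ed25519 -f "$HOME/.ssh/hpc_key" -N "" -C "hpc-login-key"

# Install the public key on the cluster (hostname and user are assumptions):
#   ssh-copy-id -i "$HOME/.ssh/hpc_key.pub" username@hpc.example.edu

# Then connect with the matching private key:
#   ssh -i "$HOME/.ssh/hpc_key" username@hpc.example.edu
```

Once the key is installed, the `ssh` command opens a shell on the cluster's login node, from which jobs are submitted to the scheduler (covered in the job-scheduling section below).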

Table of contents

1. Setting up your home directory for data analysis

2. Introduction to HPC infrastructure

3. Secure Shell Connection (SSH)

4. Remote Data Access
(see more in section 7: Data Acquisition)

5. Available Software

6. Introduction to Job Scheduling

7. Introduction to GNU Parallel

8. Introduction to Containers
