# Lab Resources

This page gives a high-level overview of lab-wide resources.

### 📧 Accounts

1. [Slack](http://zhanglabcedars-sinai.slack.com)
2. [GitHub](https://github.com/zhanglab-aim)
3. Google Calendar (TBD)

### 🐎 HPC

If you are new to High-Performance Computing clusters, be sure to read the HPC guidelines.

1. Apply for Cedars HPC access through the Cedars-Sinai Service Center. Open a request ticket at this [URL](https://csmc.service-now.com/cssp?id=sc_cat_item\&sys_id=d2f4e4e34f3cea0041bce8128110c749). You will need your Cedars login to open this webpage.

> **NOTE**: Remember to add one sentence in the "Additional Information" textbox specifying that you'd like access to BOTH the <mark style="color:red;">old 2013 Cisco</mark> and the <mark style="color:red;">new 2022 HPE</mark> clusters.
>
> Read more about these two clusters in [HPC guidelines](https://zhanglab-aim.gitbook.io/labwiki/handbook/hpc-usage#basic-concepts).

2. If you are working on the Cedars intranet (i.e., onsite), go to [this wiki page](https://zhanglab-aim.gitbook.io/labwiki/handbook) for instructions on how to SSH to the HPC server.
3. If you are working remotely, you will also need to open a ticket to request Cedars VPN access [here](https://csmc.service-now.com/cssp?id=sc_cat_item\&sys_id=c46fdabb4f44e20041bce8128110c7bd).
4. Additional HPC guidelines can be found [here](https://zhanglab-aim.gitbook.io/labwiki/handbook/hpc-usage) in our handbook.

### 🏎 GPUs

GPUs on the Cedars-Sinai HPE cluster can be requested interactively through Slurm, for example:

```bash
# Request one Nvidia V100 for 1 day, with 64 GB memory and 8 CPUs
salloc -p gpu --gpus=v100:1 --time=1-0 --mem=64g --cpus-per-task=8 --ntasks=1
# Or request one Nvidia A100 instead
salloc -p gpu --gpus=a100:1 --time=1-0 --mem=64g --cpus-per-task=8 --ntasks=1
# Then start an interactive bash shell on the allocated compute node
srun --pty bash -i
```
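For longer-running jobs, the same resources can be requested non-interactively with `sbatch`. The script below is a minimal sketch under assumptions: the job name, log path, environment name (`my-env`), and training script (`train.py`) are placeholders, and the partition and GPU type names are carried over from the `salloc` example above — adjust them to your own setup.

```shell
#!/bin/bash
#SBATCH --job-name=my-job             # job name shown in squeue (placeholder)
#SBATCH -p gpu                        # GPU partition, as in the salloc example
#SBATCH --gpus=v100:1                 # one V100; use a100:1 for an A100
#SBATCH --time=1-0                    # 1 day of walltime
#SBATCH --mem=64g                     # 64 GB memory
#SBATCH --cpus-per-task=8
#SBATCH --ntasks=1
#SBATCH --output=slurm-%j.out         # log file; %j expands to the job ID

# Activate your environment (hypothetical name)
source activate my-env

# Run your workload (hypothetical script)
python train.py
```

Submit the script with `sbatch job.sh` and monitor it with `squeue -u $USER`.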
