Penn Brain Science Center cluster computing resources

This is the index page for the Penn Brain Science Center cluster computing documentation.

The BSC computing cluster is hosted by DART (formerly PMACS) in their “Limited Performance Computing” (LPC) environment.

The LPC is distinct from the PMACS High Performance Computing (HPC) service. The HPC is better suited for large-scale data processing beyond the capabilities of the LPC.

The LPC serves multiple labs and centers. Some groups affiliated with the BSC also have independent LPC resources outside of the BSC cluster, which are generally similar, but not identical, to the BSC environment. Therefore, LPC documentation provided by other LPC users can be a useful source of information for BSC users, though some details may differ.

  1. Penn Brain Science Center cluster computing resources
  2. LPC overview
    1. Access
    2. Security and privacy
    3. Software
    4. User groups
    5. Billing
    6. Additional PMACS resources
    7. LPC accounts
    8. LPC vs CFN cluster usage
    9. LPC Access
  3. External resources
    1. PMACS documentation (requires Pennkey authentication)
    2. IBM LSF documentation
    3. Computing skills
      1. Linux and shell scripting

LPC overview

The LPC is a shared resource for cluster computing, supported by DART and the BSC. The LPC provides a substantial upgrade in hardware and software performance compared to the previous CFN cluster. Like the old CFN cluster, the LPC is a Linux computing environment with multiple compute nodes, a grid engine job scheduler, and backed-up network storage.

Brain Science Center (BSC) Computing Cluster. The BSC manages a dedicated neuroimaging computing cluster with nine compute nodes. Eight are Dell PowerEdge R640 servers, each with two Intel Xeon Gold 6240R 2.4GHz processors (24 cores each), 384GB of RAM (8GB per core), 2TB of local SSD storage for scratch space, dual 10Gb Ethernet ports, and quad 1Gb Ethernet ports. The ninth is a Dell PowerEdge R640 high-memory node with two Intel Xeon Gold 6240R 2.4GHz processors (24 cores each), 768GB of RAM (16GB per core), 2TB of local SSD storage for scratch space, dual 10Gb Ethernet ports, and a quad 1Gb Ethernet port.

All nine nodes are housed in the Perelman School of Medicine’s commercial-grade server room, with redundant power supplies, UPS power backup, a fire suppression system, and 24-hour restricted access and security. The BSC cluster is connected to the PMACS local “cloud” storage array, which provides economical storage with backup and archiving capabilities.
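Once logged in, you can query the LSF scheduler to see these nodes and their resources. A minimal sketch using standard LSF commands (the host name in the last command is hypothetical):

    # List hosts known to the scheduler, with CPU and memory resources
    lshosts

    # Show job slot usage and status per host
    bhosts

    # Detailed view of a single node (host name is hypothetical)
    bhosts -l node01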

Access

Please see the article on accessing the PMACS cluster for instructions on connecting to the BSC cluster.

Security and privacy

All data on the cluster should be de-identified. Contact PMACS if you have specific security or privacy requirements.

Software

User groups

Billing

Additional PMACS resources

LPC accounts

How to get an LPC account.

LPC vs CFN cluster usage

Main article: LSF for CFN users

The LPC cluster uses the IBM LSF system for parallel computing, while the CFN cluster used the Sun Grid Engine. While many of the same concepts apply, there are several differences in how jobs are submitted and run.
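For example, a basic batch submission differs mainly in the command name and flags. A minimal sketch (the queue name and script are hypothetical; consult the PMACS documentation for the queues actually configured on the BSC cluster):

    # CFN cluster (Sun Grid Engine): submit a script to the scheduler
    qsub -o job.out -e job.err myscript.sh

    # LPC (IBM LSF): the equivalent submission uses bsub;
    # %J expands to the job ID in the output file names
    bsub -q bsc_normal -o job.%J.out -e job.%J.err ./myscript.sh

    # Common command equivalents:
    #   qstat -> bjobs   (list your jobs)
    #   qdel  -> bkill   (kill a job)
    #   qhost -> bhosts  (show compute hosts)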

LPC Access

BSC LPC Access

External resources

PMACS documentation (requires Pennkey authentication)

LSF Basics

IBM LSF documentation

Full LSF documentation

Computing skills

Using the cluster effectively requires familiarity with the Linux command line and basic scripting. There are many resources for this online. Penn personnel can access LinkedIn Learning (formerly Lynda) by signing in with their PennKey at

http://linkedinlearning.upenn.edu/

Linux and shell scripting

Some courses of interest:

Learning Linux Command Line gives an introduction to using the Linux command line.

Learning Linux Shell Scripting covers basic shell scripting.

Linux: Bash Shell and Scripts is a longer course on shell scripting that covers the basics but also gets into more advanced topics. Shell scripting is powerful and convenient for running basic tasks quickly, but users should consider another language such as Python for complex systems.
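To show how these skills come together on the cluster, here is a minimal bash sketch that loops over subject directories and submits one LSF job per subject. The paths, queue name, and processing script are all hypothetical:

    #!/bin/bash
    # Submit one LSF job per subject directory (all names hypothetical)
    for subj in /project/mydata/subjects/*; do
        name=$(basename "$subj")
        bsub -q bsc_normal -o "logs/${name}.%J.out" ./process_subject.sh "$subj"
    done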