= Schuster Lab Computing Resources =
 
[[File:Schuster_Machines.png|frame|Schuster Lab machines in 509 Wartik]]

== persephone cluster ==
([http://en.wikipedia.org/wiki/List_of_Firefly_planets_and_moons#Persephone Persephone] name origins)

The persephone cluster is a member of the central [[BX:SGE|BX SGE]] installation.

Queue status can be seen at http://qstat.bx.psu.edu
 
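From the command line, the same queue and job status information is available via SGE's qstat (these are standard SGE invocations; the exact output layout depends on the site configuration):

 qstat -u "$USER"     # your own pending and running jobs
 qstat -f             # full listing of every queue instance and its current load
 qstat -j JOBID       # details for one job, including why it is still pending
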
Physical specifications can be seen in the Ganglia Physical View for Persephone: http://ganglia.bx.psu.edu/?p=2&c=persephone

There is one global all.q, with access restrictions that limit jobs to the appropriate nodes. It is absolutely essential that you specify your job's exact requirements (memory, number of CPUs, etc.) when submitting with qsub; a sketch of such a submission follows below. c1 and c2 are in a special 454pipeline.q, used for off-rig analysis of 454 Titanium runs, to which only the FLX sequencer users (schuster-flx[1234]) can submit.
 
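As a minimal sketch of such a submission (the script name, parallel environment name, and resource values are placeholders, and it is assumed that h_vmem is the requestable memory complex here; check qconf -sc and qconf -spl for what this installation actually defines):

 #!/bin/bash
 # example_job.sh -- minimal SGE batch job sketch; names and values below are placeholders
 #$ -N example_job           # job name
 #$ -cwd                     # run in the directory the job was submitted from
 #$ -j y                     # merge stderr into stdout
 #$ -l h_vmem=4G             # memory request (assumes h_vmem is requestable/consumable here)
 #$ -pe threads 8            # request 8 slots; "threads" is a placeholder PE name (see qconf -spl)
 ./my_analysis --input /scratch/$USER/data   # placeholder command

Submit it with ''qsub example_job.sh''.
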
Users can currently ssh directly into the nodes. This is highly discouraged, as interactive jobs not under SGE's control can adversely affect other jobs. Direct ssh logins will be disabled ''soon'', so it is recommended that you become familiar with qsub (or with the interactive alternatives sketched below).
 
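If you need an interactive session on a compute node, request it through SGE rather than bypassing the scheduler with ssh. A hedged sketch, assuming interactive scheduling is enabled in this installation (the resource value is a placeholder):

 qrsh -l h_vmem=4G    # interactive shell on a node chosen by the scheduler
 qlogin               # interactive login session, if qlogin is configured here
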
== s1 (s1.persephone.bx.psu.edu and s1.linne.bx.psu.edu) ==
s1 is dual-homed: it is connected to both the persephone and linne network switches. s1 can be used by anyone to run arbitrary jobs. It is especially useful for jobs that need a lot of local disk space.
* [http://h20000.www2.hp.com/bizsupport/TechSupport/Home.jsp?lang=en&cc=us&prodTypeId=15351&prodSeriesId=1121516&submit.y=0&submit.x=0&lang=en&cc=us HP ProLiant DL380 (G5)]
* Two quad-core 3.0 GHz Xeon X5450 processors (a total of 8 cores)
* 16GB RAM
* 3TB /scratch
* CentOS 5.5
 
== c14 ==
c14 is part of the Chestnut project. Access to c14 is restricted to a small set of users. Jobs are run on c14 as usual using qsub; see the sketch after this list.
* PowerEdge R815
* 4x 12-core AMD Opteron 6172 (2.1 GHz), 512KB L2
* 256GB memory (64GB local to each socket)
* ~2.5TB /scratch
* CentOS 5.5
 
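For users with access, a hedged sketch of pinning a large-memory job to c14; the hostname value, memory complex, and script name are assumptions (qhost lists host names as SGE knows them, and a dedicated queue or access list may make the explicit host request unnecessary):

 qsub -cwd -l hostname=c14 -l h_vmem=32G big_memory_job.sh   # big_memory_job.sh is a placeholder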
 
== linne cluster ==
 
The linne cluster runs [http://docs.sun.com/app/docs/doc/817-6117 SGE 6.0u6] and can be used by anyone to submit arbitrary jobs. In addition, the linne cluster runs BioTeam iNquiry, which can be accessed [http://linne.bx.psu.edu/bipod/ here].
* head node: linne.bx.psu.edu
** [http://www.apple.com/support/xserve/ Apple Xserve G5]
** Two 2.3 GHz PowerPC G5 processors
** 6GB RAM
** 75GB /scratch
** Mac OS X 10.4
* 31 compute nodes: node001.linne.bx.psu.edu --> node031.linne.bx.psu.edu
** [http://www.apple.com/support/xserve/ Apple Xserve G5]
** Two 2.3 GHz PowerPC G5 processors
** 2GB RAM
** 75GB /scratch
** Mac OS X 10.4
* 5 compute nodes: node036.linne.bx.psu.edu --> node040.linne.bx.psu.edu
** These nodes have been decommissioned and re-tasked as nodes in persephone. They were horribly outdated and had been disabled in the SGE queue on linne for over a year.