
Schuster Lab Server Information

The Schuster Lab has three main file servers: linne, s2, and s3.

linne

  • Xserve RAID
    • xraid_HD1: 2.7TB
      • Seven 500GB PATA disks in a RAID 5 set
        • Hitachi Deskstar 7K500 (http://www.hitachigst.com/tech/techlib.nsf/products/Deskstar_7K500)
    • xraid_HD2: 2.7TB
      • Seven 500GB PATA disks in a RAID 5 set
        • Hitachi Deskstar 7K500 (http://www.hitachigst.com/tech/techlib.nsf/products/Deskstar_7K500)

linne is a file server as well as the head node for the linne cluster (31 nodes running Mac OS X 10.4, 5 nodes running Red Hat Enterprise Linux 4.5). linne maintains its own user accounts, separate from the standard bx user accounts. To make it easier to share data, the uids of linne accounts have, for the most part, been synchronized so that they are the same as the uids of the corresponding bx accounts.
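
A quick way to check that a particular account lines up is to compare its uid on linne and on a bx machine; the username jdoe and the bx hostname below are placeholders, not real accounts or hosts:

# compare the uid of a (hypothetical) user on linne and on a bx host
ssh linne.bx.psu.edu id -u jdoe
ssh <bx-host> id -u jdoe
# if the two numbers differ, files shared between the systems will appear
# to be owned by the wrong user and the account needs to be resynchronized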

Sun Grid Engine

linne uses the Sun Grid Engine 6.0u6 batch-queuing system (http://docs.sun.com/app/docs/coll/1017.3/) to manage jobs submitted to the linne cluster.
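
A minimal job script for the cluster might look like the sketch below; the script name, job name, output file, and program path are placeholders, and the directives shown are just common SGE options rather than a record of how jobs are actually configured on linne:

#!/bin/sh
# example_job.sh -- illustrative SGE job script (all names are placeholders)
#$ -N example_job        # job name shown by qstat
#$ -S /bin/sh            # interpret the job script with /bin/sh
#$ -cwd                  # run from the directory where qsub was invoked
#$ -j y                  # merge stderr into stdout
#$ -o example_job.out    # write the combined output here
/path/to/analysis_program input.dat

The script is submitted with "qsub example_job.sh" and its state can be watched with "qstat".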

BioTeam iNquiry

linne also runs BioTeam's iNquiry Bioinformatics Portal, which can be accessed at http://linne.bx.psu.edu/bipod/.

s2

  • Three Dell PowerVault MD1000 Storage Enclosures
    • md1k-6: 12TB
      • Fourteen 1TB SATA disks in a RAID 5 set and one 1TB SATA disk hot spare
        • Hitachi Ultrastar A7K1000 (http://www.hitachigst.com/tech/techlib.nsf/products/Ultrastar_A7K1000)
      • zpool md1k-6
    • md1k-5: 12TB
      • Fourteen 1TB SATA disks in a RAID 5 set and one 1TB SATA disk hot spare
        • Hitachi Ultrastar A7K1000 (http://www.hitachigst.com/tech/techlib.nsf/products/Ultrastar_A7K1000)
      • zpool md1k-5
    • md1k-4: 12TB
      • Fourteen 1TB SATA disks in a RAID 5 set and one 1TB SATA disk hot spare
        • Hitachi Ultrastar A7K1000 (http://www.hitachigst.com/tech/techlib.nsf/products/Ultrastar_A7K1000)
      • zpool md1k-4

s2 is a file server that is connected to both the persephone (s2.persephone.bx.psu.edu) and linne (s2.linne.bx.psu.edu) network switches. Each Dell PowerVault MD1000 uses 14 drives in a RAID 5 set with 1 drive as a hot spare to present a single virtual drive to the operating system. Each virtual drive is used to create a ZFS zpool. Each zpool contains two ZFS file systems.
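
As a rough illustration of that layout, the commands below show how one such pool and its two file systems could be created on Solaris 10; the device name c2t0d0 is a placeholder for whichever virtual drive the RAID controller presents, and the compression values follow the per-filesystem breakdown further down this section:

# create the pool on the single virtual drive presented by the controller
# (c2t0d0 is a placeholder device name; the mountpoint matches the
#  /zfs/md1k-6/... paths used elsewhere on this page)
zpool create -m /zfs/md1k-6 md1k-6 c2t0d0

# each pool holds an archive file system and a data file system
zfs create md1k-6/archive
zfs create md1k-6/data

# compression as described in the file-system breakdown below
zfs set compression=gzip md1k-6/archive
zfs set compression=lzjb md1k-6/data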

s2 runs samba and makes three locations available to illumina-ga:

\\s2.persephone\illumina-4 is on s2:/zfs/md1k-4/data/illumina
\\s2.persephone\illumina-5 is on s2:/zfs/md1k-5/data/illumina
\\s2.persephone\illumina-6 is on s2:/zfs/md1k-6/data/illumina
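
The smb.conf stanza behind one of these shares would look roughly like the sketch below; the actual share options and access restrictions on s2 are not recorded here, so the settings shown are illustrative only:

[illumina-6]
   comment   = Illumina run data on md1k-6
   path      = /zfs/md1k-6/data/illumina
   read only = no

From a machine with the samba client tools installed, "smbclient -L s2.persephone" (with an appropriate -U user) lists the shares that s2 offers.
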
  • md1k-6
    • md1k-6/archive
      • used for archiving sequencing runs
      • exported read-only
      • gzip compression
    • md1k-6/data
      • used for arbitrary data
      • exported read-write
      • lzjb compression
  • md1k-5
    • md1k-5/archive
    • md1k-5/data
  • md1k-4
    • md1k-4/archive
    • md1k-4/data

Each zpool is auto-scrubbed once a week. A cron job runs the /root/monitor/check_server_status shell script once an hour and emails any errors to the email addresses listed in the /root/monitor/error_log_recipients.txt file.
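
The corresponding root crontab entries would look something like the following; the day and hour values are placeholders, since the actual schedule is not recorded here:

# illustrative root crontab on s2 (times are placeholders)
# weekly scrub of each pool
0 3 * * 0 /usr/sbin/zpool scrub md1k-4
0 3 * * 0 /usr/sbin/zpool scrub md1k-5
0 3 * * 0 /usr/sbin/zpool scrub md1k-6
# hourly health check that mails any problems
0 * * * * /root/monitor/check_server_status

The contents of check_server_status are likewise not reproduced here, but a script in that role typically amounts to something like:

#!/bin/sh
# sketch of an hourly health check, not the actual check_server_status
RECIPIENTS=`cat /root/monitor/error_log_recipients.txt`
STATUS=`/usr/sbin/zpool status -x`
if [ "$STATUS" != "all pools are healthy" ]; then
    echo "$STATUS" | mailx -s "zpool problem on `hostname`" $RECIPIENTS
fi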

s3

  • Dell PowerEdge 1950
  • Two quad-core 2.0 GHz Xeon E5405 processors (8 cores total)
  • 8GB RAM
  • Solaris 10
  • PERC 5/E Adapter, 256MB
  • Three Dell PowerVault MD1000 Storage Enclosures
    • md1k-3: 8.7TB
      • Fourteen 750GB SATA disks in a RAID 5 set and one 750GB SATA disk hot spare
        • Seagate Barracuda ES (http://www.seagate.com/ww/v/index.jsp?locale=en-US&name=barracuda-support&vgnextoid=dadc63a0d92b5210VgnVCM1000001a48090aRCRD)
      • zpool md1k-3
    • md1k-2: 8.7TB
      • Fourteen 750GB SATA disks in a RAID 5 set and one 750GB SATA disk hot spare
        • Seagate Barracuda ES (http://www.seagate.com/ww/v/index.jsp?locale=en-US&name=barracuda-support&vgnextoid=dadc63a0d92b5210VgnVCM1000001a48090aRCRD)
      • zpool md1k-2
    • md1k-1: 8.7TB
      • Fourteen 750GB SATA disks in a RAID 5 set and one 750GB SATA disk hot spare
        • Seagate Barracuda ES (http://www.seagate.com/ww/v/index.jsp?locale=en-US&name=barracuda-support&vgnextoid=dadc63a0d92b5210VgnVCM1000001a48090aRCRD)
      • zpool md1k-1


s3 is a file server that is connected to both the persephone (s3.persephone.bx.psu.edu) and linne (s3.linne.bx.psu.edu) network switches. Each Dell PowerVault MD1000 uses 14 drives in a RAID 5 set with 1 drive as a hot spare to present a single virtual drive to the operating system. Each virtual drive is used to create a ZFS zpool. Each zpool contains two ZFS file systems.

  • md1k-3
    • md1k-3/archive
      • used for archiving sequencing runs
      • exported read-only
      • gzip compression
    • md1k-3/data
      • used for arbitrary data
      • exported read-write
      • lzjb compression
  • md1k-2
    • md1k-2/archive
    • md1k-2/data
  • md1k-1
    • md1k-1/archive
    • md1k-1/data

Each zpool is auto-scrubbed once a week. A cron job runs the /root/monitor/check_server_status shell script once an hour and emails any errors to the email addresses listed in the /root/monitor/error_log_recipients.txt file.