Schuster Lab Server Information
The Schuster Lab has three main file servers: linne, s2, and s3.
linne
- Apple Xserve G5
  - Two 2.3 GHz PowerPC G5 processors
  - 6GB RAM
  - MacOS X 10.4
- Xserve RAID
  - xraid_HD1: 2.7TB
    - Seven 500GB FC disks in a RAID 5 set
  - xraid_HD2: 2.7TB
    - Seven 500GB FC disks in a RAID 5 set
linne is a file server as well as the head node for the linne cluster (31 nodes running MacOS X 10.4, 5 nodes running Red Hat Enterprise Linux 4.5). linne maintains its own user accounts, separate from the standard bx user accounts. To make it easier to share data, the uids of linne accounts have, for the most part, been synchronized so that they match the uids of the corresponding bx accounts.
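Whether a particular account is synchronized can be checked by comparing its uid on the two systems. A minimal sketch, assuming ssh access to both hosts; the username jdoe and the bx login hostname are placeholders, not real accounts:

    # Print the uid of the same account on linne and on a bx host
    # (username "jdoe" and the bx hostname are placeholders).
    ssh jdoe@linne.bx.psu.edu 'id -u jdoe'
    ssh jdoe@bx.psu.edu 'id -u jdoe'
    # Matching numbers mean files created on one system carry the
    # expected ownership when accessed from the other.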
Sun Grid Engine
linne uses the Sun Grid Engine 6.0u6 batch-queuing system (documentation: http://docs.sun.com/app/docs/coll/1017.3/) to manage jobs submitted to the linne cluster.
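As an illustration, a job is handed to SGE with qsub. This is a minimal sketch; the script name, job name, and payload command are hypothetical, while the #$ lines are standard SGE directives:

    #!/bin/sh
    #$ -N example_job   # job name (placeholder)
    #$ -cwd             # run from the submission directory
    #$ -j y             # merge stderr into stdout
    echo "running on $(hostname)"

Saved as example_job.sh, it would be submitted with qsub example_job.sh and monitored with qstat.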
BioTeam iNquiry
linne also runs BioTeam's iNquiry Bioinformatics Portal, which can be accessed at http://linne.bx.psu.edu/bipod/.
s2
- Dell PowerEdge 1950
  - Two quad-core 2.0 GHz Xeon E5405 processors (8 cores total)
  - 8GB RAM
  - Solaris 10
  - PERC 6/E Adapter, 512MB
- Three Dell PowerVault MD1000 Storage Enclosures
  - md1k-6: 12TB
    - Fourteen 1TB SATA disks in a RAID 5 set and one 1TB SATA disk hot spare
    - zpool md1k-6
  - md1k-5: 12TB
    - Fourteen 1TB SATA disks in a RAID 5 set and one 1TB SATA disk hot spare
    - zpool md1k-5
  - md1k-4: 12TB
    - Fourteen 1TB SATA disks in a RAID 5 set and one 1TB SATA disk hot spare
    - zpool md1k-4
s2 is a file server that is connected to both the persephone (s2.persephone.bx.psu.edu) and linne (s2.linne.bx.psu.edu) network switches. Each Dell PowerVault MD1000 uses 14 drives in a RAID 5 set, with 1 drive as a hot spare, to present a single virtual drive to the operating system. Each virtual drive is used to create a ZFS zpool, and each zpool contains two ZFS file systems (a command sketch follows the list below).
- md1k-6
  - md1k-6/archive
    - used for archiving sequencing runs
    - exported read-only
    - gzip compression
  - md1k-6/data
    - used for arbitrary data
    - exported read-write
    - lzjb compression
- md1k-5
  - md1k-5/archive
  - md1k-5/data
- md1k-1
  - md1k-4/archive
  - md1k-4/data
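For reference, one pool's layout above could be reproduced with Solaris 10 commands along these lines. This is a sketch, not the commands actually used on s2: the device name c2t0d0 is a placeholder, and any host restrictions on the NFS exports are omitted:

    # Create a pool on the single virtual drive presented by the
    # MD1000's RAID 5 set (device name is a placeholder).
    zpool create md1k-6 c2t0d0
    # Archive file system: gzip compression, exported read-only.
    zfs create -o compression=gzip -o sharenfs=ro md1k-6/archive
    # Data file system: lzjb compression, exported read-write.
    zfs create -o compression=lzjb -o sharenfs=rw md1k-6/data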
Each zpool is auto-scrubbed once a week. A cron job runs the /root/monitor/check_server_status shell script once an hour and emails any errors to the addresses listed in the /root/monitor/error_log_recipients.txt file.
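The scrub and monitoring schedule would live in root's crontab; a sketch under the assumption that scrubs run early Sunday morning (the actual times are not recorded here):

    # Weekly zpool scrubs (day and time are assumptions).
    0 3 * * 0 /usr/sbin/zpool scrub md1k-6
    0 3 * * 0 /usr/sbin/zpool scrub md1k-5
    0 3 * * 0 /usr/sbin/zpool scrub md1k-4
    # Hourly health check; mails errors to error_log_recipients.txt.
    0 * * * * /root/monitor/check_server_status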
s3
- Dell PowerEdge 1950
  - Two quad-core 2.0 GHz Xeon E5405 processors (8 cores total)
  - 8GB RAM
  - Solaris 10
  - PERC 5/E Adapter, 256MB
- Three Dell PowerVault MD1000 Storage Enclosures
  - md1k-3: 8.7TB
    - Fourteen 750GB SATA disks in a RAID 5 set and one 750GB SATA disk hot spare
    - zpool md1k-3
  - md1k-2: 8.7TB
    - Fourteen 750GB SATA disks in a RAID 5 set and one 750GB SATA disk hot spare
    - zpool md1k-2
  - md1k-1: 8.7TB
    - Fourteen 750GB SATA disks in a RAID 5 set and one 750GB SATA disk hot spare
    - zpool md1k-1
s3 is a file server that is connected to both the persephone (s3.persephone.bx.psu.edu) and linne (s3.linne.bx.psu.edu) network switches. Each Dell PowerVault MD1000 uses 14 drives in a RAID 5 set, with 1 drive as a hot spare, to present a single virtual drive to the operating system. Each virtual drive is used to create a ZFS zpool, and each zpool contains two ZFS file systems, following the same layout as s2:
- md1k-3
  - md1k-3/archive
    - used for archiving sequencing runs
    - exported read-only
    - gzip compression
  - md1k-3/data
    - used for arbitrary data
    - exported read-write
    - lzjb compression
- md1k-2
  - md1k-2/archive
  - md1k-2/data
- md1k-1
  - md1k-1/archive
  - md1k-1/data
Each zpool is auto-scrubbed once a week. A cron job runs the /root/monitor/check_server_status shell script once an hour and emails any errors to the addresses listed in the /root/monitor/error_log_recipients.txt file.
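To verify that the weekly scrubs are completing cleanly, pool health can also be checked by hand. These are standard ZFS commands, not part of the monitoring script described above:

    # Report only pools with problems; prints a healthy message
    # when every pool is fine.
    zpool status -x
    # Show detailed status for one pool, including the last scrub result.
    zpool status md1k-3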