SLab:Servers
Schuster Lab Server Information
The Schuster Lab has three main file servers: linne, s2, and s3.
linne
- Apple Xserve G5
- Two 2.3 GHz PowerPC G5 processors
- 6GB RAM
- Mac OS X 10.4
- Xserve RAID (http://www.apple.com/support/xserveraid/)
  - xraid_HD1: 2.7TB
    - Seven 500GB FC disks in a RAID 5 set
  - xraid_HD2: 2.7TB
    - Seven 500GB FC disks in a RAID 5 set
s2
- Dell PowerEdge 1950
- Two quad-core 2.0 GHz Xeon E5405 processors (8 cores total)
- 8GB RAM
- Solaris 10
- PERC 6/E Adapter, 512MB
- Three Dell PowerVault MD1000 Storage Enclosures
  - md1k-6: 12TB
    - Fourteen 1TB SATA disks in a RAID 5 set and one 1TB SATA disk hot spare
    - zpool md1k-6
  - md1k-5: 12TB
    - Fourteen 1TB SATA disks in a RAID 5 set and one 1TB SATA disk hot spare
    - zpool md1k-5
  - md1k-4: 12TB
    - Fourteen 1TB SATA disks in a RAID 5 set and one 1TB SATA disk hot spare
    - zpool md1k-4
s2 is a file server connected to both the persephone (s2.persephone.bx.psu.edu) and linne (s2.linne.bx.psu.edu) network switches. Each Dell PowerVault MD1000 uses 14 drives in a RAID 5 set, with 1 drive as a hot spare, to present a single virtual drive to the operating system. Each virtual drive is used to create a ZFS zpool, and each zpool contains two ZFS file systems (see the sketch after the list below):
- md1k-6
  - md1k-6/archive
    - used for archiving sequencing runs
    - exported read-only
    - gzip compression
  - md1k-6/data
    - used for arbitrary data
    - exported read-write
    - lzjb compression
- md1k-5
  - md1k-5/archive
  - md1k-5/data
- md1k-4
  - md1k-4/archive
  - md1k-4/data
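A minimal sketch of how one of these pools and its two file systems could be created on Solaris 10. The device name c2t0d0 is a hypothetical stand-in for the single virtual drive the PERC adapter presents; the actual device name will differ:

 # Create the zpool on the virtual drive presented by the RAID 5 set
 # (c2t0d0 is an assumed device name)
 zpool create md1k-6 c2t0d0

 # Archive file system: gzip compression, shared read-only over NFS
 zfs create -o compression=gzip -o sharenfs=ro md1k-6/archive

 # Data file system: lzjb compression, shared read-write
 zfs create -o compression=lzjb -o sharenfs=rw md1k-6/data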
Each zpool is auto-scrubbed once a week. A cron job runs the /root/monitor/check_server_status shell script once an hour and emails any errors to the email addresses listed in the /root/monitor/error_log_recipients.txt file.
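The actual contents of check_server_status are not reproduced here. The following is a hedged sketch of how the hourly check and weekly scrubs might be wired up in root's crontab, with a minimal version of the check script; the schedule, script body, and mail subject are all assumptions:

 # root crontab (sketch; actual times are assumptions)
 0 * * * * /root/monitor/check_server_status
 0 3 * * 0 /usr/sbin/zpool scrub md1k-6
 15 3 * * 0 /usr/sbin/zpool scrub md1k-5
 30 3 * * 0 /usr/sbin/zpool scrub md1k-4

 #!/bin/sh
 # Minimal sketch of /root/monitor/check_server_status: mail the output
 # of "zpool status -x" to each recipient if any pool is unhealthy.
 # The real script may check more than ZFS pool health.
 STATUS=`/usr/sbin/zpool status -x`
 if [ "$STATUS" != "all pools are healthy" ]; then
     for addr in `cat /root/monitor/error_log_recipients.txt`; do
         echo "$STATUS" | mailx -s "`hostname`: zpool error" "$addr"
     done
 fi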
s3
- Dell PowerEdge 1950
- Two quad-core 2.0 GHz Xeon E5405 processors (8 cores total)
- 8GB RAM
- Solaris 10
- PERC 5/E Adapter, 256MB
- Three Dell PowerVault MD1000 Storage Enclosures
  - md1k-3: 8.7TB
    - Fourteen 750GB SATA disks in a RAID 5 set and one 750GB SATA disk hot spare
    - zpool md1k-3
  - md1k-2: 8.7TB
    - Fourteen 750GB SATA disks in a RAID 5 set and one 750GB SATA disk hot spare
    - zpool md1k-2
  - md1k-1: 8.7TB
    - Fourteen 750GB SATA disks in a RAID 5 set and one 750GB SATA disk hot spare
    - zpool md1k-1
s3 is a file server connected to both the persephone (s3.persephone.bx.psu.edu) and linne (s3.linne.bx.psu.edu) network switches. Each Dell PowerVault MD1000 uses 14 drives in a RAID 5 set, with 1 drive as a hot spare, to present a single virtual drive to the operating system. Each virtual drive is used to create a ZFS zpool, and each zpool contains two ZFS file systems:
- md1k-3
  - md1k-3/archive
    - used for archiving sequencing runs
    - exported read-only
    - gzip compression
  - md1k-3/data
    - used for arbitrary data
    - exported read-write
    - lzjb compression
- md1k-2
  - md1k-2/archive
  - md1k-2/data
- md1k-1
  - md1k-1/archive
  - md1k-1/data
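To confirm that a pool's file systems carry the intended compression and export settings, the standard ZFS properties can be listed recursively; a quick sketch:

 # Show compression and NFS sharing for every file system in the pool
 zfs get -r compression,sharenfs md1k-3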
Each zpool is auto-scrubbed once a week. A cron job runs the /root/monitor/check_server_status shell script once an hour and emails any errors to the email addresses listed in the /root/monitor/error_log_recipients.txt file.
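Scrub progress and results can be checked per pool with zpool status; the "scrub:" line in its output reports when the last scrub completed and whether any errors were found. For example:

 zpool status md1k-3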