SLab:Todo
in progress
- md1k-2 disk problems - currently waiting for the problems to recur, both to get fresh RAID controller log entries and to verify that switching to the spare EMM (array controller) didn't fix them
queued
- nagios monitoring (http://kaylee.bx.psu.edu/nagios [login as guest/guest])
- what do we want to monitor?
- who gets notified? everyone all at once, or use elapsed-time-based escalations?
- automatic snapshots for the ZFS datasets on s2 and s3 (see http://blogs.sun.com/timf/resource/README.zfs-auto-snapshot.txt)
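The README above describes an SMF-based auto-snapshot service for Solaris; a minimal cron-driven sketch of the same idea, with placeholder dataset names and a retention count we'd want to tune, could look like:

    #!/usr/bin/env python
    # Take a timestamped snapshot of each dataset and prune old ones.
    # Dataset names are placeholders for whatever lives on s2 and s3.
    import subprocess
    import time

    DATASETS = ["pool0/sequencing", "pool0/staging"]  # hypothetical
    KEEP = 14  # snapshots to retain per dataset

    def zfs(*args):
        return subprocess.check_output(("zfs",) + args).decode()

    for ds in DATASETS:
        zfs("snapshot", ds + "@" + time.strftime("auto-%Y%m%d-%H%M"))
        # List all snapshots oldest-first and prune this dataset's.
        names = zfs("list", "-H", "-t", "snapshot", "-o", "name",
                    "-s", "creation").splitlines()
        autos = [n for n in names if n.startswith(ds + "@auto-")]
        for old in autos[:-KEEP]:
            zfs("destroy", old)

Run hourly or daily from cron on s2 and s3.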
- more scripts:
- migrate sequencing runs from temp to staging (currently /afs/bx.psu.edu/depot/data/schuster_lab/sequencing/support/software/archive/move_*_temp_to_staging); see the sketch after this list
- perhaps notify by email automatically when there are finished runs ready to be moved?
- notify by email when this is done, so any interested parties see that it has happened, and include paths to the new runs
- this should call a script to update symlinks and release the data.schuster_lab volume
- script to better handle submitting Illumina jobs to the cluster, with email notifications
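A rough sketch of the migrate-and-notify flow from the first item above. The vos release call is the standard AFS volume-release command; the paths, the run-completion marker, and the addresses are all placeholders, and the same smtplib pattern would cover the Illumina job notifications too:

    #!/usr/bin/env python
    # Move finished runs from temp to staging, release the AFS volume,
    # and mail the new paths to interested parties.  All paths and
    # addresses are placeholders.
    import os
    import shutil
    import smtplib
    import subprocess
    from email.mime.text import MIMEText

    TEMP = "/afs/bx.psu.edu/depot/data/schuster_lab/sequencing/temp"
    STAGING = "/afs/bx.psu.edu/depot/data/schuster_lab/sequencing/staging"
    RECIPIENTS = ["slab-admins@example.org"]  # placeholder

    moved = []
    for run in sorted(os.listdir(TEMP)):
        src = os.path.join(TEMP, run)
        # "Run.completed" stands in for however the instrument marks a
        # finished run; updating symlinks is omitted here.
        if not os.path.exists(os.path.join(src, "Run.completed")):
            continue
        shutil.move(src, os.path.join(STAGING, run))
        moved.append(os.path.join(STAGING, run))

    if moved:
        # Push the change out to the read-only replicas.
        subprocess.check_call(["vos", "release", "data.schuster_lab"])
        msg = MIMEText("New runs in staging:\n\n" + "\n".join(moved))
        msg["Subject"] = "[slab] runs moved to staging"
        msg["From"] = "slab-scripts@example.org"
        msg["To"] = ", ".join(RECIPIENTS)
        server = smtplib.SMTP("localhost")
        server.sendmail(msg["From"], RECIPIENTS, msg.as_string())
        server.quit()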
- Migrate linne to bx network.
- Migrate Schuster Lab machines to bx network.
- Or at least, install the AFS client.
- Automate the archiving of sequencing run directories.
- Maybe after two weeks in staging they're moved into the archive?
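If the two-week rule sticks, the archiver could be as simple as this sketch, where the staging and archive paths are placeholders and a directory's mtime stands in for the time it landed in staging:

    #!/usr/bin/env python
    # Move staging runs untouched for two weeks into the archive;
    # meant to run daily from cron.  Paths are placeholders.
    import os
    import shutil
    import time

    STAGING = "/afs/bx.psu.edu/depot/data/schuster_lab/sequencing/staging"
    ARCHIVE = "/afs/bx.psu.edu/depot/data/schuster_lab/sequencing/archive"
    TWO_WEEKS = 14 * 24 * 3600

    now = time.time()
    for run in sorted(os.listdir(STAGING)):
        src = os.path.join(STAGING, run)
        if os.path.isdir(src) and now - os.path.getmtime(src) >= TWO_WEEKS:
            shutil.move(src, os.path.join(ARCHIVE, run))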
- Combine linne and persephone clusters - dependent on finishing linne-to-bx migration
- TSM backups of s2 and s3?
- Replace BioTeam iNquiry
- Use Galaxy instead?
- Implement a centralized database of sequencing run information.
- Basically a small LIMS.
- Maybe integrate with Galaxy.
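Even a single sqlite table would be a usable starting point; a sketch, with column names that are only guesses at what we'd track:

    #!/usr/bin/env python
    # Skeleton for a tiny sequencing-run database.  Column names and
    # the example row are guesses, not an agreed schema.
    import sqlite3

    conn = sqlite3.connect("runs.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS runs (
            run_id    TEXT PRIMARY KEY,  -- instrument run folder name
            platform  TEXT,              -- e.g. illumina, 454
            started   TEXT,              -- ISO dates as text
            finished  TEXT,
            location  TEXT,              -- temp / staging / archive
            submitter TEXT,
            notes     TEXT
        )
    """)
    # Hypothetical example row:
    conn.execute("INSERT OR REPLACE INTO runs VALUES (?,?,?,?,?,?,?)",
                 ("080115_HWI-EAS123_0001", "illumina", "2008-01-15",
                  None, "staging", "someone", "example entry"))
    conn.commit()
    conn.close()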
- After problems with md1k-2 are fixed, turn on automated scrubbing.
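Assuming the md1k arrays back ZFS pools (the pool names below are placeholders), automated scrubbing could just be cron plus zpool scrub:

    #!/usr/bin/env python
    # Kick off a scrub of each pool; run weekly from cron.
    # Pool names are placeholders.
    import subprocess

    for pool in ["pool0", "pool1"]:
        subprocess.check_call(["zpool", "scrub", pool])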
- clean up old files in /afs/bx.psu.edu/depot/data/schuster_lab/old_stuff_to_cleanup