The Center for Imaging of Neurodegenerative Diseases
The CIND has a Bruker MedSpec 4.0 Tesla whole-body MRI system. In addition, the Department of Veterans Affairs has ordered a state-of-the-art Siemens Magnetom 3.0 Tesla Skyra and a Magnetom 7.0 Tesla MRI system exclusively for CIND research.
High Field MRI Systems:
The Magnetom 3 Tesla Skyra MRI system is the newest generation of high-field, high-density coil systems from Siemens. The Skyra is equipped with an actively-shielded 70 cm bore magnet; an actively-shielded, water-cooled gradient system providing 45 mT/m gradients with a slew rate of 200 T/m/s; a digital RF transmit system; a 64-channel receive system that includes graphics processing units (GPUs) for high-speed online computation; and a host computer. The system also has a multi-element body matrix RF coil and a 32-element receive head coil, as well as other specialized RF coils for peripheral angiography and shoulder and knee imaging. In addition to the hardware, the Skyra system includes packages for diffusion tensor imaging, arterial spin labeling perfusion MRI, susceptibility-weighted imaging, BOLD fMRI, single-voxel spectroscopy, 2D and 3D chemical shift imaging, and multinuclear spectroscopy.
CIND is also collaborating with the MR development and application center at the University of Freiburg, Germany, to implement an optical motion-tracking and correction package.
In late 2003, a Bruker MedSpec 4.0 Tesla whole-body MRI system was delivered to the CIND and installed in a separate but adjacent 1000-square-foot building containing a waiting room and bathroom in addition to the MRI instrumentation. This state-of-the-art MRI system is the result of a collaborative effort between Bruker and Siemens Medical Systems, and includes an actively-shielded, whole-body 4.0 Tesla magnet and 40 mT/m Siemens Symphony gradients with a slew rate of 200 T/m/s. The system is equipped with an eight-channel receiver and a number of transmit/receive head coils, including birdcage, TEM, and array designs. The broadband system uses a 7 kW RF amplifier at the proton frequency and utilizes Siemens interface software for data collection, manipulation, and display.
The Magnetom 7 Tesla ultra-high-field MRI system is equipped with a state-of-the-art, actively-shielded whole-body magnet with a 60 cm bore and an actively-shielded, water-cooled gradient system providing up to 70 mT/m per axis with a maximum slew rate of 200 T/m/s. The digital RF components of the system are based on Skyra technology and include 8 independently controllable RF transmit channels and 64 RF receive channels. The system also includes a multi-element body coil, a multichannel-transmit/32-channel-receive head coil, and an RF power amplifier providing up to 8 kW. In addition, the 7T system includes a multinuclear MR package, as well as neuroimaging software packages and the IDEA/ICE research development tools for sequence development.
The CIND manages 65 terabytes of acquired and processed data storage on 3 Windows Server 2003 Enterprise servers. Most of the data storage is maintained on compressed file systems to conserve space; the estimated volume of the uncompressed data is between 200 and 400 terabytes. Another 30 terabytes of acquired and processed data storage are hosted on a state-of-the-art Hewlett-Packard PolyServe clustered file system; this technology distributes file-server traffic evenly across all nodes in the cluster, eliminating bottlenecks that can occur during processing.
To process and manage the data, the CIND has a network of 18 computers that use the Sun Grid Engine (SGE) to distribute processing jobs among them. Each system has two quad-core CPUs and 16 gigabytes of main memory and can run 8 jobs simultaneously, providing a total capacity of 144 simultaneous computing jobs. To support visualization and minor processing needs, the CIND has three computers running CentOS 5 available for shared use by lab members.
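As an illustrative sketch of how such jobs are distributed (not CIND's actual scripts; the script name, job name, resource request, and paths are hypothetical), an SGE array job submitted with `qsub -t 1-144 process_volume.sh` would fan 144 tasks out across the 18 nodes, with SGE scheduling up to 8 tasks per node:

```shell
#!/bin/bash
# Hypothetical SGE batch script: process a set of image volumes as an array job.
# Submit with: qsub -t 1-144 process_volume.sh
#$ -N mri_preproc        # job name (hypothetical)
#$ -cwd                  # run each task from the submission directory
#$ -l h_vmem=2G          # per-task memory request; 8 tasks fit on a 16 GB node
#$ -o logs/ -e logs/     # stdout/stderr log locations (hypothetical paths)

# SGE sets SGE_TASK_ID to this task's index in the 1-144 range,
# which selects the volume this task is responsible for.
echo "Processing volume ${SGE_TASK_ID}"
```

The `#$` lines are directives read by `qsub`, not executed by the shell, so per-job resource requests live alongside the processing commands in one script.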
For parallel processing applications, the CIND has acquired, under the Research Resource grant, a Beowulf compute cluster that currently consists of 28 compute nodes directed by a head node. Each compute node has two quad-core CPUs and 64 gigabytes of main memory. The cluster also includes two Nvidia graphics processing units (GPUs), which give application developers the option of using their image processing power to further accelerate image analysis and reconstruction.
To assist in processing very large datasets, we have a Dell PowerEdge R910 system. This system runs Red Hat Enterprise Linux 5 with 1 TB of RAM, four processors providing a total of 32 processing cores, and 13 TB of disk storage for holding data during processing.
The principal computing servers and storage systems are housed in a computer room within the CIND building. This room has two dedicated air-conditioning units and a 40 kVA power distribution unit that provides battery backup in case of a power failure. The room also has an infrastructure manager that can safely shut down all systems should the batteries run low.