Research Laboratories and Centers
Nuclear Materials Lab
131 Stanley Hall
Nuclear Science and Security Consortium (NSSC): Vujic
Reactor Design Group
Richmond Field Station, Bldg. 113
Nuclear Waste Computational Lab
Professor Ahn’s research group utilizes LLNL supercomputer facilities, as well as Sun workstations equipped with RELAP-5. The group also maintains extensive software for multi-dimensional compressible flow calculations. Macintosh computers with video-acquisition cards are used for image processing, animation of numerical computation results, and graphics output. Six Pentium II/III-based PCs, a Sparc 20 workstation, two Macintosh computers, a laser printer, and a scanner form the intranet for the Nuclear Waste Research Lab, which supports analyses and computer-code development including (1) groundwater flow models in heterogeneous fracture networks in geologic formations, (2) radionuclide transport models through the engineered barriers of a geologic repository and through the hosting geologic formations, (3) integrated models for repository performance assessment using an object-oriented approach and parallel computing with PVM, and (4) mass flow models of radioactive materials in a nuclear fuel cycle.
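The transport modeling in area (2) can be illustrated with a minimal sketch: an explicit finite-difference solver for 1-D advection, dispersion, and radioactive decay. All parameter values below are hypothetical and purely illustrative; they are not taken from the group's codes.

```python
import numpy as np

# Hypothetical parameters (illustrative only, not from the lab's models)
v = 1.0e-6        # groundwater velocity [m/s]
D = 1.0e-7        # dispersion coefficient [m^2/s]
lam = 7.6e-10     # decay constant [1/s] (~29 y half-life, e.g. Sr-90)
L, nx = 10.0, 101 # domain length [m], grid points
dx = L / (nx - 1)
# explicit stability limit for combined upwind advection + diffusion
dt = 0.8 * dx**2 / (2 * D + v * dx)

c = np.zeros(nx)
c[0] = 1.0        # fixed unit concentration at the inflow boundary

for _ in range(20000):
    # upwind advection + central dispersion + first-order decay
    adv = -v * (c[1:-1] - c[:-2]) / dx
    dsp = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c[1:-1] += dt * (adv + dsp - lam * c[1:-1])
    c[0] = 1.0
    c[-1] = c[-2]  # zero-gradient outflow boundary

print(f"steady concentration 1 m downstream: {c[10]:.4f}")
```

The production codes mentioned above handle heterogeneous fracture networks in multiple dimensions; this sketch only shows the governing balance that such models discretize.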
Prof. Hosemann has acquired state-of-the-art materials science equipment, including but not limited to the following items:
Struers Rotopol Polishing machine for sample preparation
Buehler high-speed cut-off saw and Allied slow-speed cut-off saw for sample preparation
Three Lindberg single-zone tube furnaces with vacuum systems (1200 °C), one three-zone Lindberg tube furnace (1200 °C), one Fisher Scientific box furnace (1100 °C), and one large high-temperature (1500 °C) tube furnace with vacuum system for heat treatments and sample synthesis
ZOZ CM01 powder mill for material synthesis
One Wilson microhardness tester
One high-temperature nanoindenter from Micro Materials with two load ranges (0–500 mN and 0–20 N), capable of heating sample and tip to 750 °C in an inert environment. This system is also equipped with a scanning-probe stage.
One Hysitron PI-85 indenter for in-situ micro-mechanical testing with a load range up to 30 mN
One high-temperature furnace for the MTS Criterion 43 tensile tester, which is operated jointly with Materials Science and Engineering
A jointly operated Quanta 3D FEG dual-beam FIB equipped with a GIS, Oxford EDS and EBSD detectors, and an S-TEM detector. A cryostage is available, as are Kleindiek lift-out and electrical probing devices.
Four liquid-metal autoclaves for corrosion testing of various materials in heavy liquid metals, equipped with oxygen probes and Keithley 181 nanovoltmeters
Four liquid-metal creep stages to study liquid-metal-enhanced creep
Two stainless-steel glove boxes for handling radioactive and other hazardous materials
Two acrylic glove boxes for handling oxygen-sensitive materials
Two high-power optical microscopes (maximum magnification 1000×)
Site licenses for handling and studying a wide range of radioactive materials are in place.
High Flux Neutron Generator
Students in the nuclear engineering department can utilize the High Flux Neutron Generator (HFNG). The HFNG is a dual-ion-source DD neutron generator located in a 62-inch-thick concrete enclosure in Etcheverry Hall on the UC-Berkeley campus. Collaborative research, including radioactive material transport between LBNL and UC-Berkeley, is facilitated by the designation of Etcheverry Hall as a location on the LBNL campus.
The HFNG uses a self-loading titanium-coated copper target to provide continuous operation. Accelerating voltages of 80–120 kV are used to drive beam currents of 1–50 mA onto the production target. The target is designed to allow the placement of samples in the center of the generator, less than 5 mm from the DD reaction surfaces. In addition, the HFNG can be positioned to allow the extraction of an external, monoenergetic 2.45 MeV neutron beam for use in prompt (n,n) and (n,n′γ) measurements. The HFNG currently runs at a total neutron output of 10^8 n/s into 4π solid angle, but outputs of up to several 10^9 to 10^10 n/s could be achieved if deemed necessary.
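For a rough sense of scale, the neutron flux at a sample placed 5 mm from the production surface can be estimated by treating the generator as an isotropic point source emitting into 4π. This is a simplification (the DD angular distribution is mildly anisotropic, and the source is extended), so the result is only an order-of-magnitude sketch:

```python
import math

S = 1.0e8   # total neutron output [n/s], from the text
r = 0.005   # sample distance [m] (5 mm)

# isotropic point-source approximation: flux = S / (4 * pi * r^2)
flux = S / (4 * math.pi * r**2)   # [n / (m^2 s)]
flux_cm2 = flux / 1.0e4           # [n / (cm^2 s)]
print(f"flux at 5 mm: {flux_cm2:.2e} n/cm^2/s")  # ~3e7 n/cm^2/s
```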
Figure 1 shows the HFNG facility layout, including the external beam line, with insets showing a photo of the HFNG with its ion sources energized (left) and a map of the neutron field from MCNP.
Equipment at the HFNG includes HPGe γ-ray, Si X-ray, and proton-recoil detectors. Researchers can also utilize the adjacent teaching laboratories on a by-arrangement basis. The HFNG is run and maintained by students in the UC-Berkeley department of nuclear engineering. For information about running at the HFNG, researchers should contact Lee Bernstein (email@example.com) or Karl Van Bibber (firstname.lastname@example.org).
We have received a tandem pelletron accelerator with associated hardware from the Department of Homeland Security. The accelerator will be used to perform experiments in nuclear resonance fluorescence (NRF) and positron annihilation spectroscopy in association with the research activities outlined above. The accelerator will be used to obtain NRF cross-section data in SNM nuclei of interest; we are particularly interested in completing the database for 235U and 239Pu, and the 3.5 MeV endpoint of this machine will be useful toward this end. The pelletron electron source has been run at full extraction voltage, and the beam can be brought to focus with a millimeter spot size. Another proposed use of the machine is a scheme to generate Doppler-compensated NRF photons using direct excitation of the target nucleus to be detected, such as 235U. Design calculations have been undertaken to explore the feasibility of electron excitation, in an accelerator target, of the same nuclear level sought in the cargo's nucleus of interest. This technique requires that the target nucleus be moving in order to balance the Mössbauer-type recoil effects for reabsorption in a similar nucleus.
A rotating-target scheme has been examined to explore this possibility; examples of rotating-target NRF in non-SNM applications exist in the literature. We plan to continue these studies and to use the pelletron for inelastic excitation measurements in 235U.
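The Doppler-compensation requirement above can be quantified. A free nucleus of rest energy Mc² recoils by E²/(2Mc²) when emitting a γ ray of energy E, and again on absorption, so a source nucleus must move toward the absorber at roughly v/c ≈ E/(Mc²) to restore resonance. The sketch below assumes a level energy of 1.7 MeV in 235U purely for illustration; it is not a measured level from this work.

```python
# Hypothetical example: speed needed to Doppler-compensate NRF recoil.
# The 1.7 MeV level energy is an assumption for illustration only.
E = 1.7           # gamma-ray energy [MeV]
A = 235           # mass number of the nucleus of interest (235U)
amu = 931.494     # atomic mass unit [MeV/c^2]
c = 2.998e8       # speed of light [m/s]

Mc2 = A * amu                 # nuclear rest energy [MeV]
recoil = E**2 / (2 * Mc2)     # single-recoil energy loss [MeV] (~6.6 eV)
v = (E / Mc2) * c             # speed compensating emission + absorption recoil [m/s]
print(f"recoil: {recoil*1e6:.1f} eV, compensation speed: {v:.0f} m/s")
```

Speeds of this order (a few km/s) are what motivate rotating-target geometries, where the rim velocity of a fast rotor supplies the required Doppler shift.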
Radiation Detection and Imaging Laboratory
We have established a new Radiation Detection and Imaging Laboratory housing projects focused on the development of advanced concepts for the detection of gamma rays and neutrons. These developments include the registration and correlation of visual and nuclear-emission information in order to improve sensitivity beyond currently available capabilities in the detection, localization, and tracking of radioactive sources. This laboratory is part of our new Bearing (Berkeley Advanced Radiation Imaging for Neutrons and Gamma Rays) effort and is closely related to efforts within the DoNuTS project, such as the detector, electronics, and data-analysis developments led by Prof. Siegrist of the Physics Department and the feature-extraction and data-mining work led by Prof. Hochbaum of the Industrial Engineering and Operations Research Department.
Experimental facilities include:
• Gamma-ray imaging laboratory
– Electron-tracking-based Compton imaging instruments: high-resolution scientific CCD, temperature-variable cryostat, double-sided strip HPGe detector, and fully digital data acquisition system (including ten 8-channel, 16-bit, 100 MHz waveform digitizers).
– A Class-10,000 clean room for development, assembly, and characterization of semiconductor devices, including a probe station, a clean device storage area, and a Class-100 work bench.
– High-energy gamma-ray imaging instruments for radiography experiments, consisting of a custom-made, collimated 8×8 (5×5×50 mm³) BGO array and data acquisition system.
• Machine-Vision Radiation Detection System Setup
– Large-Area Coded-Aperture Imager consisting of a 10×10 (10×10×10 cm³) NaI(Tl) array, a 2×2.5 m² coded aperture, and a fully digital acquisition system (including fifteen 8-channel, 16-bit, 100 MHz waveform digitizers).
– Large, two-dimensional translational scanner to simulate two-dimensional movements in the FOV of the imaging instrument.
– Several additional NaI(Tl) detectors.
– Sets of video cameras to capture and track objects.
• Compact Compton Imager
– High-resolution Compton imaging instrument consisting of two large-volume HPGe and two large-volume Si(Li) detectors implemented in a double-sided strip configuration, a fully digital acquisition system (including twenty 8-channel, 16-bit, 100 MHz waveform digitizers), and a video camera and photo camera, all integrated on a movable cart.
• Teaching Laboratory
– Several basic experimental stations including G-M and proportional counters; plastic, liquid, and NaI(Tl) scintillation detectors; and three HPGe detectors, as well as pulse-processing electronics and computers.
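The Compton imaging instruments listed above reconstruct an event cone from the Compton scattering formula, cos θ = 1 − m_e c² (1/E_scat − 1/E_0), using the energies deposited in a two-interaction event. A minimal sketch of this kinematic step (the event energies below are hypothetical):

```python
import math

ME_C2 = 511.0  # electron rest energy [keV]

def compton_cone_angle(e_deposited, e_total):
    """Opening angle [deg] of the Compton cone for a two-interaction event.

    e_deposited: energy [keV] left by the first (Compton) scatter
    e_total:     full photon energy [keV] (sum of both interactions)
    """
    e_scattered = e_total - e_deposited
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden event")
    return math.degrees(math.acos(cos_theta))

# Hypothetical event: 662 keV photon (Cs-137) depositing 200 keV in the first scatter
angle = compton_cone_angle(200.0, 662.0)
print(f"cone angle: {angle:.1f} deg")
```

The full instruments go well beyond this: strip detectors provide the 3-D interaction positions that orient each cone, and many cones are intersected to localize the source.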
The Thermal Hydraulics (TH) laboratories in 4118 and 1140 Etcheverry Hall are equipped with extensive experimental capabilities to perform scaled experiments to validate thermal hydraulics codes for advanced reactors, including fluoride-salt-cooled high-temperature reactors (FHRs). The TH labs rely extensively on LabView platforms for data acquisition and control, Solidworks for 3-D modeling of experimental systems, and extensive machine-shop and 3-D printing capabilities available in Etcheverry Hall. Simulations are commonly performed using RELAP5-3D and Flownex. Current experimental facilities in the TH lab include the Compact Integral Effects Test (pictured left), which replicates steady-state and transient heat transport in FHRs using passive safety systems; the X-ray Pebble Recirculation Experiment, which performs 3-D x-ray tomography of pebble beds; and a variety of smaller separate-effect tests studying convective heat transport and thermophotovoltaic power conversion.
Department Berkelium Cluster
The initial configuration of the Berkelium Cluster was developed by Professor Wirth’s research group, with assistance from NE system administrators, using hardware donated by LLNL.
The cluster configuration was partially asymmetric in order to allow a variety of different simulations to run optimally, while still providing computing resources for simulations that often can only be run at larger facilities such as NERSC. All of the machines had the same CPU model, with 8 cores per CPU and 4 CPUs per machine, for a packing density of 32 cores per computer. Redundant power supplies were used to prevent accidental damage in the case of a power-supply failure. One 320 GB hard disk drive is more than sufficient to store the operating system, associated applications, MPI libraries, codes, and their associated data; a separate server stores data from the simulations. Two of these computers have a total of 256 GB of memory installed in them, allowing large simulations and calculations that are often not possible on other machines due to memory requirements. Four have 128 GB of memory for intermediate-sized simulations. The other computers each have 64 GB of memory, which equates to 2 GB per core; these are provisioned for more traditional simulations that are often bound by CPU time rather than overall memory. Still, all of the machines are available on similar queues, and the higher-memory computers are used interchangeably for all simulations. The real advantage this cluster offers over our previous clusters is the use of InfiniBand network interconnects. Quad Data Rate (QDR) adapters allow up to 40 Gbit/s of data to be transferred between computers, so that simulations that must exchange large amounts of information can run quickly. These interconnects also offer latencies on the order of 1/100th the latency of traditional Ethernet networking; many modern codes are bound by MPI latencies, which scale with the interconnect latency. The Ethernet network serves as a management interface, keeping the InfiniBand network dedicated to simulations.
The cluster initially ran CentOS 5.5 for i386, a free rebuild of Red Hat Enterprise Linux 5.5 (RHEL 5.5). The PAE kernel was used, which allows a 32-bit system to address more than 4 GiB of physical memory. The file server runs CentOS 5.4 with the x86_64 architecture; the 64-bit kernel provides native support for large files, but this architecture is not available on the Livermore nodes.
In addition to the initial hardware donation by LLNL, Professor Wirth was able to secure a DOE NEUP Infrastructure Grant of $150,000. This funding was used to purchase an upgrade for the existing PowerWulf Compute Engine. The upgrade includes 576 CPU cores, 1,504 GB of system RAM, and a QDR InfiniBand interconnect. Each of the 12 computers has 48 cores, so communication between processors should be extremely fast. Minimum RAM is 2 GiB per core, with the smallest configuration being 96 GiB per computer (two computers have 192 GiB/node and one more has 256 GiB/node).
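The upgrade figures above are mutually consistent, as a quick check shows. The node mix below (nine 96 GiB nodes, two 192 GiB, one 256 GiB) is inferred from the stated totals rather than given explicitly in the text:

```python
total_cores = 576
nodes = 12
total_ram_gb = 1504

cores_per_node = total_cores // nodes   # 48 cores per node, matching the text
min_ram_per_core = 96 / cores_per_node  # 2 GiB/core on the smallest nodes

# Inferred node mix: nine 96 GiB nodes, two 192 GiB, one 256 GiB
node_ram = [96] * 9 + [192] * 2 + [256]
print(cores_per_node, min_ram_per_core, sum(node_ram))
```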