In the OSTI Collections: Monte Carlo Methods

“The first thoughts and attempts I made … were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than ‘abstract thinking’ might not be to lay it out say one hundred times and simply observe and count the number of successful plays. This was already possible to envisage with the beginning of the new era of fast computers, and I immediately thought of problems of neutron diffusion and other questions of mathematical physics, and more generally how to change processes described by certain differential equations into an equivalent form interpretable as a succession of random operations. Later… [in 1946, I] described the idea to John von Neumann and we began to plan actual calculations.”

–“Stan Ulam, John von Neumann, and the Monte Carlo Method” by Roger Eckhardt, Los Alamos Science No. 15 (1987), p. 131[Information Bridge]

 

Thus did Stanislaw Ulam describe how he came to introduce a new use of random statistical sampling to gain insight into phenomena for which there’s no obvious method of exact analysis.  The same use had been invented independently by physicist Enrico Fermi almost 15 years earlier (ibid., p. 128, “The Beginning of the Monte Carlo Method” by N. Metropolis[Information Bridge]), but it was after mathematician John von Neumann began to champion it that one of his colleagues suggested the name “Monte Carlo” for the technique (ibid., p. 127, Metropolis[Information Bridge]).  Early on, Monte Carlo methods were used to figure out processes involving nuclear reactions.  Today they’re used to understand phenomena in many other fields as well, including chemistry, aerodynamics, particle physics, astronomy, microelectronics, power engineering, geology, materials processing, cytology, meteorology, economics, and communications. 

 

Even when the law that governs a process is well understood, it isn’t always feasible to mathematically derive what that law implies about how the process will turn out under a given set of conditions.  But if different possible processes under those conditions are appropriately sampled, the sample is likely to be representative of the actual range of possibilities.  For one example, suppose one is interested in randomly generating a simple substitution cipher to make a low-security encryption of a message.  If the cipher is to substitute a different letter for each of the plaintext letters, so that each letter in the encrypted message stands for something other than itself, one is unlikely to produce the cipher by putting each letter of the alphabet in a bag and drawing them out in some random order—more often than not, at least one of the letters drawn will be in the same spot in the reshuffled alphabet that it has when the alphabet is in standard sequence.  Most likely, one would have to reshuffle the letters more than once to get a sequence in which every letter is out of its original place. 

 

How often would a single reshuffling completely reorder all the letters?  There is a simple formula for the ratio of completely changed sequences to all possible sequences[Wikipedia], but the formula may not be obvious if you don’t know it already.  However, with a computer and a program to generate pseudorandom numbers, it’s easy to have the computer reshuffle the alphabet thousands of times and count how often it comes out completely scrambled versus how often one or more letters remain where they started.  In one experiment involving 1,332,000 reshufflings, 36.82% of them had every letter of the alphabet assigned to a new place—a result within one standard deviation (0.04%) of the 36.79% ratio found from the exact formula.  Incidentally, it wouldn’t be practical to try every possible arrangement of letters in the alphabet:  the number of possible arrangements of 26 letters is 26 × 25 × 24 × ⋯ × 3 × 2 × 1 (that is, 26 factorial), or approximately 4.03 × 10²⁶—a number that a 10-GHz computer processor would take more than a billion years to count to—so Monte Carlo random sampling is a useful technique here. 
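
The counting experiment is easy to reproduce.  Below is a minimal Python sketch of it (the trial count and the exact tallies will of course differ from the 1,332,000-reshuffling experiment quoted above); for comparison it also evaluates the exact fraction, which for n items is the alternating sum 1 − 1/1! + 1/2! − ⋯ ± 1/n! and rapidly approaches 1/e ≈ 36.79%.

```python
import math
import random
import string

def is_derangement(perm, original):
    """True if no letter remained in its original position."""
    return all(a != b for a, b in zip(perm, original))

alphabet = list(string.ascii_uppercase)     # the 26 letters

trials, hits = 100_000, 0
for _ in range(trials):
    shuffled = alphabet[:]
    random.shuffle(shuffled)                # one pseudorandom reshuffling
    if is_derangement(shuffled, alphabet):
        hits += 1

# Exact fraction of derangements of 26 items: sum of (-1)^k / k!,
# which equals 1/e to many decimal places.
exact = sum((-1) ** k / math.factorial(k) for k in range(27))
print(f"Monte Carlo estimate: {hits / trials:.4f}   exact: {exact:.4f}")
```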

 



Figure 1.  Of all the roughly 4.03 × 10²⁶ ways one can arrange the alphabet, what fraction of them have every one of the letters in positions different from their original ones?  If one were to find out by checking each arrangement, it would take more than a billion years to do so if one checked ten billion arrangements every second.  There is an exact, if unobvious, formula that expresses the fraction, which can be deduced by carefully considering the different kinds of possible rearrangements.  But in the absence of this formula it’s still possible to make a very accurate estimate of the fraction (approximately 36.79%) by randomly sampling a large number of different rearrangements.  In the very small sample above, 3 of the 10 arrangements (30%) have all the letters repositioned, while the other 7 have at least one letter in its original place.

 

While Monte Carlo methods can be used to study all sorts of complex processes, a sample of recent research sponsored by the Department of Energy shows that understanding nuclear-reactor processes is still one of their major uses.  Other Energy Department investigations use Monte Carlo methods in particle physics, environmental science, and the chemistry and physics of solids.  In these investigations, Monte Carlo methods are tools for understanding physical processes.  In other work, Monte Carlo calculations are simply used as standard examples to test the soundness and capabilities of new computing systems on representative problems; what the calculations themselves are about is of less interest than what they demonstrate about the computers running them. 

 

Chemistry, Higgs Bosons, Aquifers

 

It was discovered four decades ago that vinyl acetate[Wikipedia] can be produced from ethylene, oxygen, and acetic acid if the reaction is catalyzed on the surface of a gold-palladium alloy.  Exactly how the molecules get rearranged on the metal surface, and the exact structure of the metal surface itself, are the subject of the report “Probing Surface Chemistry Under Catalytic Conditions: Olefin Hydrogenation, Cyclization and Functionalization”[Information Bridge].  The report, on research conducted at the University of Virginia and the University of Wisconsin-Milwaukee, mentions that a Monte Carlo calculation was used to work out how the gold and palladium atoms are likely to be arranged on the alloy surface, and cites an earlier publication in the journal Physical Review B[References] that gives details.  For a given temperature and gold/palladium ratio, a random arrangement of gold and palladium atoms is initially assumed.  Adjoining gold-palladium atom pairs are then randomly selected and switched around with a probability that reflects how likely such an exchange is on a real gold-palladium surface under the given conditions, until no further changes in the atom distribution are found; the result is a representative arrangement of the atoms.  The report also describes the development of a Monte Carlo program for simulating the chemical reactions that produce vinyl acetate on the alloy surface.
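
The swap-until-equilibrium scheme is the classic Metropolis recipe, and a minimal Python sketch of it on a toy two-dimensional binary lattice is shown below.  Note that the lattice size, the pair-interaction energy J, and the temperature scale are all invented for illustration; the actual interaction model for Au/Pd(111) is the one given in the Physical Review B paper cited above.

```python
import math
import random

# Metropolis-style atom-swap sampling on a toy two-dimensional binary
# lattice. The lattice size, interaction energy J, and temperature kT
# are invented for illustration only.
L = 20            # L x L lattice with periodic boundaries
J = 0.1           # energy cost per unlike nearest-neighbor pair (assumed)
kT = 0.05         # temperature in the same arbitrary units (assumed)

n = L * L
lattice = [0] * (n // 2) + [1] * (n - n // 2)   # 0 = Au, 1 = Pd, 50/50 alloy
random.shuffle(lattice)                          # random initial arrangement

def site(i, j):
    return lattice[(i % L) * L + (j % L)]

def unlike_neighbors(i, j):
    """Count nearest neighbors of (i, j) occupied by the other species."""
    s = site(i, j)
    return sum(site(i + di, j + dj) != s
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

for _ in range(200_000):
    # Pick a random site and a random neighbor.
    i, j = random.randrange(L), random.randrange(L)
    di, dj = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
    i2, j2 = (i + di) % L, (j + dj) % L
    if site(i, j) == site(i2, j2):
        continue                    # swapping identical atoms changes nothing
    a, b = i * L + j, i2 * L + j2
    before = unlike_neighbors(i, j) + unlike_neighbors(i2, j2)
    lattice[a], lattice[b] = lattice[b], lattice[a]      # trial swap
    after = unlike_neighbors(i, j) + unlike_neighbors(i2, j2)
    dE = J * (after - before)
    # Metropolis rule: always keep energy-lowering swaps; keep others
    # with probability exp(-dE/kT). Otherwise undo the swap.
    if dE > 0 and random.random() >= math.exp(-dE / kT):
        lattice[a], lattice[b] = lattice[b], lattice[a]

print("unlike-neighbor bonds left:",
      sum(unlike_neighbors(i, j) for i in range(L) for j in range(L)) // 2)
```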

 

“Predictive Capability for Strongly Correlated Systems”[Information Bridge] describes the accomplishments of one research team in discovering new Monte Carlo-based methods to compute the chemical and physical properties of certain materials.  These properties depend to a great extent on how the materials’ electrons are distributed within them.  How likely it is that any given electron configuration actually occurs in a material depends in part on how the electrons affect each other.  The electrons have identical magnetic moments, which tend to align with each other, and identical electric charges, so they repel each other; and on top of this, since electrons are fermions[Wikipedia], no two of them can be in the same place with the same orientation at the same time, regardless of their electric or magnetic interactions. 

 

The most obvious way to calculate a material’s properties from its possible electron configurations would require accounting for every conceivable configuration—which is feasible when the probability of each distribution is a simple-enough function of a small number of parameters, as it might be if the electrons had little or no effect on each other.  But in materials whose electrons affect each other greatly, the position and orientation of each electron is strongly correlated with those of the others, and that makes the probability a complicated function of a large number of electron positions and orientations.  In this case, Monte Carlo sampling of the probability function is used to extract information about the material’s behavior.  The Cornell-based research team, in collaboration with another group at Duke University and a student at Stanford, investigated the localization of electrons in quantum dots[Wikipedia] and quantum wires[Wikipedia] with standard Monte Carlo techniques, and also demonstrated improved Monte Carlo methods for analyzing such things as the stability of C₂ molecules and the energy levels of methylene. 
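
The core maneuver in such calculations is drawing samples from a probability function that can be evaluated, at least up to a normalizing constant, but not summed or integrated outright.  As a toy stand-in, the Python sketch below uses the Metropolis algorithm to sample a one-dimensional unnormalized density and estimate an average from the samples; the density itself is invented for illustration, while real quantum Monte Carlo calculations sample many-electron configurations in the same basic way.

```python
import math
import random

# Metropolis sampling of a probability function known only up to its
# normalizing constant. The one-dimensional density below is invented
# for illustration; real calculations sample many-electron
# configurations the same way.
def unnormalized_p(x):
    return math.exp(-x ** 4 + 2 * x ** 2)      # double-welled toy density

samples, x, step = [], 0.0, 0.5
for _ in range(200_000):
    x_new = x + random.uniform(-step, step)    # propose a nearby move
    # Accept with probability p(new)/p(old); the unknown normalizing
    # constant cancels in the ratio, which is the whole point.
    if random.random() < unnormalized_p(x_new) / unnormalized_p(x):
        x = x_new
    samples.append(x)

burn = 10_000          # discard samples taken before the walk equilibrates
kept = samples[burn:]
mean_x2 = sum(s * s for s in kept) / len(kept)
print(f"<x^2> estimated from {len(kept)} (correlated) samples: {mean_x2:.3f}")
```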

 

Before the 2012 announcement that events observed at the Large Hadron Collider were consistent with the existence of Higgs bosons[Wikipedia], Monte Carlo methods were applied to the questions of whether Higgs bosons do exist and what their properties might be.  The 2011 report “Extracting Higgs Decay Signals using Multivariate Techniques”[Information Bridge] by George Washington University undergraduate student and SLAC National Accelerator Laboratory intern W Clarke Smith described a new way to analyze experimental data that could determine the Higgs boson’s mass more precisely than other methods studied earlier. 

 

In one type of experiment performed in the Large Hadron Collider, beams of protons are accelerated to high energies and directed toward each other so that pairs of protons will collide with many possible end results, each having its own probability.  One possible result is that two gluons[Wikipedia], one from each of the protons, will interact to ultimately produce a b quark[Wikipedia] and a b antiquark (or b̄ quark).  Most such processes occur with the gluons producing the pair directly, but in a small fraction of them, the gluons would first produce a heavy quark pair (either tt̄ or bb̄—that is, a heavy quark and its antiquark) that combines to produce a single Higgs boson, which quickly decays into a bb̄ pair.  (Higgs bosons could produce other end products as well, but bb̄-pair production would be the dominant process.)  Whether Higgs bosons are involved or not, these final b and b̄ quarks will each produce a jet of strongly-interacting particles that are tracked in the particle detector surrounding the proton-collision point.  How often bb̄ pairs result from a proton collision depends, first, on whether Higgs bosons exist and second, if they do exist, on what mass a Higgs boson has. 

 



Figure 2.  Feynman diagrams, each read from left to right, of two different processes by which colliding gluons could produce a b quark and a b antiquark.  (After “Extracting Higgs Decay Signals using Multivariate Techniques” by W Clarke Smith.[Information Bridge])

 

The calculation described in the report began with a Monte Carlo sampling, much like those of the solitaire or cipher examples described above.  Several likely masses for the Higgs boson were assumed, and different possible consequences of proton-pair collisions were randomly sampled.  Processes that ended with bb̄ pairs producing particle jets were analyzed to determine what sorts of particle jets would be produced, and how often, if the Higgs mass had one value or another.  From this data, functions were worked out to determine what the mass of the Higgs boson is most likely to be if bb̄-generated particle jets are observed to have given characteristics.  A comprehensive mathematical description of the entire range of jet-producing reactions was not feasible, so a key element of these calculations was the random sampling of possible processes in which Higgs bosons might be involved. 
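
In outline, this is template fitting: simulate what a detector would record under each assumed Higgs mass, then ask which assumption makes the actual observations most probable.  The Python toy below reduces the idea to a single smeared observable; every number in it (masses, resolution, event counts) is invented, and the real analysis used many observables and multivariate techniques rather than one histogram.

```python
import math
import random

# Toy template fit: simulate a single detector-smeared, mass-like
# observable under several assumed Higgs masses, histogram each
# simulated "template," then score a set of observed events by
# log-likelihood. Every number here is invented for illustration.
BINS, LO, HI = 40, 80.0, 180.0

def simulate(mass, n, smear=12.0):
    """Monte Carlo sample of the smeared observable for an assumed mass."""
    return [random.gauss(mass, smear) for _ in range(n)]

def make_template(mass, n=100_000):
    """Per-bin probabilities estimated from a large simulated sample."""
    width = (HI - LO) / BINS
    h = [1e-9] * BINS                       # small floor avoids log(0)
    for e in simulate(mass, n):
        if LO <= e < HI:
            h[int((e - LO) / width)] += 1.0 / n
    return h

templates = {m: make_template(m) for m in (115.0, 125.0, 135.0)}

observed = simulate(125.0, 500)             # pretend data; true mass is 125
width = (HI - LO) / BINS
for mass, t in templates.items():
    ll = sum(math.log(t[int((e - LO) / width)])
             for e in observed if LO <= e < HI)
    print(f"assumed mass {mass:5.1f}: log-likelihood {ll:9.1f}")
# The assumed mass whose template gives the highest log-likelihood is
# the best estimate; here it should usually be 125.
```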

 

The report “Bayesian data assimilation for stochastic multiscale models of transport in porous media”[Information Bridge] from Sandia National Laboratories describes new techniques for using aboveground observations to estimate the sizes and distribution of underground features such as high-permeability inclusions in a low-permeability matrix.  Using several geologic-field examples, the report describes various analysis techniques, including a Monte Carlo method known as Markov Chain Monte Carlo, which can make the estimates and quantify their uncertainties without restricting the probability distributions of the estimated parameters.  The final example involves actual data from a 15,000-square-mile corner of the Ogallala–High Plains aquifer in Kansas, though that example uses probability-oriented techniques other than Monte Carlo. 
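
Markov Chain Monte Carlo itself is a compact idea: random-walk through parameter space, accepting or rejecting each proposed step so that the walk spends time in proportion to the posterior probability.  The Python sketch below runs a minimal Metropolis-Hastings chain for a single made-up parameter with made-up observations; nothing about it comes from Sandia’s subsurface models.  It only shows the accept/reject mechanics and how posterior means and uncertainties fall out of the chain.

```python
import math
import random

# Minimal Metropolis-Hastings MCMC for a one-parameter Bayesian problem.
# The "measurements" and noise level are invented; Sandia's models of
# subsurface transport are vastly more elaborate.
observations = [2.3, 1.9, 2.6, 2.1, 2.4]     # pretend field measurements
sigma = 0.3                                  # assumed measurement noise

def log_posterior(theta):
    """Flat prior on (0, 10) times a Gaussian likelihood, in log form."""
    if not 0.0 < theta < 10.0:
        return -math.inf
    return -sum((y - theta) ** 2 for y in observations) / (2 * sigma ** 2)

chain, theta = [], 5.0
for _ in range(50_000):
    proposal = theta + random.gauss(0.0, 0.5)    # random-walk proposal
    delta = log_posterior(proposal) - log_posterior(theta)
    # Accept with probability min(1, posterior ratio); math.exp(-inf) is
    # 0.0, so out-of-prior proposals are always rejected.
    if delta >= 0 or random.random() < math.exp(delta):
        theta = proposal
    chain.append(theta)

kept = chain[5_000:]                             # discard burn-in
mean = sum(kept) / len(kept)
sd = (sum((t - mean) ** 2 for t in kept) / len(kept)) ** 0.5
print(f"posterior mean {mean:.3f}, posterior standard deviation {sd:.3f}")
```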

 

Fission Reactors:  Improving Monte Carlo Methods

 

As noted above, Monte Carlo methods have long been used to analyze the behavior of fission reactors.  The Los Alamos National Laboratory report “A Review of Monte Carlo Criticality Calculations – Convergence, Bias, Statistics”[Information Bridge] describes the basic theory of reactor-criticality calculations and gives practical guidance on their implementation, while several other recent reports describe approaches to improving standard Monte Carlo computer programs. 

 

The Los Alamos slide presentation “A Residual Monte Carlo Method for Spatially Discrete, Angularly Continuous Radiation Transport”[Information Bridge] describes an improvement on a Monte Carlo method known as “residual Monte Carlo”.  The residual method represents space as a discrete lattice instead of a continuum; this avoids computational problems, but can in some cases yield seriously inaccurate results.  In its earlier standard form, the residual Monte Carlo method treats radiation as propagating only along certain discretely defined angles over certain discrete distances; the improved version still simulates radiation as propagating for only certain discrete distances, but allows for its propagation along a continuum of angles and still avoids the computational problems of treating space as completely continuous. 

 

Another improvement is described in the Lawrence Livermore National Laboratory report “FREYA—a new Monte Carlo code for improved modeling of fission chains”[Information Bridge].  The improvement here is to account for the fact that when a massive nucleus undergoes fission and releases neutrons and gamma rays, the released particles’ timing and directions are correlated.  Earlier simulation software took no account of these correlations.  Ignoring them is not a problem when analyzing average quantities like flux, energy deposition into shielding or detectors, or the multiplication of neutrons in a chain reaction.  However, the correlations are significant in other ways; for example, they provide a way to distinguish whether or not a neutron was generated by a nuclear fission.  Adding the FREYA code to existing software thus allows researchers to more accurately predict the effects of nuclear fission. 
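
A toy calculation shows the kind of difference event-by-event modeling makes.  In the Python sketch below, an “average model” draws each neutron’s energy independently, while a crude event-by-event model makes the neutrons of one fission share a fixed energy budget, so their energies come out anticorrelated.  All the numbers are invented; FREYA’s actual physics conserves energy and momentum per fission event in far greater detail.

```python
import random

# Toy contrast between an "average fission model" and event-by-event
# sampling. All numbers are invented; FREYA's physics is far more
# detailed (it conserves energy and momentum for each fission event).

def neutrons_average_model():
    """Each neutron's energy is drawn independently of the others."""
    nu = random.choice((2, 2, 3, 3, 3, 4))       # neutrons per fission
    return [random.expovariate(1 / 2.0) for _ in range(nu)]

def neutrons_event_by_event():
    """Neutrons of one fission split a fixed energy budget, so their
    energies come out anticorrelated, as correlated models capture."""
    nu = random.choice((2, 2, 3, 3, 3, 4))
    budget = 6.0                                 # energy available (invented)
    cuts = sorted(random.random() for _ in range(nu - 1))
    edges = [0.0] + cuts + [1.0]
    return [budget * (hi - lo) for lo, hi in zip(edges, edges[1:])]

def correlation_first_two(model, trials=100_000):
    """Correlation between the first two neutron energies of each event."""
    xs, ys = [], []
    for _ in range(trials):
        energies = model()
        xs.append(energies[0])
        ys.append(energies[1])
    mx, my = sum(xs) / trials, sum(ys) / trials
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / trials
    vx = sum((x - mx) ** 2 for x in xs) / trials
    vy = sum((y - my) ** 2 for y in ys) / trials
    return cov / (vx * vy) ** 0.5

print("average model:       ", round(correlation_first_two(neutrons_average_model), 3))
print("event-by-event model:", round(correlation_first_two(neutrons_event_by_event), 3))
```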

 



Figure 3.  The “average fission model” is used in conventional Monte Carlo code.  FREYA treats fission in the more detailed way depicted on the right.  (From “FREYA—a new Monte Carlo code for improved modeling of fission chains”[Information Bridge].)

 

The Oak Ridge National Laboratory reports “Criticality Safety Validation of Scale 6.1”[Information Bridge] and “Criticality Safety Validation of SCALE 6.1 with ENDF/B-VII.0 Libraries”[Information Bridge] describe tests of a new computer application, SCALE 6.1, which uses Monte Carlo methods to calculate the average number of neutrons produced in one fission that cause another fission (also known as the effective neutron multiplication factor).  The SCALE code’s novel feature is its ability to model nuclear reactors of several different types and complicated configurations.  To test the code for use in analyzing reactor safety, the code’s calculations were compared to a library of results from actual experiments. 
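
For a feel of what a multiplication-factor calculation involves, the Python toy below follows a neutron population from generation to generation, with each neutron causing a new fission with some fixed probability, and estimates the multiplication factor from the ratio of successive generation sizes.  The probability and multiplicity numbers are invented; SCALE models real geometry, materials, and nuclear cross sections.

```python
import random

# Toy generation-to-generation estimate of a neutron multiplication
# factor. The fission probability and multiplicities are invented.
P_FISSION = 0.42          # chance a neutron goes on to cause a fission
NU_CHOICES = (2, 2, 3)    # neutrons released per fission, sampled uniformly

def next_generation(n):
    """Follow n neutrons; return the size of the next neutron generation."""
    total = 0
    for _ in range(n):
        if random.random() < P_FISSION:      # otherwise absorbed or escaped
            total += random.choice(NU_CHOICES)
    return total

population, ratios = 10_000, []
for _ in range(50):
    new_population = next_generation(population)
    if population > 0:
        ratios.append(new_population / population)
    population = new_population

skip = 10                                    # let early transients pass
k_eff = sum(ratios[skip:]) / len(ratios[skip:])
print(f"estimated multiplication factor: {k_eff:.3f}")
# Expected value is P_FISSION times the mean multiplicity:
# 0.42 * 7/3, i.e. about 0.98 -- a slightly subcritical toy system.
```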

 

Another report by investigators from Oak Ridge National Laboratory and the University of Tennessee, “Under-Prediction of Localized Tally Uncertainties in Monte Carlo Eigenvalue Calculations”[Information Bridge], addresses potential problems in Monte Carlo reactor simulations.  The report notes that localized uncertainties in tallies of the effective neutron multiplication factor may be underpredicted by a factor of five or more in select cases, and shows how such underpredictions can occur if the simulations are run without proper precautions and what precautions will avoid them. 
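
The root of the problem is that successive generations in an eigenvalue calculation are statistically correlated, while the usual standard-error formula assumes independent samples.  The Python sketch below reproduces the effect with a synthetic autocorrelated sequence standing in for generation-by-generation tallies, and shows one standard precaution, computing statistics on batch means, recovering a much more honest error bar.  The AR(1) stand-in is purely illustrative and is not the report’s reactor model.

```python
import random

# Why correlated tallies fool the naive error formula: generate a
# synthetic autocorrelated sequence (an AR(1) process standing in for
# generation-by-generation tallies), then compare the naive standard
# error of the mean with one computed from batch means.
RHO, N = 0.95, 100_000        # strong generation-to-generation memory

x, series = 0.0, []
for _ in range(N):
    x = RHO * x + random.gauss(0.0, 1.0)
    series.append(x)

def standard_error(values):
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / (len(values) - 1)
    return (var / len(values)) ** 0.5

naive = standard_error(series)        # pretends the samples are independent

BATCH = 1_000                         # much longer than the correlation time
batch_means = [sum(series[i:i + BATCH]) / BATCH for i in range(0, N, BATCH)]
honest = standard_error(batch_means)  # batch means are nearly independent

print(f"naive standard error:        {naive:.4f}")
print(f"batch-means standard error:  {honest:.4f}")
# For rho = 0.95 the naive figure understates the uncertainty by roughly
# a factor of six, the same kind of under-prediction the report warns of.
```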

 

Fission Reactors:  Monte Carlo Methods among Others

 

While Monte Carlo methods can provide useful insights when other mathematical techniques aren’t available, they do have their limitations, as some of the aforementioned reports show, and at times researchers discover very different, nonrandom ways to analyze a problem that surpass Monte Carlo methods.  From the study of several examples, physicist and probability theorist E. T. Jaynes proposed “as a general principle: Whenever there is a randomized way of doing something, there is a nonrandomized way that yields better results from the same data, but requires more thinking.  Perhaps this principle does not have quite the status of a theorem, but we are confident that, whenever one is willing to do the required extra thinking, it will be confirmed”[References], although, as software architect Kevin S. Van Horn notes, “the better, nonrandomized way often enough seems to require much more thinking, and years of research.”[References]  Among recent updates on Nuclear Energy Advanced Modeling and Simulation, several uses of Monte Carlo methods in current work are mentioned, including some research in which other types of calculation are competitive or complementary.  

 

Materials in nuclear reactors can develop vacancies and other defects among their atoms, as well as gas bubbles and mobile clusters.  Page 3 of the October-December 2011 Nuclear Energy Advanced Modeling and Simulation (NEAMS) update[Information Bridge] mentions the implementation of a Monte Carlo approach in a computer code that’s expected to account for such phenomena in material simulations.  Simulating these phenomena requires accounting for how the atoms interact, much as the interactions of atoms in the gold-palladium catalyst described above had to be accounted for to determine their likely arrangements; an effort to add such information to the Monte Carlo simulation, as well as to a molecular-dynamics simulation, is mentioned in a followup note on page 6 of the April-June 2012 update[Information Bridge].  The April-June update also mentions on page 3 a newly constructed Monte Carlo tool for neutron-photon processing, which was used to demonstrate generation of reaction cross sections[Wikipedia].  Before that, page 4 of the January-March 2012 update[Information Bridge] described how Idaho National Laboratory’s Advanced Test Reactor, which is being upgraded, is also being modeled with the help of a Monte Carlo program for particle transport.

 



Figure 4.  The core of Idaho National Laboratory’s Advanced Test Reactor:  (a) geometry; (b) thermal neutron flux.  Monte Carlo methods are among the mathematical techniques used to analyze particle transport in the reactor.  (From “NEAMS update quarterly report for January - March 2012”.[Information Bridge])

 

The January-March 2012 Nuclear Energy Advanced Modeling and Simulation update also describes how other mathematical methods are being used alongside Monte Carlo methods to address certain problems: 

  • mesoscale evolution of microstructure in uranium dioxide (phase-field[Wikipedia], Potts[Wikipedia], and kinetic Monte Carlo[Wikipedia] methods—page 6); 
  • rare-event inference, which includes many safety-related problems (importance sampling, i.e. sampling on values of input variables that result in particular outputs of interest such as mean failure probability, can result in large savings of computational cost over traditional Monte Carlo and Latin-hypercube[Wikipedia] sampling—page 7; a sketch of the idea follows this list); 
  • modeling the motions and interactions of neutrons and gamma rays in complex, heterogeneous reactors (approximating the spatial continuum occupied by the reactor using a fine mesh, without random sampling, versus using Monte Carlo sampling—page 9). 
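
Importance sampling pays off whenever the event of interest is rare.  The Python sketch below estimates the small tail probability P(X > 4) for a standard normal variable: plain Monte Carlo sees only a handful of qualifying samples, while drawing from a distribution shifted into the tail and reweighting each sample by the density ratio gives a stable estimate from the same number of draws.  The toy problem is generic, not one from the NEAMS update.

```python
import math
import random

# Importance sampling for a rare event: estimate p = P(X > 4) for a
# standard normal X (about 3.2e-5). Plain Monte Carlo rarely sees the
# event; sampling from a normal shifted into the tail and reweighting
# by the density ratio is far more efficient.
THRESHOLD, N = 4.0, 100_000

plain = sum(random.gauss(0, 1) > THRESHOLD for _ in range(N)) / N

def density_ratio(x, shift):
    """phi(x) / phi(x - shift) for unit-variance normal densities."""
    return math.exp(-x * shift + shift * shift / 2.0)

shift, total = THRESHOLD, 0.0        # center the proposals on the threshold
for _ in range(N):
    x = random.gauss(shift, 1)
    if x > THRESHOLD:
        total += density_ratio(x, shift)   # reweight each qualifying draw
importance = total / N

exact = 0.5 * math.erfc(THRESHOLD / math.sqrt(2))
print(f"plain MC: {plain:.2e}  importance sampling: {importance:.2e}  exact: {exact:.2e}")
```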

 

Monte Carlo Calculations to Test New Computing Systems

 

Finally, two other recent reports describe how Monte Carlo calculations were simply used to examine the performance of existing and future computers. 

 

One test involved a subsystem of IBM’s Blue Gene/Q parallel-processing supercomputers, as described in the short report “Performance Analysis of and Tool Support for Transactional Memory on BG/Q”[Information Bridge] from Lawrence Livermore National Laboratory.  Machines that process multiple threads using shared memory have to avoid conflicts when the different threads update the memory contents.  The transactional memory technique, which is implemented in Blue Gene/Q’s hardware, allows updates without up-front restrictions, but checks afterward for conflicts and changes the course of updates if conflicts are found.[References]  Lawrence Livermore’s software “Monte Carlo Benchmark”[Energy Science and Technology Software Center] was used to test the performance of Blue Gene/Q’s transactional memory system.  The Monte Carlo Benchmark uses typical features of Monte Carlo algorithms to solve a simple transport equation on a computer that uses parallel processing, thus giving an idea of how other Monte Carlo computations would work on the same computer. 
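
The check-afterward-and-redo idea is easy to mimic in software.  The Python toy below is only an analogy (BG/Q performs the conflict detection in hardware): each “transaction” reads shared state optimistically, computes, and commits only if no other thread has committed in the meantime, retrying otherwise.

```python
import threading

# Software analogy of transactional memory: each "transaction" reads
# shared state optimistically, computes, and commits only if no other
# thread committed in the meantime; on conflict it retries. BG/Q does
# the conflict checking in hardware; this toy uses a version counter.
class VersionedCell:
    def __init__(self, value=0):
        self.value, self.version = value, 0
        self._lock = threading.Lock()

    def read(self):
        with self._lock:
            return self.value, self.version

    def try_commit(self, new_value, expected_version):
        """Commit only if nobody else committed since our read."""
        with self._lock:
            if self.version != expected_version:
                return False              # conflict detected after the fact
            self.value, self.version = new_value, self.version + 1
            return True

cell = VersionedCell()

def add_many(n):
    for _ in range(n):
        while True:                       # the retry loop "changes course"
            value, version = cell.read()
            if cell.try_commit(value + 1, version):
                break

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(cell.value)                         # 40000: no update was lost
```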

 

The slide presentation “Exascale Monte Carlo R&D”[Information Bridge] from Los Alamos National Laboratory illustrates how a simplified version (MCMini) of the standard Monte Carlo software package MCNP (for “Monte Carlo N-Particle code”[References]) was used to rapidly compare the capabilities of different hardware configurations for solving nuclear-physics problems involving different large numbers of particles.  While MCMini does the same general types of calculations as MCNP, it models the nuclear physics processes much less accurately, so MCMini would not predict the actual results of nuclear processes reliably.  On the other hand, since MCMini can finish its rough simulations in minutes or seconds versus the much longer time required by the more accurate MCNP, it’s useful for making initial tests of different systems’ performance at large-scale Monte Carlo computations.  The slide presentation discusses the tests’ implications for future exascale[Wikipedia] computers—machines that could execute at least 10¹⁸ floating-point operations per second.[Wikipedia]  

 

References

 


Reports Available through OSTI’s Information Bridge

 

  • “Los Alamos Science No. 15: Special Issue on Stanislaw Ulam 1909-1984” [Abstract and full text via OSTI’s Information Bridge].  See the articles “Stan Ulam, John von Neumann, and the Monte Carlo Method” by Roger Eckhardt (pp. 131-137) and “The Beginning of the Monte Carlo Method” by N. Metropolis (pp. 125-130). 
  • “Probing Surface Chemistry Under Catalytic Conditions: Olefin Hydrogenation, Cyclization and Functionalization” [Abstract and full text via OSTI’s Information Bridge]
  • “Predictive Capability for Strongly Correlated Systems” [Abstract and full text via OSTI’s Information Bridge]
  • “Extracting Higgs Decay Signals using Multivariate Techniques” [Abstract and full text via OSTI’s Information Bridge]
  • “Bayesian data assimilation for stochastic multiscale models of transport in porous media” [Abstract and full text via OSTI’s Information Bridge]
  • “A Review of Monte Carlo Criticality Calculations – Convergence, Bias, Statistics” [Abstract and full text via OSTI’s Information Bridge]
  • “A Residual Monte Carlo Method for Spatially Discrete, Angularly Continuous Radiation Transport” [Abstract and full text via OSTI’s Information Bridge]
  • “FREYA—a new Monte Carlo code for improved modeling of fission chains” [Abstract and full text via OSTI’s Information Bridge]
  • “Criticality Safety Validation of Scale 6.1” [Abstract and full text via OSTI’s Information Bridge]
  • “Criticality Safety Validation of SCALE 6.1 with ENDF/B-VII.0 Libraries” [Abstract and full text via OSTI’s Information Bridge]
  • “Under-Prediction of Localized Tally Uncertainties in Monte Carlo Eigenvalue Calculations” [Abstract and full text via OSTI’s Information Bridge]
  • Nuclear Energy Advanced Modeling and Simulation updates for October-December 2011, January-March 2012, and April-June 2012 [Abstracts (1, 2, 3) and full text via OSTI’s Information Bridge]
  • “Performance Analysis of and Tool Support for Transactional Memory on BG/Q” [Abstract and full text via OSTI’s Information Bridge]
  • “Exascale Monte Carlo R&D” [Abstract and full text via OSTI’s Information Bridge]

Additional References

  • “Monte Carlo and density functional theory analysis of the distribution of gold and palladium atoms on Au/Pd(111) alloys” by Jorge A. Boscoboinik, Craig Plaisance, Matthew Neurock, and Wilfred T. Tysoe, Physical Review B Volume 77, Issue 4, 045422 (2008)
  • Probability Theory:  The Logic of Science by E. T. Jaynes (2003); Cambridge University Press.  Quotation above is from p. 532; see also note on p. 497. 
  • “Unofficial Errata and Commentary for E. T. Jaynes’s Probability Theory: The Logic of Science” by Kevin S. Van Horn.  Quotation above is from Van Horn’s comment on p. 531 of Jaynes’ book. 
  • “Experiments using BG/Q’s Hardware Transactional Memory” by Barna L. Bihari. 
  • “Monte Carlo Benchmark”[Description at OSTI’s Energy Science and Technology Software Center]. 
  • “A General Monte Carlo N-Particle (MCNP) Transport Code”. 

 

Acknowledgement

 

Prepared by Dr. William N. Watson, Physicist
DOE Office of Scientific and Technical Information

Last updated on Thursday 28 July 2016