1. Introduction
More than 60 years ago, the laser was devised. One of its many applications became the selective excitation of levels in (usually neutral) atoms. If only one level is excited (populated), the decay curve (the decay signal intensity as a function of time) is an exponential, and the decay constant (the inverse of the level lifetime, which is also called the mean life) can be determined with ease. A straight line fit to the logarithm of the decay amplitude often yields a lifetime result with a precision on the order of 0.1%. Is that a limit?
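As a minimal sketch of this textbook case (not from the original text; all numbers are illustrative, assuming Poisson counting statistics), the evaluation amounts to a weighted straight-line fit to the logarithm of the counts:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

tau = 10.0                     # illustrative level lifetime (ns)
t = np.arange(0.0, 50.0, 0.5)  # recording channels (ns)
counts = rng.poisson(1e4 * np.exp(-t / tau))  # ideal single-exponential decay signal

# For Poisson data, var(log N) is approximately 1/N, so each point
# is weighted by sqrt(N) in the straight-line fit to log(counts).
mask = counts > 0
slope, intercept = np.polyfit(t[mask], np.log(counts[mask]), 1, w=np.sqrt(counts[mask]))
print(f"fitted lifetime: {-1.0 / slope:.2f} ns (true value: {tau} ns)")
```

With a bright, background-free signal, such a fit routinely reproduces the lifetime at the permille level, which is the benchmark the article refers to.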
Also in the early 1960s, the technique of beam-foil spectroscopy was conceived. It uses a beam of fast ions from an accelerator and sends it (in high vacuum) through a thin foil (roughly a thousandth of the thickness of writing paper). In passing through the solid body, the ions lose a small fraction of their kinetic energy, but basically continue on their straight flight path. Their electron shells are shaken in the interaction with the foil (serving as a dense electron target) and de-excite along the way. If one observes the optical emission from the ion beam after it has left the exciter foil, one can record decay curves (through an interference filter or a spectrometer) that rarely resemble a single exponential, but more often a superposition of several exponentials. With poor spectral resolution, there might be several spectral lines, and thus, the decays of several levels contribute. However, even with good spectral resolution and only a single line in view, the decay curve may be complex, because not only one atomic level is excited, but also higher ones that via cascades replenish the population of the level of interest (see discussion below). With only a few dominant contributions, this superposition of decay curves may be largely disentangled by multi-exponential fits. However, this is a non-linear fitting problem, and there is no “exact” solution.
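To illustrate the non-linear fitting problem (a sketch with made-up amplitudes and lifetimes; the function two_exp and all starting values are my own assumptions, not from the text), a least-squares fit of two decay components plus a background might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a1, tau1, a2, tau2, b):
    """Two exponential decay components plus a constant background."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + b

rng = np.random.default_rng(seed=2)
t = np.arange(0.0, 60.0, 0.5)
counts = rng.poisson(two_exp(t, 8e3, 4.0, 2e3, 15.0, 50.0))  # simulated decay curve

popt, pcov = curve_fit(two_exp, t, counts,
                       p0=(5e3, 3.0, 1e3, 20.0, 10.0),  # starting guesses matter
                       sigma=np.sqrt(np.maximum(counts, 1)), absolute_sigma=True)
for name, val, err in zip(("a1", "tau1", "a2", "tau2", "b"), popt, np.sqrt(np.diag(pcov))):
    print(f"{name} = {val:10.3g} +/- {err:.2g}")
# The closer tau1 and tau2 are to each other, the larger the parameter
# covariances grow: the components cannot be disentangled, which is the
# non-uniqueness ("no exact solution") noted above.
```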
On the one hand, the beam-foil technique enables (in principle) the level lifetime study of practically all levels in all ions of all elements. On the other, there are obstacles—such as the complexity of decay schemes—that limit the precision of the results of many beam-foil lifetime measurements to about 10% (in some cases even poorer), while in rare cases, lower uncertainties of merely a few percent have been reached. If one replaces the exciter foil by a suitable laser, precisions on the order of 0.1% can be achieved, because the selective level population is a key feature. However, even today, selective population of levels in multiply charged ions cannot regularly be effected starting from the ground level, because the photon energies of practical lasers do not reach from the ground level to the excitation level of interest. Maybe free-electron X-ray “lasers” will eventually be used for this task. Before they do, let me describe key aspects of the field that so far limit the precision of atomic lifetime determinations.
The discussion below treats mostly decay curves, that is, atomic lifetime measurements in the time domain. The Fourier transform of an exponential (such as a decay curve) is a Lorentzian, a resonance curve in frequency space. The width of such a Lorentzian curve (of a spectral line) is related to the level lifetime; the shorter-lived the level, the wider the spectral line. It has been argued that this is the superior measurement technique compared to the recording of a decay curve (in the time domain). Both depend, of course, on the same signal statistics, since the two representations are mathematically linked. Short level lifetimes relate to wider line profiles, and thus, line width measurements are better suited for these. Examples not just from laser irradiation of atoms, but from trapped highly charged ions are of considerable interest [1,2] per se, but are far from the high-precision atomic lifetime measurement frontier of this discussion. Long-lived levels result in a line broadening that usually remains much lower than some instrumental line width, and thus, is hardly measurable.
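In standard notation (a reminder added here, not spelled out in the original), the mathematical link is the Fourier-transform pair of a damped oscillation and a Lorentzian:

$$
E(t) \propto e^{-t/(2\tau)}\, e^{-i\omega_0 t}\,\theta(t)
\quad\Longrightarrow\quad
|\tilde{E}(\omega)|^2 \propto \frac{1}{(\omega-\omega_0)^2 + \bigl(1/(2\tau)\bigr)^2},
$$

so the intensity decays with the level lifetime $\tau$ while the line profile has a full width at half maximum $\Delta\omega = 1/\tau$ (equivalently $\Delta E = \hbar/\tau$). This is why short lifetimes mean wide lines, and long lifetimes mean natural line widths far below any instrumental resolution.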
In the early days of beam-foil spectroscopy (in the late 1960s and early 1970s), the new option of atomic lifetime measurements was touted and widely tried. Disappointment was subsequently widespread when so many results seemed to scatter widely from theoretical estimates (which also scattered widely from what theory would obtain later). With some respect for the complexity of atomic level systems and decay paths, much of this muddle could have been avoided, and results of better than 10% uncertainty could almost routinely have been obtained. In some cases of key interest, an evaluation combining the results for several ions in an isoelectronic sequence would yield errors of only about 3%, for both precision and accuracy. In other cases, one should recognize that too many decay components of rather similar time constants cannot sensibly be disentangled by any fit algorithm that struggles with data of limited statistical reliability. Under such circumstances, systematic errors of, for example, a factor of about three have been reaped. This has been used as an argument against beam-foil spectroscopy. It would have been more appropriately used as an argument against belief in wonders. In whatever circumstances, a measurement should follow reason. If a measurement technique yields unreliable answers, it very likely does not match the given problem under the circumstances, and the user is challenged to think about improving the measurement situation and/or the measurement technique, and not just to blame it. However, an opinion formed in the community that beam-foil spectroscopy was really not good and “evidently” unreliable for atomic lifetimes, a misconception that, of course, has influenced research funding and careers. I have heard it on various occasions, from numerous colleagues, including a very prominent theoretician who (in the 1990s), after listening to a talk, asked me “why beam-foil lifetimes were so inaccurate”. Clearly, one unspoken implication was that atomic theory did much better. Looking back to that occasion and to what has been achieved by experiment since, the situation has changed markedly.
A few years later, our group managed to measure an atomic lifetime in a doubly charged ion with unprecedented accuracy (0.13%), more than an order of magnitude more precise than had been typical in beam-foil work, and even slightly better than earlier laser work on fast atomic beams. We did not even use a laser. Moreover, around the same time, theory on the same problem began to converge from an earlier factor-of-two uncertainty to the one-percent range, too. Of course, we have measured more such lifetimes since.
Nothing in our detection system was revolutionary. Had the community missed crucial steps or techniques in their quest to obtain accurate atomic lifetimes earlier? No. Can all the old measurements be superseded by better data? Unfortunately (probably) not. In our case, the decisive step forward was that we used a heavy-ion storage ring to measure a long level lifetime. Why and how those two ingredients, measurement technique and study subject, matter will be discussed below. In short, the mean life of some atomic levels can be measured accurately with relative ease, while for the majority this is very unlikely to happen. The measurement situation posed by atomic structure is decisive, and there are also important circumstances. For a given atomic level and its lifetime, an experimenter does not have the choice of doing the measurement accurately. There are measurements that can be performed accurately, and there are others for which all present means fail. To an experimenter, it is essential to know whether an experiment is likely to succeed or not. Maybe when seen from a theory perspective, nothing of this is fundamental.
Lifetime measurements on both types of levels (amenable to measurement with high precision or not) depend on the same counting statistics. The difference lies in the process of level population, whether selective excitation can be achieved or at least be approximated—or not. A second factor is that a more precise clock is nice to have (and many are available), but in fact, it is not essential in most atomic lifetime measurements. I will show what the helpful (or other) circumstances are, what atomic features and properties are favourable (or not), and why lifetime measurements cannot be as precise as, say, wavelength measurements.
2. Decay Curve Components and Cascade Replenishment
For a measurement of atomic level lifetimes, one has to excite the atom or ion and to detect its subsequent de-excitation by spontaneous emission with some sort of time resolution in the recording. Typical transition rates in neutral atoms are on the order of 10⁸ s⁻¹; that is, the level lifetimes are in the range of nanoseconds to tens of nanoseconds. Sixty years ago, it was challenging to achieve time resolution at this scale by electronic means. Moreover, the excitation has to be switched off over a similarly short time interval, or the decay curve will be smeared out. One type of experiment crossed an atomic beam with a beam of electrons, thus exploiting geometry (the atoms moving out of the interaction zone) as a means of achieving short interaction times. Beam-foil spectroscopy does this to the extreme; a beam of fast ions (with velocities on the order of 1% of the speed of light (c) and higher) implies that the ions spend about 10⁻¹⁵ s inside the exciter foil (which is dozens of nanometers “thick”) and even less in the transition zone at the rear side of the foil, where excitation ends. Techniques of atomic lifetime measurements using fast ion beams have been reviewed many times; my technical view of the field is described in [3], and we do not need the details for the present discussion. A point of merit that deserves mentioning is that beam-foil lifetime measurements do not need a clock, but only a ruler. The constant velocity of each ion after leaving the exciter foil translates the distance from the foil into a time after excitation, and a mechanical measurement of centimeters into nanoseconds, or of, say, 10 µm into picoseconds—without any electronic timing. This was clearly a great technique for the time before laboratory electronics reached this short-time range.
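A worked example of this ruler-to-clock translation (the beam velocity is chosen for illustration):

$$
t = \frac{x}{v}, \qquad v = 0.01\,c = 3\times10^{6}\ \mathrm{m\,s^{-1}}:
\quad 1\ \mathrm{cm} \;\widehat{=}\; 3.3\ \mathrm{ns}, \qquad
10\ \mu\mathrm{m} \;\widehat{=}\; 3.3\ \mathrm{ps}.
$$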
Among the groups that tried the measurement of atomic lifetimes by switching or modulating the current of electron beams that were directed at atomic beams, Bennett and Kindlmann [4] also varied the electron beam energy in order to find optimum excitation conditions. With the electron beam energy below the excitation threshold of a level of interest, there would be no signal (of course not), but the signal would increase for a range of electron energies above. If you want to maximize your signal, set the excitation well above the threshold. However, Bennett and Kindlmann noted a qualitative change. For electron energies just above the threshold, the decay curve would be weak, but could be described by a single exponential; for electron energies also higher than the excitation energies of some other levels, the decay curve would be brighter, but multi-exponential. They concluded correctly that some decay branches of the higher levels eventually reach the level of primary interest and replenish it. On the upside, the decay curve of the primary level then contains components that reflect the specific lifetimes of those higher levels. Some 50 years ago, Larry Curtis demonstrated how in such a system of cascades the amplitudes of the decay curve components are affected by combinations of shorter and longer lifetimes among the cascades, but the individual lifetime values persist mathematically [5]. On the downside, if only there were methods to reliably retrieve the individual level lifetimes from multi-exponential decay curves with their considerable statistical data scatter! I will come back to this point. By the way, later experiments on single trapped ions have used the observation of a single ground-state transition and the measurement of the dark periods between laser fluorescence (when the ion happened to be in any of the other low-lying levels) to derive the lifetimes of all the other levels involved (this watching for on–off periods was dubbed “telegraphy mode”).
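Curtis's point can be made explicit with the simplest cascade scheme, a single level 2 feeding the primary level 1 with branching fraction b (a didactic two-level sketch, not the full treatment of [5]):

$$
\frac{dN_2}{dt} = -\frac{N_2}{\tau_2}, \qquad
\frac{dN_1}{dt} = \frac{b\,N_2}{\tau_2} - \frac{N_1}{\tau_1},
$$

$$
N_1(t) = \left[N_1(0) - \frac{b\,N_2(0)\,\tau_1}{\tau_2-\tau_1}\right] e^{-t/\tau_1}
+ \frac{b\,N_2(0)\,\tau_1}{\tau_2-\tau_1}\, e^{-t/\tau_2}.
$$

The lifetimes $\tau_1$ and $\tau_2$ persist as the decay constants of the two components; only the amplitudes mix. For a fast cascade ($\tau_2 < \tau_1$), the second amplitude turns negative, producing the growing-in shape discussed below.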
In beam-foil spectroscopy, the excitation proceeds by interaction of the ion beam with a solid-state density electron target (the exciter foil)—an overwhelmingly violent process that reaches practically all levels, including high-lying ones and doubly-excited ones. This is very good if one wants to explore the richness of atomic structure and the properties of levels much less accessible to other excitation techniques. This is not so good if one wants to measure the decay properties of specific not-so-exotic levels among so many others of the same class. We do not need to go into detail here, beyond noting that there can be cascades from short-lived or long-lived levels, and often from both, and from many of them, because so many singly-excited, doubly-excited, and so on, levels are populated.
Figure 1 shows what a decay curve of a single level might look like, with different amounts of background (detector dark rate, stray light, etc.). Only without a background does the logarithm of the signal yield a straight line that is easily evaluated, yielding a reliable level lifetime; any presence of a background bends that curve. One has to make certain that one determines the background reliably. Otherwise, one may find that fits of two exponentials (plus a background) to the data are not distinguishable in quality from the fit of just one decay component (plus background), but have different lifetime results; that is, of course, the ubiquitous signal-to-noise problem in a special setting. However, as long as the signal is much higher than the background level, the slope of the curve is close to that of the curve without a background contribution. Here, I should remind the reader of the statistical noise of the signal, which can make it difficult for the fit algorithm to find the optimum fit (via the lowest χ² value). In Figure 1 and Figure 2, the statistical scatter has been left out.
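A few lines of code reproduce the qualitative behavior described here (a sketch with invented numbers, not the actual Figure 1 data):

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 60.0, 200)
signal = 1e4 * np.exp(-t / 8.0)        # pure single-level decay
for b in (0.0, 10.0, 100.0, 1000.0):   # increasing constant backgrounds
    plt.semilogy(t, signal + b, label=f"background = {b:g}")
plt.xlabel("time (arb. units)")
plt.ylabel("signal")
plt.legend()
plt.show()
# Only the zero-background trace is a straight line on the log scale;
# every other trace bends over and flattens at its background level.
```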
In Figure 2, a fifth trace has been added (labeled e). This fifth curve (schematically) resembles the decay curves of the 2s² ¹S₀–2s2p ¹P₁ resonance line in Be-like ions (and corresponding decay curves in other two-electron systems, such as the Mg- and Zn-like ions). A fast cascade from a level with a lifetime half as long as that of the resonance level, and a cascade with five times the lifetime of the resonance level (both ratios chosen in slight didactic excess) have been added (as well as a moderate background). The composite curve shows an initial increase of the signal (due to the fast cascade, the so-called growing-in cascade) and a slow tail. Nowhere does its slope get even nearly as steep as that of the pure primary decay. This type of decay curve has frustrated early researchers and has given beam-foil spectroscopy a bad reputation, because naive evaluations of the primary lifetime were systematically and massively off the expected range (some studies reported mismatches with theory by some 50%). With some consideration of the atomic structure situation, the evaluation can be much better, i.e., more precise (to some 5%) and accurate [6,7,8,9], using either direct measurements of the major cascades as input to the evaluation of the primary decay curve [10,11,12] or cascade models [13]. Of course, my graphical example is grossly simplified, not just for the statistical scatter: With the fast-ion foil excitation, one observes an additional infinitely long, multi-component cascade along the yrast chain of levels, encompassing cascade level lifetimes that range from short to very long.
Depending on the incidental level population distribution and the level lifetimes involved, the composite decay curve in a log plot may look almost straight (until it reaches the background level) or remain continually curved, all from the same physics that includes cascades, because the initial level population mechanism is non-selective.
3. Some Basic Observations on Atomic Data and Precision
In various reports on beam-foil work and lifetime measurements, I have discussed the data evaluation problem and how to tackle it. Here, I focus on some more fundamental aspects and happily avoid any specific lifetime data.
Spectroscopic wavelength measurements began as angle measurements of the light deflection achieved with refraction prisms or diffraction gratings. Practical measurements using a spectrometer are usually anchored to spectral lines of known wavelengths. The unknown line wavelength is obtained by an interpolation of the reference line positions on a focal plane, depending on precision mechanics, relative to accurate wavelength measurements undertaken with elaborate devices elsewhere.
Light diffraction is an interference phenomenon. For very high wavelength accuracy, interferometry is the tool of choice, nowadays aided by frequency combs. These interesting devices are on the way to being used also on the spectra of highly charged ions. With the meter defined optically and the speed of light measured, the measurement of the frequency of light uses basically the same apparatuses as interferometric wave meters do, just with a different perspective. Frequency combs tie the frequency of optical light to microwave radiation and to absolute frequency counters.
Time intervals are ultimately tied to frequencies of atomic emissions, and thus, to the SI second. In principle, any time interval from attoseconds to the duration of the year (and multiples thereof) can be measured with high precision and high accuracy by counting the cycles of a reference oscillator. The accuracy of the latest development stage of atomic clocks is often advertised as “one second over the age of the universe”. However, do we know if anybody started the clock at t = 0? In any case, the present estimates of the time elapsed since the Big Bang vary by at least dozens of millions of years.
Time or Lifetime?
Atomic level lifetimes are the inverse of the sum of decay rates, which are sometimes conceptually mixed up with decay probabilities. The result of many spontaneous decays, a quantum phenomenon, is the time distribution of the observed decay events, which yields a “macroscopic value of record”, the level mean life, not the constant frequency of some oscillation. This is the key point of this article: we may be able to measure a time difference with utmost accuracy, but that is not (!) the same as measuring the mean life of a spontaneously decaying nucleus or atom. The events of spontaneous radioactivity or atomic decay are individually unpredictable. The nuclear or atomic lifetime is not defined by a single event, but by the statistical properties of an ensemble. A time scale on which to place all the detected events is necessary, but what we call a nuclear half life or an atomic level mean life is a measure of a distribution, not to be determined from a single event, however precisely that may be timed.
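In formulas (standard statistics, added here for completeness): for a level with total decay rate $A = \sum_i A_i$, the decay times are distributed as

$$
p(t) = \frac{1}{\tau}\, e^{-t/\tau}, \qquad \tau = \frac{1}{A}, \qquad
\langle t \rangle = \tau, \qquad t_{1/2} = \tau \ln 2,
$$

and the maximum-likelihood estimate from $n$ recorded events, $\hat{\tau} = \frac{1}{n}\sum_{i=1}^{n} t_i$, carries a purely statistical relative uncertainty of $1/\sqrt{n}$; this is 100% for a single event, the situation of the superheavy-element searches discussed next.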
Nonetheless, it has been tried, in a way. The UNILAC heavy-ion accelerator of the Gesellschaft für Schwerionenforschung at Darmstadt (Germany) was constructed decades ago with (among other goals) searches for superheavy elements in mind, elements much heavier than uranium, speculating on an island of stability in the upper-right corner of the chart of nuclides (isotopes with a high atomic number Z and a very high number of neutrons). The island was expected to result from nuclear shell closures. (As early as the 1930s, the German author Hans Dominik wrote a science fiction novel “Atomic weight 500” about truly fantastic superheavies—twice as heavy as uranium nuclei.) Concurrent with similar research at Dubna and Berkeley, a beam of heavy nuclei was to collide with a sample of other heavy elements, preferably with neutron-rich isotopes, and among the reaction products, one hoped to find a few superheavy isotopes. These needed to be filtered out of an immense background of other reaction products and ultimately to be detected by α-particle emission. The energy of such α particles can be measured rather well, but they would be emitted by unknown isotopes of unknown structure, and hence, there could be no meaningful prediction or identification by theoretical means. One could hope for decay chains that eventually might reach “known territory” (on the chart of nuclides). A number of accelerator runs lasted a week or two, or three, and yielded perhaps a single event, or two, or even none, which is truly challenging. The accelerator was modified, the reaction schemes were varied, and isotopes of several new elements were detected, in all three laboratories.
In the context of the “island of stability”, it would be good to measure the decay rate, by way of the half life, of any new isotope, so that one can discuss relative “stability”. What half life can be measured on a single decay event or two? The idea is simple (with a grain of absurdity): after a half life, half of the number of potential emitters has decayed. Starting with a single nucleus, it is either still there, or gone. The probability of a decay before the half life is as high as the probability of a decay after one half life (by the definition of the term “half life”). What the researchers did was to assume that the measured time delay from production (by the initial collision of heavy ions) to detection was a measure of the half life. How does one assign an uncertainty (error) to that single number? How does one handle error propagation in the rare case of two observations of decays of the same isotope, or three? It has been argued and (somehow) achieved, but obviously not just by exploiting counting statistics. On the strong side, the detection of an α particle is a real measurement with an energy quantification, not just the “anonymous” click of a Geiger–Müller counter or a photomultiplier tube. The systematics of such measurements have been woven into a web of nuclear data. Apparently, the “island of stability” is a region of shallow waters in the sea of unstable superheavies; representative nuclide lifetimes may be longer in this section of the elemental tables than elsewhere, but no true stability has been found.
We return to the shore, to atomic lifetimes and traditional counting statistics.
4. Uncertainty of Evaluation
Simulated decay data have been used to discuss evaluations that fit one, two, or more exponential decay components plus a background. The same applies to other fit functions. I remember an advert for a computer program that automatically tried out a wide variety of fit functions. This may be useful for finding a best fit, but it has no predictive power: physicists use models (concepts), and those correspond to specific functions, such as exponentials for an observed atomic decay. Just trying any conceivable function is a sign of despair, not of insight.
Here is a warning example from a (prominent) laboratory in which a radiofrequency ion trap was set up to study the intercombination decay in C²⁺ ions. The distribution of the number of detected photons as a function of time after excitation must have resembled a diffuse cloud. The graduate student involved tried a fit with a single decay component and reported a decay rate of 100 s⁻¹ with 20% uncertainty at a conference. Apparently, the senior members of the team preferred a fit of two decay components, and the longer-lived one of those (a decay rate of 75 ± 20 s⁻¹) was presented in a formal publication in the following year. Another nine years later, the same team, now with the former graduate student leading the author list, published another study with a decay rate result of about 120 s⁻¹. What was not mentioned is that this likely was the “other” decay component of the previous analysis, based on the very same data sample—and without any new measurements.
The first reliable result of a precise measurement of the intercombination decay rate in such C²⁺ ions was obtained some four years later, at another ion trap, a heavy-ion storage ring some three orders of magnitude larger than the aforementioned radiofrequency ion trap (ring circumference 55 m, but the trap size is not a decisive parameter of such a lifetime measurement). The result was a transition rate of 103 s⁻¹ with an uncertainty of only some 0.13% [14]. The irony is that this precise result corroborated the first estimate by the aforementioned graduate student, but the error was now two orders of magnitude smaller. Around the same time and independently, theoretical predictions of this decay rate have moved to the (estimated) 1% range of uncertainty (and smaller). The experimental and theoretical results are now compatible with each other, but it would be interesting to further improve on the uncertainties so that the mutual challenge can shed more light on the problem.
Numerous lifetime measurements at the heavy-ion storage ring have followed and have yielded a fair number of precise lifetime values on various ions. What was the essential part in the later experiment that enabled such a massive improvement? This has to do with the decay curve that resembled the third dataset from the top in Figure 1 (trace c)—there was a bright signal, and there was a non-zero background, but it was very low. No, there was no laser involved.
5. Fundamental Options
5.1. Laser
It is time to discuss what role a laser could have filled. “A promising tool for measuring atomic lifetimes precisely and accurately”—the laser—what actually is meant by this claim? A measurement “by laser”? Or rather, a measurement using a laser somewhere in the process? The technique of LIPS (LIBS), laser-induced plasma (or breakdown) spectroscopy, is burgeoning, because the apparatus has been made small enough to be transportable, and measurements of environmental samples, or somewhere on the workshop floor, have become practical. In this technique, the energy of a tightly focused laser beam ignites a small plasma at the target surface (containing target material), which then emits light that can be dispersed by a small (pocket-size) spectrograph, detected, and semiautomatically analyzed. This is an efficient light source, but not a tool for precise atomic lifetime measurements. A larger relative of this system is the laser-produced plasma (LPP), which is formed in the focus of a high-power laser directed at a surface. The surface is heated, and the material evaporates and is ionized and excited in the strong radiation field. The excitation and the level population are influenced by the electrons in a multi-eV thermal plasma—no excitation selectivity, no use for clean decay curve measurements—there are too many ions, levels, and transitions competing for the attention of the experimenter. Thus, “laser” as a generic term is rather imprecise.
The only laser helpful for precision lifetime measurements is one that is largely monochromatic and can excite a single level resonantly. To stress the point: this laser is not a measurement device, but a tool for selectively achieving a significantly high level population. The atoms or ions then decay spontaneously. We regularly use laser light outside of the laser, inside of which induced decays play a role in the light amplification. Photons are detected by sensitive devices; the registered signal is subject to the same statistical laws and considerations as discussed above for the decays of non-selectively populated levels. Of course, the resonant excitation is not limited to excitation from the ground state. Decades ago, it was demonstrated that it is possible to use a laser for shifting foil-excited level population remaining in a long-lived level to another, short-lived, and therefore, already “dead” level, and then measure the subsequent “clean” decay of that level [15]. The same has long since been planned to be performed in electron beam ion traps [16,17], and has recently been achieved [18,19]. However, the population of the ground state is much higher than that of excited levels, and thus, the signal obtainable with laser excitation starting from there is much higher. This has a drawback, of course, as the excitation energy to be matched by the laser light is higher from the ground state than from another excited level. It is for this reason that selective laser excitation works well with neutral atoms and in some singly charged ions, but not in most more highly charged ions. An exception is the excitations within the ground term. For about 150 years, those have been seen in the solar corona during eclipses (leading to the temporary identification with the hypothetical element “coronium”, eventually discarded some 80 years ago by the accurate X-ray measurements of Bengt Edlén, which proved their origin from highly charged ions of ordinary elements). If the level lifetimes of the upper levels of these magnetic dipole (M1) transitions could be measured to better than half a percent, this would test a QED correction at the edge of the Standard Model [20,21]. To date, the theoretical uncertainties of the quantum mechanical description of many-electron ions are not yet under sufficient control for such a test. However, this example demonstrates that accurate atomic lifetime measurements are of actual interest.
Laser excitation leads to lifetime measurements of better than 1%, in some cases close to 0.1%. Counting statistics indicate that, for such a precision, at least 10⁶ counts must have been registered. There had better be only a single exponential decay and almost no background, or the result of the fit procedure will suffer from the errors of more parameters. Has anybody thought of running such experiments for a hundred times longer in order to reach a precision of 0.01%? Well, equipment stability is not easy to guarantee for such long accumulation times. Together with service, maintenance, and repairs, a 0.1% measurement that presently would require a few days of final operations would easily be stretched to years before it reaches 0.01% uncertainty. Moreover, a fair number of systematic error sources that may be kept under control at the 0.1% level might require much more investigative effort at the next step.
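The underlying arithmetic, for a clean single-exponential fit, is plain:

$$
\frac{\sigma_\tau}{\tau} \approx \frac{1}{\sqrt{N}}
\quad\Longrightarrow\quad
N \gtrsim 10^{6}\ \text{counts for } 0.1\%, \qquad
N \gtrsim 10^{8}\ \text{counts for } 0.01\%,
$$

hence the factor of one hundred in accumulation time mentioned above.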
In the early days of atomic physics with fast ion beams, several groups achieved fast-beam laser lifetime measurements on Ba II with a 1% uncertainty, but the theory was not sufficiently developed to be tested at this level. Therefore, the experimenters then looked for an atomic system that could be treated as well by theory, and addressed the ns–np resonance transitions in the alkalis (Li I, Na I, etc.). At the 1% level, several laser experiments with a fast atom beam or a gas cell obtained results compatible with theory, but some measurements claimed a higher precision and stated a disagreement with theory [22,23,24,25]. The theory was improved and remained near the earlier predictions—pointing to experimentation with the suspicion of unrecognized systematic errors. Quite a few years later, a new technique avoided the fast atom beams and their challenging geometry (mechanical displacements and optical alignments) altogether [26,27]. In these experiments, cold atoms in a magneto-optical trap were forming molecules, and the vibrational level structure of barely bound molecules at large interatomic distances was probed by laser spectroscopy. The interaction potential was modeled to match the level structure; in this theoretical approach, the oscillator strength mattered and was thus derived by a type of “reverse engineering”. The level lifetimes deduced from these results reached the 0.1% error margin and matched theory. Around the same time, an experimental group that had been in the early 1% cluster reached the smaller error bars, too, corroborating theory and confirming the systematic error problem of several of the early achievers [28]. Nowadays, it should be possible to reach even smaller uncertainties, but the simple problem of counting statistics points to a worsening ratio of achievement and cost in the photon-excitation experiments.
I would have been chided for not mentioning the obvious: a laser beam directed at a sample of atoms in a vapor cell can readily excite many of those. Pulse the laser light, repeat those pulses frequently, and use a fast photon detector. Such an arrangement must yield very high count rates, so that superb data statistics are obtained in a matter of seconds, orders of magnitude faster than my above estimate of the likely duration of accurate atomic lifetime measurements. Alas, decades ago, in an attempt to obtain precise and reliable lifetimes of the resonance level in a singly charged ion, the ion trapping community found out that with two ions in a trap, the one disturbs the other measurably. Consequently, the high-flux, dense-target atomic vapor experiment may promise excellent statistics, but not necessarily accurate results. Otherwise, this simple recipe would long since have filled all atomic databases with suitable atomic lifetime values.
5.2. Beam-Foil Experiments Extended to a Heavy-Ion Storage Ring
As mentioned above, fast-ion foil excitation has caused much disappointment because of the problems and uncertainties that result from the non-selective excitation process. What about turning the line of thought around, performing a fast-ion experiment that benefits from non-selective excitation and nevertheless achieves results as accurate as the ones that use a laser, and also on multiply charged ions? In a typical atomic system, the level lifetimes are short for the levels that can decay to the ground state, and they increase with the angular momentum quantum number l and with the principal quantum number n. Furthermore, s levels (l = 0) can only decay to levels with a higher value of l; such transitions have a low rate, and consequently, s levels have relatively long lifetimes. There is another group of levels that does not fit into such a simple scheme: displaced levels in the valence shell. Their lifetimes are usually significantly longer than those of the relatively low-lying levels. Before the displaced levels have much time to decay, they are replenished by much of the population of the fairly low-lying levels, boosting the displaced level population. Eventually, the population of the higher-lying levels rains down, and the cascade intensity fades. Qualitatively, this resembles the magenta-colored dataset in Figure 2 (trace e), replacing the individual growing-in cascade (from another displaced level) by the multitude of fast cascades, and the slow in-shell cascade by the multitude of cascades from high-n levels. This would still be a messy problem for analysis. However, if the level of interest had a very long lifetime, the primary decay would dominate by far. In the example in Figure 2, the three lifetime components relate as 1:2:10, with the middle one representing, for example, the 2s2p ¹P₁ first excited singlet level in a Be-like ion. In the same system, there also is a 2s2p ³P₁ level that decays to the same ground state, but because of the necessary spin change, the transition rate is much lower. In the C²⁺ ion, the lifetime of this level amounts to about 10 ms, while the bulk of the cascades have lifetimes in the range from nano- to microseconds. Thus, in the above relation, the middle value is increased by 4 to 5 orders of magnitude. Hence, compared to this long level lifetime, very many cascades quickly rain down to the 2s2p ³P₁ level (and to the other low-lying levels) and boost its population. The decay curve—after a brief distortion period—consequently looks like a single exponential with a very weak, almost negligible tail. In fact, the real data (see [14]) closely resemble trace c in Figure 1.
In classical beam-foil spectroscopy, the photon detection concentrates on the first micrometers to centimeters. In contrast, the decay length (lifetime τ times velocity v) of such long-lived ions amounts to many kilometers (and thus, exceeds the size of usual laboratories). It is sensible to turn the ion beam around and let it circulate in a heavy-ion storage ring (injecting ions for a certain time and then ending injection). This has the additional benefit of not having to move the detector: the ions return to the field of view after every turn (every about 50 m or some 5 µs), and one simply monitors the signal over time. There has to be a correction for the ion loss over time, but that is often small enough to permit lifetime uncertainties down to the 0.1% range. Quite a few atomic lifetimes have been measured this way, including some in the accuracy class just discussed. For examples and further details, see the review articles [29,30,31,32,33] and the work cited therein.
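A sketch of the ion-loss correction just mentioned (the symbols and the velocity value are mine, not from the cited work): the observed decay constant is the sum of the radiative rate and the storage-loss rate,

$$
\frac{1}{\tau_{\mathrm{obs}}} = \frac{1}{\tau_{\mathrm{level}}} + \frac{1}{\tau_{\mathrm{storage}}},
$$

where $\tau_{\mathrm{storage}}$ is determined separately from the decline of the stored ion current; under good vacuum, $\tau_{\mathrm{storage}} \gg \tau_{\mathrm{level}}$, and the correction (and its uncertainty) stays small. The decay length illustrates why the ring is needed in the first place: a 10 ms lifetime at, say, $v = 0.05\,c \approx 1.5\times10^{7}$ m/s corresponds to $\ell = v\tau \approx 150$ km of straight flight path.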
6. Discussion
Problems on the way to higher accuracy of atomic lifetime measurements on ions have been discussed elsewhere [34,35]. A review of lifetime measurements of astrophysical interest [36] has recently reminded me of the basic points of such efforts, of which types of levels can be measured well and which cannot, by the same technique. With the experience of hundreds of colleagues in the field for reference, the art of measuring an atomic lifetime precisely lies not in the art of designing a delicate experiment, but in selecting a suitable atomic level. Also, patience—on an atomic scale (for example, in the form of a heavy-ion storage ring)—can be helpful. In the key Lamb shift experiment by Lamb and Retherford [37], the hydrogen atomic beam was crossed and excited by an electron beam. The researchers wanted only H atoms that were excited in the metastable level—so they waited for the others to decay away (on the scale of milliseconds) while the atomic beam traveled along.
In whatever time interval a decay curve is to be recorded, inside that interval, the actual decay signals are statistically distributed. For this simple reason, the level lifetime measurement cannot be much improved by more precise clocks beyond, say, a modest clock accuracy. The error analysis of the heavy-ion storage ring technique is not complete yet. I expect that an improvement by another order of magnitude in accuracy can be achieved. However, there are only a few heavy-ion storage rings around, and they are oversubscribed by users with interests other than atomic lifetimes. Hence, access to such a facility is hard to obtain.
The accuracy of certain QED effects or the stability of some optical clocks struggles at a fractional accuracy approaching 10⁻¹⁸. Is it fair to compare the accuracies of results of different theories on different subjects? Seventy years ago, QED was new, its capabilities barely emerging, and its computability unclear. QED has evolved since to yield the most accurate results of any current theory—but the quantum mechanical treatment of many-body systems is an entirely different matter. Atomic level lifetimes depend mostly on those quantum mechanical entities, while any slight QED contributions to the energies are easily accommodated as a correction. Consequently, the extreme accuracy of QED computations does not matter (much) for atomic level lifetimes—the decisive theory is multi-electron quantum mechanics. However, there is a new chapter opening up, with trapped highly charged ions that can be preserved in an ion crystal, and thus, interrogated by laser with high accuracy. In this way, some wavelengths can be measured with a very high precision indeed. The laser probing of the transition involves selective excitation and might, over very many cycles, be used to measure the level mean life. With only a single ion in the trap, the signal analysis would certainly be challenged by the signal-to-noise problem, and would thus struggle to compete with the existing precise measurements at an electron beam ion trap. After all, this type of measurement is still related to spontaneous emission.
Atomic lifetime measurements involving a laser (on an atom or an ion beam or in an electron beam ion trap) or using a heavy-ion storage ring have reached a fractional accuracy of 10⁻³. Are wavelength measurements in highly charged ions so much more precise? A fractional accuracy of 10⁻⁴ (or 100 ppm) is rarely achieved. Laser spectroscopic or interferometric work on atoms or singly charged ions can go beyond (and needs to in the search for and the study of exoplanets). An exceptional accuracy has also been reached by classical spectroscopy on ions in an electron beam ion trap, claiming an uncertainty of less than one part per million [38], much helped by a stable and relatively cool light source and long data accumulation times. However, the computational atomic structure treatment of many-electron ions matches those high experimental accuracies only for ions with few electrons in the valence shell.
The lifetime experiments test atomic properties (such as the wave functions overall, or their radial parts) in ways no energy measurement can, but they are limited by basic counting statistics and finite resources. Actually, not all atomic structure computations do any better. Except for ions with very few electrons in the valence shell, the typical ab initio predictive power for levels and transition energies is on the order of 1%, even for most of the better codes. It irks me to see submitted or even published tables of term values or wavelengths with as many decimals as the computer program delivers. The scatter of the computed results for transition rates or oscillator strengths (in whatever gauge, Babushkin or Coulomb) is rarely as small as a few percent, indicating that only very few decimals are justified in the tabulations (which sometimes list many more). Apparently, theoretical computation and measurement of atomic mean lives are not so different in terms of accuracy.
There remains the problem of atomic lifetimes that defy measurement, because too many levels of rather similar lifetimes contribute to the decay curves. Surely, in these cases, theory is better off and can feed spectral models—but its results cannot directly be tested by experiments until selective excitation of individual levels can be effected—some day. Until then, such predictions have to be considered “untested”.