Sunt aliquid Manes: letum non omnis finit luridaque evictos effugit umbra rogos.
The Shades really exist: death does not end all things,
and the pale ghost, victorious, escapes the pyre.
ISAAC ASIMOV, whose essays are a principal inspiration for this space, had a saying about discovery that I think is right on. He said that the one phrase that most often presaged innovation and discovery was not "Eureka!" but "Hmm, that's funny…" Many's the discovery founded not on careful and painstaking data collection, but on an accidental observation.
The wonderful part of doing science is the playing around, with nothing particular in mind, and despite admonitions from CTOs and other bigwigs, such playing around often leads to serious work and results. Richard Feynman liked to tell the story of the period in his life when he was burned out, and decided one day to stop doing "serious physics" and just mess around. It seems that soon afterward, he was in the cafeteria, watching a plate wobble, and he wondered if the wobbling in the plate had anything to do with how fast it spun around. He concluded, after scribbling a little, that it did, and before long, he had worked out the beginnings of quantum electrodynamics (QED), for which he won the Nobel Prize in 1965. He shared it with two other physicists. Rather characteristically, he bought a beach house in Malibu with his prize money.
Nobody wants to pay for just playing around, especially when you put it that way, but it does put me in mind of a particular discovery in astronomy. It came about when one astronomer had carefully and systematically taken data on a large number of stars near the sun and earth. Thus far, nothing spectacular had come of all this data, but the astronomer in question was quite understandably proud of this body of work. Then another astronomer came along and asked about one little star…
In my last essay, I described what the sun is made of and how it fuses hydrogen atoms for energy. The power generated in this fashion is tremendous. Each second, the sun radiates enough energy to support the world's current energy needs for over a million years. Of course, most of that energy doesn't land on the earth, but instead falls on other solar system bodies or is radiated uselessly—at least as far as we are concerned—out into space. And the sun has been radiating at pretty much the same pace for about 5 billion years.
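That claim about the sun's output is easy to check with round figures. Here is a quick sketch in Python, assuming a solar luminosity of about 3.8 × 10^26 watts and a world energy consumption circa 2000 of roughly 4 × 10^20 joules per year; both are round assumed values, not measurements from this essay:

```python
# Rough check: how many years of world energy use does one second
# of solar output represent?  Both inputs are round assumed figures.
SOLAR_LUMINOSITY = 3.8e26      # watts, i.e., joules per second
WORLD_USE_PER_YEAR = 4e20      # joules per year, circa 2000

def years_of_energy_per_solar_second():
    """Years of world energy consumption supplied by one second of sunlight."""
    return SOLAR_LUMINOSITY / WORLD_USE_PER_YEAR

# comes out close to a million years, as claimed
```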
Just the same, the sun's hydrogen fuel supply is not infinite, but only unimaginably huge. It starts out with a certain amount of hydrogen, and there exists no known mechanism to replenish it. Therefore, the amount of hydrogen is not only finite, but constantly decreasing. And besides that, the sun will cease to fuse hydrogen long before the hydrogen is actually all gone, because some of it is in the outer layers of the sun, where it's too cool—under 15 million degrees—for the hydrogen nuclei to stick together. So it's completely natural to ask: What happens when the sun runs out of fuel?
At the turn of the century, it was gradually becoming possible to determine the distances to the stars. Before this time, many people thought that the stars were all roughly the same brightness, and that only distance made some appear dimmer than others. For example, it was thought by some that Sirius appeared as the brightest star in the night sky, at magnitude –1.4, only because it was so close. But now that a few stellar distances were being determined, it became clear that that wasn't the case. Sirius is about 8.6 light-years away, and is indeed one of the closer stars. But Barnard's Star is only magnitude 9.5, almost 25,000 times dimmer than Sirius, and it happens to be even closer—only about 5.9 light-years away. So gradually we realized that there was a tremendous range of intrinsic stellar brightnesses.
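The magnitude scale makes that comparison mechanical: each 5 magnitudes is a factor of exactly 100 in brightness. A small Python sketch, using the magnitudes quoted above:

```python
def brightness_ratio(m_dim, m_bright):
    """Apparent-brightness ratio implied by two magnitudes.
    Each 5 magnitudes is a factor of exactly 100, so the
    ratio is 10 raised to 0.4 times the magnitude difference."""
    return 10 ** (0.4 * (m_dim - m_bright))

# Barnard's Star at magnitude 9.5 versus Sirius at -1.4:
ratio = brightness_ratio(9.5, -1.4)
```

The 10.9-magnitude gap works out to a factor of roughly 23,000, which is the "almost 25,000 times dimmer" quoted above.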
Knowing that stars have different brightnesses, you might want to investigate whether or not there are certain patterns to which stars are bright and which stars are dim. By 1910, for example, it seemed generally clear that intrinsically dim stars were always cool and red. Bright stars might be hot and blue on one hand, or cool and red on the other, but no one had yet encountered any hot dim stars. If you saw a dim star, some felt, you could be sure it was cool without further debate.
To be sure, the votes weren't all in yet. For those working on the problem, like the American astronomer Henry Norris Russell (1877–1957), it was imperative to get more accurate information on the color and temperature of stars—their spectral class, in other words. Spectral class was a means that astronomers used to classify the temperature and color of stars. The hottest and bluest were the rare class O stars, then B, A, F, G, K, and finally the coolest and reddest class M stars. The sun itself is a middle-of-the-pack, class G star.
Russell needed accurate spectral classes not just for any stars, but nearby stars in particular, because we knew best the distances to these stars and therefore their real, honest-to-goodness brightnesses. It did no good for these purposes to have spectra for distant stars whose distances and intrinsic brightnesses were unknown. In any case, Russell mentioned this need in a meeting with fellow American astronomer Edward Pickering (1846–1919).
Pickering replied that he in fact had much of that data available right in his office, and as a demonstration, he asked Russell to name a star, any star at all. Russell suggested "the faint companion of omicron2 Eridani," a star in the constellation of Eridanus the River which now goes by the name omicron2 Eridani B.
Pickering was quite confident that they would have the star's spectral type, and called down to the office. The assistant there, Williamina Fleming (1857–1911), searched through the records for about half an hour and called back up, answering that the star in question was of spectral class A.
Now this was a thunderbolt! Spectral class A is reserved for white-hot stars—these stars are hotter than perhaps all but 5 percent of stars. They are typically about 9,000 degrees, whereas the sun is a relatively breezy 6,000 degrees. What's more, the intensity of a star's light, per unit surface area, varies as the fourth power of its temperature. If this star were even as large as the sun, it should have been blazingly bright. In order for its intense light to be seen by us as such a dim point in the sky, that light had to be emitted over a spectacularly tiny area. The only way for that to happen was for omicron2 Eridani B to be very small indeed—possibly as small as the earth! At that point, only Russell, Pickering, and Fleming knew of the existence of what would come to be known as white dwarfs.
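The size argument can be made quantitative with the Stefan-Boltzmann law, which relates luminosity, radius, and temperature. The sketch below assumes, purely for illustration, a luminosity of about 1/100 that of the sun for the faint companion; that input is a hypothetical round number, not a measured value:

```python
def radius_in_suns(lum_suns, temp_kelvin, t_sun=5800.0):
    """Radius implied by the Stefan-Boltzmann law, L = 4*pi*R^2*sigma*T^4,
    expressed in solar radii: R/Rsun = sqrt(L/Lsun) / (T/Tsun)^2."""
    return lum_suns ** 0.5 / (temp_kelvin / t_sun) ** 2

# Illustrative inputs: a class-A surface of about 9,000 degrees
# shining with only about 1/100 the sun's light (assumed figure).
r = radius_in_suns(0.01, 9000.0)   # a few percent of a solar radius
r_earths = r * 109                 # the sun is about 109 earth radii across
```

The hot-but-dim combination forces a radius of just a few earth radii: a white-hot star of roughly planetary size.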
They're not all white, as it turns out; there are "white dwarfs" that are blue, yellow, or even red. For better or worse, though, they are named after their first example and the name is here to stay.
This find inspired Russell to plot stars carefully on a graph. On one axis was spectral class, and on the other was intrinsic brightness or absolute magnitude. As expected, there was a main trend of stars from hot and bright to cool and dim, and a secondary branch of some cool and bright stars.
Russell first published the graph shown in Figure 1 in 1913, and such graphs became increasingly popular as a way to represent stellar populations. They were known consequently as Russell diagrams until it was found that the Danish astronomer Ejnar Hertzsprung (1873–1967) had published a paper in an obscure journal in 1911 with almost exactly the same kind of graph. Nowadays both are recognized as inventors of that kind of diagram, but because Hertzsprung published it first—however obscure the journal—it's his name that comes first and we call them Hertzsprung-Russell diagrams, or H-R diagrams for short. (On the other hand, it's likely because of Russell rather than Hertzsprung that we use the diagrams as much as we do.)
In this graph, blue and hot is toward the left and red and cool toward the right; bright stars are at the top and dim stars at the bottom; this is the normal way that H-R diagrams are oriented. Broadly speaking, you can see the general trend: The blue stars are almost uniformly at the top end of the graph, whereas there are two branches toward the red end—one getting steadily brighter, and one getting steadily dimmer. And then, that anomalous white dwarf, omicron2 Eridani B.
Russell's graph is composed almost entirely of stars within about 30 light-years of the earth. What would happen if we plotted the same graph for a completely different community of stars? Figure 2 shows the H-R diagram for the Pleiades cluster in the constellation of Taurus the Bull.
You can see that all of the stars are along the diagonal from upper left (hot and bright) to the lower right (cool and dim). There aren't any stars in the upper right corner—that is, the bright cool stars. Nor are there apparently any white dwarfs.
Now, Figure 3 shows the H-R diagram for M3, a globular cluster in the constellation of Canes Venatici the Hunting Dogs.
In this diagram, only the bottom half of the diagonal remains—the cool and dim half. Instead of the hot bright stars we saw in Figure 2, we see primarily cool bright stars. There is also a "horizontal branch" of stars that slides toward left center, but never mind that for now.
What are we to make of this? One possibility is that these three different stellar communities are just intrinsically different. Although technically possible, many astronomers are resistant to such suggestions. They like to unify. Explaining the differences with one model is preferable to explaining them with three models, one for each stellar community. A singular model is also suggested by the fact that the lower diagonal appears in the same place in all three diagrams—only the top half is different.
This observation led American astronomer Allan Sandage (born 1926), among others, to speculate that all three populations started out the same, but are of different ages, and that the stars in each community move across the H-R diagram as they and the clusters they inhabit get older. It seemed as though all communities started out much like the Pleiades, with all of the stars along the main diagonal, after which the bright hot stars moved first, to the upper right, and as the cluster got older and older, more and more stars would peel off from the diagonal. This was substantiated by other lines of reasoning that suggested that the open clusters like the Pleiades were young, that globular clusters like M3 were old, and that our own neighborhood was a mixture of all different ages.
The case looked strong, but it was all supposition based on circumstantial evidence. What was needed was an independent line of evidence for stars moving across the graph.
Unfortunately, we're hampered in our search by our short lifetimes. A typical human lifetime of 75 years may seem long by ordinary measures, but astronomical measures are anything but ordinary. Seventy-five years out of 5 billion is nothing. The German astrophysicist Rudolf Kippenhahn (born 1926) has likened it to a fruit fly trying to determine the life cycle of humans by observing them over its one-day lifespan.
One clue came from computer simulations of main sequence stars. As it turns out, they predict that stars like the sun aren't precisely constant throughout their stay on the main sequence. Instead, they get slowly but steadily brighter. According to these simulations, the sun was only about 70 percent as bright 4.5 billion years ago, when the earth was created, as it is today. And 4.5 billion years from now, the sun may be about twice as bright as it is today.
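One simple fit to this simulated brightening, usually attributed to Gough, reproduces the 70 percent figure. It is a convenient approximation to the simulations, not the simulations themselves:

```python
def relative_luminosity(t_gyr, t_now=4.5):
    """Sun's main-sequence luminosity relative to today, after a
    commonly used fit (Gough 1981):
        L(t)/L_now = 1 / (1 + 0.4 * (1 - t/t_now)),
    where t is measured in billions of years since the sun formed.
    A rough model valid only while the sun stays on the main sequence."""
    return 1.0 / (1.0 + 0.4 * (1.0 - t_gyr / t_now))

L_at_birth = relative_luminosity(0.0)   # about 0.71: the ~70 percent figure
L_today = relative_luminosity(4.5)      # exactly 1 by construction
```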
This creates a lovely little puzzle. A sun 30 percent dimmer may not sound like much, but it would be a climatological catastrophe for all life on earth, for the sun gives us not only light but heat as well, and a 30 percent drop in the heat output of the sun would pretty well put everything in a deep freeze. Any life form of which we're aware needs liquid water to survive, and there would be no chance at all for water to remain liquid given a sun that is only 70 percent or even 80 percent as bright as it is today. This problem has been called the "faint young sun paradox."
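The depth of that freeze is easy to estimate with a radiative-balance calculation. The sketch below computes the earth's blackbody equilibrium temperature, ignoring the greenhouse effect entirely and assuming a round albedo of 0.3 for the fraction of sunlight the earth reflects:

```python
import math

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.83e26          # present solar luminosity, W
AU = 1.496e11            # earth-sun distance, m
ALBEDO = 0.3             # fraction of sunlight reflected (assumed round value)

def equilibrium_temp(lum_fraction):
    """Earth's blackbody equilibrium temperature, in kelvins, with no
    greenhouse effect, for a sun at the given fraction of its present
    luminosity.  Balances absorbed sunlight against emitted heat."""
    absorbed = lum_fraction * L_SUN * (1.0 - ALBEDO)
    return (absorbed / (16.0 * math.pi * SIGMA * AU**2)) ** 0.25

t_now = equilibrium_temp(1.0)    # about 255 K
t_then = equilibrium_temp(0.7)   # about 233 K under the young sun
```

Even today's figure comes out around 255 K, below freezing; the greenhouse effect supplies the rest of our actual warmth. Because temperature scales only as the fourth root of luminosity, the young sun shaves off another 20-odd degrees, but that is more than enough to deepen the freeze.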
One way out of this is to make use of greenhouse gases. The earth has greenhouse gases today: gases like carbon dioxide and water vapor serve to keep the average surface temperature warmer than it would be without them. You can have too much of a good thing, though; the reason Venus is so hot and inhospitable is that it has undergone a runaway greenhouse effect. Venus's atmosphere contains thousands of times more carbon dioxide than the earth's, and Venus is closer to the sun to boot. Some scientists fear that something similar will someday happen to the earth.
But if greenhouse gases can make an already warm planet like Venus lead-melting hot, then they can also make a cold planet warm. It would have required an atmosphere several hundred times richer in carbon dioxide than today's to lift the surface temperature of the early earth to the melting point of water, but nothing prevents that. The earth may have had an insulating blanket of carbon dioxide to keep it warm enough to sustain liquid water and hence life.
The situation in the future may not be so rosy. The young cool sun will someday become an old hot sun, and if the sun continues warming at its current pace, the surface temperature of the earth will exceed the boiling temperature of water in at most a couple of billion years. Ever since the sun was born, the earth has apparently shed carbon dioxide fast enough to keep the temperature hospitable enough for life, but at some point there will be no more blanket to remove—the atmosphere will be completely denuded of greenhouse gases—and still the sun will continue to warm up. We know of no "reverse greenhouse gases" that could lower the temperature suitably to sustain life as we know it under those conditions. Like it or not, within the next couple of billion years, we will have to get off the earth, or perish (presuming that we haven't done that to ourselves before then).
As far as the sun is concerned, however, this gradual warming will be a minor effect. We can follow the evolution of the sun on an H-R diagram like the one in Figure 4. (Adapted from Stars, by James Kaler.)
The sun starts out by condensing from a great cloud of gas. As it condenses, the density at the center becomes greater and greater, and hotter too, as a result. At some point, the temperature and density become sufficiently high to start the hydrogen fusion, and the sun is born.
The heat generated first by condensation and then by fusion causes the surrounding gases to expand, and it is this expansion that keeps further condensation in check. Gravity is tireless, however, while the fuel supply is not, and eventually the fuel will run out. For the time being, though, the sun burns fairly steadily; in fact, the hydrogen-fusing region has a tendency to expand outward a little, and consequently the sun actually gets brighter, as previously mentioned, over a period of about 10 billion years.
After 10 billion years, though, the central hydrogen is just about used up. There is no longer any hydrogen left in the core to fuse; it's essentially all helium down there. As a result, the condensation that was put on hold for those 10 billion years begins once more. The helium core contracts. Meanwhile, there is still some hydrogen in the middle layers of the sun, and this now begins to fuse in an outward-growing shell. For a time, this fusion keeps the sun on a relatively even keel, but only for about a billion years.
At this point, the hydrogen-fusing shell is working at a breakneck pace. The extra energy output causes the outermost layers of the sun to expand dramatically, accelerating the sun into its full red giant phase. This same shell heats not only the layers above it but the core below it as well. It's possible for the helium in the core to fuse, but helium nuclei are even more difficult to stick together than hydrogen nuclei. Whereas hydrogen fusion is possible at "only" 15 million degrees, the core has to reach a temperature of nearly 100 million degrees in order to fuse the helium.
By that time, the sun may have grown to over 100 times its current size, nearly all the way out to the earth. The outermost layers are now so distended that they cool down to a dull red, perhaps about 3,500 degrees, but there is so much light-emitting area on the bloated sun that overall it emits more than 1,000 times as much light as it did before swelling.
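You can check a figure like that with the same fourth-power law from before, this time scaling luminosity by radius and temperature together. The round values of 100 solar radii, 3,500 degrees for the giant, and 6,000 degrees for the present sun are taken from above; the exact multiplier is sensitive to the temperatures assumed:

```python
def luminosity_in_suns(radius_suns, temp_kelvin, t_sun=6000.0):
    """Stefan-Boltzmann scaling: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4.
    Defaults to the essay's round 6,000-degree figure for the sun."""
    return radius_suns ** 2 * (temp_kelvin / t_sun) ** 4

# A bloated red giant: 100 solar radii at a dull-red 3,500 degrees.
L_giant = luminosity_in_suns(100.0, 3500.0)   # on the order of a thousand suns
```

The cooling to 3,500 degrees cuts the output per unit area by almost a factor of ten, but the ten-thousand-fold increase in surface area overwhelms it.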
By 1955, computer simulations had traced the evolution of the sun to this point, but here they simply ran aground and failed utterly. Matters remained at an impasse until the American astronomer Louis Henyey (1910–1970) developed a completely new method for simulating the interiors of stars in 1959. At first the new method was full of frightful complexities and not even Henyey himself was able to get very useful results out of it, but by 1961 the model was considerably simplified, and the German-born astronomer Martin Schwarzschild (1912–1997) asked him to give a talk on it to the International Astronomical Union at the University of California at Berkeley, where Henyey taught. Kippenhahn was also there at Schwarzschild's behest, and as he relates, he took careful notes on the method, although he didn't fully understand how it worked.
Later, he would share these notes with Schwarzschild, who was able to reconstruct Henyey's revised method from the notes Kippenhahn had taken, and who used it to simulate an old-aged star of the sun's size, past the point of hydrogen depletion. At this point, Kippenhahn began to understand the method, and took it back with him to Munich, where he in turn used it to simulate the advanced stages of a star much more massive than the sun. For these contributions and others, Henyey was honored with a crater on the far side of the moon after he died suddenly and tragically of a cerebral hemorrhage in February 1970.
What did Schwarzschild find out about the sun, some 6 billion years from now? As it happens, things get very interesting after the hydrogen runs out. Astronomers knew that the core was now filled with helium, the result of fusing hydrogen, and they also knew that the core was now hot enough for the helium itself to begin fusing into carbon and oxygen. But something else was unexpected.
When the helium fusion finally does come, it comes not with a whimper but with a bang called the helium flash. The helium flash is almost welcome, for it stabilizes the sun. The helium fuses quietly and efficiently in the dense core. As a result, the sun actually shrinks back down, heating up as it does. With the increased temperature, the sun grows brighter per unit area, but because it's so much smaller, its overall brightness is lower than it was before the flash.
However energetic the helium fusion, though, there is only so much helium to fuse and before long, on the cosmological scale, the helium too is depleted. For a second time, fusion shuts down in the core and it starts contracting again, heating up even further. Its outer layers begin expanding once more, except now there are two layers of fusion powering the expansion: the hydrogen-fusing shell that was there before, and directly below it, a helium-fusing shell touched off by the hot core at the center.
These two shells grow outward in a kind of tag-team fusion. The helium fuses into carbon and oxygen, supplying heat that touches off the next round of hydrogen fusion, which in turn supplies the helium for the next round of helium fusion. At the end, the rounds come so fast that the sun will seem almost to pulsate. The famous variable Mira, otherwise known as omicron Ceti in the constellation of Cetus the Whale, is an example of a star in this state.
The sun in this second expansion phase grows even larger than the first time, and it will likely expand beyond the current orbit of the earth. It may not actually swallow the earth, however, and the reason for that is another effect of this expansion: the "surface gravity" of the sun at its maximum size will be so low that the gases that were previously bound are now free to escape. The sun may slough off so much mass in this manner that the earth's orbit will expand just enough to stay outside the sun.
Eventually, the sun may slough off all of its outer layers, leaving just the central core, rich in carbon and oxygen. This core will likely be disproportionately bright for its size, since it was once providing most of the energy in the dying star, but it's hotter and much smaller. It may preserve half its original mass in a volume not much larger than an ordinary planet.
If this is a common fate, then there should be many of these naked cores and we should be able to detect them. None of them are visible to the unaided eye, as it happens, but a modest telescope is enough to see a whole host of them; one of these is omicron2 Eridani B. White dwarfs are simply the naked cores of former red giants. The layers blown off by the dying star do not immediately disappear, but are often illuminated to a ghostly red or green or blue by the energetic radiation of the dwarf. We know these as planetary nebulae, and they are among the loveliest of objects in the night sky. Figure 5, below, shows a picture of one of the most famous, the Cat's Eye Nebula, in Draco the Dragon.
Such is the destiny of stars like the sun. Stars much more massive follow a different path, as first determined by Kippenhahn. Their evolution can be followed on an H-R diagram, much like the sun's, but as shown in Figure 6. (Adapted from Stars, by James Kaler.)
A star, say, 10 times as massive as the sun will use up its hydrogen supply not in 10 billion years as the sun does, but in a mere 50 million years. It begins its demise much as the sun does, but after its helium flash, instead of rising one last time to evolve into a white dwarf, it goes through a series of pulsations which take it back and forth, clear across the H-R diagram. The number of times it goes back and forth depends on the mass; the more massive the star, the more times it traverses the graph, although the most massive stars may not go all the way across each time. Inside the star, helium is fusing into carbon and oxygen, just as in the sun, but unlike the sun, which will never get hot enough to fuse the carbon into still heavier elements, the more massive stars are always compressed enough to heat up and fuse whatever is left in the center.
At the end, all that is left in the core is iron. Iron does not fuse readily, because fusing it doesn't produce energy, but rather absorbs it. In the final seconds, there is nothing left to prevent the final compression of the core, which happens so violently that there is a rebound effect and the great majority of the star, perhaps 80 or 90 percent, is thrown out in a violent explosion called a supernova. For a brief period, this explosion may outshine an entire galaxy.
What remains of the 10 or 20 percent after a supernova explosion depends on the original mass and makeup of the star. It is never a white dwarf, whose electrons are still separate from their protons and neutrons; the remaining core is more than 1.4 solar masses, and nothing that large can be sustained as a white dwarf. Instead, the remains are so compressed that all that is left is a massive ball of neutrons, perhaps just 10 kilometers across, so it is called a neutron star. These neutron stars often rotate vigorously, emitting light like a lighthouse as they go. If the light beam happens to intercept earth, what we see is a star that blinks on and off very rapidly, and so these are called pulsars. The first one was discovered by the Irish-born astronomer Jocelyn Bell (born 1943) in 1967; when another pulsar was found soon afterward in the Crab Nebula in Taurus, at the site of a known supernova, it provided dramatic confirmation of stellar theory. (She did not, by the way, win the Nobel Prize for her discovery. That honor went to her adviser Anthony Hewish, who certainly wasn't the first to detect the pulsing. I've never understood the slight, but the Nobel Prize committee isn't under pressure to explain their choices.) [See the Note at the end of my essay, "The Grand Illusion."]
The gravitational force on the surface of these neutron stars is intense; the escape velocity is typically a significant fraction of the speed of light. The larger the original star, the more massive the neutron star, and the greater the force of gravity at its surface. In fact, the neutron star cannot be more massive than about 3 solar masses before another astonishing transformation occurs. The surface gravity becomes so strong that the escape velocity actually exceeds the speed of light. Thus nothing, not even light, can escape from this object, which is termed a black hole. Their very nature makes them difficult to detect, since by definition nothing can be emitted by them. (Actually, this is not strictly true. The English physicist Stephen Hawking (born 1942) has demonstrated that black holes do emit tiny bits of radiation. The amount of radiation that Hawking predicts, however, is so small as to be negligible for any black hole that arose from a supernova explosion.) Although black holes were first conceived of over 200 years ago, and the theory of black holes was first formalized early in the 20th century, only in the last 25 years or so have we seen objects which, it appears, can be nothing other than black holes, but now they seem to be everywhere. They are apparently common in the centers of galaxies, and there seems to be one in the middle of our very own Milky Way Galaxy, weighing in at about 3 million solar masses. Unlike white dwarfs and neutron stars, there is no upper limit to the size of black holes.
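Both thresholds fall out of elementary formulas: the Newtonian escape velocity, and the radius at which that velocity formally reaches the speed of light. The sketch below uses assumed round figures for a neutron star of 1.4 solar masses and a radius of order 10 kilometers:

```python
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8              # speed of light, m/s
M_SUN = 1.989e30         # solar mass, kg

def escape_velocity(mass_kg, radius_m):
    """Newtonian escape velocity, v = sqrt(2GM/R), in m/s."""
    return (2.0 * G * mass_kg / radius_m) ** 0.5

def schwarzschild_radius(mass_kg):
    """Radius at which the escape velocity formally reaches c: 2GM/c^2."""
    return 2.0 * G * mass_kg / C**2

# A 1.4-solar-mass neutron star of order 10 km in radius (round figures):
v = escape_velocity(1.4 * M_SUN, 1.0e4)   # a sizable fraction of c
# Pack about 3 solar masses inside roughly 9 km, and nothing escapes at all:
r_s = schwarzschild_radius(3.0 * M_SUN)
```

The escape velocity from such a neutron star comes out above half the speed of light, and the 3-solar-mass Schwarzschild radius comes out at around 9 kilometers, which is why a slightly heavier remnant has nowhere to go but black hole.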
And what of a star much lighter than the sun? These are, by some accounts, the dullest stars in the universe, but probably also the most common. A star of, say, one-tenth the mass of the sun will spend about 20 trillion years on the main sequence. Eventually, it will suffer a fate similar to the sun's, although on a much smaller scale. However, no red giants have yet formed from stars so small. Since the universe itself is "only" 15 billion years old, none of these small stars have been around long enough to deplete even one tenth of one percent of their hydrogen supply. They will be around long after the sun has become a white dwarf, cooled, and faded into obscurity.
Copyright (c) 2000 Brian Tung