Featured

Ramblings I

I had thought it was time to write a summary of what this blog was all about and, after the summary, write no more. However, after many tries, I could never even make a start and, in addition, I lost all desire to write. Today I had a better idea: just ramble and perhaps find a Zen place. Of course, there are many Zen spaces, none well defined, all possibly controversial, since the concept of a Zen place has no referent and is, from a conventional, rational Western point of view, meaningless nonsense.

However, the whole point of this blog is that there IS such a space; and from this place all the various Western sciences, arts, and what-have-yous can be stitched into a meaningful fabric in which each piece gives its body to the other bodies, creating in our minds, at least, a tapestry which goes beyond any of its pieces. And beyond this tapestry there is the pure religion which tells one that one’s life is not meaningless; that our lives, though arising out of nothingness and snuffed out into pure oblivion, have some kind of eternal significance beyond verbal expression.

Actually, Zen is a poor name for this tapestry, this pursuit, this vision, and this religious understanding, but my imagination is too limited to come up with a better name. One problem, as I see it, is that the term Zen is irredeemably Eastern and irredeemably stitched into Eastern culture, at least in the minds of Westerners unfamiliar with it. One seemingly needs an Eastern mind to sense what it might be. Luckily for me, growing up in Hawaii, I somehow came to admire and love an Eastern outlook, Japanese and Chinese, tempered by an underlying Hawaiian vision, and could “aha” an inkling of what this religion, beyond all religions and cultures, might be. However, any name for it is misleading, a word for what cannot be named. In attempting to link “IT” to Western culture, I don’t imply that Western culture is in any sense special or beyond any other world culture. I simply feel that there is a big lack in Western culture of a religion that can complete, link and top off the various strands of the culture. Such a religion (identical to Zen) can, in my opinion, fill the bill in a way that traditional Western religions cannot.

As you, my ideal reader coming from a Western culture and upbringing, journey to an understanding of this religion, I have a couple of thoughts at the moment that could be useful. One concerns the idea of “enlightenment”. In earlier posts I’ve suggested that I prefer the Soto idea of gradual enlightenment rather than the Rinzai ideal of sudden enlightenment. However, I would go a little further, regardless of whether you’re a Rinzai guy or a Soto woman (or vice versa), and use an idea and an image from analytic geometry, a very Western form of thought. This is the idea of an asymptote. Consider a hyperbola, one of the conic sections. This two-dimensional figure has four arms, each one of which, as it travels out forever from the center of the figure, approaches closer and closer to a straight line called its asymptote. In my analogy consider enlightenment the asymptote and our journey towards enlightenment a curve which comes closer and closer to the asymptote without ever touching it. At some point one gets so close to the enlightenment asymptote that one more and more senses and takes on its properties without ever touching it. If one takes this idea as an axiom, one never needs to consider whether or not one is enlightened. You’re not and neither am I nor anyone else. Just keep up your meditation and striving, coming nearer and nearer to the ideal.
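For anyone who wants the Western image spelled out, here is the textbook form of a hyperbola and its asymptotes (my own notation, nothing essential to the analogy):

$$\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1, \qquad y = \pm\frac{b}{a}x.$$

On the upper-right arm, $y = \frac{b}{a}\sqrt{x^2 - a^2}$, so the vertical gap between the curve and its asymptote is

$$\frac{b}{a}\left(x - \sqrt{x^2 - a^2}\right) = \frac{ab}{x + \sqrt{x^2 - a^2}} \approx \frac{ab}{2x},$$

which shrinks toward zero as the arm travels outward yet never quite vanishes: exactly the property the analogy leans on.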

Another thought concerns the concept of “ego”, “self”, or “me”. Does such exist? Are we “attached” to our ego or self in a Buddhist sense? In our journey, can we lose this attachment? Does it help to realize that one’s ego or self doesn’t really exist? “I” don’t think so. The self may be a fiction; but our attachment is very real, and it’s the attachment, the clinging that is the problem. Consider also that “I”, “you”, “self”, “we” are simply words in a language and we’re dealing with realities which are inexpressible in that language. As we practice over the years we may hope and sense that the attachment is weakening. However, it actually can give one a sense of “peace” to realize that the “attachment” will likely never go away and that one can drop worrying about it. The practice is important, but don’t expect it to do more than, at times, weaken our attachment. Another thought is that these “self” words, though meaningless, are actually useful. When I encounter you, I hope I have the ability to look at you and “see” who you are; see in a deeper sense beyond your persona, your background, education, and the objective facts of your life. I want to see this fictional “you” and have “you” feel that you have been seen and understood.

Ramblings II

In the last post I inadvertently used the epithet “IT” for the name of an imagined new slant on a Zen grounded in Western Culture; or, perhaps not just Western Culture but the complex, enriched, modern World Culture which has itself grown out of Western Culture. This new embodiment needs a name. “IT” won’t do, as it just doesn’t have the proper ring, cachet or heft of existing names, Dhyana, Ch’an, Zen. Dhyana, beginning as simply a name for meditation, can now be taken as the name of the “almost Zen” of Mahayana Buddhism as it grew under the tutelage of Nagarjuna and his kin. Ch’an is the Chinese name while Zen is the Japanese pronunciation of Ch’an which became the label for the Japanese embodiment. Finding a good label is tough; and I don’t feel that I have the talent for it. Consider the physicist Murray Gell-Mann, who did have that talent. Gell-Mann came up with the name quark as a label for simple particles within the various nucleons, mesons, and resonances of the strong interaction. Independently of Gell-Mann, the physicist George Zweig had had the same daring idea, that there were actual “real material” particles belonging to the fundamental triplet representation of the SU3 group. Zweig named his particles “aces”, while Gell-Mann preferred “kworks”. Fooling around in Joyce’s Finnegans Wake, Gell-Mann found the phrase “three quarks for Muster Mark” on page 383. Thinking that one of Joyce’s meanings might be a bar order, “three quarts for Mister Mark”, Gell-Mann proposed “quark”, pronounced “kwork”, as a tortured rendition of “quart”. The page number 383 is significant since the next higher representation of SU3 has 8 members which are even more “real” than the quarks because they can exist on the “outside” as hadrons and make tracks in a bubble chamber. The “eight-fold way” has been taken over from Buddhism for the SU3 interpretation in which there are 8 particles which can be “seen”. An older, discarded 3-fold Sakata model used the hadrons proton, neutron and Lambda as a fundamental triplet.

Although the likelihood that I have Gell-Mann’s talent for labelling is vanishingly small, I really must give it a try; so, I will propose the Hawaiian pidgin Da Kine. This expression works rather well because it is a corruption of the English “the kind”, but in Pidgin the meaning has changed in that da kine’s reference is deliberately vague or ambiguous. Often the phrase is used when one does not feel like being specific. When I now use “da kine”, maybe it refers to this new kind of Western religiosity, or maybe it’s merely a meaningless redundancy. It could refer to anything. An example of the flavor I’m talking about occurs in William Finnegan’s wonderful, Pulitzer Prize-winning memoir, Barbarian Days. The author and his buddy, Bryan, in a vain attempt to keep secret their discovery of a world class surfing spot, Tavarua, in Fiji, never say its name, but refer to it as “da kine”. A problem with da kine is, of course, that it is a very common expression in Hawaii and, in fact, there is a company with the name Da Kine. One faces a possible trademark infringement complaint; however, if “Windsurfer” and “Kleenex” couldn’t defend their trademarks, I doubt that “Da Kine” can either. In any case one might well end up with an even better name than Da Kine.

It is pretty clear to me that we do need a new name. Consider the existing names Dhyana, Ch’an, and Zen. Dhyana has a lofty, abstract, almost philosophical connotation of the jewel in the lotus, while in Chinese culture there is the wonderful idea of taking serious things lightly and light things seriously. This particular sensibility, it seems to me, is missing in Japanese culture; not that there isn’t a wonderful sense of humor in certain Japanese productions. I remember that in the late fall of 1974, when my first marriage had dissolved, I was quite distraught and visited my parents in Honolulu. They lived on Kulamanu Place right around the corner from where William Finnegan lived on his first Hawaiian visit, not that that has any relevance. What does have relevance is that Hawaiian television in those days featured Japanese science fiction cartoons. These were deliberately and deliciously corny, with a wonderful sense of humor. I still remember one in which the villain was named “Blue Electric Eel”. He could take on a human form and when he was in a crowd about to perpetrate some villainy, the scene would move down and show his blue suede shoes, just before all hell broke loose with, if I remember, many sparks and short circuits. Within the modern world culture that has grown out of the West there are so many strands that I won’t pick out any particular one. The whole culture is da kine. What is clear is that this new world culture needs a new word for its penumbra and its specifics, and I’m proposing da kine.

Another, more concrete theme of da kine sensibility is that it originated as an outgrowth of traditional Buddhism. I wonder what sort of novel insight Buddhist thought could confer on Western history, culture, philosophy and religion, to say nothing about contemporary affairs. Going beyond any historical distinctions in Buddhism such as its split into Theravada and Mahayana forms, I think that the idea of “attachment”, and a goal of its relaxation or lessening, is curiously underemphasized in our culture. I remember an incident that occurred some years ago when for a time I attended a “Bohm dialogue” group, dedicated to the idea of a selfless descent into creating and following interesting threads of conversation without an agenda, pretty much identical to an eighteenth-century French salon, but perhaps with a higher expectation of generating deep new insights. During a discussion of a topic, now forgotten, one person brought up the idea of “gnosis” which he considered to be knowledge and understanding of a religious doctrine, in a manner so absolute as to be impervious to refutation. What struck me at the time was that this constituted a grasping, so rock solid and with such a diamond hardness that it might well be called adamantine. What struck me even more was that this person implied that he admired this gnosis and that its “knowledge” should be taken seriously, in part, simply because it was held with such mystic conviction. At the time I was quite shocked because I had been immersed in Buddhist thought for some twenty to thirty years and didn’t realize that “grasping” could be taken in a way other than “undesirable”.

A day or so ago, to learn more, I looked up the word “gnosis” in Wikipedia and read about its considerable history, beginning in ancient Greece as simply a word translated as “knowledge”. Then in later Hellenistic times there were sects which were called gnostic, and in still later times it led in various modern European languages to words for two kinds of knowledge. It would seem that gnosis is a mystic kind of insight into belief, but in no part of this article was there a hint of the idea of “grasping”, a concept seemingly foreign to Western thought when applied to doctrines or ideas.

Of course, the modern scientific revolution, starting somewhere around the early seventeenth century, did implicitly bring in the idea of letting go or “ungrasping”. Scientists are supposed to have convictions, but be willing to change them when evidence rules against them. However, anyone who is at all knowledgeable about scientific history knows that this ideal is far from being followed by scientists in practice. The physicist Planck, who elucidated the first quantum mechanical phenomenon, which I’ve discussed in earlier posts, was not at all happy with what he discovered and only slowly accepted the idea that he had truly found something revolutionary and new. However, Planck, I think it was, reflecting on the continued opposition of older scientists to his and later discoveries, said something to the effect that no amount of experimental evidence would ever cause these physicists to change their opposition; but fortunately, they would eventually die off, leaving the new field of quantum theory to be developed by scientists who would pay attention to the experiments which definitively demonstrated the reality of quantum phenomena.

The point is that science does have a way of eventually dealing with adamantine grasping, whether through grudging acceptance or the dying off of stubborn opposing scientists. In recent times, as I’ve discussed before, Karl Popper’s idea of “refutation” has been enormously clarifying for the philosophy of science. “Refutation” directly implies the necessity for ungrasping as a theory is disproved. Popper’s idea has furthermore diffused into areas outside of science as a touchstone of rational thought. However, the idea of refutation becomes muddy as one moves away from science into areas where refutation becomes more and more difficult or impossible. One then needs to grapple with the whole idea of “grasping”, especially with what I’ve called adamantine grasping or the grasping implied by “gnosis”, impervious to any change of mind regardless of evidence or argument. For it seems clear that a tendency to grasp beliefs is ingrained in us humans, and likely has some positive survival value in many situations. However, increasingly there does seem to be a need to cope with its negative consequences; a need underappreciated in Western thought.

Weird stuff: Astronomy, cosmology and Zen

In this post I tell another story of how science, in going beyond what we can imagine, stretches our imagination to encompass new possibilities of the “real”. There are several important themes here, mostly implicit in this account. There is the fact that scientific measurement is often boring in the extreme, seemingly meaningless, especially when it consists of large tables of numbers. However, out of these tables come new mind-blowing meanings, as sublime as great poetry. In addition, there is the human side of science: how people are ensnared by the mysteries facing us and pursue the tedious day to day work which mostly goes nowhere. But enough. Let’s to the story.

Over the last one-hundred years or so we as humans have been vouchsafed through science an overarching view of the universe we inhabit. At the beginning of that last century, around 1920, we knew that the stars were incredibly far away, but figured that the entire universe was embodied in the enormity of what we now call our galaxy. Only eighty-two years before that, in 1838, had the first accurate distance to a star been calculated using the phenomenon of parallax, a shift in the apparent position of nearer objects relative to those further away when observed from different viewpoints. A simple way to observe and understand parallax is to hold up a finger at arm’s length in front of one’s nose, close one eye and then the other. The apparent position of one’s finger jumps back and forth relative to a background further away. One half the angle of the shift defines the parallax. One uses half of the shift because the direction straight out from one’s nose towards the outstretched finger defines a base direction for where one is looking. A line to the finger from either eye meets the straight-out line at the parallax angle. Knowing this angle and the distance between one’s eyes, one can calculate the distance to one’s finger using simple trigonometry. That calculation would be pointless; however, one realizes, using the same idea, that instead of the distance between one’s eyes, one can take the distance between opposite sides of the earth’s orbit around the sun and, by measuring the apparent shift in position of nearby stars relative to those further away, calculate the distance to those nearby stars.

The fact that there should be parallax in the heavens was understood in ancient times, was known to many in the sixteenth century and could be used to calculate the distance to the moon, around 10 times the circumference of the earth. The seminal transitional figure, Tycho Brahe, 1546 – 1601, excited by the new theory of Copernicus (1543), but still in thrall to the classic Ptolemaic view, realized that only by measuring the angular position of both the “fixed” and the “wandering stars”, called planets, might he be able to tell what was really going on in the heavens. Tycho thus became obsessed with measuring, so was among the first in history to intuit and practice what we now realize lies at the heart of science, careful measurement and observation¹. Tycho was a Danish nobleman and used his own and others’ money to finance the construction of instruments such as quadrants and sextants, each like a piece of a giant protractor. During his life he measured hundreds of stellar positions as well as those of the planets. The telescope’s invention lay in the future, but Tycho could measure angles to around a minute of arc, one sixtieth of a degree, about thirty times smaller than the moon’s diameter as seen from earth. Probably influenced by ancient Greek ideas of an earth surrounded by crystalline² spheres carrying, successively, the moon, the sun, each of the five planets, and then the fixed stars, Tycho imagined that the stars were not much farther away than the planets. If Copernicus was right and the earth had a circular orbit around a fixed sun, Tycho should easily be able to detect the parallax shift in at least a few stars over a six-month period as the earth swung around the sun in its orbit. Finding such a shift would confirm Copernicus and simultaneously give an idea of the distance to the stars in terms of the roughly known distance to the sun.

Over a year’s time Tycho could detect not the slightest parallax in any candidate star. This meant one of two things. Either the earth and the stars were fixed in the cosmos, OR the stars were unimaginably far away. The latter possibility was to Tycho unthinkable, so he guessed the former and made up a model in which the five planets circled the sun, while the whole shebang of sun and planets circled the central, fixed earth and her moon inside the sphere of the fixed stars. Tycho’s theory was messy, but saved at least part of Copernicus’s beautiful picture. Tycho’s guess was wrong, as so many scientific guesses are. In fact, wrong guesses are an important part of science even though they mostly are forgotten and ignored by history. In Tycho’s case, although his guess was wrong, his measurements proved crucial to Kepler’s laws of planetary motion, and with the contributions of Galileo and Newton, a Copernican model made more sense, although it took over another 100 years for stellar parallax to be detected and yet another 100 years before it was actually measured by Friedrich Bessel in 1838. Before its detection in the early 1700’s there were still die-hard anti-Copernicans who could use the lack of stellar parallax as the primary evidence for their views. It seemed to them impossible that stars could be so distant. As it turns out, the parallax of the nearest star is less than an arc second, more than 60 times smaller than Tycho Brahe could detect. An arc second is the angle subtended by a quarter 3.3 miles away, and even the largest stellar parallax, that of the nearest star, is only about three-quarters of that. (See Wikipedia’s article, “Stellar Parallax”.)

It’s worth doing some simple math in a short paragraph to show how the distance to nearby stars is calculated and find its value. (Feel free to skim.) It turns out that one doesn’t even need to use trig, because if the parallax angle is small, one can use the formula s = r times ø, relating the arc length s on a circle to its radius r and the angle ø which s subtends. In the astronomical situation r is the distance to the star, s is the radius of the earth’s orbit around the sun, 93,000,000 miles or so, and ø is the parallax angle. The angle ø needs to be in radians; to convert, one multiplies the angle in degrees by π/180. These days with a smart phone one can easily grind out the calculation. Let’s take ø to be half a second of arc. We need that half second in degrees so we can then multiply by π/180 and have it in radians. So, 0.5 times 1/60 times 1/60 = 0.5/3600 = 0.000138888 degrees. Multiply that by π and divide by 180 and we have our half second as 0.00000242407 radians. Divide 1 by this angle and we find that a nearby star is 413,000 times as distant as our sun; namely 38,400,000,000,000 miles away. Astronomers like to cut these big numbers down to size. If we used an entire second rather than a half as our parallax, the distance would be half as much. Astronomers name this latter distance a parallax second, abbreviated as a parsec, pc. Our hypothetical star is 2 parsecs away and there are, in fact, stars that are that close to us. There are none as close as a parsec. Another distance unit in popular usage is the light year, the distance light goes in a year’s time, traveling 186,000 miles or so each second throughout the year. You can whip out your phone and show that a parsec is about 3.26 lightyears. It is worth contemplating for a moment the magnitude of this distance to our near neighbor stars. Light gets to the moon in a bit over a second, to the sun in about eight minutes, but takes six and a half years or so to reach nearby stars 2 parsecs away. We will see below that typical distances in our universe are measured in megaparsecs, a million times as large. As a means for measuring cosmic distances parallax is quite limited. The satellite Hipparcos, aloft 1989 – 1993, could detect a parallax of 0.001 arcseconds (like measuring the diameter of a quarter in New York as observed from San Francisco), so could measure the distance to stars one-thousand parsecs away. Helpful for stars in our immediate neighborhood, but worthless further out.
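For anyone who would rather let a computer do the button-pushing, here is a minimal sketch in Python (my own addition; the constants are the same rounded figures used above) that reproduces the small-angle arithmetic:

```python
import math

# Rounded values quoted in the paragraph above
ORBIT_RADIUS_MILES = 93_000_000      # radius of the earth's orbit (earth-sun distance)
LIGHT_SPEED_MILES_PER_S = 186_000    # speed of light
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def distance_from_parallax_miles(parallax_arcsec):
    """Small-angle formula r = s / phi: distance to a star from its parallax angle."""
    phi_radians = math.radians(parallax_arcsec / 3600)  # arcseconds -> degrees -> radians
    return ORBIT_RADIUS_MILES / phi_radians

light_year_miles = LIGHT_SPEED_MILES_PER_S * SECONDS_PER_YEAR
parsec_miles = distance_from_parallax_miles(1.0)

print(f"Star with 0.5 arcsec parallax: {distance_from_parallax_miles(0.5):.2e} miles")  # about 3.8e13
print(f"One parsec: {parsec_miles:.2e} miles")                                          # about 1.9e13
print(f"One parsec in light years: {parsec_miles / light_year_miles:.2f}")              # about 3.26
```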

As the twentieth century dawned it was clear that most stars were unfathomably far away and that any parallax they possessed was infinitesimal. Enter into our story Henrietta Swan Leavitt, 1867 – 1921. Around 1892 as a college senior she took an astronomy course and became incorrigibly fascinated. As a woman, traditional routes to becoming an astronomer were closed to her. Instead she was able to wrangle her way in as a volunteer at the Harvard Observatory. By around 1900 photographic plates had come into wide use in astronomy. The relative brightness of stars could be measured with greater precision on these plates than by naked eye observation, and Henrietta was put to work measuring the brightness of thousands of stars. Imagine the tedium of this work, day after day, year after year, with only a slight inkling of what use this data would ever have. However, in the early 1900’s, while measuring the relative brightness of 1777 variable stars, Henrietta noticed something about the so-called Cepheid variables among them; namely, that there was a relationship between the period over which these stars brightened and dimmed and their relative brightness at its peak. She made a graph of the data and pointed out her finding to her boss, the astronomer Edward Pickering. As a woman she could not publish her finding, but Pickering could and did in 1912, giving her credit for the discovery. The stars whose brightness she measured were in the Small Magellanic Cloud, a nebula, so were at an unknown distance. The brightness was only relative. However, people soon realized that there were nearby Cepheids within parallax range. With an absolute measure of brightness established one could potentially reach out, finding the distance to stars much further away than could previously be measured. Ms. Leavitt pointed this out before she succumbed to cancer in 1921. Her finding was easily worth a Nobel prize, but there were three reasons she could not be considered: (1) Nobels are only given to living persons; (2) astronomers were ineligible in those days; and (3) she was a woman.

By 1924, using parallax, the distance to several nearby Cepheids had been measured and the time was ripe for momentous discoveries. The first of these was made by Edwin Hubble using the newly built 100-inch telescope at Mount Wilson above Pasadena, California. (When I lived in Pasadena in 1953-4, I would hike up to the observatory on weekends and occasionally be amused by the spectacle of California drivers skidding around in a rare snowfall.) By the end of 1924 Hubble had been able to detect and measure the brightness of several Cepheids in the Andromeda and other nearby “nebulae”. Clearly, the distance to these stars was much greater than to any star in our Milky Way galaxy and the “nebulae” were, in fact, “island universes”, each consisting of several hundred billion or so stars. Hubble thus settled a controversy, since some influential astronomers at the time thought that the nebulae were simply large star clusters inside our Milky Way. As a distance measure, astronomers still cling to the parsec, an established convention, but now mostly in the form of a kiloparsec or megaparsec, a thousand or a million times the distance mentioned above. For example, our nearest neighbor galaxy, according to the Wikipedia article “Andromeda Galaxy”, lies at a distance from us of 778 kpc or 2.54 million light years.

As the 1920’s wore on (remember: this is the time of the quantum revolution, the German hyperinflation and the inexorably growing foundation for Hitler’s rise) Edwin Hubble made another earthshaking discovery, measuring a Doppler shift in the spectra of various galaxies. One experiences a Doppler shift here on earth when an emergency vehicle with “lights and sirens” passes by. The pitch of the siren suddenly lowers as the vehicle passes. Hubble found that the frequency of light from galaxies was lowered (Doppler shifted towards the red), the amount of shift being directly proportional to the estimated distance of the galaxies. What this meant was that the farther a galaxy was from us, the faster it was moving away from us. Imagine in your mind being in the middle of all these galaxies. Anywhere you imagine being, you are always in an apparent center (so says general relativity) and all the galaxies are moving away. The fabric of the entire universe is stretching, so distance measures are growing. The speculation this situation suggests is that at one time there was a beginning of this spread and that the entire universe exploded out of nothingness. This idea is called “the big bang” theory, “big bang” being an expression coined by Sir Fred Hoyle, a brilliant, creative, quirky British physicist and astronomer, who proposed a rival, steady-state theory of an eternal, expanding universe, kept homogeneous by the rare, occasional creation of a stable elementary particle. Hoyle claimed he was not being pejorative in his term, but with it he was implying that the very idea of a “big bang” was ridiculous. Among other things Hoyle wrote some interesting science-fiction novels, one at least based on a possible rupture of space-time in the vicinity of earth. (October the First is Too Late.)

Hubble published the paper about his red-shift observations and some of their consequences in 1929. His ideas had been anticipated in greater detail, and published in a somewhat obscure journal two years earlier, by Georges Lemaître, a priest, mathematician and physicist, then a part-time lecturer at the Catholic University of Louvain in Belgium (see Wikipedia). Lemaître rediscovered a metric in the equations of General Relativity which predicted the expansion. Also, he realized that Einstein’s solution for a static universe was untenable. Then, using red-shift observations in the literature, Lemaître made the first estimate of the Hubble constant (now renamed the Hubble-Lemaître constant). Lemaître was also the first to imagine the “big bang” arising from a densely packed “primeval atom” containing what was to become our entire universe. When Lemaître translated his paper to English in 1931 he left out his section about the Hubble constant because by then Hubble’s 1929 paper had come out and Lemaître figured that his own value was obsolete. Ironically, Hubble’s value was off by a factor of 10 or so. Nowadays we know that the constant(?) is about 70, although at the moment (7/14/2020) there are at least two different values which disagree, with a gap beyond their error estimates. The units of the “70” which I left out of the previous sentence are worth explaining briefly. (70, without units, has the same status as 42, mentioned in Douglas Adams’s Hitchhiker’s Guide to the Galaxy as the answer to “life, the universe and everything.”) To understand the Hubble expansion unit, imagine that we “look” out from our earthly center of the universe a megaparsec. We will find that out there, all the galaxies are moving away from us at an average speed of 70 kilometers per second. Go out another megaparsec and they’re going at 140, etc. The unit is thus a kilometer per second per megaparsec. Incidentally, the variation of galaxy velocities making up this average is small. The universe is incredibly homogeneous, a fact Hoyle could have used, had he known, in his long battle with the big bang.
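To make the unit concrete, here is a small sketch in Python (again my own addition, using the round value of 70 quoted above) that turns a distance in megaparsecs into the average recession speed described in the text:

```python
# Hubble-Lemaitre law: average recession speed grows linearly with distance.
HUBBLE_CONSTANT = 70.0  # kilometers per second per megaparsec (round value quoted above)

def recession_speed_km_per_s(distance_mpc):
    """Average speed (km/s) at which galaxies at the given distance recede from us."""
    return HUBBLE_CONSTANT * distance_mpc

for d in (1, 2, 10, 100):
    print(f"{d:>4} Mpc -> {recession_speed_km_per_s(d):6.0f} km/s")
# 1 Mpc -> 70 km/s, 2 Mpc -> 140 km/s, and so on, just the progression described above.
```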

Until fairly recently Hubble received the credit (for whatever it’s worth) of discovering the red shift and the big bang because of his well-publicized 1929 paper. Hubble did nothing dishonest in accepting his honors and fame, but also did nothing in the way of discouraging such. Why should he? Lemaître remained an obscure figure partly because he was not at all interested in self-promotion and possibly because he was a Catholic priest, with the baggage of being considered anti-science because of his religion.

As the 20th century wore on, the picture suggested in the first third of the century fleshed out. People researched the different kinds of galaxies, realizing in the process that there are 100 billion or so in our universe, to say nothing of quasars and “black holes”. Between 1963 and 1965 perhaps the most exciting astronomical discovery of the century occurred. I can remember my excitement in 1965, as a newly installed Associate Professor at Auburn University, when the news came out that cosmic microwave radiation had been accidentally observed by Penzias and Wilson at a Bell Labs site, using a large so-called horn antenna. The signal was to them, when first detected, unwanted noise, and they tried in vain to get rid of it. Finally, they called Professor Dicke at Princeton, whose design was incorporated in their antenna. According to Wikipedia, when Dicke got the call he said to his team, “Boys, we’ve been scooped”. The radiation had been theoretically predicted and Dicke’s team was about to search for it. As the shape of the radiation spectrum was filled in, it fitted exactly the formula for black body radiation that Planck had found in 1900 (see my earlier post on the black body discovery). The temperature of the radiation was 2.7 degrees above absolute zero, having cooled from an incredibly high temperature through the expansion of space from the time when the universe became transparent to electromagnetic radiation some 380,000 years after the big bang. This observation of cosmic black-body radiation was a striking confirmation of the big bang theory, and, in no way, could be twisted to be compatible with Hoyle’s rival theory. The decline of the Hoyle theory is a good example of Karl Popper’s idea of how science advances, as I discussed in an earlier post. However, in science, nothing is ever really settled and continuous creation could easily again rear its [ugly?] head.

Towards the end of the 20th century, as more and better measurements of the cosmic radiation were made, it became clear how remarkably homogeneous it was. How could this be? As the fabric of the universe expands, regions become separated in a way special relativity calls “spacelike”. No signal could pass back and forth to smooth out fluctuations. Thus, unlike a cooling liquid, there is no mechanism to bring about homogeneity. Between 1979 and 1981, a young physicist, Alan Guth, developed a theory of inflation. This was not an economic theory, but, instead, the idea that, in the early instants of the big bang, “negative vacuum pressure” caused a wild, exponential expansion of the infant universe. After the inflationary expansion stopped, the universe was much larger and the ordinary Hubble type of expansion took over. Recent satellite measurements of the remaining inhomogeneity agree well with Guth’s theory. I must confess that an intuitive understanding of the math behind this theory is totally beyond me.

In more recent times, deep mysteries concerning dark matter, dark energy and the idea of a multiverse have arisen. At this point I will not talk about them, leaving a possible discussion for later. Instead I will ask, “What is the significance for a thinking, aware human being of what we have found out about the place in which we live?” Many have noted that the history of cosmic discovery is one which displaced the human race further and further from the central significance we thought we had in the scheme of things back in ancient and medieval times, a displacement towards utter insignificance and humiliation. I want to take an almost opposite point of view. I wish to disregard the finer points of the science and look at the universe in the largely non-quantitative way I have described in the previous paragraphs. I want to consider our picture of the universe as an aesthetic object, an unbelievably magnificent work of art. I want to suggest that this picture of the universe is, as well, a gigantic Zen Mondo, making clear an ultimate religious view beyond any language in which it could be couched.

If I am able to proceed in this direction, I must switch to an entirely different language game. So, in a later post, possibly the next, I need to go further into the meta-language concept as suggested and developed by Wittgenstein, Kuhn and Meagher.

Footnotes

¹While observation and measurement lie at the heart, “theory” comprises the soul. For science to be a living being, it needs both.

²If you think that the moving crystalline spheres, giving off a kind of music, are unreasonable, consider the “luminiferous ether”, a medium in space, thought necessary, in the late nineteenth century, for carrying light waves. In order for light to have its observed speed, the ether would need to be massless and incredibly rigid yet allow astronomical bodies such as the earth, planets and stars to pass through it in a frictionless way. Perhaps not entirely frictionless, that is: allowing the ether to be dragged along close to the earth could explain why it was undetected by the Michelson-Morley experiment. As it turns out, the electromagnetic field is perfectly capable of existing in vacuum, unlike all the more familiar waves known at the time. This is another example where a “guess” was wrong and, in this case, adopted by an entire scientific community.

History I

In an earlier post “Physics, Etc.”, as an aside, I threw out the statement that if one wanted to understand “everything”, physics was a good place to start. The implication was that I indeed wanted to understand everything and that such a goal was a worthwhile life pursuit for anyone. Now, I want to consider this idea, using history as an exemplar. One might of course take “understanding everything” as a crazy Zen dictum such as “believe nothing, understand everything”, and indeed it can be so taken. In a later post I will, in fact, consider how a religion of nothingness, fits with and caps off a comprehensive understanding of the particulars of our entire human situation.

Before engaging the main theme of this post, I will consider the somewhat irrelevant theme of why physics is, indeed, a good starting point towards “understanding everything”. This is mainly because physics, as an amalgam of abstract math and real-world experiment, is a difficult subject, an excellent pursuit in one’s youth when mathematical powers are at their peak. No one, not even Einstein, Schrödinger, Feynman, Gell-Mann, or the multitude of current day experimentalists and theorists is really smart enough to do physics well. One really needs an IQ of 4 or 5 hundred or so as well as an incredible imagination, creativity and, very likely, some luck.

Once one wants to leave physics for other fields, many open up. Technical or cross disciplinary engineering or financial fields are wide open, and even sciences such as biology or neurology, seemingly distant, have been served well by ex-physicists. However, the main point here is that if one at first concentrates in an area such as the humanities, arts, or social sciences, then later, tries to go in the other direction, towards physics or other sciences one finds that the difficulties are likely to be overwhelming. In saying this I’m not making any kind of value judgement about the worth of any particular area. In fact, part of the difficulty in moving from humanistic areas towards science, is finding the motivation to do so. Moving from an area in which the glory of the particular is a main attraction to an abstract area where there seems little magic or joy, hardly seems a worthwhile enterprise. Then if one does begin to sense an aesthetic in science or the wonder in the depths of physical being, one confronts mathematics, which is likely to come across as meaningless abstract gibberish. One ends up depending on popular expositions such as my last three posts; and such are unlikely to lead to much depth of understanding. In particular, one is unlikely to realize that physicists themselves, in spite of their great past successes are working in the dark mostly repulsed by the great mystery at the edge of their discipline.

In my own pursuit of understanding in areas outside of science, one crucial insight came to me rather late as I reflected on my experiences in becoming a decent skier. Growing up in Hawaii and then going to college in the Bay Area of California, I hardly even saw any snow until I was 22 years old. At that point I became fascinated with skiing, both downhill and back-country. In those days one used the same equipment for both. Our downhill skis had cable bindings which had side clips near the heels for downhill and by the front bindings for back-country. The latter allowed one’s heels to come up which made striding easier when on the level. For going uphill, we attached skins to the bottom of the skis. Out West where we skied on Mt. Shasta and in the Sierra, real cross country, langlauf, skis were unknown and thus the joys of kick and glide were missing. In those days I could not afford ski lessons so learned pretty much on my own and developed just about every disastrous, bad habit possible. In addition, it turned out that I didn’t really have a great deal of aptitude for the sport. Years later, in Oregon, skiing with real XC or modern downhill skis and realizing I had become a competent, if far from expert, skier in spite of all the obstacles, it became clear that my bad start and lack of aptitude didn’t much matter because I loved being out on skis and put in many enjoyable hours, in spite of many falls and a tendency for bad habits to reassert themselves in difficult situations. On cross country skis I attempted telemark turns in vain for many years finally getting the feel of the skis floating in the soft snow, the turns becoming effortless, as I skied down with an overnight pack from above the Palmer lift on Mt. Hood after climbing the mountain. With downhill skis after unlearning the old Arlberg technique, I finally trained my muscles to do the right thing by constant reminders, “skis together, look downhill, articulate”, but always I was likely to revert to a snowplow and backwards fall when things got tough. The lesson here is that when one tries to learn something new, aptitude doesn’t matter as much as finding meaning and joy in its pursuit. Such, enables the persistence in practice and the discipline which leads towards success, satisfying even when only partial. This lesson is crucial for teachers and professors at all levels and is largely ignored. That, however, is a subject for a different post.

For me personally, history is a wonderful example of confrontation and learning in a field outside of one’s main youthful interests. At first I found boredom, if not outright hatred; then came the glimmerings of interest and a grudging acceptance of some history. My interest widened to more areas until a fascination developed to the point where I could be accused of being a “history buff”. Ultimately, I’ve become interested not simply in the history of specific times and places, but in what doing history involves, with an appreciation of the gifts and dedication required of a truly excellent historian. Finally, I’ve come to see how history has expanded to encompass in itself an understanding of everything. To tell this story of my involvement I will now move into “memoir” mode, beginning with how I grew up in Hawaii and found myself living at a time when happenings became history.

Honolulu, Territory of Hawaii, in the 1930’s was far from being the big tourist city it is nowadays. The tallest structure in the city was the Aloha Tower at the harbor, perhaps 10 stories high. The corporate buildings on Bishop Street were perhaps 5 stories high as were the two hotels at Waikiki. I remember that behind Ala Moana, one road cut through the land away from the shore to King and Beretania Streets, and, along the way, there were water-filled rice and taro paddies creating a scene that wouldn’t be out of place in East Asia. I don’t remember any water buffalo in these shallow ponds, but there certainly were such creatures in the paddies on the far, windward side of Oahu.

Around 1935 my parents built a large two-story house at 2244 Aloha Drive, in a still sparsely settled Waikiki neighborhood. The Ala Wai Canal, one block mauka (toward the mountains) from Aloha Drive was built between 1923 and 1928. Before then, the part of Waikiki where we were to live on Aloha Drive was swampy with two streams flowing to the ocean. The Hawaiian dictionary translates Waikiki as “spouting water”. Wai means fresh water as opposed to Kai for salt water, while Kiki can refer to any rapidly flowing water. Perhaps “turbulent fresh water” would be a better translation, but who really knows? Perhaps “spouting” could refer to waves hitting a stream as it flowed into the ocean. In any case Waikiki was a rather sleepy beach with a limited amount of sand and coral filled water off shore, cut off from the main city by the two streams. In the early years of the twentieth century the beach witnessed the resurrection of surfing, notably by Duke Kahanamoku who was also noted for his Olympic swimming medals and world records. By the 1920’s surfing was well established at Waikiki. My Dad’s pictures from that time show a row of surfers with their huge, weighty, ponderous redwood boards standing on the beach as well as pictures of Dad himself in a bathing suit that covered his chest, and, in the background, a pier running out to sea from the Moana Hotel. One aspiration of history is to create the impression of what it would be like to be present in a past time and place. I can imagine how Waikiki was in the 1920’s and how different it had already become by the time I could remember it in the mid 1930’s. (The pier had vanished among other things.) After 1928 the Waikiki streams flowed into the Ala Wai and the land where we were soon to be living was filled to appreciably above sea level. From our house we could walk makai (towards the ocean) down Royal Hawaii or Seaside Avenues five blocks to Kalakaua Avenue and the beach at Waikiki just beyond, where my younger brother George and I could play in the water and where I finally was able to keep my feet from continuously reaching for the bottom and begin to swim at age 6. (On a 1936 trip to the “mainland”; i.e., California and the U.S. beyond, George learned to swim at age 5.)

For some reason which was never at all clear to me, my parents tired of living in Waikiki and my mother, consulting with an architect, designed a new house which was built around the time I turned 11. This new home was located in lower Manoa valley at 2111 Rocky Hill Place, a short lane running uphill from Kakela Drive which began at McKinley Avenue and then rounded a corner and climbed up towards the top of Rocky Hill, an ancient volcanic remnant. (This area is easily found on Google Maps.) Our new house stood at the top of a 20 odd foot rock wall above McKinley and from our living room we had a view of Waikiki and the ocean beyond. This ocean view was partially blocked near the shore by Waikiki’s two hotels, the Moana and the Royal Hawaiian, as well as by lower buildings and the coconut trees between Kalakaua Avenue and the ocean. Later, when I was in high school, I became aware that on occasion we could see white surf break beyond the hotels when Summer storm surf came to the south shore from the great Winter storms south of Tahiti. The sight of such surf was a sign that we should, if at all possible, get down to Waikiki where my brother, my cousin and I could put on fins and swim out a half mile or so to body surf the large waves at First Break. Back in the late 1930’s and early 40’s, however, body surfing lay in the future and we mostly swam near shore and picnicked on the weekends on ivory colored coral sands surrounded by lauhala or ironwood trees on the far side of the island.

It was around this time that I became aware of the news of what was going on in Europe with a crisis concerning Czechoslovakia and the threat that Hitler’s Nazi Germany would start a world war. I was aware that there had been an earlier, very bad, war before I was born, but knew no details. I remember hearing broadcasts, called fireside chats, of our president whose reasonable, friendly, confiding, persuasive voice, capable also of withering scorn, completely won my admiration. I felt a sense of foreboding when war did come in 1939, a feeling which intensified as Poland, Norway, then France fell to the Nazis. I became aware that there was a threat from Japan, who had invaded China and Manchuria, and had joined the “axis” powers. I wondered why they were called “the axis powers”. (In fact, even now when I think I know, I haven’t actually read an explanation so my understanding is really based only on the plausible speculation that Germany and Italy cut through the heart of Europe like an axis around which all else would revolve.) In the summer of 1941 when Hitler invaded Russia, I felt a slight sense of relief after the German invasion slowed and halted, an outcome that had previously never happened. As the Fall came on there was news of worsening relations with Japan.

My folks and their friends observed that there was little threat to us in Hawaii because Navy PBY sea planes patrolled thousands of miles out to sea in all directions where they would detect any sign of Japanese naval activity. (Perhaps the PBY patrols weren’t entirely fictional, but they obviously did not cover the approach from the North effectively.) My Dad worked for Castle and Cooke, one of the Big Five business firms who dominated much of the islands’ economy in those days. Castle and Cooke and the other Big Five had a controlling investment in Matson Navigation Company, which owned 4 passenger ships as well as many freighters which supplied us with goods from the mainland and carried back our sugar and pineapples. Castle and Cooke also served as agents for Matson in Hawaii and I realize now that my Dad, involved with Matson’s island doings, would have been privy to all the scuttlebutt going around town. At that time, I was too ignorant and uncaring to be much aware of such things. What I was aware of were my parents’ friends in the Navy who visited us when in port. One, whose name I regrettably don’t recall, worked in the engine room of the heavy cruiser Houston. In the interest of personifying him, I’ll make up a name for our USS Houston friend, calling him “Sam” after the historic Sam Houston. (Sam might, according to tickles in my brain, in fact have been his real name.) In the Fall of 1941, Sam had heard from talk going around in the navy that we were very close to war with Japan. It could come at any time. In November Sam bid us good-bye as his ship sailed west to the Far East. We never saw him again, the Houston being sunk in the early days of the war. Later, after the war, we heard that Sam had survived being in the engine room, but had died in a Japanese prisoner of war camp. Our other friend, Forest Jones, was a petty officer on the battleship West Virginia, stationed at Pearl Harbor when not at sea.

December 7, 1941 was a Sunday. The previous September I had entered the 7th grade at Punahou, a well-known private school, founded in 1841 by missionaries living in the Kingdom of Hawaii. My classes, as always in those days, were somewhat boring, but I endured and learned the material as do most children. Weekends were somewhat of a relief and on that Sunday morning, feeling relaxed and free, I walked out into our yard and looked up to see the entire sky covered with anti-aircraft bursts. I knew what they were because I had frequently seen planes towing targets which were surrounded by eight or ten of these bursts as anti-aircraft gunners practiced. My feeling on seeing the sky covered was one of shock. I knew something was badly amiss, but did not jump to the obvious conclusion. Somehow, war was unthinkable, a feeling shared by all of the Island’s military authorities who should have known better. I went back in the house exclaiming about the bursts to my parents. We went into the living room and looked down to the ocean where two small freighters were coming toward port. Suddenly, two huge columns of white water rose near the ships, making a surreal, impossible image. My parents immediately went across the room to the large Philco radio console and turned it on. After its interminable warm up the radio came to life and the broadcast sounded entirely normal for a Sunday morning. Our feeling of relief did not last long, however, as the program was soon interrupted, with an announcer saying something like, “Folks, we don’t know what is going on, but we’ll find out and get back to you as soon as we can.” Then the music resumed. The second program break came shortly. “… The Hawaiian Islands are under enemy attack. … The Rising Sun has been seen on the wings of the attacking planes.” Shortly after the second resumption there was a third, announcing that the station was going off the air. Then silence.

I suppose we must have eaten breakfast, but I remember nothing about it. I do remember looking up from our front yard and seeing a formation of white planes high in the sky. Their motors made an entirely different sound than what I had usually heard from airplanes. They were presumably Japanese planes of a second or third wave heading towards Pearl Harbor. Later a single plane flew fairly low over us and dropped a bomb that hit harmlessly near a home at the top of the steep slope rising across Manoa Valley. Since that area was barely visible from our house, this incident most likely happened after we had left the house and walked up on Rocky Hill where there was a good view to the South and West. I felt very frightened. It had occurred to me that if a Japanese pilot had seen us, he would have mercilessly strafed us. There were no more planes near us, but we all kept in mind that there were nearby Kiawe trees under which we could hide had any appeared. Looking West toward Pearl Harbor all we could see was the crater of Punch Bowl blocking the view. Behind Punch Bowl rose a huge cloud of black-grey smoke. I figured that the Japanese had bombed the two or three large fuel tanks that lay on the shore of Pearl Harbor near Pearl City. I was wrong. Luckily for us, the Japanese had placed the fuel tanks which could have easily been destroyed with 2 or 3 bombs each, into a lower priority, which they decided not to exercise after their successful attack. Admiral Nagumo felt that leaving as fast as possible was better than pushing his luck by refueling planes for further attacks on the lower priority targets. I heard recently that the entire fuel supply for the Pacific Fleet was in those tanks and that it would have crippled our fleet for months had they been incinerated. Instead the attack concentrated on military airfields and the ships in Pearl Harbor, destroying most planes on the ground and sinking many ships. At one point as the morning wore on, I was able to look through some borrowed binoculars at the entrance to Pearl Harbor which we could see. There were ships moving out to sea with bomb spouts rising near them. Around 11 am we went back home just in time to see a big fire burn some buildings and homes about half way down towards Waikiki. We thought the cause was a final departing plane which had dropped a bomb, but in reality, we might only have imagined that we saw such a plane.

After Pearl Harbor came the bleak early days of the war which became a total disaster for the U.S. and its allies in the Pacific. There has been much history written about WWII in all its multiple theaters, but one relevant fact not sufficiently emphasized, in my opinion, is the feeling one has of being in a “Total War” such as WWII. It is a feeling of constant underlying stress, like being in a tight athletic contest whether on the field or on the bench, but much more intense and much more prolonged. When will this war ever end? This feeling of underlying dread is not always in one’s consciousness, but lurks, waiting to spring into awareness. Life is not normal because there is a feeling that the whole world is awry and disaster seems never far away even if the war is being won.

In the days after Pearl Harbor, we wondered about Forest Jones on the West Virginia. As it turned out Forest survived Pearl Harbor and the entire war in the Pacific. In 1991 there was a 50-year anniversary commemoration of Pearl Harbor with Japanese participants in the attack joining Americans who had been there. My family had kept in touch with Forest during and after the war. During the war he had visited us frequently when his ship was in port and we had heard his stories of Pearl Harbor and beyond. Forest participated in the 50th anniversary commemoration though he was far from ever forgiving the Japanese for what he had endured in the war. He wrote up an account of his experiences at Pearl Harbor for the Naval Archives, and sent a copy to my Dad. By 1991 I was an unabashed history buff so saw to it that I had a copy of the Forest Jones account. It is worth quoting excerpts from it.

“Forest M. Jones, LCDR, USN (Ret) November, 1991

“I was a 1st Class Petty Officer aboard the U. S. S. West Virginia (BB48) when the Japanese attacked Pearl Harbor. I was on the forward Fire Control Platform, above the Navigation Bridge, and saw the Japanese planes coming down the shipyard channel. Three other Fire Controlmen on the platform and I immediately manned our battle stations in the two anti-aircraft gun directors located on the Fire Control Platform. We had the two directors manned with skeleton crews, before the General Quarters alarm was sounded…
…….
“From our vantage point on the upper deck, we could see that some of the starboard guns were being manned by their crews. There was no attempt to man the guns on the port side due to continued torpedo strikes, fire and debris in the vicinity of the gun area.

“We were unable to obtain power to permit operation of the Gun Directors and were also unable to establish communications with the anti-aircraft guns. …

Forest writes that he descended with some of his fire control crew to the lower deck where crewmen were setting up the guns, so he and his friend, Joe Paul, another fire controlman, went to the nearby Ready Service boxes and began to remove the 5” shells used by the guns.

“While Joe Paul and I were removing ammunition from a Ready Service Box we were suddenly engulfed in a cloud of kapok from the life jacket locker above the Ready Service Boxes. We later discovered, after the attack, that one of the 16” armor piercing shells that the Japanese had modified to be used as a high level bomb had struck the top of the Forward Cage Mast and was deflected by the heavy metal coaming of the Signal Bridge. Were it not deflected it could have been a direct hit on the Ready Service Box where we were working. The shell penetrated to a lower level but failed to explode. The lives of Joe Paul and I were spared twice in the matter of a second. First when the bomb was deflected and then by its failure to explode almost directly under our position on the anti-aircraft deck.

“After unloading all of the available ammunition, I went to the Navigation Bridge where Captain Bennion was sitting against the forward metal shield on the wing of the bridge. He had been fatally wounded but was still alive. Unfortunately, there was nothing we could do to alleviate his suffering. He had suffered a massive mid-torso injury by a fragment from a high level bomb that had struck the Number 2 Turret of the Tennessee.

Forest Jones and two other enlisted men then helped around 30 men to emerge from an escape tube, the only remaining route to safety from the main battery fire control room on a lower level deck.

“By this time the ship had taken a decided list to port due to underwater torpedo damage that led to extensive flooding. It was about this time I witnessed large explosions within the Arizona, which was directly astern the West Virginia. It was necessary for us to take cover in the protected areas of the bridge because of the great amount of flying debris. At this moment, I witnessed the Oklahoma, directly ahead of us, roll over to port due to heavy torpedo damage below the waterline. Within a few minutes all of the superstructure and decks were submerged and it came to rest with only part of the bottom visible.

(400 or so seamen were trapped inside the Oklahoma. In the days after Pearl Harbor we heard about a few who had made their way up to the hull and were rescued when their tapping was heard. All of the others perished.)

“The smoke from the burning Arizona was very heavy. Fortunately, there was no fire in the bridge area of the West Virginia. Although we were being subjected to numerous strafing attacks, we had no hits in the bridge area. During a lull in the attack I checked the Signal Bridge and Fire Control levels to make sure there were no wounded crewmen left in those areas.

“Apparently the heavy list to port was being remedied by counter flooding. The ship was gradually settling to the bottom on an even keel and finally came to rest with water to the Main Deck level. The word was passed on the upper decks and bridge levels to abandon ship (source unknown).

“Most of the crew abandoned ship in the vicinity of the starboard bow. Joe Paul and I were among this group. There were several motor launches moored in the area between the bows of the West Virginia and Tennessee. Joe Paul and I, along with an unknown fireman, manned a 40’ motor launch and made several trips to Hospital Point with wounded and other personnel who had been in the water and were heavily coated with fuel oil. We also towed floating bodies to the Hospital Point site. …

(Forest Jones mentioned to us during one of his wartime visits that after he abandoned ship, he had had to dive under burning oil to reach the launches. This detail was omitted from his report.)

“The West Virginia was raised, repaired, modernized and returned to Combat Operations in 1943. She was the only ship in Tokyo Bay during the signing of the surrender terms which had been at Pearl Harbor.”

End of the excerpts from Forest Jones’ account.

After Pearl Harbor Forest was assigned to the carrier Enterprise where he saw much action especially in the Battle of Midway. Later he rejoined the West Virginia where he saw much more action, recounting to us how the battleship fired its 8” guns to create water spouts in the hope of downing kamikaze planes as they attacked the ship.
……………………
In the early days after Pearl Harbor Hawaii was put under martial law. (Surely one motivation for this was that our largest ethnic group in Honolulu at the time consisted of people who had come to Hawaii from Japan during the previous few generations.) Our lives resumed some sense of normalcy. We went down to Waikiki and swam, making our way through a passage in the rolls of barbed wire strung along the beach. The city was totally blacked out and we learned to move about our house after dark, feeling walls and remembering where doors were. Later we used cardboard and tape to black out the windows of many of our rooms so that we could have some light.

During the early months of the war before June, 1942, my parents had to decide whether or not to flee to the mainland rather than risk an Island invasion. They decided that the risk was small enough to be worth taking. However, some of my Punahou classmates disappeared. Under martial law our Punahou campus had been taken over by the Army Corps of Engineers. We 7th grade students held our classes in an open pavilion near the University of Hawaii Campus.

Except for following the news, hearing about our invasion of Guadalcanal in the Solomon Islands and the Battle of the Coral Sea, my memories of the time during early 1942 are quite vague. I do remember that outside of our school pavilion was a yard where we all played a game involving a football. Someone would grab the ball and run, while everyone else would chase after, tackle and pile on the runner. My memory is vague on one point, but I think the girls in our class did not sit out this game. Although I was one of the smaller kids, I thoroughly enjoyed this activity. I remember nothing about the Battle of Midway at the time except for my mother describing how she went to downtown Honolulu in early June and found the city almost entirely deserted. The grapevine had apparently informed people that something big was going on.

The “something big” was, of course, the Battle of Midway. See Incredible Victory: The Battle of Midway by Walter Lord for a fascinating full account. With our blithe assumption of American superiority, we did not realize at the time that we were taking on, arguably, the world’s best navy and naval air force who had vastly superior forces on the scene. We won the battle through luck, some vital decrypting of Japanese naval codes, some skill, and the incredible heroic sacrifice of our torpedo plane pilots. Something I found out later, probably while working at the Naval Ordnance Test Station, was a fact not mentioned in the book: until two or three years into the war the US had no torpedoes that could survive being dropped from a plane. Nevertheless, our torpedo plane pilots who must have known they were doomed, attacked the Japanese carriers once located. The lumbering torpedo planes became sitting ducks for Zero fighters who wiped out close to 100 percent of them. The Zero also played havoc with our outclassed fighter planes. After the Japanese “turkey shoot”, the planes involved needed refueling and, thinking, because of some miscommunications, that there were no American carriers anywhere nearby, Admiral Nagumo ordered a mass refueling. Thus, most of the Japanese fighter planes were helpless on their carrier decks when our largely unopposed dive bombers arrived on the scene. We sank all four carriers in their group and turned around the course of the war. The loss of their prime carriers was bad enough, but according to Saburo Sakai, a Japanese fighter ace, it was the loss of their highly accomplished fighter pilots that was even more of a disaster.

I don’t remember whether or not our victory at Midway was felt as an immediate relief in Hawaii. I do remember that one day we went down to Waikiki and the barbed wire was gone. It would be an interesting historical fact to know exactly when this happened, but as far as my memory is concerned it could have been as early as July of 1942 or as late as the end of 1943. I know that the blackout was lifted in 1944 when there really WAS no longer the possibility of a Japanese attack.

In the eighth and ninth grades we moved from our open pavilion down to a genuine classroom building at the University of Hawaii. Relevant to my rising distaste for History over the next few years were one or two social studies courses and a senior year American History course in which we seemingly covered, several times over, the American colonies before the Revolutionary War and nothing much beyond. At this time, as I began to develop a fascination with math, the relevant history was being made right at our doorstep, and yet to me the jumble of meaningless dates and events thrown out in the history classes seemed totally disconnected and irrelevant.

In 1947 I became a Freshman at Stanford University. In those days, a notable course, required of all Freshmen, was The History of Western Civilization. The course consisted of readings from the time being covered, followed by a presentation of the history with the relevance of the readings thrown in. This was actually an effective way of teaching history, with the readings providing a flavor of the times that a mere description would lack. We had an excellent teacher, humorous, quizzical, unserious in manner, whose name I absolutely forget. Nevertheless, I and my roommate, the brilliant, creative Roger Shepard, in the same section of the course, pretty much goofed off. I skipped most of the readings, Plato’s Phaedo being an exception. Roger and I did pay close attention in the class so as to get some kind of a grade out of it and accordingly, some of the content must have rubbed off. What did begin to kindle my interest in my freshman or sophomore year was stumbling across a book in the Stanford library. The book was Germany Enters the Third Reich, written in 1933 by Calvin B. Hoover, a young economist who had received a grant to study the Soviet economy, after which he traveled to Germany in 1932 and witnessed the rise of Hitler first hand. Mr. Hoover had no access to the economics of Germany’s rise, but was a firsthand witness to the joy, relief, and passion aroused by Hitler. This book made the personal connection that began my transition to history buff. I took the WWII battles in the Pacific as personally meaningful, to say nothing of the advent of the atomic bomb and my feeling with the rise of the cold war that I was unlikely to make it to 30 years of age. At Stanford in those days, undergraduates were not allowed into the stacks and I have no memory of how I came to find the Hoover book. Perhaps it was among the books on a cart waiting to be shelved, sitting outside the stacks, where I could browse through the books.

I became interested in WWI and read about its horrors. The Great War of 1914 – 1918 was more of a slaughter than a war. Its prime nightmare, for the British anyway, was the Battle of Passchendaele, along a tiny fraction of the Western Front in Belgium near the town of Ypres. See https://www.britannica.com/event/Battle-of-Passchendaele for an account. In an area turned into mud by early Fall rains, full of water-filled artillery craters, the British soldiers charged the German machine guns whose bullets tore through their bodies, while artillery, some from “friendly” fire, blew to pieces those whom the machine guns missed. Daily casualties on the British side were as high as 17,000, while those for the entire engagement were some 250,000 or so. (There is still controversy.) All of these casualties occurred between August and December of 1917. The battle was no picnic for the Germans either, their casualty estimate being 220,000. The ground gained by the British was minimal and they later withdrew.

Although the Allies won the war, their spirits were devastated by it, and rightly so. Though the Germans were definitely defeated, the myth arose that they had been “betrayed”, and that the shame of losing was undeserved. When Hitler came to power, his propaganda minister, Joseph Goebbels, developed the effective propaganda technique of endlessly repeating “a big lie”, which his audience was largely inclined to believe. This propaganda technique also tends to convince skeptics against their better judgement and it seems to work universally if not met by counter-propaganda. Simple truth seems to be a none too effective antidote.

I became interested in just how WWI started and read a few interesting books about how nationalistic rivalries intensified and how leaders were blind to the weapons developments that made the war so terrible. The attitude of these Kings, Chancellors, Prime Ministers and others in power arose from the knowledge that there had been a long European peace with the few threatening crises resolved by diplomacy, combined with the feeling that war historically hadn’t been all that bad and might well simply “clear the air.” Then there was the blindness of European leaders to the chauvinism that had arisen throughout the peoples of all the European nations. Nationalistic patriotism had become extreme and many were spoiling for a fight. The powder keg was, of course, the Balkans area where the empires of Austria-Hungary, Russia, and Turkey became rivals to each other, all trying to suppress the wishes for independence of their subject peoples.

During the Cold War with its emphasis on avoiding appeasement, I feared that everyone was forgetting the lessons of what led to WWI. Fortunately, through luck and the perception of what a new total war would involve, we have avoided catastrophe so far, though the threat of nuclear annihilation still lurks.

I had become interested enough in history by my junior year at Stanford that I had the disheartening experience of “The High Middle Ages” mentioned in an earlier post. Also, at this time I was still totally uninterested in American History. How irrelevant it seemed. In later years I have of course found American History at least as fascinating as any other.

Returning to the theme of this post, namely “understanding everything”, I will point out that, at least in the case of History, becoming an addict is not sufficient for the kind of understanding that would satisfy me. One needs to get behind the output of the historian or journalist to understand how history is done. What does being an historian involve? What are the required gifts that make a great historian? What are the paradigms of historical studies?

One distinction that historians make is between “primary” and “secondary” sources. A primary source is an unfiltered, first-hand account, perhaps a newspaper article, correspondence, private papers, memories elucidated through interviews, contemporary government papers or other such material. A secondary account is the story a historian or journalist creates from a selection of primary and other secondary sources. The write-up above by Forest Jones is an example of a primary source as are my memories in the memoir paragraphs above.

A first reason one needs secondary accounts is that primary sources are incomplete and unreliable in a variety of ways. The Forest Jones account above is dramatic but is unclear on many fronts. Among other things, one needs a map of Pearl Harbor showing where the battleships were moored in order to understand why the port rather than starboard sides of the ships were devastated. One needs to understand the structure of the old pre-war battleships moored in Pearl Harbor. In order to give a coherent account of the attack, one needs to actually consult the various archives scattered about. Secondary sources such as Wikipedia or books about the attack have information close at hand, but mistakes often persist in the secondary literature, and if one is conscientious, one needs to actually accomplish the tedious work of traveling to archives, going through the file folders, or reading the original newspaper accounts.

I learned a little bit about archives first hand because a friend in Eugene, Oregon was Dean of Libraries at the University of Oregon, and, knowing of my scientific background, asked me to go through the papers of Aaron Novick, who had founded the Institute of Molecular Biology at the University. When Dr. Novick passed away, his office contents were put into 27 boxes and placed in the basement of the Special Collections department of the library. I accepted this challenge and boned up a bit on molecular biology reading The Eighth Day of Creation, an account of the early days when the structure of DNA was found and its workings elucidated. I certainly didn’t understand all of the material in that account, but got the general drift and learned the names of the main actors.

Going through the papers, letters and other materials was generally tedious, but from time to time very rewarding. I could read letters from Nobel winning scientists and others, some of whom perhaps should have won the big prize. I could follow the careers of students and post docs who later became distinguished scientists. Much material was redundant and I had to make judgements about what could be safely discarded. The final result was, if I remember correctly, 23 boxes of papers organized into somewhat coherent Series with a Finding Aid which gave a rough idea of what might be in each box. I could get an idea of an historian’s work, reading through papers in file folders in search of a relevant bit of key information, hoping that whoever made the Finding Aid didn’t botch the process.

What a historian faces in trying to tell a story which is interesting, coherent and enlightening, a story which also brings a new insight into the understanding of the past or present, is typically an overabundance of not only primary material, but also many previous secondary works. The gifts one needs are first, a prodigious memory, second, the persistence to immerse oneself in the mass of material to the point where one gets a deep, intuitive understanding of the time and place of interest, and finally the ability as a writer to condense, redact and present in compelling prose an interesting, meaningful story.

I will now consider an example or two based on my recent reading and the thoughts they give rise to. These examples show how history can become an attempt to “understand everything”.

Traditionally, history has been the story of politics and war. I am reading a book right now by a historian who wrote this traditional kind of history; namely, The March of Folly by Barbara W. Tuchman. The history in this book may be traditional, but the idea of the book is to examine through history a particular question, perhaps new: Why have governments of all kinds repeatedly throughout history adopted policies that are totally destructive to their own interests and then persist in these policies when their stupidity has become obvious? What we have here is history as inquiry. Ms. Tuchman is careful to limit her examples to a particular kind of misgovernment; namely, folly or perversity. I quote:

“To qualify as folly for this inquiry, the policy adopted must meet three criteria: it must have been perceived as counter-productive in its own time, not merely by hindsight.”

After commenting lucidly on this first criterion, Ms. Tuchman moves on.

“Secondly a feasible alternative course of action must have been available. To remove the problem from personality, a third criterion must be that the policy in question should be that of a group, not an individual ruler, and should persist beyond any one political lifetime. Misgovernment by a single sovereign or tyrant is too frequent and too individual to be worth a generalized inquiry.”

In her long, fascinating introductory section Ms. Tuchman mentions many possible instances of unfortunate outcomes that could be studied and lists several of the rare occasions when governments were actually competent and successful. Then, in the remaining body of the book she concentrates on four more situations occurring throughout history from the ancient world to the US involvement in Vietnam. The section I’m immersed in is an examination of how England came to lose her American colonies, concentrating on the 20 years between 1763 and 1783.

One notable feature of Ms. Tuchman’s work is that she includes interesting material that doesn’t bear directly onto her inquiry. One gets a flavor of what it would be like to live in the England of that particular time. Society was highly stratified and parliament dominated by men from the ennobled, wealthy class. Excesses of high living were rampant with gout a common ailment. It was not only King George III who had mental problems. Many other ministers and notable members of parliament were subject to bouts of insanity and incapacitating ill health. Of course, it was the supposedly sane ones who were responsible for the acts of government blindness (almost insanity) which brought about the American revolution. Because of the richness of her story, I as a reader could make connections outside of the immediate story. The social partying and visiting among the great English estates persisted throughout the nineteenth century and were celebrated in the early twentieth century before the Great War by Saki (Hector Hugh Munro), a master of exquisite prose, in his short stories, full of understated British humor and a delicious presentation of human frailty. (Mr. Munro was another victim of the war, killed on the Western front.)

The richness of Ms. Tuchman’s story invites commentary in at least two areas. For one, it shows why elementary history courses are apt to be exercises in deadly boredom. Much of the interest of history lies in the incidentals which give a rich, colorful, complex picture, making a time and place come alive. Abstract this richness from history, leaving merely the dates of events thereby rendered meaningless, and the joy of history is gone. It’s as if one is attempting to give an emotionless machine the mastery of history by logically erecting a scaffolding which can later be filled in with the details. In a later post I can perhaps suggest alternative ways of teaching not simply history but other subjects, such as mathematics and physics, whose teaching falls prey to the same fallacy. (Actually, I don’t need to do this. One merely has to read Whitehead’s The Aims of Education, to get the picture.)

A second observation is that Ms. Tuchman does not stray too far afield from her main subject. One does not get a broader picture of what was going on, even in Europe, at the same time. For example, Mozart was born in 1756 and died in 1791, flourishing during the period of Ms. Tuchman’s study. Beethoven was born in 1770. The great chemist Lavoisier was born in 1743 and did his revolutionary work in chemistry around 1778 before being guillotined in a later, political revolution. Ms. Tuchman does mention how David Hume, the philosopher, was involved in the politics of the time, but there is no mention of James Watt who pushed through his crucial modification of the steam engine to success after 10 years of effort in 1776, enabling an irresistible quickening of the industrial revolution. During the same time period Captain James Cook explored the Pacific, mapping New Zealand, Australia’s East coast and discovering the Sandwich Islands where I was to be born some 151 years later. Adam Smith’s The Wealth of Nations was published in 1776. Then there is literature and art. Interestingly, Ms. Tuchman, at the end of each section of her inquiry, includes a portfolio of paintings and documents relevant to her story. One can look at these, taking in the appearance of the main actors and by reading the captions see which artists were active at the time. As a masterful historian with an intimate knowledge of the times, Ms. Tuchman has the judgement of where to draw the line between too much and too little detail, herself painting an interesting historical picture without covering up what is vital to her inquiry. She supposes a reader, familiar enough with the history of the times, who can make connections beyond her immediate story.

Moving on, I note the dates in the last paragraph. In learning history dates should probably be kept to a minimum. How about 4000 BCE, 600 BCE, 1 CE, 618 CE (Tang dynasty), 1066 CE, 1492 CE and 1776 CE for starters? Dates, besides designating the linear flow of history, afford us the possibility of moving sideways in space and subject area so that we can appreciate what is contemporary during a given time period. This is what I’ve done in the last paragraph. With dates one can also move in new directions. In 8th grade math, one can bring in history. For example, most of us, at least in the past have learned about Roman Numerals. There are I, V, X, L, C, D and M. But wait. What is the Roman Numeral for 0? Of course, there is none. Neither is there a year 0. Dates skip zero going directly from 1 BCE to 1 CE. In an earlier post I talked about the invention of zero. One can easily do a little research online these days and find that zero was slow of establishment in Western culture. Adoption was quite uneven. This is part of the reason for its omission in our date line. There is little possibility of fixing this defect because it would mean our records of exact dates would need altering. Such is even more unlikely to happen than would be the reforming of our QWERTY keyboards. Once established, conventions are very difficult to change.

Although History with a capital H has traditionally been about politics and war, there are histories of almost any reality one can imagine. Looking at the paragraph above, one realizes that there are histories of music, chemistry, philosophy, technology, exploration, economics and art to name a few. However, there has been very little tendency for such histories to broaden themselves by moving sideways. If one is to start trying to understand everything through history one has a great deal of reading on one’s hands and then a huge job of correlation. Of course, if this task is fun, why not fool around with it in a leisurely manner? I do own a book entitled The Timetables of History by multiple authors. The book consists of tables with a horizontal row for a given date and vertical columns for History Politics, Literature Theater, Religion Philosophy Learning, Visual Arts, Music, Science Technology Growth, and Daily Life. There is a fascinating foreword by Daniel Boorstin, former Librarian of Congress, with thought-provoking comments on what history is all about. I lack the visceral instincts of an historian needed to engage exhaustively with this book, most of the entries seeming to me of little import. Did you know that in the year 518 Sigmund, son of Gundobad, became king of Burgundy, while in 1920 the Nobel prize in physics went to Charles Édouard Guillaume for his discovery of anomalies in nickel-steel alloys? (Actually, one might get interested enough in the times of Sigmund to wonder if Burgundy was already producing decent wine, another subject worthy of historical study.) There are interesting little gems scattered about in this book. During the interval from -2500 to -2001 Equinoxes and Solstices were determined in China, while in 1776 David Hume died. One can check out the mathematical inconsistency I’ve harped on earlier by checking that Augustus, the first emperor of Rome, reigned from -30 to +14. Since 14 – (-30) is 44, as our 8th grade students have just learned, one sees he reigned for 43 years after one accounts for the missing zero. Of course, the book is a wonderful research tool.
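Since I have just made our poor 8th graders do the subtraction by hand, here is a minimal sketch in Python of the same missing-zero correction (the function name is my own invention, purely for illustration):

```python
def years_between(start, end):
    """Elapsed calendar years between two years given as signed integers,
    negative for BCE and positive for CE. There is no year 0, so a span
    that crosses the BCE/CE boundary is one year shorter than the naive
    subtraction suggests."""
    if start == 0 or end == 0:
        raise ValueError("There is no year 0 in the BCE/CE convention.")
    span = end - start
    if start < 0 < end:   # the interval straddles the missing year zero
        span -= 1
    return span

# Augustus, first emperor of Rome: from 30 BCE (-30) to 14 CE (+14)
print(years_between(-30, 14))   # prints 43, not the naive 44
```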

Besides histories of the various subjects mentioned above and those specializing in particular time periods and various peculiarities, there has arisen in recent times a genre called “big” history or the history of everything. In an earlier post I commented briefly on Yuval Harari’s Sapiens: A Brief History of Humankind and I have just interrupted my reading of Barbara Tuchman’s book to read Origin Story: A Big History of Everything by David Christian, an historian who worked mostly in Sydney, Australia, specializing in Russia both imperial and soviet, before becoming interested in “big” history around 1989. Dr. Christian starts his history with the “big bang” whose time occurred, according to the latest reckoning, 13.82 billion years ago. I had this book on hold at our library and when it became available, I downloaded it for a three-week loan and am now waiting expectantly to come back to Ms. Tuchman’s insights on the folly of Vietnam. Fortunately, I own The March of Folly so can put it aside for now.

Big history begins with the modern analogue of a creation myth, now called a modern origin story because it is based on scientific, anthropological and historical evidence with the aspiration of being as non-fictional as possible. Several physicists have told the cosmological part of this story, but their accounts often lack the kind of human interest an historian can bring to the subject. The interest now does not concern the physics but the meaning of this history for us as human beings who live in this unbelievable universe, a meaning formerly brought to so-called primitive societies by their creation myths. Dr. Christian has created a timeline of significant “events” some of which embody a generalized form of the physical concept of “emergence” in which a startling new complexity can arise out of simplicity. These emergence events he calls thresholds, of which there have been eight so far in the reckoning of Dr. Christian. Threshold 1 is the Big Bang; 2, the first stars glow, 600 million years or so later; 3, 4, 5 include the first life on our planet; 6, the first evidence of our species, homo sapiens, 200,000 years ago; 7, ice ages end, farming begins, 10,000 years ago; 8, the fossil fuels revolution, 200 years ago. Fifty or so years ago begins an event, not a threshold, called the Great Acceleration; humans land on the moon and begin to have a geological impact on our planet. Dr. Christian optimistically includes a threshold 9, estimated to be 100 years in the future, A Sustainable World Order. Of course, this latter threshold might well not occur; instead not only “big” history, but, for us anyway, “all” history might come to an end.

Big history is interesting in at least a couple of ways. For one, it gives us a cosmic perspective, leaving out what is traditionally considered the flesh and blood of history, the wars, the politics, the human creations of art and empire. Thus, in its own way it creates an abstraction of history similar to that of traditional histories which also leave out most history. The second way in which I, anyway, find it interesting is that “big” history aspires to be a history of our entire human perception, encompassing our entire human adventure. History has become a “master” discipline expanding its role to subsume any and all other disciplines as it may require. It has become a way of “understanding everything”, requiring the aspiring master historian to not only find meaning in the usual historical written records but to move into many other fields which provide a setting for traditional history and allow an expanded meaning of human significance and human folly. “Understanding everything” in this sense requires one to understand a specialty and then move into other areas, struggling with new paradigms and expanding one’s intuition and awareness. Perhaps this expanded awareness can ultimately reach the emptiness outside of all existence or perhaps it can’t. In any case life becomes richer, more meaningful and more significant.

Decoherence

In this post I delve into the current view of what happens to a wave function as it interacts with its environment and tell the story of how I anticipated the idea of this view around 1971 or 1972, some 20 years before a crucial paper was published in 1991. If you have a non-technical background, I hope you can skim through without too much puzzlement. In the next post I will revert to writing which is entirely non-mathematical.

Back around 1970, when I first became interested in the “collapse of the wave function”, I noticed at some point, while thinking about the situation, that this collapse entailed more than simply the materialization of, say, a particle in accord with its probability distribution. For the wave function is more than a probability distribution. It contains, in addition, information which allows it to be transformed into a new “representation” in which it gives a probability distribution for a different physical quantity. For example, if we have a wave function from which we can find a probability distribution for a particle’s position, we can transform this wave function into a new form from which we can find the distribution for the particle’s energy. With the “collapse”, however, one loses the information that would allow such transformations. One loses the “phases” of the wave function. To understand what is meant by phases, I need to point out that a complex number can be viewed as a little arrow, lying in a plane. The length of the arrow can represent a positive real number, i.e. a probability. The arrow, lying in its plane, can point in any direction through a full 360 degrees, and the angle at which it points is called its “phase”. A wave function consists of many complex numbers, each of which can be looked upon as a little arrow with magnitude and phase. Looking at an entire collection of these little arrows, one can consider their lengths (actually lengths squared) as a probability distribution for one physical quantity, and the pattern of their phases as additional information about other physical quantities.
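For readers comfortable with a little code, here is a toy numerical illustration of this point, a sketch using Python and its NumPy library. The two four-component “wave functions” below are my own made-up examples: they have identical arrow lengths, hence identical probabilities in the original representation, but different phase patterns, and a discrete Fourier transform (standing in here for the position-to-momentum change of representation) tells them apart.

```python
import numpy as np

# Two toy "wave functions" with identical magnitudes but different phase patterns.
psi_a = np.array([1.0, 1.0, 1.0, 1.0], dtype=complex) / 2.0
psi_b = psi_a * np.exp(1j * np.pi * np.array([0.0, 0.5, 1.0, 1.5]))

# In the original representation the probabilities (lengths squared) are the same...
print(np.abs(psi_a) ** 2)                 # [0.25 0.25 0.25 0.25]
print(np.abs(psi_b) ** 2)                 # [0.25 0.25 0.25 0.25]

# ...but after a change of representation (a discrete Fourier transform, standing in
# for the position-to-momentum transformation) they differ, because the transformation
# uses the phase information.
phi_a = np.fft.fft(psi_a) / np.sqrt(len(psi_a))
phi_b = np.fft.fft(psi_b) / np.sqrt(len(psi_b))
print(np.round(np.abs(phi_a) ** 2, 3))    # all of the probability in one place
print(np.round(np.abs(phi_b) ** 2, 3))    # all of it somewhere else
```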

Collapse occurs when a quantum system interacts with its environment. With the “collapse”, one of the probabilities becomes realized; and ALL of the phases simply disappear from the record. The information associated with the phases’ pattern goes missing. These days people have realized something I missed back in the 1970’s: the information contained in the phases doesn’t actually go missing, but leaks into the environment where it can show up, giving us information about the quantum system of interest. People no longer talk much about collapse, concentrating on the disappearance of a system’s phase pattern, which may or may not actually be linked to collapse. The modern buzz word for this possible way-station to collapse is “decoherence”. The phase pattern is “coherent” and when it goes away, we have “quantum decoherence”. Back in 1971, long before the word “decoherence” had ever appeared in this context, I wondered if there might be a way of calculating how the phases go away as a quantum system interacts with its environment, and, through blind luck, came to realize that there was indeed the possibility of such a calculation. In reading various papers about “measurement theory” I came across an essay by Eugene Wigner, a Nobel prize winning theorist, who pointed out that a quantum expression called “the density matrix” might possibly throw some light on the whole “measurement-collapse” situation because with the density matrix phases went away. Wigner said, however, that this possibility was of no use, because the density matrix belongs not to a single quantum system, but always to an “ensemble”. An ensemble is a collection of a number of similar systems, while the “collapse” happens with a single system. So, the essay’s conclusion was: forget about the density matrix as being of any help in understanding what was going on. I noted what Wigner had said and thought no more about it until I was browsing in a quantum text by Lev Landau and Evgeny Lifshitz, translated ten or so years earlier from the Russian. There on pages 35 – 38 was a definition and discussion of the density matrix; and the definition was definitely for a single system interacting with its environment. I remembered that Lev Landau had independently defined the density matrix along with von Neumann in 1927. Perhaps Landau’s version had simply been forgotten. In any case, being defined for a single system, to me it showed great promise for calculating how wave function phases could disappear. (See Landau and Lifshitz, Quantum Mechanics: Non-Relativistic Theory, First English Edition, 1958.)

Lev Landau was still another of the geniuses associated with the development of quantum mechanics. Born in January, 1908, in Baku, Azerbaijan, of Russian parents, he was enough younger than the Pauli – Heisenberg generation that he missed out on the first 1925 – 1926 wave of the quantum revolution. By the time he was 19 or so he had caught up enough to independently define a version of the density matrix. Later he spent time in Europe, visiting the Bohr institute on several occasions between 1929 and 1931. A wonderful book about that time period is Faust in Copenhagen: A Struggle for the Soul of Physics by Gino Segrè. Dr. Segrè is a neutrino physicist who is also a talented writer. Warning! If you’re not a physics buff by now, this book might well make you into one. Gino Segrè’s uncle was Emilio Segrè, a famous member of Fermi’s group in Italy and later one of the atomic bomb developers. Talking about Landau, known by his nickname, Dau, Segrè says, “Dau, who became Russia’s greatest theoretical physicist and one of the twentieth century’s major scientific figures was never intimidated by anybody, …”. “As the Dutch physicist, Casimir remembered, ‘Landau’s was perhaps the most brilliant and quickest mind I have ever come across.’ This is high praise from someone who knew well both Heisenberg and Pauli.”

With the Landau-Lifshitz definition in hand I tried to see whether I could prove that the right sort of environmental interaction could make the phases of the wave function fade away. The density matrix for discrete states is a square array of complex numbers with the real probabilities running down the main diagonal from upper left to lower right. The off-diagonal elements are complex and contain the relevant phase information. (The matrix is Hermitian, though that fact is somewhat irrelevant in the context of interest here.)
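As a concrete picture of that structure, here is a two-state toy example in Python/NumPy (nothing to do with my old calculation, just an illustration): the probabilities sit on the diagonal, the relative phase lives in the off-diagonal element, and the matrix is Hermitian.

```python
import numpy as np

# A pure-state wave function for a two-state system (think spin-up / spin-down).
psi = np.array([np.sqrt(0.7), np.sqrt(0.3) * np.exp(1j * 0.8)])

# Its density matrix: the outer product of psi with its complex conjugate.
rho = np.outer(psi, psi.conj())

print(np.round(rho.diagonal().real, 3))   # [0.7 0.3] -- the probabilities
print(np.round(rho[0, 1], 3))             # off-diagonal element: carries the relative phase
print(np.allclose(rho, rho.conj().T))     # True -- Hermitian, as noted above
```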

About the time I started working on the matrix there was a talented graduate student at Auburn, Yashwant Shitoot from India, who needed a thesis topic, so I suggested that he work on the problem for his Master’s thesis, which he did. Shitoot and I came up with somewhat different approaches to the problem. Yashwant observed that in practice the environment potentials could not be exactly specified and thus the off-diagonal elements of the matrix could be considered to be a probability distribution arising from the many unknown environmental potentials. Citing the “central limit theorem” he argued that these distributions were normal distributions and would vanish over time. (See Yashwant Anant Shitoot, Theory of Measurement, M.S. Thesis, Auburn University, March, 1973.) The probabilities in Shitoot’s approach are classical probabilities arising from our ignorance, not quantum probabilities arising from the “mind of God”. In my approach I visualized the wave function in a Stern-Gerlach experiment. The classic Stern-Gerlach experiment passes a beam of silver atoms in vacuum between unsymmetrical poles of a magnet. Such poles generate a non-uniform magnetic field which exerts a force on a silver atom, which has a magnetic moment due to the spin of its outer electron. A silver atom wave function splits into a superposition of two spatially separated parts representing the two spin possibilities, spin-up or spin-down. (This splitting is similar to what occurs with Schrödinger’s unhappy cat.) After passing through the magnet poles the silver beam can either impinge on a barrier where it forms two spots of silver or, instead, come to a barrier with a slit positioned where, say, the upper of the two silver dots would be. In the latter case some of the silver atoms form a dot below and others pass through the slit. The atoms that pass through the slit all show spin-up when passed through a second pole piece oriented like the first, or confirm the way that spin ½ works if the second pole piece is tilted. My interest, however, was not with the spin of the silver atoms, but instead with a calculation of how the superposition changes as one part of it impinges on the atoms of the barrier. To attack the calculation, I considered a silver atom as the “system” and the atoms of the barrier as the “environment”. In quantum mechanics there are not only representations, but “pictures”. In the Schrödinger picture, the time dependence is carried by the wave function (state vector) while in the Heisenberg picture the time dependence is carried by the quantum mechanical operators. Furthermore, there is a third picture, called the interaction picture, where the time dependence ends up in the interaction part when a system and its environment interact. Using the interaction picture and a model potential consisting of a series of step functions to simulate the atoms of the barrier, I could easily show that the off-diagonal elements of the density matrix “gradually” went to zero. Of course, I’m being facetious in using the word “gradually” because the time involved here is of the order of 10⁻¹⁴ seconds. However, in one’s imagination one can split this time into thousands or millions of increments. Then the change is indeed gradual. Or one can imagine a different physical situation where a quantum particle traveling through an imperfect vacuum encounters the field from a stray atom from time to time.
The essential point is that the quantum decoherence is not instantaneous, and one can imagine situations where the time interval is experimentally significant. (See below.)
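I cannot reproduce my old calculation here, but a toy “phase-averaging” model in the spirit of Shitoot’s argument is easy to sketch in Python/NumPy. The environment is reduced to a random, unknown phase kick on one branch of the superposition; averaging over many such kicks makes the off-diagonal element of the density matrix shrink toward zero as the spread of the kicks grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# A coherent 50/50 superposition of two branches (spin-up / spin-down, or slit / no slit).
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
rho0 = np.outer(psi, psi.conj())

def dephased(rho, phase_spread, samples=5000):
    """Average the density matrix over random, unknown environment-induced
    relative phases with the given spread (in radians)."""
    out = np.zeros_like(rho)
    for theta in rng.normal(0.0, phase_spread, samples):
        u = np.diag([1.0, np.exp(1j * theta)])   # the environment kicks one branch's phase
        out += u @ rho @ u.conj().T
    return out / samples

for spread in [0.0, 0.5, 1.0, 2.0, 4.0]:
    rho = dephased(rho0, spread)
    print(spread, np.round(abs(rho[0, 1]), 3))   # the off-diagonal element decays toward 0
```

The diagonal elements stay at ½ and ½ throughout; only the phase information fades.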

There are two problems with my approach. First, I failed to find a proof that used a realistic interaction potential. Nevertheless, what I did was highly suggestive and over the years gave me the satisfaction of feeling that I understood what was happening whenever I encountered quantum puzzles involving collapse. In particular, the model calculation showed how an interaction of one piece of a superposition would affect another piece where there was no interaction. The second problem I had at the time was how to interpret the physical situation when the off-diagonal elements of the density matrix had gone only part way to zero. In particular, what was the physical meaning of the situation when a particle passed by a weak interaction potential into an area free from interaction so that any decoherence was only partial? I kept thinking about this second difficulty over the years and, at some point, an answer dawned on me. (See below.)

In spite of these difficulties, around 1973 I wrote up a paper and sent it to the Physical Review where it was summarily rejected because I had pointed out no ramifications of the calculation which could be experimentally tested. I didn’t follow up for a number of reasons: I had no answer to the second difficulty mentioned above, I was and am somewhat lazy, and my life was falling apart at the time. I left Auburn in 1974 and my only copy of the paper has disappeared.

Currently, quantum decoherence is of interest because it is highly relevant to quantum computing. In a quantum computer a collection of “qubits”, which act like spin ½ particles, is put into a quantum state in which it carries out a calculation, provided that the qubits do not “decohere” during the time necessary for the calculation to take place. This means that the qubit collection must be as isolated as possible from any stray potentials. However, it is likely to be impossible to completely isolate the collection. What happens during a partial decoherence? Here is my answer. During an encounter with a stray potential the off-diagonal terms of the density matrix of the system become slightly smaller. One can get a handle on this situation by splitting the density matrix into a linear superposition of two density matrices, one with zero off-diagonal elements and a second with diagonal elements somewhat reduced. Let the two coefficients of the superposition be c₁ and c₂. Then c₁*c₁ is the probability that decoherence has occurred and c₂*c₂ is the probability that the calculation is OK. I have applied a probability interpretation to the situation, a satisfying idea where quantum physics is concerned. In many cases a quantum calculation seeks an answer which takes too long to find with a conventional computer, but which is easily tested if found. With a quantum computer subject to decoherence one simply repeats the calculation until the answer shows up. Provided the isolation of the system is good, this should not require many repeats. Whether or not my ideas about partial decoherence are valid, I expect that the entire situation regarding quantum measurement and decoherence will become clearer as quantum computers are developed.
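A small numerical sketch may make the splitting clearer. The toy qubit below, and the particular weighting, are merely one way of realizing the decomposition I have in mind; the point is only that a density matrix whose off-diagonal elements have partially shrunk can be written exactly as a weighted sum of a fully decohered matrix and the original coherent one.

```python
import numpy as np

# A pure qubit state and its fully coherent density matrix.
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
rho_coherent = np.outer(psi, psi.conj())
rho_decohered = np.diag(rho_coherent.diagonal())   # same probabilities, phases gone

# A partially decohered state: off-diagonal elements shrunk by a factor lam.
lam = 0.8
rho_partial = rho_coherent.copy()
rho_partial[0, 1] *= lam
rho_partial[1, 0] *= lam

# The split: a weighted sum of the fully decohered matrix and the intact coherent one,
# with weights that add up to 1.
split = (1 - lam) * rho_decohered + lam * rho_coherent
print(np.allclose(split, rho_partial))   # True
```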

To close this post, I want to consider my conscious motivations in talking about quantum decoherence and my engagement with it. One motivation is that this is an interesting story which goes a long way towards answering the puzzles of quantum measurement, decoherence and collapse. I believe that this history makes clear that the long-standing difficulties in this area which have led to much controversy, are puzzles in the Kuhnian sense and require no radical revolution involving quantum mechanics. A second motivation is personal. Although I certainly deserve no credit whatsoever in the story of how quantum decoherence came into being, I did have an understanding of the situation before the march of science explicated it and it gives me satisfaction to make my involvement public. A final motivation involves my hopes for this blog. I hope the story of my involvement with physics makes clear that I was a hard headed, skeptical practitioner of a basic science and that in promoting Western Zen I’m dedicated to a superstition-free insight that provides a unifying sub-structure for all of Western, and indeed, non-Western World thought.

QM 1

Before completing this post, I need to acknowledge that my goal in writing about modern physics was to create a milieu for more talking about Western Zen. However, as I’ve proceeded, the goal has somewhat changed. I want you, as a reader, to become, if you aren’t already, a physics buff, much in the way I became a history buff after finding history incredibly boring and hateful throughout high school and college. The apotheosis of my history disenchantment came at Stanford in a course taught by a highly regarded historian. The course was entitled “The High Middle Ages” and I actually took it as an elective thinking that it was likely to be fascinating. It was only gradually over the years that I realized that history at its best although based on factual evidence, consists of stories full of meaning, significance and human interest. Turning back to physics, I note that even after more than a hundred years of revolution, physics still suffers a hangover from 300 years of its classical period in which it was characterized by a supposedly passionless objectivity and a mundane view of reality. In fact, modern physics can be imagined as a scientific fantasy, a far-flung poetic construction from which equations can be deduced and the fantasy brought back to earth in experiments and in the devices of our age. When I use the word “fantasy” I do not mean to suggest any lack of rigorous or critical thinking in science. I do want to imply a new expansion of what science is about, a new awareness, hinting at a “reality” deeper than what we have ever imagined in the past. However, to me even more significant than a new reality is the fact that the Quantum Revolution showed that physics can never be considered absolute. The latest and greatest theories are always subject to a revolution which undermines the metaphysics underlying the theory. Who knows what the next revolution will bring? Judging from our understanding of the physics of our age, a new revolution will not change the feeling that we are living in a universe which is an unimaginable miracle.

In what follows I’ve included formulas and mathematics whose significance can easily be talked about without going into the gory details. The hope is that these will be helpful in clarifying the excitement of physics and the metaphysical ideas lying behind it. Of course, the condensed treatment here can be further explicated in the books I mention and in Wikipedia.

My last post, about the massive revolution in physics of the early 20th century, ended by describing the situation in early 1925 when it became abundantly clear, in the words of Max Jammer (Jammer, p. 196), that the physics of the atom was “a lamentable hodgepodge of hypotheses, principles, theorems, and computational recipes rather than a logical consistent theory.” Metaphysically, physicists clung to classical ideas such as particles whose motion consisted of trajectories governed by differential equations, and waves as material substances spread out in space and governed by partial differential equations. Clearly these ideas were logically inconsistent with experimental results, but the deep classical metaphysics, refined over 300 years, could not be abandoned until there was a consistent theory which allowed something new and different.

Werner Heisenberg, born Dec 5, 1901, was 23 years old in the summer of 1925. He had been a brilliant student at Munich studying with Arnold Sommerfeld, had recently moved to Göttingen, a citadel of math and physics, and had made the acquaintance of Bohr in Copenhagen where he became totally enthralled with doing something about the quantum mess. He noted that the electron orbits of the current theory were purely theoretical constructs and could not be directly observed. Experiments could measure the wavelengths and intensity of the light atoms gave off, so, following the Zeitgeist of the times as expounded by Mach and Einstein, Heisenberg decided to try to make a direct theory of atomic radiation. One of the ideas of the old quantum theory that Heisenberg used was Bohr’s “Correspondence” principle which notes that as electron orbits become large along with their quantum numbers, quantum results should merge with the classical. Classical physics failed only when things became small enough that Planck’s constant h became significant. Bohr had used this idea in obtaining his formula for the hydrogen atom’s energy levels. In various “old quantum” results the Correspondence Principle was always used, but in different, creative ways for each situation. Heisenberg managed to incorporate it into his ultimate vector-matrix construction once and for all. Heisenberg’s first paper in the Fall of 1925 was jumped on by him and many others and developed into a coherent theory. The new results eliminated many slight discrepancies between theory and experiment, but more importantly, showed great promise during the last half of 1925 of becoming an actual logical theory.

In January, 1926, Erwin Schrödinger published his first great paper on wave mechanics. Schrödinger, working from classical mechanics, but following de Broglie’s idea of “matter waves”, and using the Correspondence Principle, came up with a wave theory of particle motion, a partial differential equation which could be solved for many systems such as the hydrogen atom, and which soon duplicated Heisenberg’s new results. Within a couple of months Schrödinger closed down a developing controversy by showing that his and Heisenberg’s approaches, though based on seemingly radically opposed ideas, were, in fact, mathematically isomorphic. Meanwhile starting in early 1926, PAM Dirac introduced an abstract algebraic operator approach that went deeper than either Heisenberg or Schrödinger. A significant aspect of Dirac’s genius was his ability to cut through mathematical clutter to a simpler expression of things. I will dare here to be specific about what I’ll call THE fundamental quantum result, hoping that the simplicity of Dirac’s notation will enable those of you without a background in advanced undergraduate mathematics to get some of the feel and flavor of QM.

In ordinary algebra a new level of mathematical abstraction is reached by using letters such as x, y, z or a, b, c to stand for specific numbers, numbers such as 1, 2, 3 or 3.1416. Numbers, if you think about it, are already somewhat abstract entities. If one has two apples and one orange, one has 3 objects and the “3” doesn’t care that you’re mixing apples and oranges. With algebra, if I use x to stand for a number, the “x” doesn’t care that I don’t know the number it stands for. In Dirac’s abstract scheme what he calls c-numbers are simply symbols of the ordinary algebra that one studies in high school. Along with the c-numbers (classic numbers) Dirac introduces q-numbers (quantum numbers) which are algebraic symbols that behave somewhat differently than those of ordinary algebra. Two of the most important q-numbers are p and s, where p stands for the momentum of a moving particle, mv, mass times velocity in classical physics, and s stands for the position of the particle in space. (I’ve used s instead of the usual q for position to try to avoid confusion with the q of q-number.) Taken as q-numbers, p and s satisfy

ps – sp = h/2πi

which I’ll call the Fundamental Quantum Result in which h is Planck’s constant and i the square root of -1. Actually, Dirac, observing that in most formulas or equations involving h, it occurs as h/2π, defined what is now called h bar or h slash using the symbol ħ = h/2π for the “reduced” Planck constant. If one reads about QM elsewhere (perhaps in Wikipedia) one will see ħ almost universally used. Rather than the way I’ve written the FQR above, it will appear as something like

pq – qp = ħ/i

where I’ve restored the usual q for position. What this expression is saying is that in the new QM, if one multiplies something first by position q and then by momentum p, the result is different from the multiplications done in the opposite order. We say these q-numbers are non-commutative: the order of multiplication matters. Boldface type is used because position and momentum are vectors and the equation actually applies to each of their 3 components. Furthermore, the FQR tells us the exact size of the non-commutativity. In usual human sized physical units ħ is .00…001054…, where there are 33 zeros before the 1054. If we can ignore the size of ħ and set it to zero, p and q then commute, can be considered c-numbers, and we’re back to classical physics. Incidentally, Heisenberg, Born and Jordan obtained the FQR using p and q as infinite matrices, and it can also be derived using Schrödinger’s differential operators. It is interesting to note that by using his new abstract algebra, Dirac not only obtained the FQR but could calculate the energy levels of the hydrogen atom. Only later did physicists obtain that result using Heisenberg’s matrices. Sometimes the deep abstract leads to surprisingly concrete results.
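Those who like to see such claims checked can do so numerically. Here is a sketch in Python/NumPy, with ħ set to 1 and with the p and q matrices built from the harmonic-oscillator “ladder” matrix, Heisenberg-style infinite matrices chopped off at a finite size. The commutator comes out as ħ/i times the identity matrix, except for the bottom-right corner, which is an artifact of the truncation.

```python
import numpy as np

hbar = 1.0      # work in units where the reduced Planck constant is 1
N = 8           # truncate the infinite matrices to 8 x 8

# The "ladder" matrix a has sqrt(1), sqrt(2), ... just above the diagonal.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
q = np.sqrt(hbar / 2.0) * (a + a.T)          # position matrix
p = 1j * np.sqrt(hbar / 2.0) * (a.T - a)     # momentum matrix

comm = p @ q - q @ p                         # the left-hand side of the FQR
print(np.round((comm / (hbar / 1j)).real, 2))
# Prints the identity matrix -- pq - qp = hbar/i -- except for the bottom-right
# entry, which is spoiled by chopping the infinite matrices off at N rows.
```

The spoiled corner is not physics but bookkeeping: no finite matrix pair can satisfy the FQR exactly, since the trace of pq – qp is always zero for finite matrices while the trace of ħ/i times the identity is not.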

For most physicists in 1926, the big excitement was Schrödinger’s equation. Partial differential equations were a familiar tool, while matrices were at that time known mainly to mathematicians. The “old quantum theory” had made a few forays into one or another area, leaving the fundamentals of atomic physics and chemistry pretty much in the dark. With Schrödinger’s equation, light was thrown everywhere. One could calculate how two hydrogen atoms were bound in the hydrogen molecule. Then using that binding as a model one could understand various bindings of different molecules. All of chemistry became open to theoretic treatment. The helium atom with its two electrons couldn’t be dealt with at all by the old quantum theory. Using various approximation methods, the new theory could understand in detail the helium atom and other multielectron atoms. Electrons in metals could be modeled with the Schrödinger equation, and soon the discovery of the neutron opened up the study of the atomic nucleus. The old quantum theory was helpless in dealing with particle scattering where there were no closed orbits. Such scattering was easily accommodated by the Schrödinger equation though the detailed calculations were far from trivial. Over the years quantum theory revealed more and more practical knowledge and most physicists concentrated on experiments and theoretic calculations that led to such knowledge with little concern about what the new theory meant in terms of physical reality.

However, back in the first few years after 1925 there was a great deal of concern about what the theory meant and the question of how it should be interpreted. For example, under Schrödinger’s theory an electron was represented by a “cloud” of numbers which could travel through space or surround an atom’s nucleus. These numbers, called the wave function and typically named ψ, were complex, of the form a + ib, where i is the square root of -1. By multiplying such a number by its conjugate a – ib, one gets a positive (strictly speaking, non-negative) number which can perhaps be physically interpreted. Schrödinger himself tried to interpret this “real” cloud as a negative electric charge density, a blob of negative charge. For a free electron, outside an atom, Schrödinger imagined that the electron wave could form what is called a “wave packet”, a combination of different frequencies that would appear as a small moving blob which could be interpreted as a particle. This idea definitely did not fly. There were too many situations where the waves were spread out in space, before an electron suddenly made its appearance as a particle. The question of what ψ meant was resolved by Max Born (see Wikipedia), starting with a paper in June, 1926. Born interpreted the non-negative numbers ψ*ψ (ψ* being the complex conjugate of the ψ numbers) as a probability distribution for where the electron might appear under suitable physical circumstances. What these physical circumstances are and the physical process of the appearance are still not completely resolved. Later in this or another blog post I will go into this matter in some detail. In 1926 Born’s idea made sense of experiment and resolved the wave-particle duality of the old quantum theory, but at the cost of destroying classical concepts of what a particle or wave really was. Let me try to explain.

A simple example of a classical probability distribution is that of tossing a coin and seeing if it lands heads or tails. The probability distribution in this case is the two numbers, ½ and ½, the first being the probability of heads, the second the probability of tails. The two probabilities add up to 1, which represents certainty in probability theory. (Unlike the college students who are trying to decide whether to go drinking, go to the movies or to study, I ignore the possibility that the coin lands on its edge without falling over.) With the wave function product ψ*ψ, calculus gives us a way of adding up all the probabilities, and if they don’t add up to 1, we simply define a new ψ by dividing the old one by the square root of the sum we obtained. (This is called “normalizing” the wave function.) Besides the complexity of the math, however, there is a profound difference between the coin and the electron. With the coin, classical mechanics tells us in theory, and perhaps in practice, precisely what the position and orientation of the coin is during every instant of its flight; and knowing about the surface the coin lands on allows us to predict the result of the toss in advance. The classical analogy for the electron would be to imagine it is like a bb moving around inside the non-zero area of the wave function, ready to show up when conditions are propitious. With QM this analogy is false. There is no trajectory for the electron, there is no concept of it having a position, before it shows up. Actually, it is only fairly recently that the “bb in a tin can” model has been shown definitively to be false. I will discuss this matter later talking briefly about Bell’s theorem and “hidden” variable ideas. However, whether or not an electron’s position exists prior to its materialization, it was simply the concept of probability that Einstein and Schrödinger, among others, found unacceptable. As Einstein famously put it, “I can’t believe God plays dice with the universe.”
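As a concrete illustration of that last bookkeeping step, here is a small sketch of my own (the particular Gaussian-shaped ψ is just an arbitrary example, not anything from Born): a wave function sampled on a grid, its ψ*ψ numbers summed, and the whole thing rescaled so the probabilities add to 1.

```python
# A minimal sketch (my own illustration): "normalizing" a wave function
# sampled on a grid so that the probabilities psi*psi sum to 1.
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 4) * np.exp(1j * 1.5 * x)   # some unnormalized complex cloud

prob = (np.conj(psi) * psi).real                 # psi* psi, non-negative numbers
total = np.sum(prob) * dx                        # the "adding up" that calculus does
psi_normalized = psi / np.sqrt(total)            # divide psi by the square root of the sum

print(np.sum((np.conj(psi_normalized) * psi_normalized).real) * dx)   # -> 1.0
```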

Max Born, who introduced probability into fundamental physics, was a distinguished physics professor in Göttingen and Heisenberg’s mentor after the latter first came to Göttingen from Munich in 1922. Heisenberg got the breakthrough for his theory while escaping from hay fever in the spring of 1925, walking the beaches of the bleak island of Helgoland in the North Sea off Germany. Returning to Göttingen, Heisenberg showed his work to Born, who recognized the calculations as being matrix multiplication and who saw to it that Heisenberg’s first paper was immediately published. Born then recruited Pascual Jordan from the math department at Göttingen and the three wrote a famous follow-up paper, Zur Quantenmechanik II, November 1925, which gave a complete treatment of the new theory from a matrix mechanics point of view. Thus, Born was well placed to come up with his idea of the nature of the wave function.

Quantum Mechanics came into being during the amazingly short interval between mid-1925 and the end of 1926. As far as the theory went, only “mopping-up” operations were left. As far as the applications were concerned there was a plethora of “low hanging fruit” that could be gathered over the years with Schrödinger’s equation and Born’s interpretation. However, as 1927 dawned, Heisenberg and many others were concerned with what the theory meant, with fears that it was so revolutionary that it might render ambiguous the meaning of all the fundamental quantities on which both the new QM and old classical physics depended. In 1925 Heisenberg began his work on what became the matrix mechanics because he was skeptical about the existence of Bohr orbits in atoms, but his skepticism did not include the very concept of “space” itself. As QM developed, however, Heisenberg realized that it depended on classical variables such as position and momentum which appeared not only in the pq commutation relation but as basic variables of the Schrödinger equation. Had the meaning of “position” itself changed? Heisenberg recalled that earlier, with Einstein’s Special Relativity, the meaning of both position and time had indeed changed. (Newton assumed that coordinates in space and the value of time were absolutes, forming an invariable lattice in space and an absolute time which marched at an unvarying pace. Einstein’s theory was called Relativity because space and time were no longer absolutes. Space and time lost their “ideal” nature and became simply what one measured in carefully done experiments.) (Curiously enough, though Einstein showed that results of measuring space and time depended on the relative motion of different observers, these quantities changed in such an odd way that measurements of the speed c of light in vacuum came out precisely the same for all observers. There was a new absolute. A simple exposition of special relativity is N. David Mermin’s Space and Time in Special Relativity.)

The result of Heisenberg’s concern and the thinking about it is called the “Uncertainty Principle”. The statement of the principle is the equation ΔqΔp ≈ ħ (in its precise modern form the product is at least ħ/2, but the order of magnitude is what matters here). The variables q and p are the same q and p of the Fundamental Quantum Relation and, indeed, it is not difficult to derive the uncertainty principle from the FQR. The symbol delta, Δ, when placed in front of a variable means a difference, that is, an interval or range of the variable. Experimentally, a measurement of a variable quantity like position q is never exact. The amount of the uncertainty is Δq. The uncertainty equation above thus says that the uncertainty of a particle’s position times the uncertainty of the same particle’s momentum is of the order of ħ. In QM what is different from an ordinary error of measurement is that the uncertainty is intrinsic to QM itself. In a way, this result is not all that surprising. We’ve seen that the wave function ψ for a particle is a cloud of numbers. Similarly, a transformed wave function for the same particle’s momentum is a similar cloud of numbers. The Δ’s are simply a measure of the size of these two clouds and the principle says that as one becomes smaller, the other gets larger in such a way that their product stays of the order of ħ, whose numerical value I’ve given above.
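As a numerical check of my own (a toy Gaussian wave packet in units where ħ = 1, nothing from Heisenberg’s paper or from Eikenberry’s course), one can compute the position spread and the momentum spread of the same cloud of numbers and see that their product sits right at the minimum value, ħ/2.

```python
# A minimal sketch (mine): for a Gaussian wave packet the product of the
# position spread and the momentum spread comes out at hbar/2.
import numpy as np

hbar = 1.0
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
sigma = 1.7                                    # arbitrary width for the demo
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize

# spread of the position cloud (its mean is zero here)
prob_x = np.abs(psi)**2
dq = np.sqrt(np.sum(x**2 * prob_x) * dx)

# spread of the momentum cloud, obtained from the Fourier-transformed wave function
k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)
dk = k[1] - k[0]
prob_k = np.abs(np.fft.fft(psi))**2
prob_k /= np.sum(prob_k) * dk
dp = hbar * np.sqrt(np.sum(k**2 * prob_k) * dk)

print(dq * dp, hbar / 2)                       # both approximately 0.5
```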

In fact, back in 1958 when I was in Eikenberry’s QM course and we derived the uncertainty relation from the FQR, I wondered what the big deal was. I was aware that the uncertainty principle was considered rather earthshaking but didn’t see why it should be. What I missed is what Heisenberg’s paper really did. The equation I’ve written above is pure theory. Heisenberg considered the question, “What if we try to do experiments that actually measure the position and momentum? How does this theory work? What is the physics? Could experiments actually disprove the theory?” Among other experimental set-ups Heisenberg imagined a microscope that used electromagnetic rays of increasingly short wavelengths. It was well known classically by the mid-nineteenth century that the resolution of a microscope depends on the wavelength of the light it uses. Light is an electromagnetic (em) wave so one can imagine em radiation of such short wavelength that a microscope could view a particle, regardless of how small, reducing Δq to as small a value as one wished. However, by 1927 it was also well known, because of the Compton effect that I talked about in the last post, that such em radiation, called x-rays or gamma rays, consisted of high energy photons which would collide with the electron giving it a recoil momentum whose uncertainty, Δp, turns out to satisfy ΔqΔp ≈ ħ. Heisenberg thus considered known physical processes which failed to overturn the theory. The sort of reasoning Heisenberg used is called a “thought” experiment because he didn’t actually try to construct an apparatus or carry out a “real” experiment. Before dismissing thought experiments as being hopelessly hypothetical, one must realize that any real experiment in physics, or in any science for that matter, begins as a thought experiment. One imagines the experiment and then figures out how to build an apparatus (if appropriate) and collect data. In fact, as a science progresses, many experiments formerly expressed only in thought, turn real as the state of the art improves.

Although the uncertainty principle is earthshaking enough that it helped confirm the skepticism of two of the main architects of QM, namely, Einstein and Schrödinger, one should note that, in practice, because of the small size of ħ, the garden-variety uncertainties which arise from the “apparatus” measuring position or momentum are much larger than the intrinsic quantum uncertainties. Furthermore, the principle does not apply to c-numbers such as e, the fundamental electron or proton charge; c, the speed of light in vacuum; and h, Planck’s constant. There is an interesting story here about a recent (Fall, 2018) redefinition of physical units which one can read about online. Perhaps I’ll have more to say about this subject in a later post. For now, I’ll just note that starting on May 20, 2019, Planck’s constant will be (or has been) defined as having an exact value of 6.62607015×10⁻³⁴ Joule seconds. There is zero uncertainty in this new definition, which may be used to define and measure the kilogram to higher accuracy and precision than was possible in the past using the old standard, a platinum-iridium cylinder, kept closely guarded near Paris. In fact, there is nothing muddy or imprecise about the value of many quantities whose measurement intimately involves QM.

During the years after 1925 there was at least one more area which in QM was puzzling to say the least; namely, what has been called “the collapse of the wave function.” Involved in the intense discussions over this phenomenon and how to deal with it was another genius I’ve scarcely mentioned so far; namely Wolfgang Pauli. Pauli, a year older than Heisenberg, was a year ahead of him in Munich studying under Sommerfeld, then moved to Göttingen, leaving just before Heisenberg arrived. Pauli was responsible for the Pauli Exclusion Principle, based on the concept of particle spin which he also explicated. (see Wikipedia) He was in the thick of things during the 1925 – 1927 time period. Pauli ended up as a professor in Zurich, but spent time in Copenhagen with Bohr and Heisenberg (and many others) formulating what became known as the Copenhagen interpretation of QM. Pauli was a bon vivant and had a witty, sarcastic tongue, accusing Heisenberg at one point of “treason” for an idea that he (Pauli) disliked. In another anecdote Pauli was at a physics meeting during the reading of a muddy paper by another physicist. He stormed to his feet and loudly said, “This paper is outrageous. It is not even wrong!” Whether or not the meeting occurred at a late enough date for Pauli to have read Popper, he obviously understood that being wrong could be productive, while being meaningless could not.

Over the next few years after 1927 Bohr, Heisenberg, and Pauli explicated what came to be called “the Copenhagen interpretation of Quantum Mechanics”. It is well worth reading the superb article in Wikipedia about “The Copenhagen Interpretation.” One point the article makes is that there is no definitive statement of this interpretation. Bohr, Heisenberg, and Pauli each had slightly different ideas about exactly what the interpretation was or how it worked. However, in my opinion, things are clear enough in practice. The problem QM seems to have has been called the “collapse of the wave function.” It is most clearly seen in a double slit interference experiment with electrons or other quantum particles such as photons or even entire atoms. The experiment consists of a plate with two slits, closely enough spaced that the wave function of an approaching particle covers both slits. The spacing is also close enough that the wavelength of the particle, as determined by its energy or momentum, is such that the waves passing through the slits will visibly interfere on the far side of the plate. This interference is in the form of a pattern consisting of stripes on a screen or photographic plate. These stripes show up, zebra-like, on a screen or as dark and light areas on a developed photographic plate. On a photographic plate there is a black dot where a particle has shown up. The striped pattern consists of all the dots made by the individual particles when a large number of particles have passed through the apparatus. What has happened is that the wave function has “collapsed” from an area encompassing all of the stripes, to a tiny area of a single dot. One might ask at this point, “So what?” After all, for the idea of a probability distribution to have any meaning, the event for which there is a probability distribution has to actually occur. The wave function must “collapse” or the probability interpretation itself is meaningless. The problem is that QM has no theory whatever for the collapse.
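A toy simulation of my own (the slit spacing, wavelength and screen distance are made-up numbers, not from any real apparatus) shows how individual dots, each drawn from the two-slit probability pattern, pile up into the zebra stripes.

```python
# A minimal sketch (mine): dots governed by the two-slit probability
# distribution, accumulated into the striped pattern.
import numpy as np

rng = np.random.default_rng(0)

wavelength = 1.0                 # arbitrary units, assumptions for the demo
slit_sep = 10.0
screen_dist = 1000.0
x = np.linspace(-300, 300, 1201)                 # positions on the screen

# standard two-slit intensity (ignoring the single-slit envelope)
phase = np.pi * slit_sep * x / (wavelength * screen_dist)
prob = np.cos(phase) ** 2
prob /= prob.sum()                               # treat it as a probability distribution

dots = rng.choice(x, size=5000, p=prob)          # where 5000 particles "show up"
hist, _ = np.histogram(dots, bins=60, range=(-300, 300))
print(hist)                                      # counts rise and fall: the stripes
```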

One can easily try to make a quantum theory of what happens in the collapse because QM can deal with multi-particle systems such as molecules. One obtains a many-particle version of QM simply by adding the coordinates of the new particles to be considered to a multi-particle version of the Schrödinger equation. In particular, one can add to the description of a particle which approaches a photographic plate all the molecules in the first few relevant molecular layers of the plate. When one does this, however, one does not get a collapse. Instead the new multi-particle wave function simply includes the molecules of the plate, which end up spread out as much as the original wave function of the approaching particle. In fact, the structure of QM guarantees that as one adds new particles, these new particles themselves continue to make an increasingly spread out multi-particle wave function. This result was shown in great detail in 1929 by John von Neumann. However, the idea of von Neumann’s result was already generally realized and accepted during the years of the late 1920’s when our three heroes and many others were grappling with finding a mechanism to explain the experimental collapse. Bohr’s version of the interpretation is simplicity itself. Bohr posits two separate realms, a realm of classical physics governing large scale phenomena, and a realm of quantum physics. In a double slit experiment the photographic plate is classical; the approaching particle is quantum. When the quantum encounters the classical, the collapse occurs.
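Here is the smallest toy model I can think of for that statement (my own sketch, not von Neumann’s calculation; a single “plate” degree of freedom stands in for the trillions of molecules): pure Schrödinger-style unitary evolution only entangles the particle with the plate, and both branches survive; nothing in the equations picks out one dot.

```python
# A minimal toy sketch (mine): a "particle" spread over two positions interacts
# with one "plate" degree of freedom.  Unitary evolution entangles the two;
# both outcomes are still present in the joint wave function - no collapse.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

particle = (ket0 + ket1) / np.sqrt(2)       # spread over both possibilities
plate = ket0                                # plate initially "unexposed"

state = np.kron(particle, plate)            # joint two-body wave function

# interaction: the plate flips if and only if the particle is in |1>
# (a CNOT-type coupling standing in for the full molecular dynamics)
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

after = U @ state
print(after)    # [0.707, 0, 0, 0.707]: an entangled sum of both outcomes,
                # not one definite dot on the plate
```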

The Copenhagen interpretation explains the results of a double slit experiment and many others, and is sufficient for the practical development of atomic, molecular, solid state, nuclear and particle physics which has occurred since the late 1920’s. However, there has been an enormous history of objections, refinements, rejections and alternate interpretations of the Copenhagen interpretation, as one might well imagine. My own first reaction could be expressed as the statement, “I thought that ‘magic’ had been banned from science back in the 17th century. Now it seems to have crept back in.” (At present I take a less intemperate view.) However, one can make many obvious objections to the Copenhagen interpretation as I’ve baldly stated it above. Where, exactly, does the quantum realm become the classical realm? Is this division sharp or is there an interval of increasing complexity that slowly changes from quantum to classical? Surely, QM, like the theory of relativity, actually applies to the classical realm. Or does it?

During the 1930’s Schrödinger used the difficulties with the Copenhagen interpretation to make up the now famous thought experiment called “Schrödinger’s Cat.” Back in the early 1970’s when I became interested in the puzzle of “collapse” and first heard the phrase “Schrödinger’s Cat”, it was far from famous so, curious, I looked it up and read the original short article, puzzling out the German. In his thought experiment Schrödinger uses the theory of alpha decay. An alpha particle confined in a radioactive nucleus is forever trapped according to classical physics. QM allows the escape because the alpha particle’s wave function can actually penetrate the barrier which classically keeps it confined. Schrödinger imagines a cat imprisoned in a cage containing an infernal apparatus (Höllenmaschine) which will kill the cat if triggered by an alpha decay. If one applies a multi-particle Schrödinger equation to the alpha’s creeping wave function as it encounters the trigger of the “Maschine”, its internals, and the cat, the resulting multi-particle wave function contains a “superposition” (i.e. a linear combination) of a dead and a live cat. Schrödinger makes no further comment, leaving it to the reader to realize how ridiculous this all is. Actually, it is even worse. According to QM theory, when a person looks in the cage, the superposition spreads to the person leaving two versions, one looking at a dead cat and one looking at a live cat. But a person is connected to an environment which also splits and keeps splitting until the entire universe is involved.

What I’ve presented here is an actual alternative to the Copenhagen Interpretation called “the Many-worlds interpretation”. To quote from Wikipedia, “The many-worlds interpretation is an interpretation of quantum mechanics that asserts the objective reality of the universal wavefunction and denies the actuality of wavefunction collapse. Many-worlds implies that all possible alternate histories and futures are real, each representing an actual ‘world’ (or ‘universe’).” The many-worlds interpretation arose in 1957 in the Princeton University Ph.D. dissertation of Hugh Everett, working under the direction of the late John Archibald Wheeler, whom I mentioned in the last post. Although I am a tremendous admirer of Wheeler, I am skeptical of the many-worlds interpretation. It seems unnecessarily complicated, especially in light of ideas that have developed since I noticed them in 1972. There is no experimental evidence for the interpretation. Such evidence might involve interference effects between the two versions of the universe as the splitting occurs. Finally, if I exist in a superposition, how come I’m only conscious of the one side? Bringing in “consciousness”, however, leads to all kinds of muddy nonsense about consciousness effects in wave function splitting or collapse. I’m all for consciousness studies and possibly such will be relevant for physics after another revolution in neurology or physics. At present we can understand quantum mechanics without explicitly bringing in consciousness.

In the next post I’ll go into what I noticed in 1971-72 and how this idea subsequently became developed in the greater physics community. The next post will necessarily be somewhat more mathematically specific than the posts so far, possibly including a few gory details. I hope that the math won’t obscure the story. In subsequent posts I’ll revert to talking about physics theory without actually doing any math.

Physics, Etc.

In telling a story about physics and some of its significance for a life of awareness I’ll start with an idea of the philosopher Immanuel Kant (1724 – 1804). Kant, in my mind, is associated with impenetrable German which translates into impenetrable English. To find some clarity about Kant’s ideas one turns to Wikipedia, where the opening paragraph of the Kant entry explains his main ideas in an uncharacteristically comprehensible way. One of these ideas is that we are born into this world with our minds prepared to understand space, time, and causality. And with this kind of mental conditioning we can make sense of simple phenomena, and, indeed, pursue science. This insight predates Darwin’s theory of evolution, which offers a plausible explanation for it, by some sixty-odd years, and was thus a remarkable insight on the part of Kant. Another Kant idea that is relevant to our story is his distinction between what he calls phenomena and noumena. Quoting from Wikipedia, “… our experience of things is always of the phenomenal world as conveyed by our senses: we do not have direct access to things in themselves, the so-called noumenal world.” Of course, this is only one aspect of Kant’s thought, but the aspect that seems to me most relevant to what might be meant by physical reality. Kant was a philosopher’s philosopher, totally dedicated to deepening our understanding of what we may comprehend about the world and morality by purely rational thought. He was born in Königsberg, East Prussia, at the time a principality on the Baltic coast east of Denmark and north of Poland-Lithuania; and died there 80 years later. Legend has it that during his entire life he never traveled more than 10 miles from his home. The Wikipedia article refutes this slander: Kant actually traveled on occasion some 90.1 miles from Königsberg.

The massive extent of Kant’s philosophy leaves me somewhat appalled, particularly since I understand little of it and because what I perhaps do understand seems dubious at best and meaningless at worst. What Kant may not have realized is the idea that the extent and nature of the noumenal world is relative to the times in which one lives. Kant was born 3 years before Isaac Newton died, so by the date of his birth the stage was well set for the age of classical physics. During his life classical mechanics was developed largely by two great mathematicians, Joseph-Louis Lagrange (1736 – 1813) and Pierre-Simon Laplace (1749 – 1827). Looking back from Kant’s time to the ancient world one sees an incredible growth of the phenomenal world, with the Copernican revolution, a deepening understanding of planetary motion, and Newton’s Laws of mechanics. In the time since Kant lived laws of electricity and magnetism, statistical mechanics, quantum mechanics, and most of present-day science were developed. This advance raises a question. Does the growth of the phenomenal world entail a corresponding decrease in the noumenal world or are phenomena and noumena entirely independent of one another? Of course, I’d like to have it both ways, and can do so by imagining two senses of noumena. To get an idea of the first, original sense, I will tell a brief story. In the early 1970’s we were visited at Auburn University by the great physicist, John Archibald Wheeler, who led a discussion in our faculty meeting room. I was very impressed by Dr. Wheeler. To me he seemed a “tiger”, totally dedicated to physics, his students, and to an awareness of what lay beyond our comprehension. At one point he pointed to the tiles on the floor and said to us physicists, something like, “Let each one of you write your favorite physics laws on one of these tiles. And after you’ve all done that, ask the tiles with their equations to get up and fly. They will just lie there; but the universe flies.” Wheeler had doubtless used this example on many prior occasions, but it was new to me and seems to get at the meaning of noumena as a realm independent of anything science can ever discover. On the other hand, as the realm of phenomena that we do understand has grown, we can regard noumena simply as a “blank” in our knowledge, a blank which can be filled in as science, so to speak, peels back the layers of an “onion” revealing the understanding of a larger world, and at the same time, exposing a new layer of ignorance to attack. This second sense of the word in no way diminishes the ultimate mystery of the universe. In fact, it appears to me that the quest for ultimate understanding in the face of the great mystery is what gives physics (and science) a compulsive, even addictive, fascination for its practitioners. Like compulsive gamblers experimental physicists work far into the night and theorists endlessly torture thought. Certainly, the idea that we could conceivably uncover ever more specifics into the mystery of ultimate being is what drew me to the area. That, as well as the idea that if one wants to understand “everything”, physics is a good place to start.

In my understanding, the story of physics during my lifetime and the 30 years preceding my birth is the story of a massive, earthshaking revolution. Thomas Kuhn’s The Structure of Scientific Revolutions, mentioned in earlier posts, is a story of many shifts in scientific perception which he calls revolutions. In his terms what I’m talking about here is a “super-duper-revolution”, a massive shift in understanding whose import is still not fully realized in our society at large at the present time. Most of the “revolutions” that Kuhn uses as examples affect only scientists in a particular field. For example, the fall of the phlogiston theory and the rise of oxygen in understanding fire and burning was a major revolution for chemistry, but had little effect on the culture of society at large. Similarly, in ancient times the rise of Ptolemaic astronomy mostly concerned philosophers and intellectuals. The larger society was content with the idea that gods or God controlled what went on in the heavens as well as on earth. The Copernican revolution, on the other hand, was earthshaking (super-duper) for the entire society, mainly because it called into question theories of how God ran the universe and because it became the underpinning of an entirely new idea of what was “real”. Likewise, the scientific revolution of the 16th and 17th centuries was earthshaking to the entire society, which, however, as time wore on into the 18th and 19th centuries became accustomed to it and assumed that the classical, Newtonian “clockworks” universe was here to stay forever, however uncomfortable it might be to artists and writers, who hoped to live in a different, more meaningful world of their own experience, rejecting scientific “reality” as something which mattered little in a spiritual sense. Who could have believed that in the mid 1890’s, after 300 years (1590 – 1890, say) of continued, mostly harmonious development, the entire underpinning of scientific reality was about to be overturned by what might be called the quantum revolution? Yet that is what happened in the next forty years (1895 – 1935) with continuing advances and consolidation up to the present day. (From now on I’ll use the abbreviation QM for Quantum Mechanics, the centerpiece of this revolution.) Of course, as with any great revolution, all has not been smooth. Many of the greatest scientists of our times, most notably Albert Einstein and Erwin Schrödinger, found the tenets of the new physics totally unacceptable and fought them tooth and nail. In fact, there is at least one remaining QM puzzle epitomized by “Schrödinger’s Cat” about which I hope to have my say at some point.

It is my hope that readers of this blog will find excitement in the open possibilities that an understanding of the revolutionary physical “reality” we currently live in suggests. In talking about it I certainly don’t want to try to “reinvent the wheel” since many able and brilliant writers have told portions of the story. What I can do is give references to various books and URLs that are, with few exceptions (which I’ll note), great reading. I’ll have comments to make about many of these and hope that with their underpinning, I can tell this story and illuminate its relevance for what I’ve called Western Zen.

The first book to delve into is The Quantum Moment: How Planck, Bohr, Einstein, and Heisenberg Taught us to Love Uncertainty by Robert P. Crease and Alfred Scharff Goldhaber. Robert Crease is a philosopher specializing in science and Alfred Goldhaber is a physicist. The book, which I’ll abbreviate as TQM, tells the history of Quantum Mechanics from its very beginning in December, 1900, to very near the present day. Copyrighted by W.W. Norton in 2014, it is quite recent, today as I write being early November, 2018. The story this book tells goes beyond an exposition of QM itself to give many examples of the effects that this new reality has had so far in our society. It is very entertaining and well written, though on occasion it does get slightly mathematical, in a well-judged way, in making quantum mechanics clearer. A welcome aspect of the book for me was the many references to another book, The Conceptual Development of Quantum Mechanics by Max Jammer. Jammer’s book (1966) is out of print and is definitely not light reading with its exhaustive references to the original literature and its full deployment of advanced math. Auburn University had Jammer in its library and I studied it extensively while there. I was glad to see the many footnotes to it in TQM, showing that Jammer is still considered authoritative and that there is no more recent book detailing this history. Recently, I felt that I would like to own a copy of Jammer so found one, falling to pieces, on Amazon for fifty-odd dollars. If you are a hotshot mathematician and fascinated by the history of QM, you will doubtless find Jammer in any university library.

The quantum revolution occurred in two great waves. The first wave, called the “old quantum theory”, started with Planck’s December, 1900, paper on black body radiation and ended in 1925 with Heisenberg’s paper on Quantum Mechanics proper. From 1925 through about 1932, QM was developed by eight or so geniuses, bringing the subject to a point equivalent to what Newton’s Principia was for the development of classical mechanics. Besides the four physicists of the Quantum Moment title, I’ll mention Louis de Broglie, Wolfgang Pauli, P.A.M. Dirac, Max Born, and Erwin Schrödinger. And there were many others.

A point worth mentioning is that The Quantum Moment concentrates on what might be called the quantum weirdness of both the old quantum theory and the new QM. This concentration is appropriate because it is this weirdness that has most affected our cultural awareness, the main subject of the book. However, to the physicists of the period 1895 – 1932, the weirdness, annoying and troubling as it was, was in a way a distraction from the most exciting physics going on at the time; namely, the discovery that atoms really exist and have a substructure which can be understood, an understanding that led to a massive increase in practical applications as well as theoretical knowledge. Without this incredible success in understanding the material world the “weirdness” might well have doomed QM. As we will mention below, most physicists ignore the weirdness and concentrate on the “physics” that leads to practical advances. Two examples of these “advances” are the atomic bomb and the smart phone in your pocket. In the next few paragraphs I will fill in some of this history of atomic physics with its intimate connection to QM.

The discovery of the atom and its properties began in 1897 as J.J. Thomson made a definitive breakthrough in identifying the first sub-atomic particle, the lightweight, negatively charged electron (see Wikipedia). Until 1905, however, many scientists disbelieved in the “reality” of atoms in spite of their usefulness as a conceptual tool in understanding chemistry. In the “miracle year” 1905 Albert Einstein published four papers, each one totally revolutionary in a different field. The paper of interest here is about Brownian motion, a jiggling of small particles, as seen through a microscope. As a child I had a very nice full laboratory Bausch and Lomb microscope, given by my parents when I was about 7 years old. In the 9th grade I happened to put a drop of tincture of Benzoin in water and looked at it through the microscope, seeing hundreds of dancing particles that just didn’t behave like anything alive. I asked my biology teacher about it and after consulting her husband, a professor at the university, she told me it was Brownian motion, discovered by Robert Brown in 1827. I learned later that the motion was caused because the tiny moving particles are small enough that molecules striking them are unbalanced by others, causing a random motion. I had no idea at the time how crucial for atomic theory this phenomenon was. It turns out that the motion had been characterized by careful observation and that Einstein showed in his paper how molecules striking the small particles could account for the motion. Also, by this time studies of radioactivity had shown emitted alpha and beta particles were clearly sub-atomic, beta particles being identical with the newly discovered electrons and the charged alpha particles turning into electrically neutral helium as they slowed and captured stray electrons.

Einstein’s other 1905 papers were two on special relativity and one on the photoelectric effect. As strange as special relativity seems with its contraction of moving measuring sticks, slowing of moving clocks, and simultaneity dependent upon the observer, to say nothing of E = mc², this theory ended up fitting comfortably with classical Newtonian physics. Not so with the photoelectric effect.

Planck’s Discovery of a Black Body Formula

In December, 1900, Max Planck started the quantum revolution by finding a physical basis for a formula he had guessed earlier relating the radiated energy of a glowing “black body” to its temperature and the frequencies of its radiation. A “black body” is made of an ideal substance that is totally efficient in radiating electro-magnetic waves. Such a body could be simulated experimentally with high accuracy by measuring what came out of a small hole in the side of an enclosed oven. To find the “physics” behind his formula Planck had turned to statistical mechanics, which involves counting numbers of discrete states to find the probability distribution of the states. In order to do the counting Planck had artificially (he thought) broken up the continuous energy of electromagnetic waves into chunks of energy, hν, ν being the frequency of the wave, denoted historically by the Greek letter nu. (Remember: the frequency is associated with light’s color, and thus the color of the glow when a heated body gives off radiation.) Planck’s plan was to let the “artificial” fudge-factor h go to zero in the final formula so that the waves would regain their continuity. Planck found his formula, but when he set h = 0, he got the classical Rayleigh-Jeans formula for the radiation with its “ultra-violet catastrophe”. The latter term refers to the Rayleigh-Jeans formula’s infinite energy radiated as the frequency goes higher. Another formula, guessed by Wien, gave the correct experimental results at high frequencies but was off at lower frequencies where the Rayleigh-Jeans formula worked just fine. To his dismay, what Planck found was that if he set h equal to a very small finite value, his formula worked perfectly for both low and high frequencies. This was a triumph but at the same time, a disaster. Neither Planck nor anyone else believed that these hν bundles could “really” be real. Maybe the packets came off in bundles which quickly merged to form the electromagnetic wave. True, Newton had thought light consisted of a stream of tiny particles, but over the years since his time numerous experiments showed that light really was a wave phenomenon, with all kinds of wave interference effects. Also, in the 19th century physicists, notably Fraunhofer, invented the diffraction grating and with it the ability to measure the actual wave length of the waves. The Quantum Moment (TQM) has a wonderfully complete detailed story of Planck’s momentous breakthrough in its chapter “Interlude: Max Planck Introduces the Quantum”. TQM is structured with clear general expositions followed by more detailed “Interludes” which can be skipped without interrupting the story.
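For readers who like to see numbers, here is a small sketch of my own (standard constants, an arbitrary 5000 K body, nothing taken from TQM) comparing the three formulas at one low and one high frequency; Rayleigh-Jeans matches Planck at the low end and blows up at the high end, while Wien does the opposite.

```python
# A small numerical sketch (mine): Planck's black-body formula compared with
# the Rayleigh-Jeans and Wien formulas at a low and a high frequency.
import numpy as np

h = 6.62607015e-34      # Planck's constant, J s
c = 2.99792458e8        # speed of light, m/s
k = 1.380649e-23        # Boltzmann's constant, J/K

def planck(nu, T):
    return (2 * h * nu**3 / c**2) / (np.exp(h * nu / (k * T)) - 1)

def rayleigh_jeans(nu, T):      # classical limit: fine at low nu, blows up at high nu
    return 2 * nu**2 * k * T / c**2

def wien(nu, T):                # Wien's guess: fine at high nu, off at low nu
    return (2 * h * nu**3 / c**2) * np.exp(-h * nu / (k * T))

T = 5000.0                      # temperature of the glowing body, in kelvin
for nu in (1e12, 1e15):         # a low and a high frequency, in hertz
    print(nu, planck(nu, T), rayleigh_jeans(nu, T), wien(nu, T))
```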

Einstein’s 1905 photoelectric effect paper assumed that the hν quanta were real and light actually acted like little bullets, slamming into a metal surface, penetrating, colliding with an atomic electron and bouncing it out of the metal where it could be detected. It takes a certain energy to bounce an electron out of its atom and then past the surface of the metal. What was experimentally found (after some tribulations) was that the energy of the emerging electrons depended only on the frequency of the light hitting the surface. If the light frequency was too low, no matter how intense the light, nothing much happened. At higher frequencies, increasing the intensity of the light resulted in more electrons coming out but did not increase their energy. As the light frequency increased the emitted electrons were more energetic. It was primarily for this paper that Einstein received his Nobel Prize in 1921.
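Einstein’s rule is simple enough to work out by hand; the little sketch below uses my own numbers (the 2.3 eV work function is an assumption, roughly that of sodium, not a figure from Einstein’s paper) and shows the threshold behavior just described.

```python
# A minimal sketch (mine): Einstein's photoelectric relation,
# kinetic energy = h*nu - W, for an assumed work function W.
h_eV = 4.135667696e-15      # Planck's constant in eV * s
W = 2.3                     # work function in eV (assumption for the demo)

for nu in (4.0e14, 5.5e14, 7.0e14, 1.0e15):    # light frequencies in Hz
    E = h_eV * nu - W
    if E <= 0:
        print(f"{nu:.1e} Hz: no electrons, however intense the light")
    else:
        print(f"{nu:.1e} Hz: electrons emerge with about {E:.2f} eV")
```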

A huge breakthrough in atomic theory was Ernest Rutherford’s discovery of the atomic nucleus in the early years of the 20th century. Rather than a diffuse cloud of electrically positive matter with the negatively charged electrons distributed in it like raisins (the “plum pudding” model of the atom), Rutherford found by scattering alpha particles off gold foil that the positive charge of the atom was in a tiny nucleus with the electrons circling at a great distance (the “fly in the cathedral” model). There was a little problem however. The “plum pudding” model might possibly be stable under Newtonian classical physics, while the “fly in the cathedral” model was utterly unstable. (Note: Rutherford’s experiment, though designed by him, was actually carried out between 1908 and 1913 by Hans Geiger and Ernest Marsden at Rutherford’s Manchester lab.) Ignoring the impossibility of the Rutherford atom, physics plowed ahead. In 1913 the young Dane Niels Bohr made a huge breakthrough by assuming quantum packets were real and could be applied to understanding the hydrogen atom, the simplest of all atoms with its single electron circling its nucleus. Bohr’s model with its discrete electron orbits and energy levels explained the spectral lines of glowing hydrogen which had earlier been discovered and measured with a Fraunhofer diffraction grating. At Rutherford’s lab it was quickly realized that energy levels were a feature of all atoms, and the young genius physicist, Henry Moseley, using a self-built X-ray tube to excite different atoms, refined the idea of the atomic number, removing several anomalies in the periodic table of the time, while predicting 4 new chemical elements in the process. At this point World War I intervened and Moseley volunteered for the Royal Engineers. One among the innumerable tragedies of the Great War was the death of Moseley on August 10, 1915, aged 27, at Gallipoli, killed by a sniper.
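To make Bohr’s achievement a bit more tangible, here is a small sketch of my own (standard constants; nothing here is from the post or from Bohr’s paper) that takes the Bohr energy levels of hydrogen and reads off the familiar visible spectral lines.

```python
# A minimal sketch (mine): Bohr's hydrogen levels, E_n = -13.6 eV / n^2, and
# the wavelengths of a few of the visible (Balmer) lines they predict.
h = 6.62607015e-34      # Planck's constant, J s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electron volt

def energy(n):
    return -13.6 / n**2           # Bohr energy of level n, in eV

for n in (3, 4, 5):
    delta_E = (energy(n) - energy(2)) * eV    # photon energy for a jump down to n = 2
    wavelength_nm = h * c / delta_E * 1e9
    print(f"n = {n} -> 2: {wavelength_nm:.0f} nm")
# prints roughly 656, 486, 434 nm - the red, blue-green and violet Balmer lines
```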

Brief Interlude: It is enlightening to understand the milieu in which the quantum revolution and the Great War occurred. A good read is The Fall of the Dynasties – The Collapse of the Old Order: 1905 – 1922 by Edmond Taylor. Originally published in 1963, the book was reissued in 2015. The book begins with the story of the immediate cause of the war, an assassination in Sarajevo, Bosnia, part of the dual-monarchy Austro-Hungarian empire; then fills in the history of the various dynasties, countries and empires involved. One imagines what it would be like to live in those times and becomes appalled by the nationalistic passions of the day. While the book explicates the seemingly mainstream experience of living in the late 19th and early 20th century, and the incredible political changes entailed by the fall of the monarchies and the Great War, the aspects of the times which we think of, these days, as equally revolutionary are barely mentioned. These were modern art with its demonstration that aesthetic depth lay in realms beyond pure representation, the modern novel and poetry, the philosophy of Wittgenstein which I’ve discussed above, and, perhaps most revolutionary of all, the fall of classical physics and rise of the new “reality” of modern physics which we are talking about in this post. (With his deep command of the relevant historical detail for his story the author does, however, get one thing wrong when he briefly mentions science. He chooses Einstein’s relativity of 1905 but calls it “General Relativity”, putting in an adjective which makes it sound possibly more exciting than plain “relativity”. The correct phrase is “Special Relativity”, which indeed was quite exciting enough. General Relativity didn’t happen until 1915.)

Unlike the Second World War, the first was not a total war and research in fundamental physics went on. The mathematician turned physicist Arnold Sommerfeld in Munich generalized Bohr’s quantum rules by imagining the discrete electron orbits as elliptical rather than circular and taking their tilt into account, giving rise to new labels (called quantum numbers) for these orbits. The light spectra given off by atoms verified these new numbers with a few discrepancies which were later removed by QM. During this time and after the war ended, physicists became concerned about the contradiction between the wave and particle theories of light. This subject is well covered in TQM. (See the chapter “Sharks and Tigers: Schizophrenia”.) It is easy to see the problem. If one has surfed or even just looked at the ocean, one feels or sees that a wave carries energy along a wide front, this energy being released as the wave breaks. This kind of energy distribution is characteristic of all waves, not just ocean waves. On the other hand, a bullet or billiard ball carries its energy and momentum in a compact volume. Waves can interfere with each other, reinforcing or canceling out their amplitudes. So, what is one to make of light, which makes interference patterns when shined through a single or double slit but acts like a particle in the photoelectric effect or, even more clearly, like a billiard ball collision when a light quantum, called a photon, collides with an electron, an effect discovered by Arthur Compton in 1923? To muddy the waters still further, in 1923 the French physicist Louis de Broglie reasoned that if light can act like either a particle or wave depending on circumstances, by analogy, an electron, regarded hitherto as strictly a particle, could perhaps under the right conditions act like a wave. Although there was no direct evidence for electron waves at the time, there was suggestive evidence. For example, with the Bohr model of the hydrogen atom, if one assumed the lowest, “ground state” orbit was a single electron wave length, one could deduce the entire Bohr theory in a new, simple way. By 1924 it was clear to physicists that the “old” quantum mechanics just wouldn’t do. This theory kept classical mechanics and classical wave theory and restricted their generality by imposing “quantum” rules. With both light and electrons being both wave and particle, physics contained an apparent logical contradiction. Furthermore, though the “old” theory had successes with its concept of energy levels in atoms and molecules, it couldn’t theoretically deal at all with such seemingly simple entities as the hydrogen molecule or the helium atom, which experimentally had well defined energy levels. The theory was a total mess. It was in 1925 that the beginnings of a completely new, fundamental theory made its appearance, leading shortly to much more weirdness than had already appeared in the “old quantum” theory. In the next post I’ll delve into some of the story of the new QM.
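That “one wavelength around the ground-state orbit” remark can be checked numerically; the sketch below is my own (standard constants, nothing from TQM or de Broglie’s papers) and simply compares the de Broglie wavelength h/p of the electron in the lowest Bohr orbit with that orbit’s circumference.

```python
# A minimal numerical check (mine): de Broglie's wavelength h/p for the
# electron in the lowest Bohr orbit equals that orbit's circumference.
import numpy as np

h = 6.62607015e-34          # Planck's constant, J s
hbar = h / (2 * np.pi)
m_e = 9.1093837015e-31      # electron mass, kg
e = 1.602176634e-19         # elementary charge, C
k_e = 8.9875517873681764e9  # Coulomb's constant, N m^2 / C^2

r1 = hbar**2 / (m_e * k_e * e**2)   # Bohr radius, about 0.53e-10 m
v1 = k_e * e**2 / hbar              # electron speed in the lowest Bohr orbit
wavelength = h / (m_e * v1)         # de Broglie wavelength of that electron

print(wavelength, 2 * np.pi * r1)   # the two numbers agree
```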

Reality

Reality is what we all know about as long as we don’t think. It’s not meant to be thought about but reacted to; as threats, awareness of danger; bred into our bones by countless years of evolution. But now, after those countless years, we have a brain and a different kind of awareness that can wonder about such things. Is such wonder worthless? Who knows. Worthless or not, I’m stuck with it because I enjoy ruminations and trying to understand what we take for granted, finding as I think harder, nothing but mystery. In this post I will begin to talk about “reality” and try to clarify the idea somewhat, bringing in Zen, which may or may not be relevant.

In thinking about “reality” I will take it as a primitive, attempting no definition. One may try to get at reality by considering “fiction”, perhaps a polar opposite. In this consideration one notes that Aristotelean logic doesn’t apply. There is a middle one can’t exclude, because, in this case, the middle is larger and more important than the ends of the spectrum.

One can begin to work into this middle by considering the use of the word “fiction” in Yuval Harari’s Sapiens: A Brief History of Humankind, where “fiction” is applied to societal conventions and laws. Sapiens is a fascinating book, but Harari’s use of the word “fiction” for “convention” rubbed me the wrong way. Although laws and conventions are, strictly speaking, fictions, they have one property popularly attributed to “reality”. A common saying is: “One doesn’t have to believe in reality. It will up and bite you whether you believe in it or not.” The same applies to laws and convention. If one is about to be executed for “treason”, it doesn’t matter that the law is really a “fiction”, compared perhaps with physical reality. In fact, most “realities” whether physical or societal possess a large social component. This area of social agreement comes up when one judges whether another human is sane or crazy. The sine qua non of insanity is its defiance of reality as it is conceived by us “sane ones.” Unfortunately, it is all too easy to forget that conventions are a product of society and take them as absolutes. Teenagers are notorious for wanting to be “in” with their crowd even when the fashions of the crowd are highly dubious. But many so-called grown-ups are equally taken in by the conventions of society. Most of the time it is easy and harmless to go along with the conventions, but one should always realize that they are, in fact, made up and vary from society to society. Presumably that is what Harari was trying to emphasize.

Then there are questions of the depth of realities. In many cultures there is a claim for “levels of reality” beyond everyday physical realities like streets, tile floors, buildings, weather, and the world around us. Hindu mystics consider the “real” world Maya, an illusion. Modern physics grants the reality of the everyday world, but has found a world of possibly deeper reality behind it. There are atoms, molecules, elementary particles, all governed by the “reality” of quantum mechanics which lies behind what one might be tempted to call the “fiction” of classical mechanics. No physicist “really” considers classical mechanics a fiction, though perhaps many would claim there is a wider and possibly deeper reality behind it. Most physicists would leave such questions to philosophers and would consider serious thought about them a waste of time. Physics first imagined the reality of molecules in the nineteenth century, explaining concepts and measurements of heat related phenomena. For example, temperature is a measure of the mean kinetic energy of molecular motion, related to what we measure with a thermometer by Boltzmann’s constant. In the early 20th century there were very reputable scientists skeptical of the existence of atoms and molecules. Most of them were convinced of the atom’s reality by Einstein’s theory of Brownian motion (1905). As the 20th century wore on the entire basis of chemistry was established in great detail by quantum theories of electron states in atoms and molecules. In the twenties and thirties cosmology came into being. Besides explaining the genesis of atomic elements, cosmology, using astronomical observations and theory, finds a universe consisting of 10’s of billions of galaxies, each consisting on average of 10’s of billions of stars, all of which originated in a “big bang” some 13.8 billion years ago. In a later post I’ll consider the current situation physics finds itself in, with dark matter, dark energy, string theory, and ideas of a multi-verse. If one considers these as realities, one should not hold such a belief too firmly. History teaches us that physics is subject to revolutions which alter the very “facts” of physical reality. Besides the lurking revolutions of the future one notes that the “realities” of physics and chemistry lie in their theories which have proved essential for the “reality” of our modern technologies. One might claim however, that these are theories of reality, rather than a more immediate impingement of reality in our lives. I hope to say more about “physical reality” in the next post.
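To make that thermometer remark concrete, here is a tiny worked sketch of my own (the nitrogen molecule and room temperature are just convenient example numbers): the kinetic-theory relation, mean kinetic energy = (3/2)·k_B·T, evaluated once.

```python
# A small worked sketch (mine): the kinetic-theory link between temperature
# and molecular motion, mean KE = (3/2) * k_B * T, at room temperature.
import numpy as np

k_B = 1.380649e-23        # Boltzmann's constant, J/K
T = 293.0                 # room temperature, kelvin
m_N2 = 4.65e-26           # mass of an N2 molecule, kg (approximate)

mean_KE = 1.5 * k_B * T                   # joules per molecule
v_typical = np.sqrt(2 * mean_KE / m_N2)   # the corresponding molecular speed

print(mean_KE, v_typical)   # about 6e-21 J and roughly 500 m/s
```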

Leaving the physical world, one asks, “What about myth, an admitted fiction?” If a myth has a deep meaning and lesson for our lives, doesn’t that entail a certain kind of reality of more importance than a trivial sort of physical reality? Consider “myth” vs. “history”. Reality for history depends on “primary sources”, written records. The “written” record might be that of an oral interview when recent history is concerned; but the idea is that there is a concrete record of some kind that relates directly to the happenings that history is reporting. Consider the stories about Pythagoras I wrote about in the last post. These stories were based on “secondary sources”, accounts written hundreds of years after Pythagoras’s death, relying on hearsay or vanished primary sources with no way of telling which was which. They form the basis for the shallow kind of myth that gives “myth” its common pejorative connotation. We dismiss the myths about Pythagoras’s golden thigh and his flying from place to place, appearing simultaneously in different spots, not simply because these claims conflict with our present scientific world view, but because they have no relevance to facts about Pythagoras which matter to us in considering his contributions to the history of mathematics. The myths about Pythagoras can be considered “trivial” myths which discredit the very idea of myth. But what about deeper myths? Most religions tell stories about their founders and contributors which have a high mythic content. I ask in this context, “Does distinguishing between myth and historical reality in matters of religious history really matter, or matter at all?” Buddhists are notorious for being unfazed when various historical stories are proven fictional by historians. I would baldly state their attitude as: “The religious importance of the story is what matters; not the factual truth of every so-called fact in the canon.” Getting closer to home, I might ask, “Suppose the facts about Jesus’s physical existence were convincingly proved to be completely fictional. Would it matter to Christianity?” I would guess that it WOULD be devastating to believers, but that, in fact, it SHOULDN’T be. What matters in Christianity is the insight that feelings of love are deeply embedded in the universe and that Jesus, whether a fictional person or not, is responsible for bringing this “fact” to life, for showing that in the deep mystery one might call “God”, there is a forgiveness of the animal brutishness of humans. If through an active nurture of love in ourselves we experience this deep truth and express it in the way we act towards others, we redeem ourselves, and potentially, all of humanity. The stories, “myths” if you will, help us towards this experiential realization, a realization that is utterly unrelated to “belief”, a realization which could be called “Christian Satori”. The uniqueness of Christianity, as far as I can tell, is this emphasis on “love”. Unfortunately, the methodology of Christianity, with its historical emphasis on grasping ever harder at “belief”, is deeply flawed, leading backwards to the brutishness, rather than forward to love. Certain Christian thinkers, Thomas Merton for example, seem to have realized that Zen practice can be helpful in reaching a deeper understanding of their religion. One aspect of a Western Zen would be its applicability to a Western religious practice of a more deeply realized Christianity.
Actually, whether or not “love” is embedded in the universe, we, as humans, are susceptible to it, and can choose to base our lives on realizing its full depths in our beings.

Getting back to “reality”, I’ll consider possible insights from traditional Eastern Zen. So far in talking about Zen I’ve emphasized the Soto school of Japanese Zen and have tried to show how various Western ideas are susceptible to a deeper understanding by means of what might be called Western Zen. Actually, I claim that the insights of Zen lie below any cultural trappings; and that for a complete understanding, particularly as such might relate to “reality”, one should consider Zen in all its manifestations. The Rinzai Japanese school is the one we typically find written about in the US. It is the school which perhaps (I’m pretty ignorant about such matters) has deeper roots in China where Zen originated and the discipline of concentrating on Koans came into being. An excellent introduction to this school is the book Zen Comments on the Mumonkan, by Zenkei Shibayama, Harper and Row, 1974. The Chinese master Wu-men, 1183-1260, collected together 48 existing Koans and published them in the book Wu-men kuan. In Japan Wu-men is called “Mumon” and his book is called the Mumonkan.

During the late 1960’s and early 1970’s I attended an annual conference of what was then called the Society for Religion in Higher Education. Barbara, my wife at the time, as a former Fulbright scholar, was an automatic member of this Society. As her husband I could also attend the conference. The meetings of the Society were always very interesting with deeply insightful discussions going on, day and night. These discussions never much concerned belief in anything, but concentrated on questions of meaning and values. In fact, the name of the Society was later changed to the Society for Values in Higher Education. During one of the last meetings I attended, possibly in 1972, there was much discussion about a new Zen book that Kenneth Morgan, a member of the Society was instrumental in bringing into being. Professor Morgan had arranged for the Japanese Master Zenkei Shibayama to give Zen presentations of the Mumonkan at Colgate University. The entire Mumonkan had been translated into English by Sumiko Kudo, a long-time acolyte at Master Shibayama’s monastery and was soon to be published. Having committed to understanding Zen, I was very interested in all of this and looked forward to seeing the book. After moving to Oregon in 1974 I kept my eyes open for it and immediately bought it when it first appeared at the University of Oregon bookstore. Later, I developed a daily routine of doing some Yoga after breakfast and then reading one of the Koans.

The insights that the Koans are to help one realize are totally beyond language. The Koans may be considered to be a kind of verbal Jiujitsu, which when followed rationally will throw one momentarily out of language thinking into an intuitive realization of some sort. I had encountered various Koans before working through the Mumonkan and had found little insight, but, as a student of physics and mathematics, thought of them as fascinating problems to be enjoyed and solved. I realized that in working on a difficult problem in math or physics, the crucial break-through often comes via intuition. One has a sudden insight, and even before trying to apply it to the problem, one realizes that one has found a solution. In a technical area one’s insight can be attached to mathematical or scientific language and the solution is a concrete expression which solves a concrete problem. I realized that with Zen, one might have a similar kind of intuitive insight even if it could not be expressed in ordinary language, but, perhaps, could be stated as an answering Koan to the one posed. Another metaphor besides the Jiujitsu one, is the focusing of an optical instrument, such as a microscope, telescope or binoculars. Especially when trying to focus a microscope one can be too enthusiastic in turning the focusing wheel and turn right past the focus, seeing that for an instant one had it, but that it was now gone. With a microscope one can recover the focus. With a Zen Koan the momentary insight is usually lost and efforts at recovery hopeless.

A somewhat better example of this focusing metaphor occurred when I was a professor at Auburn University. One quarter I taught a lab for an undergraduate course in electricity and magnetism. This was slightly intimidating as I was a theoretical physicist with little background in dealing with experimental apparatus. One afternoon the experiment consisted of working with an ac (alternating current) bridge similar to a Wheatstone bridge for direct current, but with a complication arising from the ac. Electrical bridges were developed in the nineteenth century to measure certain electrical quantities which are these days more easily measured by other means. Nowadays the bridges mainly have pedagogical value. With a Wheatstone bridge one achieves a balance in the bridge by adjusting a variable resistor until the current across the bridge, measured by a delicate ammeter, vanishes. One can then deduce the value of an unknown resistor in the circuit. With ac there is not only resistance but also a quantity called reactance, which arises because a magnetic coil or capacitor will pass an ac current. To adjust an ac bridge, one twiddles not only a variable resistance but a variable magnetic coil (inductor) which changes the reactance. In the lab there were about 5 or 6 bridges to be set up, each tended by a pair of students. The students put their bridges together with no difficulties; but then, after about 10 minutes, it became clear that none of the student teams had been able to balance their bridge. The idea was to adjust one of the two adjustable pieces until there was a dip in the current through the ammeter. Then adjust the other until the dip increased, continuing in this back and forth manner until the current vanished or became very small. It turned out that no matter what the students did, the current through the ammeter never dipped at all. Of course, the students turned to their instructor for help in solving their problem and I was on the spot. The experience the students had is quite similar to dealing with a Koan. No matter what one does, how much one concentrates, or how long one works at it, the Koan never comes clear. With the ac bridge the students could actually have balanced it by a systematic process, but this would have taken a while. I should have suggested this, but didn’t think of it. Instead I had a pretty good idea of some of the quantities involved in the circuit, whipped out my slide rule (no calculators in those days), and suggested a setting for the inductor. This setting was close enough that there was a current dip when the resistor was adjusted and all was well. The reason that balancing an ac bridge is so difficult is that the two quantities concerned, the resistance R and the reactance X, are, in a sense, at right angles to each other, even though they are both quantities measured by an electrical resistance unit, ohms, which is not spatial at all. Nevertheless, even though non-spatial, they satisfy a Pythagorean kind of equation

R² + X² = Z²

where Z is called the impedance of the ac circuit. The quantities R and X can be plotted at right angles to each other and a triangle made with Z as the hypotenuse. If one adjusts either R or X separately, one is shrinking only one leg of the triangle; as long as the other leg remains large, the hypotenuse Z hardly changes, certainly not enough to noticeably change the current through the ammeter of an ac bridge. Incidentally, what I’ve just explained is a trivial example of a tremendously important idea in theoretical physics and mathematics called isomorphism, in which quantities in wildly different contexts share the same mathematical structure.
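To make the triangle picture concrete, here is a minimal numeric sketch with illustrative numbers only (it is not the original lab setup): when the reactive leg X of the imbalance is large, sweeping the resistive leg R barely changes Z, which is why the ammeter never seems to dip.

```python
import math

def impedance(R, X):
    """Magnitude of the impedance (ohms) for resistance R and reactance X."""
    return math.sqrt(R**2 + X**2)

# Illustrative numbers only: a large residual reactance of 500 ohms.
X_residual = 500.0
for R in (100.0, 50.0, 10.0, 0.0):
    Z = impedance(R, X_residual)
    print(f"R = {R:6.1f} ohm, X = {X_residual:.0f} ohm  ->  Z = {Z:7.2f} ohm")

# Sweeping R from 100 ohms all the way down to 0 changes Z by only about 2 percent,
# so the detector current hardly moves until the reactive leg X is also brought down.
```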

I hope that the analogies of verbal Jiujitsu and getting things into focus make somewhat clearer the problem of dealing with Koans. One might well ask if such dealing is worth the trouble and, on a personal note, what kind of luck I’ve had with them, especially as they might throw some light on the nature of “reality”. First, I must say that I have found that engaging the Koans of the Mumonkan is very worthwhile even though most of them remain completely mysterious to me. Moreover, even though I have had epiphanies when reading some of the Koans or the comments about them, there is no way for me to tell whether or not I have really understood what, if anything, they are driving at. Nevertheless, after spending some years with them, off and on, in a very desultory, undisciplined manner, I feel that they have helped indirectly to make my thinking clearer. My approach when I first spent a year going through Zen Comments was to do a few minutes of Yoga exercises, with Yoga breathing and meditation, attempting to clear my mind. Then I would carefully read the Koan and the comments, not trying to understand at all, while continuing meditation. Typically, at that point, I would have a peaceful feeling from the meditation but no epiphany or understanding. I would then put the book aside and go about the business of the day until I repeated this exercise with the next Koan the next day. Sometimes I would skip a day and sometimes I would go back and look at an earlier Koan. This reading was very pleasant as an exercise. I tried to develop an attitude of indifference towards whether I understood anything or not and avoided getting wrought up in trying to break through. My feeling about this kind of exercise is that it does lead to some kind of spiritual growth whether or not the Koans make any sense. As for “enlightenment”, I think it is a loaded word and best ignored. A Western substitute might be “clarity of thought”. Whether or not meditation, studying Koans or just thinking has anything to do with it, I have, on occasion, been unexpectedly thrown into a state of unusual clarity, in which puzzles that once seemed baffling came clear. As for the Zen Comments, I might make a few suggestions, especially as they relate to “reality”. Consider, for example, Koan 19, “Ordinary Mind is Tao”, to which the metaphor above, of finding a focus, might be relevant. If you haven’t heard about the concept of Tao, pick up and read the Tao Te Ching, Lao Tzu’s fundamental Chinese classic. Tao may be loosely translated as “Deep Truth Path”. Koan 19, as translated by Ms. Kudo, reads as follows:

“Joshu once asked Nansen, ‘What is Tao?’ Nansen answered, ‘Ordinary mind is Tao.’ ‘Then should we direct ourselves towards it or not?’ asked Joshu. ‘If you try to direct yourself toward it, you go away from it,’ answered Nansen. Joshu continued, ‘If we do not try, how can we know that it is Tao?’ Nansen replied, ‘Tao does not belong to knowing or not knowing. Knowing is illusion; not knowing is blankness. If you really attain to Tao of no-doubt, it is like the great void, so vast and boundless. How then can there be right or wrong in the Tao?’ At these words Joshu was suddenly enlightened.”

Mumon commented as follows, and his comment is very relevant:

“Questioned by Joshu, Nansen immediately shows that the tile is disintegrating, the ice is dissolving, and no communication whatsoever is possible. Even though Joshu may be enlightened, he can truly get it only after studying for thirty more years.”

I picked this particular Koan because it is one of the few that I feel I actually understand (although I may need another thirty years to really get it). Of course, I can in no way prove this. You must NOT be naïve and think that I understand anything. Furthermore, there is no real explanation of the Koan I can give. I can make a few remarks which should be considered as random twiddles of dials that may chance to zero the impedance in your mind.

First, the whole thing is a logical mess. On the one hand there is nothing special or esoteric about “deep truth path”. It is just the ordinary world (reality) that we sense. On the other hand, when we get “it”, the ordinary world dissolves and we feel an overwhelming sense of the infinite ignorance and non-being which surrounds the small island of knowledge we have attained in our human history so far. In fact, both the ordinary and the transcendent are simultaneously present to our awareness and one cannot be considered more significant than the other.

Note that this Koan is superstition free. There are no claims of esoteric knowledge. There are no contradictions of any scientific or historical claims to knowledge. There are no contradictions of anything we might consider superstitions. There is no contradiction of the doctrines of any religion. One might say that the Koan is empty of content. Of verbal content that is.

There is an implicit criticism of Aristotelian logic with its excluded middle. As I’ve already pointed out more than once in this blog, logic has a limited applicability. Part of the “game” of science is to accept only statements to which logic DOES apply. I may later go into stories from the history of physics about the difficulty of playing this exciting game, keeping logic intact, when experimental evidence seems to deny it. However, the “game” of physics or any other science is not all of life; and, in fact, Aristotelian logic has been, as I’ve called it in earlier blogs, “the curse of Western Philosophy” and an impediment to a deeper understanding of realities outside of science.

There is more to say about the Mumonkan, but I will leave such to a later blog post. As to differences between Soto and Rinzai Zen I wonder how serious these really are. Koan 19 seems to embody the Rinzai idea of instantaneous enlightenment until one sees Mumon’s comment about another 30 years being required for Joshu to really get it. The Soto doctrine is of gradual enlightenment and a questioning of the very “reality” of the enlightenment concept. A metaphor for either view is the experience of trying to get above a foggy day in a place like Eugene, Oregon, where, when the winter rain finally stops, the clear weather is obscured by a pea-soup fog. One climbs to a height such as Mt. Pisgah or Spencer’s Butte and often finds that though the fog is thinner with hints of blue sky, it is still present. But then there is perhaps a partial break and one sees through a deep hole towards a clear area beyond the fog. This vision may be likened to an epiphany or even to the “Satori” of Rinzai Zen. If we imagine we could wait on our summit for years until, after many breaks, the fog completely clears away, that would be full enlightenment.

Leaving any further consideration of Koan 19, I will end this post on a personal note. If indeed I’ve had a deep enough epiphany to consider it as Satori, this breakthrough has helped reveal that I have a healthy ego, lots of “ego strength”, a concept that Dr. Carr, head of the physics department at Auburn, came up with. Experimental physicists, such as Dr. Carr, like to measure things. “Having a lot of ego strength” was his amusing term for people who are overly wrapped up in themselves. My possible Zen insights have not diminished my ego at all. Rather, they have helped to reveal it. I’ve learned not to be too exuberant about insights which, as a saying goes, “leave one feeling just as before about the ordinary world except for being two inches off the ground.” If I get too exuberant, I wake up the next day feeling “worthless”, in the grip of depression. This is a reaction to an unconscious childhood ego build-up in the face of very poor self-esteem. Part of spiritual growth is perhaps not losing one’s ego, but lessening the grip it has on one. I hope that further practice helps me in this regard. Perhaps some psychological considerations can be the subject of a later post. I will now, however, work on the foundations for such a post by attempting to clarify the “reality” status of scientific theories.

Funny Numbers

During the century from about 600 BCE to 500 BCE, the first school of Greek philosophy flourished in Ionia. This, arguably, is the first historical record of philosophy as a reasoned attempt to explain things without recourse to the gods or out-and-out magic. But where on earth was Ionia? Wherever it was, it’s now long gone. Wikipedia, of course, supplies an answer. If one sails east from the body of Greece for around 150 miles, passing many islands in the Aegean Sea, one reaches the mainland of what is now Turkey. Along this coast, at about the same latitude as the north coast of the Peloponnesus (37.7 degrees N), one finds the island of Samos, a mile or so from the mainland, with a long peninsula poking west just to the north; this stretch of coast and its offshore islands made up the region the ancients called Ionia. Wikipedia tells us that the Greek city-states along this coast formed the Ionian League, which in those days was an influential part of ancient Greece, allying with Athens and contributing heavily, later on, to the defeat of the Persians when they tried to conquer Greece. One can look at Google Earth and zoom in on these islands, and in particular on Samos, seeing what is now likely a tourist destination with beaches and an interesting, rocky, green interior. On the coast to the east and somewhat south of Samos was the large city of Miletus, home to Thales, Anaximander and other Ionian philosophers (Heraclitus lived a little farther up the coast at Ephesus). Around 570 BCE, Pythagoras was born on the island of Samos. Nothing Pythagoras might possibly have written has survived, but his life and influence became the stuff of conflicting myths interspersed with more plausible history. His father was supposedly a merchant and sailed around the Mediterranean. Legend has it that Pythagoras traveled to Egypt, was captured in a war with Babylonia and, while imprisoned there, picked up much of the mathematical lore of Babylon, especially in its more mystical aspects. Later freed, he came home to Samos, but after a few years had some kind of falling out with its rulers and left, sailing past Greece to Croton, on the foot of Italy, which in those days was part of the greater Greek world. There he founded a cult whose secret mystic knowledge included some genuine mathematics, such as how musical harmony depends on the length of a plucked string and the proof of the Pythagorean theorem, a result apparently known to the Babylonians a thousand years earlier but possibly never before proved. Pythagoras was said to have magic powers, could be in two places simultaneously, and had a thigh of pure gold. This latter “fact” is mentioned in passing by Aristotle, who lived nearly two centuries later, and is celebrated in lines from the Yeats poem, Among School Children:

Plato thought nature but a spume that plays
Upon a ghostly paradigm of things;
Solider Aristotle played the taws
Upon the bottom of a king of kings;
World-famous golden-thighed Pythagoras
Fingered upon a fiddle-stick or strings
What a star sang and careless Muses heard:

Yeats finishes the stanza with one more line summing up the significance of these great thinkers: “Old clothes upon old sticks to scare a bird.” Although one may doubt the golden thigh, quite possibly Pythagoras did have a birthmark on his leg.

I became interested in Ionia and then curious about its history and significance because I recently wondered what kind of notation the Greeks had for numbers. Was their notation like Roman numerals or something else? I found an internet link, http://www.math.tamu.edu/~dallen/history/gr_count/gr_count.html, which explained that the “Ionian” system displaced an earlier “Attic” notation throughout Greece, and then went on to explain the Ionian system. In the old days, when a classical education was part of every educated person’s knowledge, this would have been a completely clear explanation. Although I am old enough to have had inflicted upon me three years of Latin in high school, since then I had been exposed to no systematic knowledge of the classical world and so was entirely ignorant of Ionia, or at least of its location. I had heard of the Ionian philosophers and had dismissed their philosophy as being of no importance, as indeed is the case EXCEPT for their invention of the whole idea of philosophy itself. And, of course, without the rationalism of philosophy, it is indeed arguable that there would never have been the scientific revolution of the seventeenth century in the West. (Perhaps that revolution was premature without similar advances in human governance and will yet lead to disaster beyond imagining in our remaining lifetimes. Yet we are now stuck with it and might as well celebrate.)

The Ionian numbering system uses Greek letters for the numerals 1 through 9, further letters for 10, 20, 30 through 90, and more letters yet for 100, 200, 300 on up to 900. The total number of symbols is 27, quite a brain full. The important point about this notation, along with the Egyptian, Attic, Roman and other ancient Western systems, is that the position of a symbol within a string carries no numerical value of its own; the partial exception is the relative positioning used in Roman numerals (IV versus VI). That relative positioning helps by reducing the number of symbols needed in a numeric notation, but it is a dead end compared to an absolute meaning for position, which we will go into below. The lack of meaning for position in a string of digits is loosely similar to written words, where a letter’s place in the word carries no value of its own, apart from things like capitalizing the first letter or putting a punctuation mark after the last. As an example of the Ionian system, consider the number 304, which would be τδ, τ being the symbol for 300 and δ being 4. There is no need for zero, and, in fact, these could be written in reverse order, δτ, and carry the same meaning. In thinking about this fact and the significance of rational numbers in the Greek system, I came to understand some of the long history, with its sparks of genius, that led in India to OUR numbers. In comparison with the old systems ours is incredibly powerful, but it has some complexity to it. I can see how, with unenlightened methods of teaching, trying to learn it by rote can lead to early math revulsion and anxiety rather than to an appreciation of its remarkable beauty, economy and power.
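To make the contrast with our positional system concrete, here is a small sketch. The symbol table is a partial version of the standard Ionian assignments; the archaic letters for 6, 90 and 900 are left out, so the sketch only handles numbers that avoid them. The point is simply that each symbol carries its value no matter where it sits.

```python
# Partial table of Ionian (Milesian) numeral values; the archaic letters for
# 6, 90 and 900 are omitted, so this sketch skips numbers that need them.
IONIAN = {
    1: "α", 2: "β", 3: "γ", 4: "δ", 5: "ε", 7: "ζ", 8: "η", 9: "θ",
    10: "ι", 20: "κ", 30: "λ", 40: "μ", 50: "ν", 60: "ξ", 70: "ο", 80: "π",
    100: "ρ", 200: "σ", 300: "τ", 400: "υ", 500: "φ", 600: "χ", 700: "ψ", 800: "ω",
}

def to_ionian(n):
    """Spell n additively: each symbol carries its own value, so order is irrelevant."""
    parts = []
    for power in (100, 10, 1):
        digit = (n // power) % 10
        if digit:
            parts.append(IONIAN[digit * power])
    return "".join(parts)

print(to_ionian(304))  # τδ -- no zero needed; δτ would mean the same number
print(to_ionian(123))  # ρκγ
```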

In the ancient Western systems there is no decimal point and nothing corresponding to the way we write decimal fractions to the right of the decimal point. What we call rational numbers (fractions) were to Pythagoras and the Greeks all there was. They were “numbers”, period, and “obviously” any quantity whatever could be expressed using them. Pythagoras died around 495 BCE, but his cult lived on. Sometime during the next hundred years, one of his followers disproved the “obvious”, showing that no “number” could express the square root of 2. This quantity, √2, by the Pythagorean theorem, is the hypotenuse of a right triangle whose legs are of length 1, so it certainly has a definite length, and is thus a quantity, but to the Greeks it was not a “number”. Apparently, this shocking fact about root 2 was kept secret by the Pythagoreans, but was supposedly betrayed by Hippasus, one of them. Or perhaps it was Hippasus who discovered the irrationality. Myth has it that he was drowned (either by accident or deliberately) for his impiety towards the gods. The proof of the irrationality of root 2 is quite simple nowadays, using easy algebra and Aristotelian logic. If a and b are integers, assume a/b = √2. We may further assume that a and b have no common factor, because any common factors can be cancelled away. Squaring and rearranging, we get a² = 2b². So a² is even; and since the square of an odd number is odd, “a” itself must be even, that is, divisible by 2. Substituting 2c for a in the last equation and rearranging, we get b² = 2c², so b is also divisible by 2. This contradicts our assumption that a and b shared no common factor. Now we apply Aristotelian logic, whose key property is the “law of the excluded middle”: if a proposition is false, its contrary is necessarily true; there is no “weaseling” out. In this case, where a/b either equals √2 or it doesn’t, Aristotelian logic applies, and the contradiction proves that no fraction a/b can be √2. The kind of proof we have used here is called “proof by contradiction”: assume something, derive a contradiction, and conclude, by the law of the excluded middle, that the contrary of what was assumed must be true. In the early twentieth century, a small coterie of mathematicians, called “intuitionists”, arose who distrusted proof by contradiction. Mathematics had become so complex during the nineteenth century that these folks suspected that there might, after all, be a way of “weaseling” out of the excluded middle. In that case only direct proofs could be trusted. The intuitionist idea did not sit well with most mathematicians, who were quite happy with one of their favorite weapons.
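For readers who like to see the algebra laid out, the same chain of steps can be written compactly:

```latex
\begin{aligned}
\tfrac{a}{b} &= \sqrt{2}, \qquad a,\,b \text{ integers with no common factor} \\
a^2 &= 2b^2 \;\Rightarrow\; a^2 \text{ is even} \;\Rightarrow\; a \text{ is even, say } a = 2c \\
(2c)^2 &= 2b^2 \;\Rightarrow\; b^2 = 2c^2 \;\Rightarrow\; b \text{ is even} \\
&\Rightarrow\; a \text{ and } b \text{ share the factor } 2,\ \text{contradicting our assumption.}
\end{aligned}
```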

Getting back to the Greeks and the fifth century BCE, one realizes that after discovering the puzzling character of √2, the Pythagoreans were relatively helpless, in part because of inadequacies in their number notation. I haven’t tried to research when and how progress was made in resolving their conundrum during the 25 centuries since Hippasus lived and died, but WE are not helpless, and with the help of our marvelous number system and a spreadsheet such as Excel, we can show how the Greeks could possibly have found some relief from their dilemma. The answer comes by way of what are called Pythagorean triplets, three integers like 3,4,5 which satisfy the Pythagorean law. With 3,4,5 one has 3² + 4² = 5². Other triplets are 8,15,17 and 5,12,13. There is a simple way of finding these triplets. Consider two integers p and q where q is larger than p, where if p is even, q is odd (or vice-versa), and where p and q have no common factor. Then let f = q² + p², d = q² – p², and e = 2pq. One finds that d² + e² = f². Some examples: p = 1, q = 2 leads to 3,4,5; p = 2, q = 3 leads to 5,12,13. These triplets have a geometrical meaning in that there exist right triangles whose sides have lengths whose ratios are Pythagorean triplets. Now consider p = 2, q = 5, which leads to the triplet 20,21,29. If we consider a right triangle with these lengths, we notice that the sides 20 and 21 are pretty close to each other in length, so that the shape of the triangle is almost the same as one with sides 1,1 and hypotenuse √2. We can infer that 29/21 should be less than √2 and 29/20 should be greater than √2. Furthermore, if we double the triangle to 40,42,58, and note that 41 lies halfway between 40 and 42, the ratio 58/41 should be pretty darn close to √2. We can check our suspicion about 58/41 by using a spreadsheet and find that 58/41 is 1.41463 to 5 places, while √2 to 5 places is 1.41421. The difference is 0.00042, so the approximation 58/41 is off by only about 3 parts in 10,000, or 0.03%. The ancient Greeks had no way of doing what we have just done; but they could have squared 58 and 41 to see if the square of 58 was about twice the square of 41. What they would have found is that 58² is 3364 while 2 × 41² is 3362, so the fraction 58/41 is indeed a darn good approximation. Would the Greeks have been satisfied? Almost certainly not. In those days Idealism reigned, as it still does in modern mathematics. What is demanded is an exact answer, not an approximation.

While there is no exact fraction equal to √2, we can find fractions that get closer, closer and forever closer. Start by noticing that a 3,4,5 triangle has legs 3,4 which, though not as close in length as 20, 21, are only 1 apart. Double the 3,4,5 triangle to 6,8,10 and consider an “average” leg of 7 relative to the hypotenuse of 10. The fraction 10/7 = 1.429 to 3 places, while √2 = 1.414. So 10/7 is off by only about 1%, remarkably close. Furthermore, squaring 10 and 7, one obtains 100 and 49; the ratio 100/49 is very nearly 2, which would require 100/50 exactly. The Pythagoreans could easily have found this approximation and might have been impressed, though certainly not satisfied.
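For anyone who wants to check these numbers without a spreadsheet, here is a small sketch of the construction just described: build the triplet from p and q, double the triangle, and divide the hypotenuse by the “average” leg.

```python
import math

def triplet(p, q):
    """Pythagorean triplet from integers p < q (coprime, opposite parity)."""
    return q*q - p*p, 2*p*q, q*q + p*p        # two legs and the hypotenuse

def root2_estimate(p, q):
    """Double the triangle, average the two nearly equal legs, divide into the hypotenuse."""
    d, e, f = triplet(p, q)
    return 2*f / (d + e)                      # e.g. 58/41 for the 40, 42, 58 triangle

for p, q in [(1, 2), (2, 5), (5, 12), (12, 29)]:
    est = root2_estimate(p, q)
    print(p, q, triplet(p, q), round(est, 8), round(abs(est - math.sqrt(2)), 8))
```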

I discovered these results about a month or so ago when I began to play with an Excel spreadsheet. Playing with numbers is, for me, relaxing and fun; it is a pure game whether or not I find anything of interest. I suspect that this kind of “playing” is how “real” mathematicians do find genuinely interesting results, and, if lucky, may come up with something worthy of a Fields Medal, the equivalent in mathematics of a Nobel prize in other fields. While my playing is pretty much innocent of any significance, it is still fun, throws some light on the ancient Greek dilemma, and, for those of you still reading, shows how a sophisticated idea from modern mathematics is simple enough to be easily understood.

With spreadsheet in hand what I wondered was this: p,q = 1,2 and p,q = 2,5 lead to approximations of √2 via Pythagorean triplets. Are there other p,q’s that lead to even better approximations? To find such I adopted the most powerful method in all of mathematics: trial and error. With a spreadsheet it is easy to try many p,q’s and I found that p = 5, q = 12 led to another, even better, approximation, off by 1 part in 100,000. With 3 p,q’s in hand I could refine my guesswork and soon came up with p = 12, q = 29. I noticed that in the sequence 1,2,5,12,29,… successive pairs gave increasingly better p,q’s. This was an “aha” moment and led to a question. Could I find a rule and extend this sequence indefinitely?
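The spreadsheet search can also be written as a few lines of code. This sketch simply tries every allowable p, q pair up to a bound and keeps those whose triplet has legs differing by 1, which reproduces the sequence 1, 2, 5, 12, 29, …

```python
import math
from math import gcd

def triplet(p, q):
    return q*q - p*p, 2*p*q, q*q + p*p

candidates = []
for q in range(2, 100):
    for p in range(1, q):
        if gcd(p, q) != 1 or (p + q) % 2 == 0:     # the generating rule: coprime, opposite parity
            continue
        d, e, f = triplet(p, q)
        if abs(d - e) == 1:                        # legs differ by 1: nearly an isosceles right triangle
            err = abs(2*f / (d + e) - math.sqrt(2))
            candidates.append((err, p, q, (d, e, f)))

# The best few pairs turn out to be exactly the consecutive terms 1,2  2,5  5,12  12,29  29,70.
for err, p, q, t in sorted(candidates)[:5]:
    print(p, q, t, err)
```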

In my life there is a long history of trying to find a rule for sequences of numbers. In elementary school at Hanahauoli, a private school in the Makiki area of Honolulu, I learned elementary arithmetic fairly easily, but found it profoundly uninteresting if not quite boring. Seventh grade at Punahou was not much better, but was interrupted partway through the year by the Pearl Harbor attack of December 7, 1941. The Punahou campus was taken over by the Army Corps of Engineers and our class was relocated to an open pavilion on the University of Hawaii campus in lower Manoa Valley. I mostly remember enjoying games in which everyone tried to tackle whoever could grab and run with a football, even though I was one of the smaller children in the class. Desks were brought in and we had classes in groups while the rain poured down outside the pavilion. Probably it was during this year that we began to learn how fractions could be expressed as decimals. In the eighth grade we moved into an actual building on the main part of the University campus and had Miss Hall as our math teacher. The math was still pretty boring, but Miss Hall was an inspiring teacher, one of those legendary types with a fierce aspect but a heart of gold. We learned how to extract square roots, a process I could actually enjoy, and Miss Hall told us about the fascinating things we would learn as we progressed in math. There would be two years of algebra, geometry, trigonometry and, if we progressed through all of these, the magic of “calculus”. It was the first time I had heard the word and, of course, I had no idea of what it might be about, but I began to find math interesting. In the ninth grade we moved back to the Punahou campus and our algebra teacher was Mr. Slade, the school principal, who had decided to get back to teaching for a year. At first we were all put off a bit by having the fearsome principal as a teacher, but we quickly learned that Mr. Slade was actually a gentle person and a gifted teacher. As we learned the manipulations of algebra and how to solve “word problems”, Mr. Slade would, fairly often, write a list of numbers on the board and ask us to find a formula for the sequence. I thoroughly enjoyed this exercise and learned to take differences or even second differences of pairs in a sequence. If the second differences were all the same, the expression would be a quadratic and could easily be found by trial and error. Mr. Slade also tried to make us appreciate the power of algebra by explaining what was meant by the word “abstraction”. I recall that I didn’t have the slightest understanding of what he was driving at, but my intuition could easily deal with an actual abstraction without understanding the general idea: that in place of concrete numbers we were using symbols which could stand for any number. Later, when I did move on to calculus, which involves another step up in abstraction, I at first had difficulty with the notation f(x), called a “function” of x, an abstract notation for any formula, or indeed a representation of a mapping that need not come from a formula at all. I soon got this idea straight and had little trouble later with the next step of abstraction, to the idea used in quantum mechanics of an abstract “operator” that changes one function into another.
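As an aside, Mr. Slade’s difference trick is easy to state in code: for a sequence generated by a quadratic formula, the second differences come out constant. A tiny sketch, with made-up sequences of my own:

```python
def differences(seq):
    """Successive differences of a list of numbers."""
    return [b - a for a, b in zip(seq, seq[1:])]

quadratic = [n*n + 1 for n in range(1, 8)]      # 2, 5, 10, 17, 26, 37, 50
print(differences(quadratic))                   # 3, 5, 7, 9, 11, 13
print(differences(differences(quadratic)))      # 2, 2, 2, 2, 2  -> constant, so the rule is quadratic

pell_like = [1, 2, 5, 12, 29, 70, 169]
print(differences(differences(pell_like)))      # 2, 4, 10, 24, 58 -> keeps growing; the trick fails here
```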

Getting back to the sequence 1,2,5,12,29,…, I quickly found that taking differences didn’t work; the differences never seemed to get much smaller because the sequence turns out to have an exponential character. I soon discovered, however, using the spreadsheet, that quotients worked: take 2/1, 5/2, 12/5, 29/12, all of which become more and more similar. Then, multiplying 29 by the last quotient, I got 70.08. Since 29 was odd, I needed an even number for the next q, so 70 looked good, and indeed I confirmed that the triplet resulting from 29, 70 was 4059, 4060, 5741, with an estimate for √2 that was off by only 1 part in 100 million. After 70 I found the next few members of the sequence: 169, 408, 985. The multiplier to try for the next member seemed to be closing in on 2.4142, or 1 + √2. At this point I stopped short of trying for a proof of that possibility, both because I am lazy and because the possible result seemed uninteresting. What is interesting is that the sequence of p,q’s goes on forever and that approximations for √2 using the resulting triplets converge on √2 as a limit. The idea of a sequence converging to a limit was only rigorously defined in the 19th century. Possibly it might have provided satisfaction to the ancient Greeks. Instead, the idea of irrational numbers beyond the fractions became clear only with the invention by the Hindus in India of our place-based numerical notation and the number 0.
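Here is a sketch of the quotient idea carried to its conclusion. The recurrence used below, next = 2 × current + previous, is simply an observation consistent with every term listed above (I have not proved it here), and the resulting estimates do close in on √2.

```python
import math

def next_term(prev, cur):
    """One rule consistent with 1, 2, 5, 12, 29, 70, 169, 408, 985, ...:
    each term is twice the current term plus the previous one."""
    return 2*cur + prev

def root2_from_pair(p, q):
    """Estimate of sqrt(2) from consecutive terms p, q via the doubled triplet."""
    d, e, f = q*q - p*p, 2*p*q, q*q + p*p
    return 2*f / (d + e)

p, q = 1, 2
for _ in range(8):
    est = root2_from_pair(p, q)
    print(f"p={p:4d} q={q:4d}  estimate={est:.12f}  error={abs(est - math.sqrt(2)):.1e}")
    p, q = q, next_term(p, q)
```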

Place-based number notation was developed separately in several places: in ancient Babylon, in the Maya civilization of Central America, in China and in India. A place-based system with a base of 10 is the one we now use. Somewhere in one’s education one has learned about the 1’s column just to the left of the decimal point, then the 10’s column, the 100’s column and so forth. When the ancient Hindus and the other civilizations began to develop the idea of a place-based system, there was no concept of zero. Presumably the thought was that symbols should stand for something. Why would one possibly need a symbol that stood for nothing? So one would begin with symbols 1 through 9 and designate 10 by “1·”. The dot “·” is called a “place holder”. It has no meaning as a numeral, serving instead as a kind of punctuation mark which shows that one has “10”, not 1. Using the place holder in the example above of Ionian numbers, the τδ would be 3·4, the dot holding the 10’s place open. The story with “place holders” is that the Babylonians and Mayans never went beyond them, but the Hindus gradually realized the dot could have a numerical meaning in its own right, and “0” was discovered (invented?).

Recently, on September 13th or 14th, 2017, there was a flurry of reports that carbon dating of an ancient Indian document, the Bakhshali manuscript, revealed that some of its birch-bark pages were 500 years older than previously estimated, dating to a time between 224 and 383 AD. The place-holder symbol occurring ubiquitously in the manuscript was called shunya-bindu in the ancient Sanskrit, translated in the Wikipedia article about the manuscript as “the dot of the empty place”. (Note that in Buddhism shunyata refers to the “great emptiness”, a mystic concept which we might take as the profound absence of being logically prior to the “big bang”.) A readable reference to the recent discovery is https://www.smithsonianmag.com/smart-news/dating-ancient-indian-text-gives-new-timeline-history-zero-180964896/. According to the Wikipedia article, the Bakhshali manuscript is full of mathematics, including algebraic equations and negative numbers in the form of debts. As a habitual skeptic, I wondered when I first heard about the new dating whether Indian mathematicians, with their brilliant intuition, hadn’t immediately realized the numerical meaning of their place holder. Probably they did not. An easy way to see the necessity of zero as a number is to consider negative numbers as they join to the positives. In thinking and teaching about math I believe that using concrete examples is the best road to an abstract understanding, and debts are a compelling example here. At first one might consider one’s debts as a list of positive numbers, amounts owed. One would also have another list of positive numbers, one’s assets, amounts owned. The idea might then occur of putting the two lists together, using “-” signs in front of the debts. As income comes in, one’s worth goes, for example, from -3 to -2 to -1. Then what? Before going positive, there is a time when one owes nothing and has nothing. The number 0 signifies this time, before the next increment of income sends one’s worth to 1. The combined list would then be …, -3, -2, -1, 0, 1, 2, 3, … . Arithmetic with properly extended rules, combining various sources of debt and income, then becomes completely consistent, but only because 0 was included.
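A minimal sketch of why the place holder eventually has to become a genuine digit: once each position carries a power of ten, the evaluation rule needs something in every position, and it treats 0 exactly like any other digit.

```python
def value(digits, base=10):
    """Evaluate a list of digits, most significant first, in a positional system.
    The zero in [3, 0, 4] does real arithmetic work: it holds the tens place open."""
    total = 0
    for d in digits:
        total = total * base + d
    return total

print(value([3, 0, 4]))   # 304 -- the Ionian τδ, but now the empty tens place needs its own digit
print(value([3, 4]))      # 34  -- drop the zero and the meaning changes, unlike δτ versus τδ
```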

If the above seems as if I’m belaboring the obvious, let me then ask why, when considering dates, the next year after 1 BCE is not 0, but 1 AD. Our dating system was made up at an early time, before we had adopted “0” in the West. Historians have to subtract 1 when calculating intervals in years between BCE and AD, and centuries end in years ending in 00, not 99. This example is a good one for showing that once one gets locked into a convention, it becomes difficult if not impossible to change. I was quietly amused at the outcry when Y2K, the year 2000, came along, with many insistent voices pointing out the ignorance of us who considered the 21st century to have begun. The idea of zero is not obvious, and I hope I’ve shown, in considering the Pythagoreans and their dilemma with square roots, just how crippled one is trying to get along without it.
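To see the off-by-one concretely, here is a tiny sketch of the interval calculation historians have to do. It uses the convention (an assumption of the sketch, not of the historical calendar) of mapping 1 BCE to 0, 2 BCE to -1, and so on.

```python
def to_astronomical(year, era):
    """Map a historical year onto a continuous scale: 1 BCE -> 0, 2 BCE -> -1, AD years unchanged.
    The subtraction is needed precisely because the historical calendar has no year 0."""
    return -(year - 1) if era == "BCE" else year

def years_between(y1, era1, y2, era2):
    return to_astronomical(y2, era2) - to_astronomical(y1, era1)

print(years_between(1, "BCE", 1, "AD"))    # 1, not 2: only one year separates them
print(years_between(44, "BCE", 14, "AD"))  # 57 years between 44 BCE and AD 14
```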