Top Post (no longer)

This post is a mini-introduction to Hoalablog. It was made “sticky” so that it appeared as the top post on the blog, and it remained so for some time. (“Sticky” removed, 10/31/2025.)

Note the menu of posts on the upper right (scroll up or click here if it’s not showing) and the menu of links (called Categories for now) on the left. Below this Top Post are all the published posts in the blog, latest first, running latest to earliest back to when the blog began on April 28, 2016 with the publication of the first post, Comfortable Belief.

Although the posts are somewhat scrambled, there are several themes reflecting the “why” of my bothering to write this blog. A summary of the main theme is the link More Thoughts I. This link will open in a new tab. To get back here, simply close the new tab or click on this one. This arrangement allows you, as reader, to continue in the new tab and its links if you wish.

If one looks at the posts in chronological order by clicking “From the Start” in the menu on the left, one can get a feel for what’s in the blog. (The posts will appear below this “sticky” one.) For a more coherent look, click on the categories “Spiritual Journey”, “Philosophy and Western Zen”, or “QM and Science”. At the moment six other items appear in the left menu. “Introduction” gives more details about the blog; “Home” brings up this main page; “About Ho`ala Blog” more specifically covers what I thought, at various times, the blog was about. “Contact” sends me an email (at least I hope it does). If one comments on a post or page, I’m notified. “Memoir”, just added, links a series of pages and post excerpts into a possible memoir.

If you are new to this blog, I recommend selecting Introduction or About in the Links menu at the left. Back to Top

Supreme Fiction

Between 1993 and 2009 Microsoft published a multimedia encyclopedia called Encarta which could be accessed from various early versions of Windows. During the mid-90s, while not actually programming, I enjoyed computer games and read extensively in Encarta. Practically everything I read has vanished from my memory except for one striking prose essay by Wallace Stevens in which he mused about the idea of a “supreme fiction.” Stevens must have written this at about the same time that he was composing his poetic masterpiece, Notes Toward a Supreme Fiction, published in 1942. In an earlier post, Into the Morass, Part I, I quoted some lines from the poem, lines which almost always lead me into an epiphany, as they have just now as I reread the post. (link) At the time of that post I mentioned that Stevens thought deeply about the idea of a supreme fiction and that I would myself consider this idea at a later time. Of course, at the time of that earlier post, October 2016, I was recollecting the Stevens essay, which would be a basis for any further thoughts on my part. But now that the time has come for my thoughts, the Stevens essay is missing and I’m on my own except for the poem and my imagination.

It seems that by 2009 Encarta was overwhelmed by the popularity and sheer mass of articles in Wikipedia, the online encyclopedia which has revolutionized research in this day and age. Unfortunately, when Encarta was put to death, the Stevens essay did not make the transition to Wikipedia. I unsuccessfully searched the internet for the essay and then wondered if archaeological remnants of Encarta could be dug up and rendered meaningful. I found that Encarta, if it exists at all, cannot be accessed by Windows 10, to say nothing of Apple’s iOS. Resurrection is doubtless possible, but I no longer have the time, temper, energy, will, or ability for such an enterprise. I searched Amazon and found a book of Wallace Stevens essays, which I bought. The crucial essay is not there, and there is nothing about fiction, supreme or otherwise. Literary critics have made the lame suggestion that poetry at large is the supreme fiction Stevens is driving towards. However, the title of the poem, “Notes Toward …”, indicates to me that the actual images in the lines of the poem itself point the way to a wordless ecstasy of meaning for the phrase.

For me, reasons for considering “supreme fiction” go beyond the ideas of Wallace Stevens. The attraction is in the craziness of the locution itself, a Zen-like phrase which comes out of the West and accordingly, in my mind anyway, allows a vista of freedom different from the milieu of the East.

“Supreme” and “fiction” jar against each other. Although they are certainly not exact opposites, they seem to have an Aristotelian kind of opposition allowing no softening middle between them. If something is “supreme”, doesn’t that ultimate pinnacle argue for its “reality”? However, I would suggest that being “fiction” exalts the “supreme” beyond any possible mundane reality into a reality that is transcendent.

In the midst of the unbounded expanse of time in which our awareness exists, each day we have a “today” which we attempt to pinpoint in that unimaginable vastness by a formula such as July 29, 2024. On that today I hiked west on the Park Meadow trail from its trailhead on the Three Creeks road south of Sisters, Oregon. The forest along this trail was destroyed by the Pole Creek fire a few years back. Around two and a half miles in, I came to a runnel of water called Snow Creek. Being alone and 95, I walked up and down the creek seeking a safe crossing but found no passage I would risk in my lackadaisical mood. On the way back, about a mile from the trailhead, there was an area where, amid the devastation, the fire had left a few clusters of trees, “swags of pines,” as Stevens puts it, mostly lodgepole. I stopped for a rest and noticed that the tree next to me had the five-needle bundles of a rare, surviving whitebark pine. The tree was tall and healthy: a beautiful specimen. I think that this tree will have to be the supreme fiction for today. Perhaps I’ll find another tomorrow. Back to Top

Time and Tide I

What is the significance, if any, of being briefly alive and aware in the early twenty-first century?

The immediate focus of this question is on our time right now; but in imagining this significance, it is essential to place ourselves in the context of history, beginning with the creation, 13.8 billion years ago, out of an unknown nothingness, of something we call our universe, and within it a ticking clock embodying time and thus the possibility of history, the trace time leaves in space. This history has many aspects: the early physical aspect involving the creation of forces, fields, and particles; the coalescing of particles into simple atoms as the primordial plasma cools and transparency ensues; the accumulation of hydrogen, helium, and a trace of lithium into stars. Next comes cosmology, the story of star generations, with their lives and supernova deaths creating complex atoms beyond the first three kinds, and of galaxies revolving around massive black holes. Then comes geology, considering the ages of our planet, with the creation of life, stratigraphy, continental drift, and the various eras as life becomes complex, and finally the accident of early forms of pre-humans. Then anthropology studies the evidence left by early forms of humanity until ultimately writing develops and traditional history begins.

Traditional history has countless themes, all possible subjects of future consideration: the wars, the technology, the growth of culture, the empires, the human urge to create art and music, the understanding of how it was in the past; and finally, that we are here looking back at all of this history, knowing the wonder of the whole story, but knowing also that we are still fundamentally animals with our emotions controlling our formidable intellects. Perhaps the significance of our time is that the universe could throw up a being whose very existence could be, for us, anyway, an event of significance in the universe.

As a first example of our human story and its possible significance, I’ll consider how we humans have used our inventive brains to increase the carrying capacity of our environment to the extent that the environment itself becomes endangered by our very numbers and activities. In considering how any animal’s population fluctuates in response to changes in the ecosystem in which it lives, one encounters many simplified stories of overpopulation, followed by a crash, followed by a recovery and a new overpopulation. Such life stories of a single species are actually the exception, however, and even in a so-called typical case one is apt to find a complexity within the apparently simple story. The bio-sciences are not as straightforward as the physical sciences I’m more accustomed to, which, to me, is in fact one of their main attractions. Thus it is not surprising that the story of human population growth is not simple. Even a broad-brush narrative such as I am giving here has its somewhat paradoxical wrinkles. I’ll start this account by considering an interesting book. In this part of hoalablog I will often make a practice of bringing up and talking about books I’ve read, not so much reviewing them as using them as launching pads for discussion or as sources of relevant ideas.

When I first began to write this piece, I remembered reading this book, but I had completely forgotten the name of the author and the title. The book concerned the story of the early industrial revolution, invention, and the steam engine which was its centerpiece. It was a readable book and I knew it would be relevant. In a “what-the-hell” moment I googled “industrial revolution”, in what seemed to me a futile quest. To my surprise, one of Google’s featured books was The Most Powerful Idea in the World: A Story of Steam, Industry and Invention by William Rosen. I read a review or two on Amazon, and upon dipping into the book via Amazon’s “look inside” feature, it became clear that this was indeed the book I was trying to remember. Idle curiosity then led me to figure out when I had read it. I checked the Bend library and it wasn’t there. I didn’t own it and it wasn’t in my Kindle library, so I must have found it in the Eugene library and read it before 2013, the year we moved to Bend. The book was published in 2010, so this thought is plausible.

The main theme of Rosen’s book is the story of how humanity escaped what has been called “the Malthusian Trap”. This phrase refers to the theory expounded in Thomas Malthus’s influential book, An Essay on the Principle of Population, published in 1798. Malthus’s idea is that population growth is inherently exponential while food supplies will only increase linearly. The result is that any increase in food supply is overwhelmed by population growth, which means that a large portion of the population will always be on the brink of starvation. (An example of exponential growth is a doubling series – 1, 2, 4, 8, 16, 32, … – while a corresponding linear series is 0, 1, 2, 3, 4, 5, … .) I remember learning about Malthus as I slogged through Stanford University’s famous course A History of Western Civilization¹ in the academic year 1947–48. At that time I was lazy and bored by history, so I skipped most of the reading and simply took notes and paid close attention in class to the young, charismatic professor, managing to pass the course and come away with some knowledge. Besides Plato’s Phaedo, one selection I did read was an excerpt from Malthus’s book. His idea was new to me and I was impressed by his logic.
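Malthus’s contrast between exponential and linear growth can be sketched in a few lines of Python. The numbers here are purely illustrative (they are not from Malthus or Rosen); the point is only that a doubling series always overtakes a linear one, however generous the starting surplus:

```python
# Malthus's contrast: an exponentially doubling population versus a
# linearly growing food supply.  Units are arbitrary; the shapes matter.
population = [2 ** n for n in range(7)]        # 1, 2, 4, 8, 16, 32, 64
food_supply = [10 + 5 * n for n in range(7)]   # 10, 15, 20, 25, 30, 35, 40

for pop, food in zip(population, food_supply):
    print(pop, food, "surplus" if food >= pop else "shortage")
```

By the sixth step the doubling series (64) has passed the linear one (40), even though the food supply started ten times larger.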

Going back to the early history of Homo sapiens, starting some two to three hundred thousand years ago, I suspect that Malthus’s ideas don’t apply to our situation as hunter-gatherers during that era. In a hunter-gatherer society the population is kept under control, more or less, in the same manner as for any other animal. There were abundance and starvation, war with other tribes, and always accidents and disease, keeping a largely static population and an undisturbed environment. We did cause a few environmental problems during this era, notably the extinction of some megafauna, but though this is probably regrettable, these extinctions posed no massive environmental threat. With the advent of agriculture some 6 or 7 thousand years ago, we gained control of our food supply and fell into a Malthusian trap. Rosen estimates that in the 7600 years between 6000 B.C. and 1600 A.D. world population increased from 5 to 500 million, a doubling every thousand years or so. Although this population increase is large in absolute terms, the annual fractional increase is only about 0.000606. (I won’t get into the details of how one comes up with that number. God forbid that one bring serious math into history; or, for that matter, history into physics, chemistry, or engineering. However, see the brief appendix at the end of this piece for details.) During this era, with agriculture the dominant economic activity, there was a food supply that could support a slowly increasing population. Although life was miserable for most people, as per Malthus, a surplus of food, unevenly distributed, gave rise to occupations besides farming. Over these thousands of years the food surplus allowed technology and culture to grow. Finally, in the West, the scientific revolution occurred from the late 16th century onwards. However, this revolution did not allow us to escape Malthus’s dictum. Life for most remained marginal and precarious.

Rosen’s Most Powerful Idea goes into the history of the steam engine in great detail, telling of Thomas Newcomen’s first commercially successful engine of 1712, used for pumping water from coal mines, followed by decades of incremental improvements until, in 1764, James Watt made the huge breakthrough which brought on the industrial revolution’s full flower. The engine was used in factories and, most notably, in steam locomotives, enabling railways to spread in England, the United States, and on the Continent. Still later in the century, steam largely replaced sail on ocean vessels. Rosen considers the “most powerful idea” of the era during which the steam engine was developed and applied to be the patent system, adopted in England during the mid-1700s, which democratized invention. With the patent system, both the incentive and the mechanism were in place for anyone to reap the rewards of invention. According to Rosen it was this flourishing of invention which lay behind the industrial revolution and which ultimately expanded the economy, breaking us out of the Malthusian trap.

It should be noted that this breaking of the trap was not immediate. In the early years of the 1800s, as textile factories replaced skilled workers with the less skilled, who could produce more under miserable working conditions at lower wages, the Luddite movement arose. Later, after Parliament passed a reform act in 1832 which did not extend voting rights to those without property, the Chartist movement came into being. Nevertheless, as the nineteenth century wore on, a tremendous number of new niches appeared in the ecology of the economy and a significant middle class arose in Europe and America. Between around 1860 and 1940, while the population grew rapidly, this growth did not seriously impact the environment. I can recall the life my parents lived in the late 1930s, a time, at least in Hawaii, when a peak was reached in enlightened employment practices. My Dad went to work as an accountant in the morning at 8 am. He had a half-hour off for lunch and finished the day at 4 pm, giving him ample time to swim at Waikiki for an hour or so before dinner with his wife and kids. He and my mother could afford to build two homes before the war came to us on Dec. 7, 1941. Around 1944, in school, I learned that the world population was now 2 billion, so there had been a substantial increase during the 344 years since 1600, when the population was around 500 million.

Of course, after the war there was a population explosion, and by now, in 2024, there are numerous worrisome environmental impacts. We can summarize human population growth in a “big picture” way by saying that after the agricultural revolution, human numbers grew slowly, with numerous ups and downs, for 7000 years or so, subject to Malthus’s Law. The histories of this period neglect the miserable condition of the great mass of humanity, concentrating on the wars, the culture, and the innovations of a small fraction. With the industrial revolution, at least in the West, there was a breakout from the Malthusian Trap, driven by invention, which allowed a better life for many within a rapidly increasing population, without the danger of the imminent extinction of humanity, if not of all life. After World War II the population grew frightfully fast, with an increasingly better life for those in “first world” countries; but with the invention of the atomic bomb and the sheer numbers of people threatening the environment, we now do face threats of extinction.

Some numbers substantiate the story above. I’ve already pointed out that during the agricultural era the world population increased from 5 to 500 million with an average doubling time of 1000 years and a fractional growth rate of 0.000606 per year. From 1600 to 1944 the world population increased from 500 million to 2 billion. Population doubled twice in those 344 years, giving a doubling time of 172 years and an annual fractional rate of increase of 0.00402. Finally, in the 80 years since 1944, population has again doubled twice, reaching 8 billion in 2023, giving a doubling time of 40 years and an annual rate of 0.017. To obtain an annual percent increase, commonly found in contemporary studies of population growth, one simply multiplies these annual fractional rates by 100.

Fortunately, the situation is not quite as dire as the last figures would suggest. It seems that one assumption made by Malthus is incorrect: as people’s affluence increases in a modern society, the extra wealth does not result in population growth. Instead, the cost of raising children goes up, the child mortality rate goes down, and birth control becomes available. There is also, in many areas, the possibility of abortion. As a result, the rate of population growth has gone negative in many first world countries, and even in many third world countries the rate, though positive, is going down. The situation since WWII has been extensively studied and one can find many results on the internet. For example, the link

https://www.macrotrends.net/global-metrics/countries/WLD/world/population-growth-rate

has graphs of world population and of the percentage growth rate from 1950 to 2024, with projections into the future. According to the year-by-year growth rate, the highest growth, 2.2%, occurred in 1963, while in 1991 the declining rate dropped below the 1.7% average I calculated. The current growth rate, according to this study, is 0.9%.

Projections into the future are, of course, controversial. This particular estimate is about average for the few studies I’ve looked at. It projects the growth rate for the world to go negative in 2086 with a maximum world population of 10.43 billion at that time.

This study and others, however, neglect the numerous threats our current times provide. A recent New Yorker article (June 10, 2024) examines a University of Chicago course entitled “Are We Doomed?”, which considers various scenarios of disaster. After many guest lectures and discussions, students in the course concluded that the most serious threat would be the outbreak of nuclear war; climate change, they felt, had the main effect of increasing the risk of nuclear war. There are many other threats. I’ve just read an interesting book, The Blue Machine: How the Ocean Works, by Helen Czerski, a physicist turned oceanographer. Most of the book simply considers the workings of the “blue machine” as far as we know them; but the last chapter considers the threats oceans face, using the knowledge explored earlier. Collapse of ocean life-support systems, however, is only one of many environmental threats. I could doubtless write an entire piece about threats to our existence using Czerski’s book as a touchstone, as well as another favorite, The Ministry for the Future by Kim Stanley Robinson. Perhaps I will.

It seems clear to me that the story of human world population growth, which I’ve told here, is indeed one matter of significance for an aware person of the early twenty-first century.

Appendix – Calculating Population Growth Rates

The basic equation for annual population growth is

P(next year) = (1 + r) × P(this year),

where the P’s are population numbers and r is the fractional rate of increase per year.

This equation has to be applied year by year, over hundreds or thousands of years, when one considers population growth over long periods. The calculation is similar to that of compound interest, when one wants to know the interest rate, knowing the principal amount at the beginning and end of a time period. With interest one might compound daily, using an Excel spreadsheet with its automated repeat feature; with the population equation above, the built-in compounding time is one year.

Fortunately, there is a simple, highly accurate approximation in any case of compound growth; namely, the exponential growth equation

P(t) = P(0) e^(rt),

where P is the population, r is the rate, t is the time period, and e is the base of natural logarithms, with a value of 2.718… . Note that the quantity rt is dimensionless, r being a rate and t a time; e.g., r is a fraction per year and t is a time in years. One can, in fact, argue that the exponential growth equation defines r, rather than the equation at the top.
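To see just how accurate the exponential approximation is, one can compare annual compounding with the exponential form for, say, the agricultural-era rate quoted in this piece. A quick Python sketch:

```python
import math

r = 0.000606   # annual fractional growth rate from the text
t = 1000       # years

discrete = (1 + r) ** t        # compounding once per year
continuous = math.exp(r * t)   # exponential growth approximation

# Both come out near 1.83: the population nearly doubles in 1000 years,
# consistent with the "doubling every thousand years or so" in the text.
print(discrete, continuous)
```

The two figures agree to about three decimal places, which is why the exponential form is safe to use for these long-period estimates.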

To use the growth equation, make an algebraic rearrangement and then take the natural logarithm, ln, of both sides, obtaining

            rt = ln(P(t) / P(0))

In our first calculation the ratio of the P’s is 100, with a natural log of 4.605. (On a phone calculator, turn it sideways to bring up the ln function; enter 100 and tap ln.) Then divide by 7600, the number of years, to get an r of 0.000606. Back to Top
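For readers who prefer code to a sideways calculator, the three rates quoted in this piece can be reproduced in a few lines of Python (the helper name annual_rate is mine, not from the text):

```python
import math

def annual_rate(p_start, p_end, years):
    # rt = ln(P(t) / P(0))  =>  r = ln(P(t) / P(0)) / t
    return math.log(p_end / p_start) / years

# Agricultural era: 5 million -> 500 million over 7600 years
print(round(annual_rate(5, 500, 7600), 6))   # 0.000606

# 1600 -> 1944: 500 million -> 2 billion over 344 years
print(round(annual_rate(500, 2000, 344), 5)) # 0.00403

# 1944 onward: 2 billion -> 8 billion over 80 years
print(round(annual_rate(2000, 8000, 80), 3)) # 0.017
```

Multiplying any of these by 100 gives the annual percentage rate used in contemporary population studies.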

  1. See the post History I.

Ramblings II

In the last post I inadvertently used the epithet “IT” as the name of an imagined new slant on a Zen grounded in Western culture; or, perhaps, not just Western culture but the complex, enriched, modern world culture which has itself grown out of Western culture. This new embodiment needs a name. “IT” won’t do, as it just doesn’t have the proper ring, cachet, or heft of the existing names: Dhyana, Ch’an, Zen. Dhyana, beginning as simply a name for meditation, can now be taken as the name of the “almost Zen” of Mahayana Buddhism as it grew under the tutelage of Nagarjuna and his kin. Ch’an is the Chinese name, while Zen, the Japanese pronunciation of Ch’an, became the label for the Japanese embodiment. Finding a good label is tough, and I don’t feel that I have the talent for it. Consider the physicist Murray Gell-Mann, who did have that talent. Gell-Mann came up with the name quark as a label for simple particles within the various nucleons, mesons, and resonances of the strong interaction. Independently of Gell-Mann, the physicist George Zweig had had the same daring idea, that there were actual “real material” particles belonging to the fundamental triplet representation of the SU3 group. Zweig named his particles “aces”, while Gell-Mann preferred “kworks”. Fooling around in Joyce’s Finnegans Wake, Gell-Mann found the phrase “three quarks for Muster Mark” on page 383. Thinking that one of Joyce’s meanings might be a bar order, “three quarts for Mister Mark”, Gell-Mann proposed “quark”, pronounced “kwork”, as a tortured rendition of “quart”. The page number 383 is significant since the next higher representation of SU3 has 8 members, which are even more “real” than the quarks because they can exist on the “outside” as hadrons and make tracks in a bubble chamber. The “eight-fold way” has been taken over from Buddhism for the SU3 interpretation in which there are 8 particles which can be “seen”. (An older, discarded 3-fold Sakata model used the hadrons proton, neutron, and Lambda as the fundamental triplet.)

Although the likelihood that I have Gell-Mann’s talent for labelling is vanishingly small, I really must give it a try; so I will propose the Hawaiian pidgin Da Kine. This expression works rather well because it is a corruption of the English “the kind”, but in Pidgin the meaning has changed: da kine’s reference is deliberately vague or ambiguous. Often the phrase is used when one does not feel like being specific. When I now use “da kine”, maybe it refers to this new kind of Western religiosity, or maybe it’s merely a meaningless redundancy. It could refer to anything. An example of the flavor I’m talking about occurs in William Finnegan’s wonderful, Pulitzer-prize-winning memoir, Barbarian Days. The author and his buddy, Bryan, in a vain attempt to keep secret their discovery of a world-class surfing spot, Tavarua, in Fiji, never say its name, but refer to it as “da kine”. A problem with da kine is, of course, that it is a very common expression in Hawaii and, in fact, there is a company with the name Da Kine. One faces a possible trademark infringement complaint; however, if “Windsurfer” and “Kleenex” couldn’t defend their trademarks, I doubt that “Da Kine” can either. In any case one might well end up with an even better name than Da Kine.

It is pretty clear to me that we do need a new name. Consider the existing names Dhyana, Ch’an, and Zen. Dhyana has a lofty, abstract, almost philosophical connotation of the jewel in the lotus, while in Chinese culture there is the wonderful idea of taking serious things lightly and light things seriously. This particular sensibility, it seems to me, is missing in Japanese culture; not that there isn’t a wonderful sense of humor in certain Japanese productions. I remember that in the late fall of 1974, when my first marriage had dissolved and I was quite distraught, I visited my parents in Honolulu. They lived on Kulamanu Place, right around the corner from where William Finnegan lived on his first Hawaiian visit, not that that has any relevance. What does have relevance is that Hawaiian television in those days featured Japanese science fiction cartoons. These were deliberately and deliciously corny, with a wonderful sense of humor. I still remember one in which the villain was named “Blue Electric Eel”. He could take on a human form, and when he was in a crowd about to perpetrate some villainy, the scene would move down and show his blue suede shoes, just before all hell broke loose with, if I remember, many sparks and short circuits. With world western culture there are so many strands that I won’t pick out any particular one. The whole culture is da kine. What is clear is that this new world culture needs a new word for its penumbra and its specifics, and I’m proposing da kine.

Another, more concrete theme of da kine sensibility is that it originated as an outgrowth of traditional Buddhism. I wonder what sort of novel insight Buddhist thought could confer on Western history, culture, philosophy, and religion, to say nothing of contemporary affairs. Going beyond any historical distinctions in Buddhism, such as its split into Theravada and Mahayana forms, I think that the idea of “attachment”, and the goal of its relaxation or lessening, is curiously underemphasized in our culture. I remember an incident that occurred some years ago when for a time I attended a “Bohm dialogue” group, dedicated to the idea of a selfless descent into creating and following interesting threads of conversation without an agenda, pretty much identical to an eighteenth-century French salon, but perhaps with a higher expectation of generating deep new insights. During a discussion of a topic, now forgotten, one person brought up the idea of “gnosis”, which he considered to be knowledge and understanding of a religious doctrine held in a manner so absolute as to be impervious to refutation. What struck me at the time was that this constituted a grasping so rock solid and of such diamond hardness that it might well be called adamantine. What struck me even more was that this person implied that he admired this gnosis and that its “knowledge” should be taken seriously, in part, simply because it was held with such mystic conviction. At the time I was quite shocked because I had been immersed in Buddhist thought for some twenty to thirty years and hadn’t realized that “grasping” could be taken in any way other than as “undesirable”.

A day or so ago, to learn more, I looked up the word “gnosis” in Wikipedia and read about its considerable history, beginning in ancient Greece as simply a word translated as “knowledge”. Then in later Hellenistic times there were sects which were called gnostic, and in still later times the word led, in various modern European languages, to words for two kinds of knowledge. It would seem that gnosis is a mystic kind of insight into belief, but in no part of the article was there a hint of the idea of “grasping”, a concept seemingly foreign to Western thought if applied to doctrines or ideas.

Of course, the modern scientific revolution, starting somewhere around the early seventeenth century, did implicitly bring in the idea of letting go, or “ungrasping”. Scientists are supposed to have convictions but be willing to change them when evidence rules against them. However, anyone who is at all knowledgeable about scientific history knows that this ideal is far from being followed by scientists in practice. The physicist Planck, who elucidated the first quantum mechanical phenomenon, as I’ve discussed in earlier posts, was not at all happy with what he discovered and only slowly accepted the idea that he had truly found something revolutionary and new. However, it was Planck, I think, who, reflecting on the continued opposition of older scientists to his and later discoveries, said something to the effect that no amount of experimental evidence would ever cause these physicists to change their opposition; but fortunately they would eventually die off, leaving the new field of quantum theory to be developed by scientists who would pay attention to the experiments which definitively demonstrated the reality of quantum phenomena.

The point is that science does have a way of eventually dealing with adamantine grasping, whether through grudging acceptance or the dying off of stubborn opposing scientists. In recent times, as I’ve discussed before, Karl Popper’s idea of “refutation” has been enormously clarifying for the philosophy of science. “Refutation” directly implies the necessity of ungrasping as a theory is disproved. Popper’s idea has furthermore diffused into areas outside of science as a touchstone of rational thought. However, the idea of refutation becomes muddy as one moves away from science into areas where refutation becomes more and more difficult or impossible. One then needs to grapple with the whole idea of “grasping”, especially with what I’ve called adamantine grasping, the grasping implied by “gnosis”, impervious to any change of mind regardless of evidence or argument. For it seems clear that a tendency to grasp beliefs is ingrained in us humans, and likely has some positive survival value in many situations. However, there does seem to be an increasing need to cope with its negative consequences, a need underappreciated in Western thought. Back to Top

History I

In an earlier post, “Physics, Etc.”, I threw out, as an aside, the statement that if one wanted to understand “everything”, physics was a good place to start. The implication was that I indeed wanted to understand everything and that such a goal was a worthwhile life pursuit for anyone. Now I want to consider this idea, using history as an exemplar. One might of course take “understanding everything” as a crazy Zen dictum, such as “believe nothing, understand everything”, and indeed it can be so taken. In a later post I will, in fact, consider how a religion of nothingness fits with and caps off a comprehensive understanding of the particulars of our entire human situation.

Before engaging the main theme of this post, I will consider the somewhat tangential question of why physics is, indeed, a good starting point towards “understanding everything”. This is mainly because physics, as an amalgam of abstract math and real-world experiment, is a difficult subject, an excellent pursuit in one’s youth when mathematical powers are at their peak. No one, not even Einstein, Schrödinger, Feynman, Gell-Mann, or the multitude of current-day experimentalists and theorists, is really smart enough to do physics well. One really needs an IQ of 4 or 5 hundred or so as well as an incredible imagination, creativity and, very likely, some luck.

Once one wants to leave physics for other fields, many open up. Technical or cross-disciplinary engineering or financial fields are wide open, and even sciences such as biology or neurology, seemingly distant, have been served well by ex-physicists. However, the main point here is that if one at first concentrates in an area such as the humanities, arts, or social sciences, and then later tries to go in the other direction, towards physics or other sciences, one finds that the difficulties are likely to be overwhelming. In saying this I’m not making any kind of value judgement about the worth of any particular area. In fact, part of the difficulty in moving from humanistic areas towards science is finding the motivation to do so. Moving from an area in which the glory of the particular is a main attraction to an abstract area where there seems little magic or joy hardly seems a worthwhile enterprise. Then, if one does begin to sense an aesthetic in science or the wonder in the depths of physical being, one confronts mathematics, which is likely to come across as meaningless abstract gibberish. One ends up depending on popular expositions such as my last three posts; and such are unlikely to lead to much depth of understanding. In particular, one is unlikely to realize that physicists themselves, in spite of their great past successes, are working in the dark, mostly repulsed by the great mystery at the edge of their discipline.

In my own pursuit of understanding in areas outside of science, one crucial insight came to me rather late, as I reflected on my experiences in becoming a decent skier. Growing up in Hawaii and then going to college in the Bay Area of California, I hardly even saw any snow until I was 22 years old. At that point I became fascinated with skiing, both downhill and back-country. In those days one used the same equipment for both. Our downhill skis had cable bindings with side clips near the heels for downhill and by the front bindings for back-country. The latter allowed one’s heels to come up, which made striding easier when on the level. For going uphill, we attached skins to the bottoms of the skis. Out West, where we skied on Mt. Shasta and in the Sierra, real cross-country, langlauf, skis were unknown and thus the joys of kick and glide were missing. In those days I could not afford ski lessons, so I learned pretty much on my own and developed just about every disastrous bad habit possible. In addition, it turned out that I didn’t really have a great deal of aptitude for the sport. Years later, in Oregon, skiing with real XC or modern downhill skis and realizing I had become a competent, if far from expert, skier in spite of all the obstacles, it became clear that my bad start and lack of aptitude didn’t much matter, because I loved being out on skis and put in many enjoyable hours, in spite of many falls and a tendency for bad habits to reassert themselves in difficult situations. On cross-country skis I attempted telemark turns in vain for many years, finally getting the feel of the skis floating in the soft snow, the turns becoming effortless, as I skied down with an overnight pack from above the Palmer lift on Mt. Hood after climbing the mountain.
With downhill skis, after unlearning the old Arlberg technique, I finally trained my muscles to do the right thing by constant reminders, “skis together, look downhill, articulate”, but always I was likely to revert to a snowplow and backwards fall when things got tough. The lesson here is that when one tries to learn something new, aptitude doesn’t matter as much as finding meaning and joy in its pursuit. Such joy enables the persistence in practice and the discipline which lead towards success, satisfying even when only partial. This lesson is crucial for teachers and professors at all levels and is largely ignored. That, however, is a subject for a different post.

For me personally, history is a wonderful example of confrontation and learning in a field outside of one’s main youthful interests. At first I found boredom, if not outright hatred; then came the glimmerings of interest and a grudging acceptance of some history. My interest widened to more areas until a fascination developed to the point where I could be accused of being a “history buff”. Ultimately, I’ve become interested not simply in the history of specific times and places, but in what doing history involves, with an appreciation of the gifts and dedication required of a truly excellent historian. Finally, I’ve come to see how history has expanded to encompass in itself an understanding of everything. To tell this story of my involvement I will now move into “memoir” mode, beginning with how I grew up in Hawaii and found myself living at a time when happenings became history.

Around 1935 my parents built a large two-story house at 2244 Aloha Drive, in a still sparsely settled Waikiki neighborhood. The Ala Wai Canal, one block mauka (toward the mountains) from Aloha Drive, was built between 1923 and 1928. Before then, the part of Waikiki where we were to live on Aloha Drive was swampy, with two streams flowing to the ocean. The Hawaiian dictionary translates Waikiki as “spouting water”. Wai means fresh water as opposed to Kai for salt water, while Kiki can refer to any rapidly flowing water. Perhaps “turbulent fresh water” would be a better translation, but who really knows? Perhaps “spouting” could refer to waves hitting a stream as it flowed into the ocean. In any case Waikiki was a rather sleepy beach with a limited amount of sand and coral-filled water offshore, cut off from the main city by the two streams. In the early years of the twentieth century the beach witnessed the resurrection of surfing, notably by Duke Kahanamoku, who was also noted for his Olympic swimming medals and world records. By the 1920’s surfing was well established at Waikiki. My Dad’s pictures from that time show a row of surfers with their huge, weighty, ponderous redwood boards standing on the beach, as well as pictures of Dad himself in a bathing suit that covered his chest, and, in the background, a pier running out to sea from the Moana Hotel. One aspiration of history is to create the impression of what it would be like to be present in a past time and place. I can imagine how Waikiki was in the 1920’s and how different it had already become by the time I could remember it in the mid 1930’s. (The pier had vanished, among other things.) After 1928 the Waikiki streams flowed into the Ala Wai, and the land where we were soon to be living was filled to appreciably above sea level.
From our house we could walk makai (towards the ocean) down Royal Hawaiian or Seaside Avenues five blocks to Kalakaua Avenue and the beach at Waikiki just beyond, where my younger brother George and I could play in the water and where I finally was able to keep my feet from continuously reaching for the bottom and begin to swim at age 6. (On a 1936 trip to the “mainland”, i.e., California and the U.S. beyond, George learned to swim at age 5.)

For some reason which was never at all clear to me, my parents tired of living in Waikiki and my mother, consulting with an architect, designed a new house which was built around the time I turned 11. This new home was located in lower Manoa valley at 2111 Rocky Hill Place, a short lane running uphill from Kakela Drive which began at McKinley Avenue and then rounded a corner and climbed up towards the top of Rocky Hill, an ancient volcanic remnant. (This area is easily found on Google Maps.) Our new house stood at the top of a 20 odd foot rock wall above McKinley and from our living room we had a view of Waikiki and the ocean beyond. This ocean view was partially blocked near the shore by Waikiki’s two hotels, the Moana and the Royal Hawaiian, as well as by lower buildings and the coconut trees between Kalakaua Avenue and the ocean. Later, when I was in high school, I became aware that on occasion we could see white surf break beyond the hotels when Summer storm surf came to the south shore from the great Winter storms south of Tahiti. The sight of such surf was a sign that we should, if at all possible, get down to Waikiki where my brother, my cousin and I could put on fins and swim out a half mile or so to body surf the large waves at First Break. Back in the late 1930’s and early 40’s, however, body surfing lay in the future and we mostly swam near shore and picnicked on the weekends on ivory colored coral sands surrounded by lauhala or ironwood trees on the far side of the island.

It was around this time that I became aware of the news of what was going on in Europe, with a crisis concerning Czechoslovakia and the threat that Hitler’s Nazi Germany would start a world war. I was aware that there had been an earlier, very bad, war before I was born, but knew no details. I remember hearing broadcasts, called fireside chats, by our president, whose reasonable, friendly, confiding, persuasive voice, capable also of withering scorn, completely won my admiration. I felt a sense of foreboding when war did come in 1939, a feeling which intensified as Poland, Norway, then France fell to the Nazis. I became aware that there was a threat from Japan, which had invaded Manchuria and China, and had joined the “axis” powers. I wondered why they were called “the axis powers”. (In fact, even now when I think I know, I haven’t actually read an explanation, so my understanding is really based only on the plausible speculation that Germany and Italy cut through the heart of Europe like an axis around which all else would revolve.) In the summer of 1941, when Hitler invaded Russia, I felt a slight sense of relief after the German invasion slowed and halted, an outcome that had never happened before. As the Fall came on there was news of worsening relations with Japan.

My folks and their friends observed that there was little threat to us in Hawaii because Navy PBY sea planes patrolled thousands of miles out to sea in all directions, where they would detect any sign of Japanese naval activity. (Perhaps the PBY’s weren’t actually fictional, but they obviously did not patrol effectively to the North.) My Dad worked for Castle and Cooke, one of the Big Five business firms which dominated much of the islands’ economy in those days. Castle and Cooke and the other Big Five had a controlling investment in Matson Navigation Company, which owned 4 passenger ships as well as many freighters which supplied us with goods from the mainland and carried back our sugar and pineapples. Castle and Cooke also served as agents for Matson in Hawaii, and I realize now that my Dad, involved with Matson’s island doings, would have been privy to all the scuttlebutt going around town. At that time, I was too ignorant and uncaring to be much aware of such things. What I was aware of were my parents’ friends in the Navy who visited us when in port. One, whose name I regrettably don’t recall, worked in the engine room of the heavy cruiser Houston. In the interest of personifying him, I’ll make up a name for our USS Houston friend, calling him “Sam” after the historic Sam Houston. (Sam might, according to tickles in my brain, in fact have been his real name.) In the Fall of 1941, Sam had heard from talk going around in the navy that we were very close to war with Japan. It could come at any time. In November Sam bid us good-bye as his ship sailed west to the Far East. We never saw him again, the Houston being sunk in the early days of the war. Later, after the war, we heard that Sam had survived being in the engine room, but had died in a Japanese prisoner of war camp. Our other friend, Forest Jones, was a petty officer on the battleship West Virginia, stationed at Pearl Harbor when not at sea.

December 7, 1941 was a Sunday. The previous September I had entered the 7th grade at Punahou, a well-known private school, founded in 1841 by missionaries living in the Kingdom of Hawaii. My classes, as always in those days, were somewhat boring, but I endured and learned the material as do most children. Weekends were somewhat of a relief, and on that Sunday morning, feeling relaxed and free, I walked out into our yard and looked up to see the entire sky covered with anti-aircraft bursts. I knew what they were because I had frequently seen planes towing targets which were surrounded by eight or ten of these bursts as anti-aircraft gunners practiced. My feeling on seeing the sky covered was one of shock. I knew something was badly amiss, but did not jump to the obvious conclusion. Somehow, war was unthinkable, a feeling shared by all of the Island’s military authorities, who should have known better. I went back in the house exclaiming about the bursts to my parents. We went into the living room and looked down to the ocean, where two small freighters were coming toward port. Suddenly, two huge columns of white water rose near the ships, making a surreal, impossible image. My parents immediately went across the room to the large Philco radio console and turned it on. After its interminable warm-up the radio came to life, and the broadcast sounded entirely normal for a Sunday morning. Our feeling of relief did not last long, however, as the program was soon interrupted, with an announcer saying something like, “Folks, we don’t know what is going on, but we’ll find out and get back to you as soon as we can.” Then the music resumed. The second program break came shortly. “… The Hawaiian Islands are under enemy attack. … The Rising Sun has been seen on the wings of the attacking planes.” Shortly after the second resumption there was a third, announcing that the station was going off the air. Then silence.

I suppose we must have eaten breakfast, but I remember nothing about it. I do remember looking up from our front yard and seeing a formation of white planes high in the sky. Their motors made an entirely different sound than what I had usually heard from airplanes. They were presumably Japanese planes of a second or third wave heading towards Pearl Harbor. Later a single plane flew fairly low over us and dropped a bomb that hit harmlessly near a home at the top of the steep slope rising across Manoa Valley. Since that area was barely visible from our house, this incident most likely happened after we had left the house and walked up on Rocky Hill, where there was a good view to the South and West. I felt very frightened. It had occurred to me that if a Japanese pilot had seen us, he would have mercilessly strafed us. There were no more planes near us, but we all kept in mind that there were nearby Kiawe trees under which we could hide had any appeared. Looking West toward Pearl Harbor, all we could see was the crater of Punch Bowl blocking the view. Behind Punch Bowl rose a huge cloud of black-grey smoke. I figured that the Japanese had bombed the two or three large fuel tanks that lay on the shore of Pearl Harbor near Pearl City. I was wrong. Luckily for us, the Japanese had given the fuel tanks, which could easily have been destroyed with 2 or 3 bombs each, a lower priority, which they decided not to exercise after their successful attack. Admiral Nagumo felt that leaving as fast as possible was better than pushing his luck by refueling planes for further attacks on the lower priority targets. I heard recently that the entire fuel supply for the Pacific Fleet was in those tanks and that its loss would have crippled our fleet for months had they been incinerated. Instead the attack concentrated on military airfields and the ships in Pearl Harbor, destroying most planes on the ground and sinking many ships.
At one point as the morning wore on, I was able to look through some borrowed binoculars at the entrance to Pearl Harbor which we could see. There were ships moving out to sea with bomb spouts rising near them. Around 11 am we went back home just in time to see a big fire burn some buildings and homes about half way down towards Waikiki. We thought the cause was a final departing plane which had dropped a bomb, but in reality, we might only have imagined that we saw such a plane.

After Pearl Harbor came the bleak early days of the war, which became a total disaster for the U.S. and its allies in the Pacific. There has been much history written about WWII in all its multiple theaters, but one relevant fact not sufficiently emphasized, in my opinion, is the feeling one has of being in a “Total War” such as WWII. It is a feeling of constant underlying stress, like being in a tight athletic contest, whether on the field or on the bench, but much more intense and much more prolonged. When will this war ever end? This feeling of underlying dread is not always in one’s consciousness, but lurks, waiting to spring into awareness. Life is not normal because there is a feeling that the whole world is awry, and disaster seems never far away even if the war is being won.

In the days after Pearl Harbor, we wondered about Forest Jones on the West Virginia. As it turned out, Forest survived Pearl Harbor and the entire war in the Pacific. In 1991 there was a 50-year anniversary commemoration of Pearl Harbor, with Japanese participants in the attack joining Americans who had been there. My family had kept in touch with Forest during and after the war. During the war he had visited us frequently when his ship was in port, and we had heard his stories of Pearl Harbor and beyond. Forest participated in the 50th anniversary commemoration, though he was far from ever forgiving the Japanese for what he had endured in the war. He wrote up an account of his experiences at Pearl Harbor for the Naval Archives, and sent a copy to my Dad. By 1991 I was an unabashed history buff, so saw to it that I had a copy of the Forest Jones account. It is worth quoting excerpts from it.

“Forest M. Jones, LCDR, USN (Ret) November, 1991

“I was a 1st Class Petty Officer aboard the U. S. S. West Virginia (BB48) when the Japanese attacked Pearl Harbor. I was on the forward Fire Control Platform, above the Navigation Bridge, and saw the Japanese planes coming down the shipyard channel. Three other Fire Controlmen on the platform and I immediately manned our battle stations in the two anti-aircraft gun directors located on the Fire Control Platform. We had the two directors manned with skeleton crews, before the General Quarters alarm was sounded…
…….
“From our vantage point on the upper deck, we could see that some of the starboard guns were being manned by their crews. There was no attempt to man the guns on the port side due to continued torpedo strikes, fire and debris in the vicinity of the gun area.

“We were unable to obtain power to permit operation of the Gun Directors and were also unable to establish communications with the anti-aircraft guns. …

Forest writes that he descended with some of his fire control crew to the lower deck where crewmen were setting up the guns so he and his friend, Joe Paul, another fire controlman, went to the nearby Ready Service boxes and began to remove 5” shells used by the guns.

“While Joe Paul and I were removing ammunition from a Ready Service Box we were suddenly engulfed in a cloud of kapok from the life jacket locker above the Ready Service Boxes. We later discovered, after the attack, that one of the 16” armor piercing shells that the Japanese had modified to be used as a high level bomb had struck the top of the Forward Cage Mast and was deflected by the heavy metal coaming of the Signal Bridge. Were it not deflected it could have been a direct hit on the Ready Service Box where we were working. The shell penetrated to a lower level but failed to explode. The lives of Joe Paul and I were spared twice in the matter of a second. First when the bomb was deflected and then by its failure to explode almost directly under our position on the anti-aircraft deck.

“After unloading all of the available ammunition, I went to the Navigation Bridge where Captain Bennion was sitting against the forward metal shield on the wing of the bridge. He had been fatally wounded but was still alive. Unfortunately, there was nothing we could do to alleviate his suffering. He had suffered a massive mid-torso injury by a fragment from a high level bomb that had struck the Number 2 Turret of the Tennessee.

Forest Jones and two other enlisted men then helped around 30 men to emerge from an escape tube, the only remaining route to safety from the main battery fire control room on a lower level deck.

“By this time the ship had taken a decided list to port due to underwater torpedo damage that led to extensive flooding. It was about this time I witnessed large explosions within the Arizona, which was directly astern the West Virginia. It was necessary for us to take cover in the protected areas of the bridge because of the great amount of flying debris. At this moment, I witnessed the Oklahoma, directly ahead of us, roll over to port due to heavy torpedo damage below the waterline. Within a few minutes all of the superstructure and decks were submerged and it came to rest with only part of the bottom visible.

(400 or so seamen were trapped inside the Oklahoma. In the days after Pearl Harbor we heard about a few who made their way up to the hull being rescued when their tapping was heard. All of the others perished.)

“The smoke from the burning Arizona was very heavy. Fortunately, there was no fire in the bridge area of the West Virginia. Although we were being subjected to numerous strafing attacks, we had no hits in the bridge area. During a lull in the attack I checked the Signal Bridge and Fire Control levels to make sure there were no wounded crewmen left in those areas.

“Apparently the heavy list to port was being remedied by counter flooding. The ship was gradually settling to the bottom on an even keel and finally came to rest with water to the Main Deck level. The word was passed on the upper decks and bridge levels to abandon ship (source unknown).

“Most of the crew abandoned ship in the vicinity of the starboard bow. Joe Paul and I were among this group. There were several motor launches moored in the area between the bows of the West Virginia and Tennessee. Joe Paul and I, along with an unknown fireman, manned a 40’ motor launch and made several trips to Hospital Point with wounded and other personnel who had been in the water and were heavily coated with fuel oil. We also towed floating bodies to the Hospital Point site. …

(Forest Jones mentioned to us during one of his wartime visits that after he abandoned ship, he had had to dive under burning oil to reach the launches. This detail was omitted from his report.)

“The West Virginia was raised, repaired, modernized and returned to Combat Operations in 1943. She was the only ship in Tokyo Bay during the signing of the surrender terms which had been at Pearl Harbor.”

End of the excerpts from Forest Jones’ account.

After Pearl Harbor Forest was assigned to the carrier Enterprise where he saw much action especially in the Battle of Midway. Later he rejoined the West Virginia where he saw much more action, recounting to us how the battleship fired its 8” guns to create water spouts in the hope of downing kamikaze planes as they attacked the ship.
……………………
In the early days after Pearl Harbor, Hawaii was put under martial law. (Surely one motivation for this was that our largest ethnic group in Honolulu at the time consisted of people who had come to Hawaii from Japan during the previous few generations.) Our lives resumed some sense of normalcy. We went down to Waikiki and swam, making our way through a passage in the rolls of barbed wire strung along the beach. The city was totally blacked out, and we learned to move about our house after dark, feeling walls and remembering where doors were. Later we used cardboard and tape to black out the windows of many of our rooms so that we had some light.

During the early months of the war before June, 1942, my parents had to decide whether or not to flee to the mainland rather than risk an Island invasion. They decided that the risk was small enough to be worth taking. However, some of my Punahou classmates disappeared. Under martial law our Punahou campus had been taken over by the Army Corps of Engineers. We 7th grade students held our classes in an open pavilion near the University of Hawaii Campus.

Except for following the news, hearing about our Guadalcanal invasion in the Solomon Islands and the Battle of the Coral Sea, my memories of the time during early 1942 are quite vague. I do remember that outside of our school pavilion was a yard where we all played a game involving a football. Someone would grab the ball and run, while everyone else would chase after, tackle, and pile on the runner. My memory is vague on one point, but I think the girls in our class did not sit out this game. Although I was one of the smaller kids, I thoroughly enjoyed this activity. I remember nothing about the Battle of Midway at the time, except for my mother describing how she went to downtown Honolulu in early June and found the city almost entirely deserted. The grapevine had apparently informed people that something big was going on.

The “something big” was, of course, the Battle of Midway. See Incredible Victory: The Battle of Midway by Walter Lord for a fascinating full account. With our blithe assumption of American superiority, we did not realize at the time that we were taking on, arguably, the world’s best navy and naval air force, which had vastly superior forces on the scene. We won the battle through luck, some vital decrypting of Japanese naval codes, some skill, and the incredible heroic sacrifice of our torpedo plane pilots. Something I found out later, probably while working at the Naval Ordnance Test Station, was a fact not mentioned in the book: until two or three years into the war the US had no torpedoes that could survive being dropped from a plane. Nevertheless, our torpedo plane pilots, who must have known they were doomed, attacked the Japanese carriers once located. The lumbering torpedo planes became sitting ducks for Zero fighters, which wiped out close to 100 percent of them. The Zero also played havoc with our outclassed fighter planes. After this Japanese “turkey shoot”, the planes involved needed refueling and, thinking, because of some miscommunications, that there were no American carriers anywhere nearby, Admiral Nagumo ordered a mass refueling. Thus, most of the Japanese fighter planes were helpless on their carrier decks when our largely unopposed dive bombers arrived on the scene. We sank all four carriers in their group and turned around the course of the war. The loss of their prime carriers was bad enough, but according to Saburo Sakai, a Japanese fighter ace, it was the loss of their highly accomplished fighter pilots that was even more of a disaster.

I don’t remember whether or not our victory at Midway was felt as an immediate relief in Hawaii. I do remember that one day we went down to Waikiki and the barbed wire was gone. It would be an interesting historical fact to know exactly when this happened, but as far as my memory is concerned it could have been as early as July of 1942 or as late as the end of 1943. I know that the blackout was lifted in 1944 when there really WAS no longer the possibility of a Japanese attack.

In the eighth and ninth grades we moved from our open pavilion down to a genuine classroom building at the University of Hawaii. Relevant to my rising distaste for History over the next few years were one or two social studies courses and a senior year American History course in which we seemingly covered, several times, the American colonies before the Revolutionary War and nothing much beyond. At this time, as I began to develop a fascination with math, the relevant history was being made right at our doorstep, and for me there was a total disconnect and irrelevance in the jumble of meaningless dates and events thrown out in the history classes.

In 1947 I became a Freshman at Stanford University. In those days, a notable course, required of all Freshmen, was The History of Western Civilization. The course consisted of readings from the time being covered, followed by a presentation of the history with the relevance of the readings thrown in. This was actually an effective way of teaching history, with the readings providing a flavor of the times that a mere description would lack. We had an excellent teacher, humorous, quizzical, unserious in manner, whose name I absolutely forget. Nevertheless, my roommate, the brilliant, creative Roger Shepard, in the same section of the course, and I pretty much goofed off. I skipped most of the readings, Plato’s Phaedo being an exception. Roger and I did pay close attention in the class so as to get some kind of a grade out of it, and accordingly some of the content must have rubbed off. What did begin to kindle my interest in my freshman or sophomore year was stumbling across a book in the Stanford library. The book was Germany Enters the Third Reich, written in 1933 by Calvin B. Hoover, a young economist who had received a grant to study the Soviet economy, after which he traveled to Germany in 1932 and witnessed the rise of Hitler first hand. Mr. Hoover had no access to the economics of Germany’s rise, but was a firsthand witness to the joy, relief, and passion aroused by Hitler. This book made the personal connection that began my transition to history buff. I took the WWII battles in the Pacific as personally meaningful, to say nothing of the advent of the atomic bomb and my feeling, with the rise of the cold war, that I was unlikely to make it to 30 years of age. At Stanford in those days, undergraduates were not allowed into the stacks, and I have no memory of how I came to find the Hoover book. Perhaps it was among the books on a cart waiting to be shelved, sitting outside the stacks, where I could browse through the books.

I became interested in WWI and read about its horrors. The Great War of 1914–1918 was more of a slaughter than a war. Its prime nightmare, for the British anyway, was the Battle of Passchendaele, along a tiny fraction of the Western Front in Belgium near the town of Ypres. See https://www.britannica.com/event/Battle-of-Passchendaele for an account. In an area turned into mud by early Fall rains, full of water-filled artillery craters, the British soldiers charged the German machine guns, whose bullets tore through their bodies, while artillery, some of it “friendly” fire, blew to pieces those whom the machine guns missed. Daily casualties on the British side were as high as 17,000, while those for the entire engagement were some 250,000 or so. (There is still controversy.) All of these casualties occurred between August and December of 1917. The battle was no picnic for the Germans either, their casualty estimate being 220,000. The ground gained by the British was minimal, and they later withdrew.

Although the Allies won the war, their spirits were devastated by it, and rightly so. Though the Germans were definitely defeated, the myth arose that they had been “betrayed”, and that the shame of losing was undeserved. When Hitler came to power, his propaganda minister, Joseph Goebbels, developed the effective propaganda technique of endlessly repeating “a big lie”, which his audience was largely inclined to believe. This propaganda technique also tends to convince skeptics against their better judgement and it seems to work universally if not met by counter-propaganda. Simple truth seems to be a none too effective antidote.

I became interested in just how WWI started and read a few interesting books about how nationalistic rivalries intensified and how leaders were blind to the weapons developments that made the war so terrible. The attitude of these kings, chancellors, prime ministers and others in power arose from the knowledge that there had been a long European peace, with the few threatening crises resolved by diplomacy, combined with the feeling that war historically hadn't been all that bad and might well simply "clear the air." Then there was the blindness of European leaders to the chauvinism that had arisen throughout the peoples of all the European nations. Nationalistic patriotism had become extreme and many were spoiling for a fight. The powder keg was, of course, the Balkans, where the empires of Austria-Hungary, Russia, and Turkey became rivals, all trying to suppress the wishes for independence of their subject peoples.

During the Cold War with its emphasis on avoiding appeasement, I feared that everyone was forgetting the lessons of what led to WWI. Fortunately, through luck and the perception of what a new total war would involve, we have avoided catastrophe so far, though the threat of nuclear annihilation still lurks.

I had become interested enough in history by my junior year at Stanford that I had the disheartening experience of "The High Middle Ages" mentioned in the post, QM 1. Also, at this time I was still totally uninterested in American History. How irrelevant it seemed. In later years I have of course found American History at least as fascinating as any other. Note added 3/4/2025: If you have had some exposure to the main events and their dates in U.S. History, there is a fabulous book, Freedom Just Around the Corner by Walter A. McDougall, a great read with penetrating insights into our American character.

This post is long. Here is a link to its top if you wish to escape.

More on History

Returning to one theme of this post, namely "understanding everything", I will point out that, at least in the case of History, becoming an addict is not sufficient for the kind of understanding that would satisfy me. One needs to get behind the output of the historian or journalist to understand how history is done. What does being an historian involve? What are the required gifts that make a great historian? What are the paradigms of historical studies?

One distinction that historians make is between “primary” and “secondary” sources. A primary source is an unfiltered, first-hand account, perhaps a newspaper article, correspondence, private papers, memories elucidated through interviews, contemporary government papers or other such material. A secondary account is the story a historian or journalist creates from a selection of primary and other secondary sources. The write-up above by Forest Jones is an example of a primary source as are my memories in the memoir paragraphs above.

A first reason one needs secondary accounts is that primary sources are incomplete and unreliable in a variety of ways. The Forest Jones account above is dramatic but is unclear on many fronts. Among other things, one needs a map of Pearl Harbor showing where the battleships were moored in order to understand why the port rather than starboard sides of the ships were devastated. One needs to understand the structure of the old pre-war battleships moored in Pearl Harbor. In order to give a coherent account of the attack, one needs to actually consult the various archives scattered about. Secondary sources such as Wikipedia or books about the attack have information close at hand, but mistakes often persist in the secondary literature, and if one is conscientious, one needs to actually accomplish the tedious work of traveling to archives and going through the file folders, or reading the original newspaper accounts.

I learned a little bit about archives first hand because a friend in Eugene, Oregon was Dean of Libraries at the University of Oregon, and, knowing of my scientific background, asked me to go through the papers of Aaron Novick, who had founded the Institute of Molecular Biology at the University. When Dr. Novick passed away, his office contents were put into 27 boxes and placed in the basement of the Special Collections department of the library. I accepted this challenge and boned up a bit on molecular biology reading The Eighth Day of Creation, an account of the early days when the structure of DNA was found and its workings elucidated. I certainly didn’t understand all of the material in that account, but got the general drift and learned the names of the main actors.

Going through the papers, letters and other materials was generally tedious, but from time to time very rewarding. I could read letters from Nobel winning scientists and others, some of whom perhaps should have won the big prize. I could follow the careers of students and post docs who later became distinguished scientists. Much material was redundant and I had to make judgements about what could be safely discarded. The final result was, if I remember correctly, 23 boxes of papers organized into somewhat coherent Series with a Finding Aid which gave a rough idea of what might be in each box. I could get an idea of an historian’s work, reading through papers in file folders in search of a relevant bit of key information, hoping that whoever made the Finding Aid didn’t botch the process.

What a historian faces in trying to tell a story which is interesting, coherent and enlightening, a story which also brings new insight into the understanding of the past or present, is typically an overabundance of not only primary material but also many previous secondary works. The gifts one needs are, first, a prodigious memory; second, the persistence to immerse oneself in the mass of material to the point where one gets a deep, intuitive understanding of the time and place of interest; and finally, the ability as a writer to condense, redact and present in compelling prose an interesting, meaningful story.

I will now consider an example or two based on my recent reading and the thoughts they give rise to. These examples show how history can become an attempt to “understand everything”.

Traditionally, history has been the story of politics and war. I am reading a book right now by a historian who wrote this traditional kind of history; namely, The March of Folly by Barbara W. Tuchman. The history in this book may be traditional, but the idea of the book is to examine through history a particular question, perhaps new: Why have governments of all kinds repeatedly throughout history adopted policies that are totally destructive to their own interests and then persist in these policies when their stupidity has become obvious? What we have here is history as inquiry. Ms. Tuchman is careful to limit her examples to a particular kind of misgovernment; namely, folly or perversity. I quote:

“To qualify as folly for this inquiry, the policy adopted must meet three criteria: it must have been perceived as counter-productive in its own time, not merely by hindsight.”

After commenting lucidly on this first criterion, Ms. Tuchman moves on.

“Secondly a feasible alternative course of action must have been available. To remove the problem from personality, a third criterion must be that the policy in question should be that of a group, not an individual ruler, and should persist beyond any one political lifetime. Misgovernment by a single sovereign or tyrant is too frequent and too individual to be worth a generalized inquiry.”

In her long, fascinating introductory section Ms. Tuchman mentions many possible instances of unfortunate outcomes that could be studied and lists several of the rare occasions when governments were actually competent and successful. Then, in the remaining body of the book, she concentrates on four situations ranging from the ancient world to the US involvement in Vietnam. The section I'm immersed in is an examination of how England came to lose her American colonies, concentrating on the 20 years between 1763 and 1783.

One notable feature of Ms. Tuchman's work is that she includes interesting material that doesn't bear directly on her inquiry. One gets a flavor of what it would be like to live in the England of that particular time. Society was highly stratified and Parliament was dominated by men from the ennobled, wealthy class. Excesses of high living were rampant, with gout a common ailment. It was not only King George III who had mental problems. Many other ministers and notable members of Parliament were subject to bouts of insanity and incapacitating ill health. Of course, it was the supposedly sane ones who were responsible for the acts of government blindness (almost insanity) which brought about the American revolution. Because of the richness of her story, I as a reader could make connections outside of the immediate story. The social partying and visiting among the great English estates persisted throughout the nineteenth century and were celebrated in the early twentieth century, before the Great War, by Saki (Hector Hugh Munro), a master of exquisite prose, in his short stories, full of understated British humor and a delicious presentation of human frailty. (Mr. Munro was another victim of the war, killed on the Western Front.)

The richness of Ms. Tuchman’s story invites commentary in at least two areas. For one, it shows why elementary history courses are apt to be exercises in deadly boredom. Much of the interest of history lies in the incidentals which give a rich, colorful, complex picture, making a time and place come alive. Abstract this richness from history, leaving merely the dates of events thereby rendered meaningless, and the joy of history is gone. It’s as if one is attempting to give an emotionless machine the mastery of history by logically erecting a scaffolding which can later be filled in with the details. In a later post I can perhaps suggest alternative ways of teaching not simply history but other subjects, such as mathematics and physics, whose teaching falls prey to the same fallacy. (Actually, I don’t need to do this. One merely has to read Whitehead’s The Aims of Education, to get the picture.)

A second observation is that Ms. Tuchman does not stray too far afield from her main subject. One does not get a broader picture of what was going on, even in Europe, at the same time. For example, Mozart was born in 1756 and died in 1791, flourishing during the period of Ms. Tuchman’s study. Beethoven was born in 1770. The great chemist Lavoisier was born in 1743 and did his revolutionary work in chemistry around 1778 before being guillotined in a later, political revolution. Ms. Tuchman does mention how David Hume, the philosopher, was involved in the politics of the time, but there is no mention of James Watt who pushed through his crucial modification of the steam engine to success after 10 years of effort in 1776, enabling an irresistible quickening of the industrial revolution. During the same time period Captain James Cook explored the Pacific, mapping New Zealand, Australia’s East coast and discovering the Sandwich Islands where I was to be born some 151 years later. Adam Smith’s The Wealth of Nations was published in 1776. Then there is literature and art. Interestingly, Ms. Tuchman, at the end of each section of her inquiry, includes a portfolio of paintings and documents relevant to her story. One can look at these, taking in the appearance of the main actors and by reading the captions see which artists were active at the time. As a masterful historian with an intimate knowledge of the times, Ms. Tuchman has the judgement of where to draw the line between too much and too little detail, herself painting an interesting historical picture without covering up what is vital to her inquiry. She supposes a reader, familiar enough with the history of the times, who can make connections beyond her immediate story.

Moving on, I note the dates in the last paragraph. In learning history dates should probably be kept to a minimum. How about 4000 BCE, 600 BCE, 1 CE, 618 CE (Tang dynasty), 1066 CE, 1492 CE and 1776 CE for starters? Dates, besides designating the linear flow of history, afford us the possibility of moving sideways in space and subject area so that we can appreciate what is contemporary during a given time period. This is what I've done in the last paragraph. With dates one can also move in new directions. In 8th grade math, one can bring in history. For example, most of us, at least in the past, have learned about Roman numerals. There are I, V, X, L, C, D and M. But wait. What is the Roman numeral for 0? Of course, there is none. Neither is there a year 0. Dates skip zero, going directly from 1 BCE to 1 CE. In an earlier post I talked about the invention of zero. One can easily do a little research online these days and find that zero was slow to become established in Western culture. Adoption was quite uneven. This is part of the reason for its omission in our date line. There is little possibility of fixing this defect, because it would mean our records of exact dates would need altering. Such a fix is even less likely than a reform of our QWERTY keyboards. Once established, conventions are very difficult to change.

Although History with a capital H has traditionally been about politics and war, there are histories of almost any reality one can imagine. Looking at the paragraph above, one realizes that there are histories of music, chemistry, philosophy, technology, exploration, economics and art, to name a few. However, there has been very little tendency for such histories to broaden themselves by moving sideways. If one is to start trying to understand everything through history, one has a great deal of reading on one's hands and then a huge job of correlation. Of course, if this task is fun, why not fool around with it in a leisurely manner? I do own a book entitled The Timetables of History by multiple authors. The book consists of tables with horizontal rows for a given date and vertical columns for History Politics, Literature Theater, Religion Philosophy Learning, Visual Arts, Music, Science Technology Growth, and Daily Life. There is a fascinating foreword by Daniel Boorstin, former Librarian of Congress, with penetrating comments on what history is all about. I lack an historian's appetite to engage exhaustively with this book, most of the entries seeming to me of little import. Did you know that in the year 518 Sigmund, son of Gundobad, became king of Burgundy, while in 1920 the Nobel Prize in physics went to Charles Édouard Guillaume for discoveries of anomalies in nickel-steel alloys? (Actually, one might get interested enough in the times of Sigmund to wonder if Burgundy was already producing decent wine, another subject worthy of historical study.) There are interesting little gems scattered about in this book. During the interval from -2500 to -2001, equinoxes and solstices were determined in China, while in 1776 David Hume died. One can check out the mathematical inconsistency I've harped on earlier by checking that Augustus, the first emperor of Rome, reigned from -30 to +14. Since 14 - (-30) is 44, as our 8th grade students have just learned, one sees he reigned for 43 years after one accounts for the missing zero. Of course, the book is a wonderful research tool.
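The missing year zero is easy to encode. Here is a tiny illustrative sketch of the arithmetic (my own, not from any standard library), using signed integers as year labels with negative meaning BCE:

```python
def years_between(start, end):
    """Elapsed years between two year labels (negative = BCE).

    The calendar jumps straight from 1 BCE to 1 CE, so a span that
    crosses the boundary is one year shorter than naive subtraction
    suggests.
    """
    span = end - start
    if start < 0 < end:  # crossed the nonexistent year zero
        span -= 1
    return span

# Augustus reigned from -30 to +14: 43 years, not the naive 44
print(years_between(-30, 14))
```

Spans that stay entirely within BCE or within CE need no correction; only those straddling the boundary lose a year.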

Besides histories of the various subjects mentioned above and those specializing in particular time periods and various peculiarities, there has arisen in recent times a genre called "big" history, or the history of everything. In an earlier post I commented briefly on Yuval Harari's Sapiens: A Brief History of Humankind, and I have just interrupted my reading of Barbara Tuchman's book to read Origin Story: A Big History of Everything by David Christian, an historian who worked mostly in Sydney, Australia, specializing in Russia, both imperial and Soviet, before becoming interested in "big" history around 1989. Dr. Christian starts his history with the "big bang", which occurred, according to the latest reckoning, 13.82 billion years ago. I had this book on hold at our library and when it became available, I downloaded it for a three-week loan and am now waiting expectantly to come back to Ms. Tuchman's insights on the folly of Vietnam. Fortunately, I own The March of Folly so can put it aside for now.
Big history begins with the modern analogue of a creation myth, now called a modern origin story because it is based on scientific, anthropological and historical evidence, with the aspiration of being as non-fictional as possible. Several physicists have told the cosmological part of this story, but their accounts often lack the kind of human interest an historian can bring to the subject. The interest now does not concern the physics but the meaning of this history for us as human beings who live in this unbelievable universe, a meaning formerly brought to so-called primitive societies by their creation myths. Dr. Christian has created a timeline of significant "events", some of which embody a generalized form of the physical concept of "emergence", in which a startling new complexity can arise out of simplicity. These emergence events he calls thresholds, of which there have been eight so far in the reckoning of Dr. Christian. Threshold 1 is the Big Bang; 2, the first stars glow, 600 million years or so later; 3, 4, 5 include the first life on our planet; 6, the first evidence of our species, Homo sapiens, 200,000 years ago; 7, ice ages end, farming begins, 10,000 years ago; 8, the fossil fuels revolution, 200 years ago. Fifty or so years ago begins an event, not a threshold, called the Great Acceleration; humans land on the moon and begin to have a geological impact on our planet. Dr. Christian optimistically includes a threshold 9, estimated to be 100 years in the future, A Sustainable World Order. Of course, this latter threshold might well not occur; instead not only "big" history but, for us anyway, "all" history might come to an end.

Big history is interesting in at least a couple of ways. For one, it gives us a cosmic perspective, leaving out what is traditionally considered the flesh and blood of history, the wars, the politics, the human creations of art and empire. Thus, in its own way it creates an abstraction of history similar to that of traditional histories which also leave out most history. The second way in which I, anyway, find it interesting is that “big” history aspires to be a history of our entire human perception, encompassing our entire human adventure. History has become a “master” discipline expanding its role to subsume any and all other disciplines as it may require. It has become a way of “understanding everything”, requiring the aspiring master historian to not only find meaning in the usual historical written records but to move into many other fields which provide a setting for traditional history and allow an expanded meaning of human significance and human folly. “Understanding everything” in this sense requires one to understand a specialty and then move into other areas, struggling with new paradigms and expanding one’s intuition and awareness. Perhaps this expanded awareness can ultimately reach the emptiness outside of all existence or perhaps it can’t. In any case life becomes richer, more meaningful and more significant. Back to Top

For my Future Wife

As mentioned earlier in “The Morass, Part II”,  I abandoned Auburn, Alabama in June, 1974, driving a newly purchased Volkswagen beetle to Cottage Grove, Oregon. My wife Barbara had made it clear that our relationship was finished and I saw no point in remaining at Auburn where my teaching career was also on the rocks. The VW cost $250 as I remember, but seemed in good condition and drove adequately. I had a tent and camped along the way. My route went NE through Alabama to Tennessee, then across the Mississippi into Missouri. I remember a miserable camp in that state with humid heat and insects which drove me into my sweltering tent. In western Nebraska I rejoiced at seeing the first sage brush. It felt good amidst the travail of my feelings to be back in the West.

Cottage Grove was home to the Cerro Gordo project, whose idea was to plan and build a community embodying many of the counter-cultural ideas of the time. The hope was to demonstrate a synergy among a new infrastructure, new educational ideas, and a new way of life. The new settlement was not to be a commune, but a live, diverse village. I won't here go into details of how the village idea failed. However, fail it did, and around 1977 I moved on to try to make a new life.

I was, of course, devastated at the time because I was still deeply in love with Barbara, but after seven years or so that love faded and I, indeed, settled into a new life, my sanity having been saved by cross-country skiing and the many new friends from the Cerro Gordo project. For many years my financial situation was shaky and I was occasionally close to becoming homeless. Four or five of us lived in a communal house on Washington Street in Cottage Grove sharing the rent and phone bill. One of these housemates, mentioned earlier, was Fred Ure, a talented sculptor and owner of the book about Wittgenstein’s philosophy.  Later, in the eighties, with the help of a friend I became a computer programmer and software designer and my finances improved. I learned about accounting and business and gradually became a successful stock market investor. I moved from Cottage Grove to Eugene, Oregon around 1990.

Still, my life felt basically empty because the lack of a close romantic relationship left a void. Then, much later, around the late 1990's, I met Susan S and was immediately enthralled by her. I knew she was a climber and skier and we saw each other on occasion cross country skiing and in the rock gym. I climbed with my friend, David, and whenever she and her friends walked into the gym, I felt an electric shock. The breakthrough came one day when I rode with her and our bicycles to a bike trip near Cottage Grove. On a later hike I pushed my shyness aside and told her I loved her. She seemed somewhat interested, and curious when I mentioned our age difference (I look younger than my age). I felt that that difference would immediately kill the possibility of an intimate relationship, since she was 24 years younger than me. She was indeed taken aback, but we continued to see each other and relate. Susan too had recently been divorced and missed a close relationship. I learned that she was an avid mountain climber like I was and that she was the training director of Eugene Mountain Rescue. Gradually we became closer, and it was to her that I wrote the letter, quoted in an earlier version of this blog.

“Our relationship will never go to that completion I desire unless you are
as crazy as I am. Our human condition is to be trapped in an animal body
aspiring for the stars. Our consciousness non-existent for an eternity of
the past and to be non-existent for an eternity to come. Meanwhile, for a
brief instant we are here. If one truly realizes our condition, one must
be crazy, at least by conventional standards. This is not a sick,
destructive craziness, but a creative, tragic, open and aware craziness.
This kind of craziness I’m talking about is the only sanity. Are you that
crazy? A humorous saying is, “Never sleep with anyone crazier than
yourself.” I would turn that around and say, “Never sleep with anyone who
isn’t as crazy as you are.” If one is in touch with another at that level,
nothing else matters; not differences of age, of personality, of
temperament, of wealth, of fame, or of position in society. If one
doesn’t have a deep bond at that level, the relationship may be nice but
will never be complete. And can never bear the name of true love.”

We lived together for a while, then married in 2003. Later we moved from Eugene to Bend, Oregon where I started this blog in 2016. The letter was originally posted on April 23, 2016. Today, September 21, 2023, is our 20th anniversary. “Still crazy after all these years.” Back to Top

QM 1

Before completing this post, I need to acknowledge that my goal in writing about modern physics was to create a milieu for talking more about Western Zen. However, as I've proceeded, the goal has somewhat changed. I want you, as a reader, to become, if you aren't already, a physics buff, much in the way I became a history buff after finding history incredibly boring and hateful throughout high school and college. The apotheosis of my history disenchantment came at Stanford in a course taught by a highly regarded historian. The course was entitled "The High Middle Ages" and I actually took it as an elective, thinking that it was likely to be fascinating. It was only gradually over the years that I realized that history at its best, although based on factual evidence, consists of stories full of meaning, significance and human interest. Turning back to physics, I note that even after more than a hundred years of revolution, physics still suffers a hangover from the 300 years of its classical period, in which it was characterized by a supposedly passionless objectivity and a mundane view of reality. In fact, modern physics can be imagined as a scientific fantasy, a far-flung poetic construction from which equations can be deduced and the fantasy brought back to earth in experiments and in the devices of our age. When I use the word "fantasy" I do not mean to suggest any lack of rigorous or critical thinking in science. I do want to imply a new expansion of what science is about, a new awareness, hinting at a "reality" deeper than what we have ever imagined in the past. However, to me even more significant than a new reality is the fact that the Quantum Revolution showed that physics can never be considered absolute. The latest and greatest theories are always subject to a revolution which undermines the metaphysics underlying the theory. Who knows what the next revolution will bring?
Judging from our understanding of the physics of our age, a new revolution will not change the feeling that we are living in a universe which is an unimaginable miracle.

In what follows I’ve included formulas and mathematics whose significance can easily be talked about without going into the gory details. The hope is that these will be helpful in clarifying the excitement of physics and the metaphysical ideas lying behind. Of course, the condensed treatment here can be further explicated in the books I mention and in Wikipedia.

My last post, about the massive revolution in physics of the early 20th century, ended by describing the situation in early 1925, when it became abundantly clear, in the words of Max Jammer (Jammer, p 196), that the physics of the atom was "a lamentable hodgepodge of hypotheses, principles, theorems, and computational recipes rather than a logical consistent theory." Metaphysically, physicists clung to classical ideas such as particles whose motion consisted of trajectories governed by differential equations, and waves as material substances spread out in space and governed by partial differential equations. Clearly these ideas were logically inconsistent with experimental results, but the deep classical metaphysics, refined over 300 years, could not be abandoned until there was a consistent theory which allowed something new and different.

Werner Heisenberg, born December 5, 1901, was 23 years old in the summer of 1925. He had been a brilliant student at Munich studying with Arnold Sommerfeld, had recently moved to Göttingen, a citadel of math and physics, and had made the acquaintance of Bohr in Copenhagen, where he became totally enthralled with doing something about the quantum mess. He noted that the electron orbits of the current theory were purely theoretical constructs and could not be directly observed. Experiments could measure the wavelengths and intensity of the light atoms gave off, so following the Zeitgeist of the times as expounded by Mach and Einstein, Heisenberg decided to try to make a direct theory of atomic radiation. One of the ideas of the old quantum theory that Heisenberg used was Bohr's "Correspondence" principle, which notes that as electron orbits become large along with their quantum numbers, quantum results should merge with the classical. Classical physics failed only when things became small enough that Planck's constant h became significant. Bohr had used this idea in obtaining his formula for the hydrogen atom's energy levels. In various "old quantum" results the Correspondence Principle was always used, but in different, creative ways for each situation. Heisenberg managed to incorporate it into his ultimate vector-matrix construction once and for all. Heisenberg's first paper, in the fall of 1925, was seized upon by him and many others and developed into a coherent theory. The new results eliminated many slight discrepancies between theory and experiment but, more important, showed great promise during the last half of 1925 of becoming an actual logical theory.

In January, 1926, Erwin Schrödinger published his first great paper on wave mechanics. Schrödinger, working from classical mechanics, but following de Broglie's idea of "matter waves", and using the Correspondence Principle, came up with a wave theory of particle motion, a partial differential equation which could be solved for many systems such as the hydrogen atom, and which soon duplicated Heisenberg's new results. Within a couple of months Schrödinger closed down a developing controversy by showing that his and Heisenberg's approaches, though based on seemingly radically opposed ideas, were, in fact, mathematically isomorphic. Meanwhile, starting in early 1926, P. A. M. Dirac introduced an abstract algebraic operator approach that went deeper than either Heisenberg or Schrödinger. A significant aspect of Dirac's genius was his ability to cut through mathematical clutter to a simpler expression of things. I will dare here to be specific about what I'll call THE fundamental quantum result, hoping that the simplicity of Dirac's notation will enable those of you without a background in advanced undergraduate mathematics to get some of the feel and flavor of QM.

In ordinary algebra a new level of mathematical abstraction is reached by using letters such as x,y,z or a,b,c to stand for specific numbers, numbers such as 1,2,3 or 3.1416. Numbers, if you think about it, are already somewhat abstract entities. If one has two apples and one orange, one has 3 objects, and the "3" doesn't care that you're mixing apples and oranges. With algebra, if I use x to stand for a number, the "x" doesn't care that I don't know the number it stands for. In Dirac's abstract scheme what he calls c-numbers are simply symbols of the ordinary algebra that one studies in high school. Along with the c-numbers (classical numbers) Dirac introduces q-numbers (quantum numbers), which are algebraic symbols that behave somewhat differently than those of ordinary algebra. Two of the most important q-numbers are p and s, where p stands for the momentum of a moving particle, mv, mass times velocity in classical physics, and s stands for the position of the particle in space. (I've used s instead of the usual q for position to try to avoid confusion with the q of q-number.) Taken as q-numbers, p and s satisfy

ps – sp = h/2πi

which I’ll call the Fundamental Quantum Result in which h is Planck’s constant and i the square root of -1. Actually, Dirac, observing that in most formulas or equations involving h, it occurs as h/2π, defined what is now called h bar or h slash using the symbol ħ = h/2π for the “reduced” Planck constant. If one reads about QM elsewhere (perhaps in Wikipedia) one will see ħ almost universally used. Rather than the way I’ve written the FQR above, it will appear as something like

pq – qp = ħ/i

where I’ve restored the usual q for position. What this expression says is that in the new QM, if one multiplies something first by position q and then by momentum p, the result is different from the multiplications done in the opposite order. We say these q-numbers are non-commutative: the order of multiplication matters. Boldface type is used because position and momentum are vectors and the equation actually applies to each of their 3 components. Furthermore, the FQR tells us the exact size of the non-commutativity. In usual human-sized physical units ħ is .00…001054…, where there are 33 zeros between the decimal point and the 1054. If we can ignore the size of ħ and set it to zero, p and q then commute, can be considered c-numbers, and we’re back to classical physics. Incidentally, Heisenberg, Born and Jordan obtained the FQR using p and q as infinite matrices, and it can also be derived using Schrödinger’s differential operators. It is interesting to note that by using his new abstract algebra, Dirac not only obtained the FQR but could calculate the energy levels of the hydrogen atom. Only later did physicists obtain that result using Heisenberg’s matrices. Sometimes the deep abstract leads to surprisingly concrete results.
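For the mathematically curious, the matrix version of the FQR is easy to play with numerically. Here is a small sketch of my own (an illustration, not anything from Dirac’s papers) using the standard harmonic-oscillator matrices for p and q in units where ħ = 1. A finite truncation of the infinite matrices reproduces pq – qp = ħ/i on every diagonal entry except the last, which hints at why Heisenberg, Born and Jordan’s matrices had to be infinite:

```python
import numpy as np

# Truncated harmonic-oscillator matrices for q and p (units with hbar = 1).
# In infinite matrices pq - qp = hbar/i = -i exactly; an N x N truncation
# gets it right everywhere except the last diagonal entry.
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # lowering operator
q = (a + a.T) / np.sqrt(2)                   # position matrix
p = 1j * (a.T - a) / np.sqrt(2)              # momentum matrix

comm = p @ q - q @ p                         # the FQR, pq - qp, as matrices
print(np.round(np.diag(comm), 10))           # -i, -i, ..., -i, +i*(N-1)
```

The lone “wrong” entry at the bottom corner is not a bug: no pair of finite matrices can satisfy the FQR exactly (the trace of pq – qp is always zero), which is precisely why the full theory needs infinite matrices or operators.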

For most physicists in 1926, the big excitement was Schrödinger’s equation. Partial differential equations were a familiar tool, while matrices were at that time known mainly to mathematicians. The “old quantum theory” had made a few forays into one area or another, leaving the fundamentals of atomic physics and chemistry pretty much in the dark. With Schrödinger’s equation, light was thrown everywhere. One could calculate how two hydrogen atoms were bound in the hydrogen molecule. Then, using that binding as a model, one could understand the various bindings of different molecules. All of chemistry became open to theoretical treatment. The helium atom with its two electrons couldn’t be dealt with at all by the old quantum theory; using various approximation methods, the new theory could understand in detail the helium atom and other multielectron atoms. Electrons in metals could be modeled with Schrödinger’s equation, and soon the discovery of the neutron opened up the study of the atomic nucleus. The old quantum theory was helpless in dealing with particle scattering, where there are no closed orbits. Such scattering was easily accommodated by the Schrödinger equation, though the detailed calculations were far from trivial. Over the years quantum theory revealed more and more practical knowledge, and most physicists concentrated on experiments and theoretical calculations that led to such knowledge, with little concern about what the new theory meant in terms of physical reality.

However, back in the first few years after 1925 there was a great deal of concern about what the theory meant and how it should be interpreted. For example, under Schrödinger’s theory an electron was represented by a “cloud” of numbers which could travel through space or surround an atom’s nucleus. These numbers, called the wave function and typically named ψ, were complex, of the form a + ib, where i is the square root of -1. By multiplying such a number by its conjugate a – ib, one gets a positive (strictly speaking, non-negative) number which can perhaps be physically interpreted. Schrödinger himself tried to interpret this “real” cloud as a negative electric charge density, a blob of negative charge. For a free electron, outside an atom, Schrödinger imagined that the electron wave could form what is called a “wave packet”, a combination of different frequencies that would appear as a small moving blob which could be interpreted as a particle. This idea definitely did not fly. There were too many situations where the waves were spread out in space before an electron suddenly made its appearance as a particle. The question of what ψ meant was resolved by Max Born (see Wikipedia), starting with a paper in June, 1926. Born interpreted the non-negative numbers ψ*ψ (ψ* being the complex conjugate of the ψ numbers) as a probability distribution for where the electron might appear under suitable physical circumstances. What these physical circumstances are, and what the physical process of the appearance is, are still not completely resolved. Later in this or another blog post I will go into this matter in some detail. In 1926 Born’s idea made sense of experiment and resolved the wave-particle duality of the old quantum theory, but at the cost of destroying classical concepts of what a particle or wave really is. Let me try to explain.

A simple example of a classical probability distribution is that of tossing a coin and seeing if it lands heads or tails. The probability distribution in this case is the two numbers ½ and ½, the first being the probability of heads, the second the probability of tails. The two probabilities add up to 1, which represents certainty in probability theory. (Unlike the college students who are trying to decide whether to go drinking, go to the movies or study, I ignore the possibility that the coin lands on its edge without falling over.) With the wave function product ψ*ψ, calculus gives us a way of adding up all the probabilities, and if they don’t add up to 1, we simply define a new ψ by dividing by the square root of the sum we obtained. (This is called “normalizing” the wave function.) Besides the complexity of the math, however, there is a profound difference between the coin and the electron. With the coin, classical mechanics tells us in theory, and perhaps in practice, precisely what the position and orientation of the coin are during every instant of its flight; and knowing about the surface the coin lands on allows us to predict the result of the toss in advance. The classical analogy for the electron would be to imagine it is like a bb moving around inside the non-zero area of the wave function, ready to show up when conditions are propitious. With QM this analogy is false. There is no trajectory for the electron; there is no concept of it having a position before it shows up. Actually, it is only fairly recently that the “bb in a tin can” model has been shown definitively to be false. I will discuss this matter later, talking briefly about Bell’s theorem and “hidden variable” ideas. However, whether or not an electron’s position exists prior to its materialization, it was simply the concept of probability that Einstein and Schrödinger, among others, found unacceptable. As Einstein famously put it, “I can’t believe God plays dice with the universe.”
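As a concrete illustration of normalizing (with made-up toy amplitudes of my own, not numbers from any experiment): the Born probabilities are ψ*ψ, and dividing ψ by the square root of their total makes them add up to 1, i.e. to certainty.

```python
import numpy as np

# A toy discrete wave function with made-up complex amplitudes.
psi = np.array([1 + 1j, 0.5 - 0.5j, 2.0 + 0j, 0 + 1j])

prob = (psi.conj() * psi).real     # Born's rule: psi* psi, non-negative
total = prob.sum()                 # 7.5 here -- not yet a probability distribution

psi = psi / np.sqrt(total)         # "normalizing" the wave function
prob = (psi.conj() * psi).real
print(prob.sum())                  # 1.0: the probabilities now sum to certainty
```

Note that it is ψ, not the probabilities, that gets divided by the square root; squaring the rescaled amplitudes then gives probabilities that sum to 1.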

Max Born, who introduced probability into fundamental physics, was a distinguished physics professor in Göttingen and Heisenberg’s mentor after the latter first came to Göttingen from Munich in 1922. Heisenberg got the breakthrough for his theory in the spring of 1925 while escaping from hay fever, walking the beaches of the bleak island of Helgoland in the North Sea off Germany. Returning to Göttingen, Heisenberg showed his work to Born, who recognized the calculations as being matrix multiplication and who saw to it that Heisenberg’s first paper was immediately published. Born then recruited Pascual Jordan from the math department at Göttingen, and the three wrote a famous follow-up paper, Zur Quantenmechanik II, Nov. 1925, which gave a complete treatment of the new theory from a matrix mechanics point of view. Thus Born was well positioned to come up with his idea of the nature of the wave function.

Quantum Mechanics came into being during the amazingly short interval between mid-1925 and the end of 1926. As far as the theory went, only “mopping up” operations were left. As far as applications were concerned, there was a plethora of “low hanging fruit” that could be gathered over the years with Schrödinger’s equation and Born’s interpretation. However, as 1927 dawned, Heisenberg and many others were concerned with what the theory meant, with fears that it was so revolutionary that it might render ambiguous the meaning of all the fundamental quantities on which both the new QM and old classical physics depended. In 1925 Heisenberg had begun the work that became matrix mechanics because he was skeptical about the existence of Bohr orbits in atoms, but his skepticism did not extend to the very concept of “space” itself. As QM developed, however, Heisenberg realized that it depended on classical variables such as position and momentum, which appeared not only in the pq commutation relation but as basic variables of the Schrödinger equation. Had the meaning of “position” itself changed? Heisenberg recalled that earlier, with Einstein’s Special Relativity, the meaning of both position and time had indeed changed. (Newton assumed that coordinates in space and the value of time were absolutes, forming an invariable lattice in space and an absolute time which marched at an unvarying pace. Einstein’s theory was called Relativity because space and time were no longer absolutes; they lost their “ideal” nature and became simply what one measured in carefully done experiments. Curiously enough, though Einstein showed that results of measuring space and time depended on the relative motion of different observers, these quantities changed in such an odd way that measurements of the speed c of light in vacuum came out precisely the same for all observers. There was a new absolute. A simple exposition of special relativity is N. David Mermin’s Space and Time in Special Relativity.)

The result of Heisenberg’s concern and the thinking about it is called the “Uncertainty Principle”. The statement of the principle is the relation ΔqΔp ≥ ħ/2, often quoted loosely as ΔqΔp ≈ ħ. The variables q and p are the same q and p of the Fundamental Quantum Result and, indeed, it is not difficult to derive the uncertainty principle from the FQR. The symbol delta, Δ, when placed in front of a variable means a difference, that is, an interval or range of the variable. Experimentally, a measurement of a variable quantity like position q is never exact; the amount of the uncertainty is Δq. The uncertainty relation above thus says that the uncertainty of a particle’s position times the uncertainty of the same particle’s momentum is at least of the order of ħ. What differs from an ordinary error of measurement is that in QM the uncertainty is intrinsic to the theory itself. In a way, this result is not all that surprising. We’ve seen that the wave function ψ for a particle is a cloud of numbers. Similarly, a transformed wave function for the same particle’s momentum is a similar cloud of numbers. The Δ’s are simply a measure of the size of these two clouds, and the principle says that as one becomes smaller, the other gets larger in such a way that their product can never fall below ħ/2, h bar being the tiny number whose value I’ve given above.
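The two clouds can be exhibited numerically. The sketch below is my own illustration (in units where ħ = 1, with an arbitrary grid and packet width): it builds a Gaussian wave packet, obtains the momentum cloud by Fourier transform, and computes the two Δ’s. A Gaussian is the minimum-uncertainty case, so the product comes out ħ/2:

```python
import numpy as np

# Gaussian wave packet on a grid (hbar = 1; all numbers are illustrative).
hbar = 1.0
N, L = 2048, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 1.7                                   # packet width (arbitrary)
psi = np.exp(-x**2 / (4 * sigma**2))
psi = psi / np.sqrt((np.abs(psi)**2).sum() * dx)   # normalize

def spread(grid, prob, step):
    """Standard deviation of a sampled probability density."""
    prob = prob / (prob.sum() * step)         # renormalize on this grid
    mean = (grid * prob).sum() * step
    return np.sqrt(((grid - mean)**2 * prob).sum() * step)

dq = spread(x, np.abs(psi)**2, dx)            # size of the position cloud

# Momentum cloud via Fourier transform; p = hbar * k.
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
phi = np.fft.fft(psi)
dp = hbar * spread(k, np.abs(phi)**2, k[1] - k[0])

print(dq * dp)                                # ~ 0.5 = hbar/2 for a Gaussian
```

Squeezing sigma makes the position cloud smaller and the momentum cloud correspondingly larger; the product stays pinned at ħ/2, just as the principle says.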

In fact, back in 1958 when I was in Eikenberry’s QM course and we derived the uncertainty relation from the FQR, I wondered what the big deal was. I was aware that the uncertainty principle was considered rather earthshaking but didn’t see why it should be. What I missed is what Heisenberg’s paper really did. The relation I’ve written above is pure theory. Heisenberg considered the questions: What if we try to do experiments that actually measure the position and momentum? How does this theory work? What is the physics? Could experiments actually disprove the theory? Among other experimental set-ups Heisenberg imagined a microscope that used electromagnetic rays of increasingly short wavelengths. It was well known classically, by the mid-nineteenth century, that the resolution of a microscope depends on the wavelength of the light it uses. Light is an electromagnetic (em) wave, so one can imagine em radiation of such a short wavelength that a microscope could view a particle, however small, reducing Δq to as small a value as one wished. However, by 1927 it was also well known, because of the Compton effect that I talked about in the last post, that such em radiation, called x-rays or gamma rays, consists of high energy photons which would collide with the electron, giving it a recoil momentum whose uncertainty, Δp, turns out to satisfy ΔqΔp ≈ ħ. Heisenberg thus considered known physical processes which failed to overturn the theory. The sort of reasoning Heisenberg used is called a “thought” experiment because he didn’t actually try to construct an apparatus or carry out a “real” experiment. Before dismissing thought experiments as hopelessly hypothetical, one must realize that any real experiment in physics, or in any science for that matter, begins as a thought experiment. One imagines the experiment and then figures out how to build an apparatus (if appropriate) and collect data.
In fact, as a science progresses, many experiments formerly expressed only in thought, turn real as the state of the art improves.

Although the uncertainty principle is earthshaking enough that it helped confirm the skepticism of two of the main architects of QM, namely Einstein and Schrödinger, one should note that, in practice, because of the small size of ħ, the garden variety uncertainties which arise from the “apparatus” measuring position or momentum are much larger than the intrinsic quantum uncertainties. Furthermore, the principle does not apply to c-numbers such as e (the fundamental electron or proton charge), c (the speed of light in vacuum), or h (Planck’s constant). There is an interesting story here about a recent (Fall, 2018) redefinition of physical units which one can read about online. Perhaps I’ll have more to say about this subject in a later post. For now, I’ll just note that starting on May 20, 2019, Planck’s constant will be (or has been) defined as having an exact value of 6.62607015×10⁻³⁴ Joule seconds. There is zero uncertainty in this new definition, which may be used to define and measure the mass of the kilogram to higher accuracy and precision than was possible in the past using the old standard, a platinum-iridium cylinder kept closely guarded near Paris. In fact, there is nothing muddy or imprecise about the value of many quantities whose measurement intimately involves QM.

During the years after 1925 there was at least one more area of QM which was puzzling, to say the least; namely, what has been called “the collapse of the wave function.” Involved in the intense discussions over this phenomenon and how to deal with it was another genius I’ve scarcely mentioned so far: Wolfgang Pauli. Pauli, a year older than Heisenberg, was a year ahead of him in Munich studying under Sommerfeld, then moved to Göttingen, leaving just before Heisenberg arrived. Pauli was responsible for the Pauli Exclusion Principle, based on the concept of particle spin which he also explicated. (See Wikipedia.) He was in the thick of things during the 1925 – 1927 period. Pauli ended up as a professor in Zurich, but spent time in Copenhagen with Bohr and Heisenberg (and many others) formulating what became known as the Copenhagen interpretation of QM. Pauli was a bon vivant and had a witty, sarcastic tongue, accusing Heisenberg at one point of “treason” for an idea that he (Pauli) disliked. In another anecdote, Pauli was at a physics meeting during the reading of a muddy paper by another physicist. He stormed to his feet and loudly said, “This paper is outrageous. It is not even wrong!” Whether or not the meeting occurred at a late enough date for Pauli to have read Popper, he obviously understood that being wrong could be productive, while being meaningless could not.

Over the next few years after 1927, Bohr, Heisenberg, and Pauli explicated what came to be called “the Copenhagen interpretation of Quantum Mechanics”. It is well worth reading the superb Wikipedia article on the Copenhagen interpretation. One point the article makes is that there is no definitive statement of this interpretation; Bohr, Heisenberg, and Pauli each had slightly different ideas about exactly what it was or how it worked. However, in my opinion, things are clear enough in practice. The problem QM seems to have has been called the “collapse of the wave function.” It is most clearly seen in a double slit interference experiment with electrons or other quantum particles such as photons or even entire atoms. The experiment consists of a plate with two slits, closely enough spaced that the wave function of an approaching particle covers both slits. The spacing is also close enough that, given the wavelength of the particle as determined by its momentum, the waves passing through the slits will visibly interfere on the far side. This interference takes the form of a pattern of stripes, showing up zebra-like on a screen, or as dark and light areas on a developed photographic plate. On a photographic plate there is a black dot where a particle has shown up; the striped pattern consists of all the dots made by the individual particles after a large number of particles have passed through the apparatus. What has happened is that each particle’s wave function has “collapsed” from an area encompassing all of the stripes to the tiny area of a single dot. One might ask at this point, “So what?” After all, for the idea of a probability distribution to have any meaning, the event for which there is a probability distribution has to actually occur. The wave function must “collapse” or the probability interpretation itself is meaningless.
The problem is that QM has no theory whatever for the collapse.
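One can watch the dots build the stripes in a toy simulation (my own sketch; the geometry and all the numbers are made up purely for illustration). Each simulated particle’s landing spot is drawn from the Born distribution |ψ₁ + ψ₂|² of the two overlapping slit waves, and the zebra pattern only emerges after many dots:

```python
import numpy as np

# Toy double slit: two point sources ("slits") a distance d apart, a screen
# at distance Lscr. Each particle's dot is drawn from the Born probability
# |psi1 + psi2|^2; the stripes emerge only from many dots.
rng = np.random.default_rng(0)
lam, d, Lscr = 1.0, 5.0, 100.0            # wavelength, slit spacing, distance

xs = np.linspace(-40, 40, 2001)           # positions across the screen
r1 = np.hypot(Lscr, xs - d/2)             # path length from slit 1
r2 = np.hypot(Lscr, xs + d/2)             # path length from slit 2
psi = np.exp(2j*np.pi*r1/lam) + np.exp(2j*np.pi*r2/lam)
prob = np.abs(psi)**2
prob = prob / prob.sum()                  # Born probabilities for landing spots

dots = rng.choice(xs, size=5000, p=prob)  # 5000 individual particle "dots"
hist, _ = np.histogram(dots, bins=80, range=(-40, 40))
print(hist.max(), hist.min())             # bright stripes vs nearly empty gaps
```

Each dot, taken alone, looks random; the histogram of all of them reproduces the interference stripes, which is exactly the puzzle: the pattern is in the wave, but each individual collapse to a dot has no theory behind it.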

One can easily try to make a quantum theory of what happens in the collapse, because QM can deal with multi-particle systems such as molecules. One obtains a many-particle version of QM simply by adding the coordinates of the new particles to be considered to a multi-particle version of the Schrödinger equation. In particular, one can add to the description of a particle approaching a photographic plate all the molecules in the first few relevant molecular layers of the plate. When one does this, however, one does not get a collapse. Instead, the new multi-particle wave function simply includes the molecules of the plate, which become as spread out as the original wave function of the approaching particle. In fact, the structure of QM guarantees that as one adds new particles, these new particles themselves continue to make an increasingly spread out multi-particle wave function. This result was shown in great detail by John von Neumann in his 1932 book on the mathematical foundations of QM. However, the idea of von Neumann’s result was already generally realized and accepted during the late 1920’s, when our three heroes and many others were grappling with finding a mechanism to explain the experimental collapse. Bohr’s version of the interpretation is simplicity itself. Bohr posits two separate realms: a realm of classical physics governing large scale phenomena, and a realm of quantum physics. In a double slit experiment the photographic plate is classical; the approaching particle is quantum. When the quantum encounters the classical, the collapse occurs.

The Copenhagen interpretation explains the results of the double slit experiment and many others, and is sufficient for the practical development of atomic, molecular, solid state, nuclear and particle physics which has occurred since the late 1920’s. However, there has been an enormous history of objections, refinements, rejections and alternate interpretations of the Copenhagen interpretation, as one might well imagine. My own first reaction could be expressed as the statement, “I thought that ‘magic’ had been banned from science back in the 17th century. Now it seems to have crept back in.” (At present I take a less intemperate view.) One can make many obvious objections to the Copenhagen interpretation as I’ve baldly stated it above. Where, exactly, does the quantum realm become the classical realm? Is the division sharp, or is there an interval of increasing complexity that slowly changes from quantum to classical? Surely QM, like the theory of relativity, actually applies to the classical realm. Or does it?

During the 1930’s Schrödinger used the difficulties with the Copenhagen interpretation to make up the now famous thought experiment called “Schrödinger’s Cat.” Back in the early 1970’s, when I became interested in the puzzle of “collapse” and first heard the phrase “Schrödinger’s Cat”, it was far from famous; so, curious, I looked it up and read the original short article, puzzling out the German. In his thought experiment Schrödinger uses the theory of alpha decay. An alpha particle confined in a radioactive nucleus is forever trapped according to classical physics. QM allows the escape, because the alpha particle’s wave function can penetrate the barrier which classically keeps it confined. Schrödinger imagines a cat imprisoned in a cage containing an infernal apparatus (Höllenmaschine) which will kill the cat if triggered by an alpha decay. If one applies a multi-particle Schrödinger equation to the alpha’s creeping wave function as it encounters the trigger of the “Maschine”, its internals, and the cat, the multi-particle wave function comes to contain a “superposition” (i.e., a linear combination) of a dead and a live cat. Schrödinger makes no further comment, leaving it to the reader to realize how ridiculous this all is. Actually, it is even worse. According to QM theory, when a person looks in the cage, the superposition spreads to the person, leaving two versions, one looking at a dead cat and one looking at a live cat. But a person is connected to an environment which also splits, and keeps splitting until the entire universe is involved.

What I’ve presented here is actually an alternative to the Copenhagen interpretation called “the many-worlds interpretation”. To quote from Wikipedia: “The many-worlds interpretation is an interpretation of quantum mechanics that asserts the objective reality of the universal wavefunction and denies the actuality of wavefunction collapse. Many-worlds implies that all possible alternate histories and futures are real, each representing an actual ‘world’ (or ‘universe’).” The many-worlds interpretation arose in 1957 in the Princeton University Ph.D. dissertation of Hugh Everett, working under the direction of the late John Archibald Wheeler, whom I mentioned in the last post. Although I am a tremendous admirer of Wheeler, I am skeptical of the many-worlds interpretation. It seems unnecessarily complicated, especially in light of ideas that have developed since the ones I noticed in 1972. There is no experimental evidence for the interpretation; such evidence might involve interference effects between the two versions of the universe as the splitting occurs. Finally, if I exist in a superposition, how come I’m only conscious of one side? Bringing in “consciousness”, however, leads to all kinds of muddy nonsense about consciousness effects in wave function splitting or collapse. I’m all for consciousness studies, and possibly they will be relevant for physics after another revolution in neurology or physics. At present we can understand quantum mechanics without explicitly bringing in consciousness.

In the next post I’ll go into what I noticed in 1971-72 and how this idea subsequently became developed in the greater physics community. The next post will necessarily be somewhat more mathematically specific than the posts so far, possibly including a few gory details. I hope the math won’t obscure the story. In subsequent posts I’ll revert to talking about physics theory without actually doing any math. Back to Top

Physics, Etc.

In telling a story about physics and some of its significance for a life of awareness, I’ll start with an idea of the philosopher Immanuel Kant (1724 – 1804). Kant, in my mind, is associated with impenetrable German which translates into impenetrable English. To find some clarity about Kant’s ideas one turns to Wikipedia, where the opening paragraph of the Kant entry explains his main ideas in an uncharacteristically comprehensible way. One of these ideas is that we are born into this world with our minds prepared to understand space, time, and causality. With this kind of mental conditioning we can make sense of simple phenomena and, indeed, pursue science. This insight predates Darwin’s theory of evolution, which offers a plausible explanation for it, by some sixty-odd years, and was thus a remarkable insight on Kant’s part. Another Kant idea relevant to our story is his distinction between what he calls phenomena and noumena. Quoting from Wikipedia: “… our experience of things is always of the phenomenal world as conveyed by our senses: we do not have direct access to things in themselves, the so-called noumenal world.” Of course, this is only one aspect of Kant’s thought, but it is the aspect that seems to me most relevant to what might be meant by physical reality. Kant was a philosopher’s philosopher, totally dedicated to deepening our understanding of what we may comprehend about the world and morality by purely rational thought. He was born in Königsberg, East Prussia, at the time a principality on the Baltic coast east of Denmark and north of Poland-Lithuania, and died there 80 years later. Legend has it that during his entire life he never traveled more than 10 miles from his home. The Wikipedia article refutes this slander: Kant actually traveled on occasion some 90.1 miles from Königsberg.

The massive extent of Kant’s philosophy leaves me somewhat appalled, particularly since I understand little of it, and because what I perhaps do understand seems dubious at best and meaningless at worst. What Kant may not have realized is that the extent and nature of the noumenal world is relative to the times in which one lives. Kant was born 3 years before Isaac Newton died, so by the date of his birth the stage was well set for the age of classical physics. During his life classical mechanics was developed largely by two great mathematicians, Joseph-Louis Lagrange (1736 – 1813) and Pierre-Simon Laplace (1749 – 1827). Looking back from Kant’s time to the ancient world one sees an incredible growth of the phenomenal world, with the Copernican revolution, a deepening understanding of planetary motion, and Newton’s laws of mechanics. In the time since Kant lived, the laws of electricity and magnetism, statistical mechanics, quantum mechanics, and most of present-day science were developed. This advance raises a question. Does the growth of the phenomenal world entail a corresponding decrease in the noumenal world, or are phenomena and noumena entirely independent of one another? Of course, I’d like to have it both ways, and can do so by imagining two senses of noumena. To get an idea of the first sense, I will tell a brief story. In the early 1970’s we were visited at Auburn University by the great physicist John Archibald Wheeler, who led a discussion in our faculty meeting room. I was very impressed by Dr. Wheeler. To me he seemed a “tiger”, totally dedicated to physics, his students, and an awareness of what lay beyond our comprehension. At one point he pointed to the tiles on the floor and said to us physicists something like, “Let each one of you write your favorite physics laws on one of these tiles. And after you’ve all done that, ask the tiles with their equations to get up and fly.
They will just lie there; but the universe flies.” Wheeler had doubtless used this example on many prior occasions, but it was new to me, and it seems to get at the meaning of noumena as a realm independent of anything science can ever discover. On the other hand, as the realm of phenomena that we do understand has grown, we can regard noumena simply as a “blank” in our knowledge, a blank which can be filled in as science, so to speak, peels back the layers of an “onion”, revealing the understanding of a larger world and at the same time exposing a new layer of ignorance to attack. This second sense of the word in no way diminishes the ultimate mystery of the universe. In fact, it appears to me that the quest for ultimate understanding in the face of the great mystery is what gives physics (and science) a compulsive, even addictive, fascination for its practitioners. Like compulsive gamblers, experimental physicists work far into the night and theorists endlessly torture thought. Certainly, the idea that we could conceivably uncover ever more specifics of the mystery of ultimate being is what drew me to the area. That, as well as the idea that if one wants to understand “everything”, physics is a good place to start.

In my understanding, the story of physics during my lifetime and the 30 years preceding my birth is the story of a massive, earthshaking revolution. Thomas Kuhn’s The Structure of Scientific Revolutions, mentioned in earlier posts, is a story of many shifts in scientific perception which he calls revolutions. In his terms, what I’m talking about here is a “super-duper-revolution”, a massive shift in understanding whose import is still not fully realized in our society at large. Most of the “revolutions” that Kuhn uses as examples affect only scientists in a particular field. For example, the fall of the phlogiston theory and the rise of oxygen in understanding fire and burning was a major revolution for chemistry, but had little effect on the culture of society at large. Similarly, in ancient times the rise of Ptolemaic astronomy mostly concerned philosophers and intellectuals; the larger society was content with the idea that gods or God controlled what went on in the heavens as well as on earth. The Copernican revolution, on the other hand, was earthshaking (super-duper) for the entire society, mainly because it called into question theories of how God ran the universe and because it became the underpinning of an entirely new idea of what was “real”. Likewise, the scientific revolution of the 16th and 17th centuries was earthshaking to the entire society. As time wore on into the 18th and 19th centuries, however, society became accustomed to it and assumed that the classical, Newtonian “clockworks” universe was here to stay forever, however uncomfortable it might be to artists and writers, who hoped to live in a different, more meaningful world of their own experience, rejecting scientific “reality” as something which mattered little in a spiritual sense.
Who could have believed that in the mid 1890’s, after 300 years (1590 – 1890, say) of continued, mostly harmonious development, the entire underpinning of scientific reality was about to be overturned by what might be called the quantum revolution? Yet that is what happened in the next forty years (1895 – 1935), with continuing advances and consolidation up to the present day. (From now on I’ll use the abbreviation QM for Quantum Mechanics, the centerpiece of this revolution.) Of course, as with any great revolution, all has not been smooth. Many of the greatest scientists of our times, most notably Albert Einstein and Erwin Schrödinger, found the tenets of the new physics totally unacceptable and fought them tooth and nail. In fact, there is at least one remaining QM puzzle, epitomized by “Schrödinger’s Cat”, about which I hope to have my say at some point.

It is my hope that readers of this blog will find excitement in the open possibilities suggested by an understanding of the revolutionary physical “reality” we currently live in. In talking about it I certainly don’t want to try to “reinvent the wheel”, since many able and brilliant writers have told portions of the story. What I can do is give references to various books and URLs that are, with few exceptions (which I’ll note), great reading. I’ll have comments to make about many of these, and hope that with their underpinning I can tell this story and illuminate its relevance for what I’ve called Western Zen.

The first book to delve into is The Quantum Moment: How Planck, Bohr, Einstein, and Heisenberg Taught Us to Love Uncertainty by Robert P. Crease and Alfred Scharff Goldhaber. Robert Crease is a philosopher specializing in science and Alfred Goldhaber is a physicist. The book, which I’ll abbreviate as TQM, tells the history of Quantum Mechanics from its very beginning in December, 1900, to very near the present day. Copyrighted by W.W. Norton in 2014, it is quite recent, today as I write being early November, 2018. The story this book tells goes beyond an exposition of QM itself to give many examples of the effects that this new reality has had so far on our society. It is very entertaining and well written, though on occasion it does get slightly mathematical, in a well-judged way, in making quantum mechanics clearer. A welcome aspect of the book for me was the many references to another book, The Conceptual Development of Quantum Mechanics by Max Jammer. Jammer’s book (1966) is out of print and is definitely not light reading, with its exhaustive references to the original literature and its full deployment of advanced math. Auburn University had Jammer in its library and I studied it extensively while there. I was glad to see the many footnotes to it in TQM, showing that Jammer is still considered authoritative and that there is no more recent book detailing this history. Recently I felt that I would like to own a copy of Jammer, so I found one, falling to pieces, on Amazon for fifty-odd dollars. If you are a hotshot mathematician fascinated by the history of QM, you will doubtless find Jammer in any university library.

The quantum revolution occurred in two great waves. The first wave, called the “old quantum theory”, started with Planck’s December, 1900, paper on black body radiation and ended in 1925 with Heisenberg’s paper founding Quantum Mechanics proper. From 1925 through about 1932, QM was developed by eight or so geniuses, bringing the subject to a point equivalent to what Newton’s Principia achieved for classical mechanics. Besides the four physicists of the Quantum Moment title, I’ll mention Louis de Broglie, Wolfgang Pauli, P.A.M. Dirac, Max Born, and Erwin Schrödinger. And there were many others.

A point worth mentioning is that The Quantum Moment concentrates on what might be called the quantum weirdness of both the old quantum theory and the new QM. This concentration is appropriate because it is this weirdness that has most affected our cultural awareness, the main subject of the book. However, to the physicists of the period 1895 – 1932, the weirdness, annoying and troubling as it was, was in a way a distraction from the most exciting physics going on at the time; namely, the discovery that atoms really exist and have a substructure which can be understood, an understanding that led to a massive increase in practical applications as well as theoretical knowledge. Without this incredible success in understanding the material world the “weirdness” might well have doomed QM. As we will mention below, most physicists ignore the weirdness and concentrate on the “physics” that leads to practical advances. Two examples of these “advances” are the atomic bomb and the smart phone in your pocket. In the next few paragraphs I will fill in some of this history of atomic physics with its intimate connection to QM.

The discovery of the atom and its properties began in 1897 as J.J. Thomson made a definitive breakthrough in identifying the first sub-atomic particle, the lightweight, negatively charged electron (see Wikipedia). Until 1905, however, many scientists disbelieved in the “reality” of atoms in spite of their usefulness as a conceptual tool in understanding chemistry. In the “miracle year” 1905 Albert Einstein published four papers, each one totally revolutionary in a different field. The paper of interest here is about Brownian motion, a jiggling of small particles as seen through a microscope. As a child I had a very nice full laboratory Bausch and Lomb microscope, given to me by my parents when I was about 7 years old. In the 9th grade I happened to put a drop of tincture of Benzoin in water and looked at it through the microscope, seeing hundreds of dancing particles that just didn’t behave like anything alive. I asked my biology teacher about it and after consulting her husband, a professor at the university, she told me it was Brownian motion, discovered by Robert Brown in 1827. I learned later that the motion occurs because the moving particles are small enough that the molecules striking them from one side are unbalanced by those striking from the other, causing a random motion. I had no idea at the time how crucial for atomic theory this phenomenon was. It turns out that the motion had been characterized by careful observation and that Einstein showed in his paper how molecules striking the small particles could account for it. Also, by this time studies of radioactivity had shown that emitted alpha and beta particles were clearly sub-atomic, beta particles being identical with the newly discovered electrons and the charged alpha particles turning into electrically neutral helium as they slowed and captured stray electrons.

Einstein’s other 1905 papers were two on special relativity and one on the photoelectric effect. As strange as special relativity seems, with its contraction of moving measuring sticks, slowing of moving clocks, and simultaneity dependent upon the observer, to say nothing of E = mc², this theory ended up fitting comfortably with classical Newtonian physics. Not so with the photoelectric effect.

Planck’s Discovery of a Black Body Formula

In December, 1900, Max Planck started the quantum revolution by finding a physical basis for a formula he had guessed earlier relating the radiated energy of a glowing “black body” to its temperature and the frequencies of its radiation. A “black body” is made of an ideal substance that is totally efficient in radiating electro-magnetic waves. Such a body could be simulated experimentally with high accuracy by measuring what came out of a small hole in the side of an enclosed oven. To find the “physics” behind his formula Planck had turned to statistical mechanics, which involves counting numbers of discrete states to find the probability distribution of the states. In order to do the counting Planck had artificially (he thought) broken up the continuous energy of electromagnetic waves into chunks of energy, hν, ν being the frequency of the wave, denoted historically by the Greek letter nu. (Remember: the frequency is associated with light’s color, and thus the color of the glow when a heated body gives off radiation.) Planck’s plan was to let the “artificial” fudge-factor h go to zero in the final formula so that the waves would regain their continuity. Planck found his formula, but when he set h = 0, he got the classical Rayleigh-Jeans formula for the radiation with its “ultra-violet catastrophe”. The latter term refers to the Rayleigh-Jeans formula’s infinite energy radiated as the frequency goes higher. Another formula, guessed by Wien, gave the correct experimental results at high frequencies but was off at lower frequencies, where the Rayleigh-Jeans formula worked just fine. To his dismay, what Planck found was that if he set h equal to a very small finite value, his formula worked perfectly for both low and high frequencies. This was a triumph but at the same time, a disaster. Neither Planck nor anyone else believed that these hν bundles could “really” be real. Maybe the packets came off in bundles which quickly merged to form the electromagnetic wave.
True, Newton had thought light consisted of a stream of tiny particles, but over the years since his time numerous experiments had shown that light really was a wave phenomenon, with all kinds of wave interference effects. Also, in the 19th century physicists, notably Fraunhofer, invented the diffraction grating and with it the ability to measure the actual wavelength of the waves. The Quantum Moment (TQM) has a wonderfully detailed account of Planck’s momentous breakthrough in its chapter “Interlude: Max Planck Introduces the Quantum”. TQM is structured with clear general expositions followed by more detailed “Interludes” which can be skipped without interrupting the story.
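The relationship among the three formulas is easy to check numerically. Here is a little Python sketch (just a sketch, using standard SI values for h, c, and Boltzmann’s k) showing that Planck’s formula agrees with Rayleigh-Jeans at low frequency and with Wien at high frequency, while Rayleigh-Jeans overshoots wildly at high frequency, the “ultra-violet catastrophe”:

```python
import math

# Standard physical constants (SI units)
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann's constant, J/K

def planck(nu, T):
    """Planck's law: spectral radiance of a black body at frequency nu, temperature T."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def rayleigh_jeans(nu, T):
    """Classical formula: grows without bound as nu increases."""
    return 2 * nu**2 * k * T / c**2

def wien(nu, T):
    """Wien's guess: accurate at high frequency, off at low frequency."""
    return (2 * h * nu**3 / c**2) * math.exp(-h * nu / (k * T))

T = 5000.0  # a glowing body at 5000 kelvin

low, high = 1e11, 1e15
print(planck(low, T) / rayleigh_jeans(low, T))    # close to 1: they agree at low frequency
print(wien(high, T) / planck(high, T))            # close to 1: they agree at high frequency
print(rayleigh_jeans(high, T) / planck(high, T))  # enormous: the catastrophe
```

Setting h to a finite value is exactly what tames the high-frequency end: the exp(hν/kT) in Planck’s denominator chokes off the radiation that Rayleigh-Jeans lets run away.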

Einstein’s 1905 photoelectric effect paper assumed that the hν quanta were real and that light actually acted like little bullets, slamming into a metal surface, penetrating, colliding with an atomic electron and bouncing it out of the metal where it could be detected. It takes a certain energy to bounce an electron out of its atom and then past the surface of the metal. What was experimentally found (after some tribulations) was that the energy of the emerging electrons depended only on the frequency of the light hitting the surface. If the light frequency was too low, no matter how intense the light, nothing much happened. At higher frequencies, increasing the intensity of the light resulted in more electrons coming out but did not increase their energy. As the light frequency increased the emitted electrons were more energetic. It was primarily for this paper that Einstein received his Nobel Prize in 1921.

A huge breakthrough in atomic theory was Ernest Rutherford’s discovery of the atomic nucleus in the early years of the 20th century. Rather than a diffuse cloud of electrically positive matter with the negatively charged electrons distributed in it like raisins (the “plum pudding” model of the atom), Rutherford found by scattering alpha particles off gold foil that the positive charge of the atom was in a tiny nucleus with the electrons circling at a great distance (the “fly in the cathedral” model). There was a little problem, however. The “plum pudding” model might possibly be stable under Newtonian classical physics, while the “fly in the cathedral” model was utterly unstable. (Note: Rutherford’s experiment, though designed by him, was actually carried out between 1908 and 1913 by Hans Geiger and Ernest Marsden at Rutherford’s Manchester lab.) Ignoring the impossibility of the Rutherford atom, physics plowed ahead. In 1913 the young Dane Niels Bohr made a huge breakthrough by assuming quantum packets were real and could be applied to understanding the hydrogen atom, the simplest of all atoms with its single electron circling its nucleus. Bohr’s model with its discrete electron orbits and energy levels explained the spectral lines of glowing hydrogen which had earlier been discovered and measured with a Fraunhofer diffraction grating. At Rutherford’s lab it was quickly realized that energy levels were a feature of all atoms, and the young genius physicist Henry Moseley, using a self-built X-ray tube to excite different atoms, refined the idea of the atomic number, removing several anomalies in the periodic table of the time, while predicting 4 new chemical elements in the process. At this point World War I intervened and Moseley volunteered for the Royal Engineers. One among the innumerable tragedies of the Great War was the death of Moseley on August 10, 1915, aged 27, at Gallipoli, killed by a sniper.
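Bohr’s model gives the hydrogen levels as Eₙ = −13.6/n² electron-volts, and a photon emitted when the electron drops between levels carries away the energy difference. A few lines of Python (the 13.6 eV and the hc = 1239.84 eV·nm conversion are standard values) reproduce the measured visible lines:

```python
def energy_level_eV(n):
    """Bohr energy of hydrogen's nth orbit, in eV (negative = bound)."""
    return -13.6 / n**2

def line_wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted when the electron drops from n_upper to n_lower."""
    delta_E = energy_level_eV(n_upper) - energy_level_eV(n_lower)  # positive, in eV
    return 1239.84 / delta_E  # hc expressed in eV*nm

# The visible (Balmer) lines of glowing hydrogen all end on n = 2
print(line_wavelength_nm(3, 2))  # about 656 nm, the red H-alpha line
print(line_wavelength_nm(4, 2))  # about 486 nm, blue-green
```

These are the very wavelengths that had been measured with Fraunhofer gratings decades before Bohr explained them.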

Brief Interlude: It is enlightening to understand the milieu in which the quantum revolution and the Great War occurred. A good read is The Fall of the Dynasties – The Collapse of the Old Order: 1905 – 1922 by Edmond Taylor. Originally published in 1963, the book was reissued in 2015. The book begins with the story of the immediate cause of the war, an assassination in Sarajevo, Bosnia, part of the dual monarchy of Austria-Hungary; then fills in the history of the various dynasties, countries and empires involved. One imagines what it would be like to live in those times and becomes appalled by the nationalistic passions of the day. While the book explicates the mainstream experience of living in the late 19th and early 20th centuries, and the incredible political changes entailed by the fall of the monarchies and the Great War, the aspects of the times which we think of, these days, as equally revolutionary are barely mentioned. These were modern art with its demonstration that aesthetic depth lay in realms beyond pure representation, the modern novel and poetry, the philosophy of Wittgenstein which I’ve discussed above and, perhaps most revolutionary of all, the fall of classical physics and the rise of the new “reality” of modern physics which we are talking about in this post. (With his deep command of the relevant historical detail for his story the author does, however, get one thing wrong when he briefly mentions science. He chooses Einstein’s relativity of 1905 but calls it “General Relativity”, putting in an adjective which makes it sound possibly more exciting than plain “relativity”. The correct phrase is “Special Relativity”, which indeed was quite exciting enough. General Relativity didn’t happen until 1915.) More about WWI is in a later post, History I, link

Unlike the second world war, the first was not a total war, and research in fundamental physics went on. The mathematician turned physicist Arnold Sommerfeld in Munich generalized Bohr’s quantum rules by imagining the discrete electron orbits as elliptical rather than circular and taking their tilt into account, giving rise to new labels (called quantum numbers) for these orbits. The light spectra given off by atoms verified these new numbers, with a few discrepancies which were later removed by QM. During this time and after the war ended, physicists became concerned about the contradiction between the wave and particle theories of light. This subject is well covered in TQM. (See the chapter “Sharks and Tigers: Schizophrenia”.) It is easy to see the problem. If one has surfed or even just looked at the ocean, one feels or sees that a wave carries energy along a wide front, this energy being released as the wave breaks. This kind of energy distribution is characteristic of all waves, not just ocean waves. On the other hand, a bullet or billiard ball carries its energy and momentum in a compact volume. Waves can interfere with each other, reinforcing or canceling out their amplitudes. So, what is one to make of light, which makes interference patterns when shined through a single or double slit but acts like a particle in the photoelectric effect or, even more clearly, like a billiard-ball collision when a light quantum, called a photon, collides with an electron, an effect discovered by Arthur Compton in 1923? To muddy the waters still further, in 1923 the French physicist Louis de Broglie reasoned that if light can act like either a particle or a wave depending on circumstances then, by analogy, an electron, regarded hitherto as strictly a particle, could perhaps under the right conditions act like a wave. Although there was no direct evidence for electron waves at the time, there was suggestive evidence.
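One piece of that suggestive evidence is easy to replay numerically: the circumference of the Bohr ground-state orbit of hydrogen turns out to equal one de Broglie wavelength, λ = h/mv. A sketch, using standard values for the constants:

```python
import math

h = 6.626e-34      # Planck's constant, J*s
m_e = 9.109e-31    # electron mass, kg
a0 = 5.292e-11     # Bohr radius (ground-state orbit radius), m
v1 = 2.188e6       # electron speed in the ground-state orbit, m/s

wavelength = h / (m_e * v1)        # de Broglie wavelength of the orbiting electron
circumference = 2 * math.pi * a0   # circumference of the ground-state orbit

print(wavelength, circumference)   # the two agree to about 0.1%
```

The electron wave, wrapped around the orbit, closes on itself exactly once; the higher orbits fit two, three, four wavelengths, which is just Bohr’s quantum rule in disguise.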
For example, with the Bohr model of the hydrogen atom, if one assumed the lowest, “ground state” orbit contained exactly one electron wavelength, one could deduce the entire Bohr theory in a new, simple way. By 1924 it was clear to physicists that the “old” quantum mechanics just wouldn’t do. This theory kept classical mechanics and classical wave theory and restricted their generality by imposing “quantum” rules. With light and electrons each behaving as both wave and particle, physics contained an apparent logical contradiction. Furthermore, though the “old” theory had successes with its concept of energy levels in atoms and molecules, it couldn’t theoretically deal at all with such seemingly simple entities as the hydrogen molecule or the helium atom, which experimentally had well defined energy levels. The theory was a total mess. It was in 1925 that the beginnings of a completely new, fundamental theory made their appearance, leading shortly to much more weirdness than had already appeared in the “old quantum” theory. In the next post I’ll delve into some of the story of the new QM. Back to Top

Funny Numbers

During the century between about 600 and 500 BCE, the first school of Greek philosophy flourished in Ionia. This, arguably, is the first historical record of philosophy as a reasoned attempt to explain things without recourse to the gods or out-and-out magic. But where on earth was Ionia? Wherever it was, it’s now long gone. Wikipedia, of course, supplies an answer. If one sails east from the body of Greece for around 150 miles, passing many islands in the Aegean Sea, one reaches the mainland of what is now Turkey. Along this coast at about the same latitude as the north coast of the Peloponnesus (37.7 degrees N) one finds the island of Samos, a mile or so from the mainland; and just to the north is a long peninsula poking west. This stretch of coast and its islands made up the region called Ionia. Wikipedia tells us that its many city-states formed the Ionian League, which in those days was an influential part of ancient Greece, allying with Athens and contributing heavily, later on, to the defeat of the Persians when they tried to conquer Greece. One can look at Google Earth and zoom in on these islands and in particular on Samos, seeing what is now likely a tourist destination with beaches and an interesting, rocky, green interior. On the coast to the east and somewhat south of Samos was the large city of Miletus, home to Thales, Anaximander and others of the Ionian philosophers (Heraclitus lived in Ephesus, a bit to the north). Around 570 BCE, on the island of Samos, Pythagoras was born. Nothing Pythagoras possibly might have written has survived, but his life and influence became the stuff of conflicting myths interspersed with more plausible history. His father was supposedly a merchant and sailed around the Mediterranean. Legend has it that Pythagoras traveled to Egypt, was captured in a war with Babylonia and while imprisoned there picked up much of the mathematical lore of Babylon, especially in its more mystical aspects.
Later freed, he came home to Samos, but after a few years had some kind of falling out with its rulers and left, sailing past Greece to Croton on the foot of Italy, which in those days was part of a greater Greek world. There he founded a cult whose secret mystic knowledge included some genuine mathematics, such as how musical harmony depends on the length of a plucked string and the proof of the Pythagorean theorem, a result apparently known to the Babylonians for a thousand years previously, but possibly never before proved. Pythagoras was said to have magic powers, could be in two places simultaneously, and had a thigh of pure gold. This latter “fact” is mentioned in passing by Aristotle, who lived 150 years later, and is celebrated in lines from the Yeats poem, Among School Children:

Plato thought nature but a spume that plays

Upon a ghostly paradigm of things;

Solider Aristotle played the taws

Upon the bottom of a king of kings;

World-famous golden-thighed Pythagoras

Fingered upon a fiddle-stick or strings

What a star sang and careless Muses heard:

Yeats finishes the stanza with one more line summing up the significance of these great thinkers: “Old clothes upon old sticks to scare a bird.” Although one may doubt the golden thigh, quite possibly Pythagoras did have a birthmark on his leg.

I became interested in Ionia and then curious about its history and significance because I recently wondered what kind of notation the Greeks had for numbers. Was their notation like Roman numerals or something else? I found an internet link, http://www.math.tamu.edu/~dallen/history/gr_count/gr_count.html which explained that the “Ionian” system displaced an earlier “Attic” notation throughout Greece, and then went on to explain the Ionian system. In the old days when a classic education was part of every educated person’s knowledge, this would be completely clear as an explanation. Although I am old enough to have had inflicted upon me three years of Latin in high school, since then I had been exposed to no systematic knowledge of the classical world so was entirely ignorant of Ionia, or at least of its location. I had heard of the Ionian philosophers and had dismissed their philosophy as being of no importance as indeed is the case, EXCEPT for their invention of the whole idea of philosophy itself. And, of course, without the rationalism of philosophy, it is indeed arguable that there would never have been the scientific revolution of the seventeenth century in the West. (Perhaps that revolution was premature without similar advances in human governance and will yet lead to disaster beyond imagining in our remaining lifetimes. Yet we are now stuck with it and might as well celebrate.)

The Ionian numbering system uses Greek letters for the numerals 1 to 9, then further letters for 10, 20, 30 through 90, and more letters yet for 100, 200, 300, etc. The total number of symbols is 27, quite a brain full. The important point about this notation, along with the Egyptian, Attic, Roman and other ancient Western systems, is that the position of a numeral within a string carries no meaning in itself; at most, as with Roman numerals, relative position matters. This relative positioning helps by reducing the number of symbols needed in a numeric notation, but is a dead end compared to an absolute meaning for position, which we will go into below. The lack of meaning for position in a string of digits is similar to written words, where the pattern of letters within a word has significance but not the place of a letter within the word, except for things like capitalizing the first letter or putting a punctuation mark after the last. As an example of the Ionian system, consider the number 304, which would be τδ, τ being the symbol for 300 and δ being 4. There is no need for zero, and, in fact, these could be written in reverse order, δτ, and carry the same meaning. In thinking about this fact and the significance of rational numbers in the Greek system I came to understand some of the long history, with its sparks of genius, that led in India to OUR numbers. In comparison with the old systems ours is incredibly powerful but has some complexity to it. I can see how, with unenlightened methods of teaching, trying to learn it by rote can lead to early math revulsion and anxiety rather than to an appreciation of its remarkable beauty, economy and power.
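A tiny converter makes the system concrete. The Python sketch below covers 1 through 999 with the 27 Ionian symbols, including the archaic letters stigma (ϛ = 6), koppa (ϙ = 90) and sampi (ϡ = 900):

```python
UNITS    = "αβγδεϛζηθ"   # 1, 2, ..., 9
TENS     = "ικλμνξοπϙ"   # 10, 20, ..., 90
HUNDREDS = "ρστυφχψωϡ"   # 100, 200, ..., 900

def ionian(n):
    """Write 1 <= n <= 999 in Ionian (Greek alphabetic) numerals.

    Purely additive: each nonzero digit gets its own symbol, and a zero
    digit simply contributes nothing -- no place holder is needed.
    """
    if not 1 <= n <= 999:
        raise ValueError("this sketch covers 1..999")
    h, rest = divmod(n, 100)
    t, u = divmod(rest, 10)
    out = ""
    if h: out += HUNDREDS[h - 1]
    if t: out += TENS[t - 1]
    if u: out += UNITS[u - 1]
    return out

print(ionian(304))  # 'τδ': tau (300) then delta (4), the zero tens just vanish
print(ionian(21))   # 'κα'
```

Notice that the function never writes a symbol for an empty column; that is exactly why the order of the symbols could be scrambled without loss, and why no zero was ever needed.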

In the ancient Western systems there is no decimal point and nothing corresponding to the way we write decimal fractions to the right of the decimal point. What we call rational numbers (fractions) were to Pythagoras and the Greeks all there was. They were “numbers”, period, and “obviously” any quantity whatever could be expressed using them. Pythagoras died around 495 BCE, but his cult lived on. Sometime during the next hundred years, one of his followers disproved the “obvious”, showing that no “number” could express the square root of 2. This quantity, √2, by the Pythagorean theorem, is the hypotenuse of a right triangle whose legs are of length 1, so it certainly has a definite length, and is thus a quantity, but to the Greeks was not a “number”. Apparently, this shocking fact about root 2 was kept secret by the Pythagoreans, but was supposedly betrayed by Hippasus, one of them. Or perhaps it was Hippasus who discovered the irrationality. Myth has it that he was drowned (either by accident or deliberately) for his impiety towards the gods. The proof of the irrationality of root 2 is quite simple, nowadays, using easy algebra and Aristotelian logic. If a and b are integers, assume a/b = √2. We may further assume that a and b have no common factor, because any common factors can be cancelled first. Squaring and rearranging, we get a²/2 = b². Since b² is an integer, a²/2 must also be an integer, so a² is even, and thus “a” itself must be even (the square of an odd number is odd). Substituting 2c for a in the last equation and then rearranging, we find that b is also divisible by 2. This contradicts our assumption that a and b shared no common factor. Now we apply Aristotelian logic, whose key property is the “law of the excluded middle”: if a proposition is false, its contrary is necessarily true; there is no “weaseling” out. Here the proposition “√2 is a fraction” has led to a contradiction, so its contrary must hold: no a/b can be √2. The kind of proof we have used here is called “proof by contradiction”.
Assume something and prove it false. Then by the law of the excluded middle, the contrary of what we assumed must be true. In the early twentieth century a small coterie of mathematicians, called “intuitionists”, arose who distrusted proof by contradiction. Mathematics had become so complex during the nineteenth century that these folks suspected that there might, after all, be a way of “weaseling” out of the excluded middle. In that case only direct proofs could be trusted. The intuitionist idea did not sit well with most mathematicians, who were quite happy with one of their favorite weapons.
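No finite search can replace the proof, but it is reassuring to watch a search fail to find a counterexample: among all fractions a/b with denominator b up to 100,000, none squares to exactly 2. A Python sketch:

```python
from math import isqrt

# If a/b = sqrt(2) for integers a, b, then a*a == 2*b*b exactly.
# Try every denominator b up to a bound; the proof says we find nothing.
hits = []
for b in range(1, 100_001):
    a = isqrt(2 * b * b)      # integer part of sqrt(2)*b
    if a * a == 2 * b * b:
        hits.append((a, b))

print(hits)  # [] -- no fraction with b <= 100,000 squares to exactly 2
```

The proof, of course, does infinitely better than the search: it rules out every denominator at once.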

Getting back to the Greeks and the fifth century BCE one realizes that after discovering the puzzling character of √2, the Pythagoreans were relatively helpless, in part because of inadequacies in their number notation. I haven’t tried to research when and how progress was made in resolving their conundrum during the 25 centuries since Hippasus lived and died, but WE are not helpless and with the help of our marvelous number system and a spreadsheet such as Excel, we can show how the Greeks could have possibly found some relief from their dilemma. The answer comes by way of what are called Pythagorean Triplets, three integers like 3,4,5 which satisfy the Pythagorean Law. With 3,4,5 one has 3² + 4² = 5². Other triplets are 8,15,17 and 5,12,13. There is a simple way of finding these triplets. Consider two integers p and q where q is larger than p, where if p is even, q is odd (or vice-versa) and where p and q have no common factor. Then let f = q² + p², d = q² – p², and e = 2pq. One finds that d² + e² = f². Some examples: p = 1, q = 2 leads to 3,4,5; p = 2, q = 3 leads to 5,12,13. These triplets have a geometrical meaning in that there exist right triangles whose sides have lengths whose ratios are Pythagorean triplets. Now consider p = 2, q = 5, which leads to the triplet 20,21,29. If we consider a right triangle with these lengths, we notice that the sides 20 and 21 are pretty close to each other in length, so that the shape of the triangle is almost the same as one with sides 1,1 and hypotenuse √2. We can infer that 29/21 should be less than √2 and 29/20 should be greater than √2. Furthermore, if we double the triangle to 40,42,58, and note that 41 lies halfway between 42 and 40, the ratio 58/41 should be pretty darn close to √2. We can check our suspicion about 58/41 by using a spreadsheet and find that 58/41 is 1.41463 to 5 places, while √2 to 5 places is 1.41421. The difference is 0.00042. The approximation 58/41 is off by 42 parts in 100,000 or 0.042%.
The ancient Greeks had no way of doing what we have just done; but they could have squared 58 and 41 to see if the square of 58 was about twice the square of 41. What they would have found is that 58² is 3364 while 2 X 41² is 3362, so the fraction 58/41 is indeed a darn good approximation. Would the Greeks have been satisfied? Almost certainly not. In those days Idealism reigned, as it still does in modern mathematics. What is demanded is an exact answer, not an approximation.
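All of the arithmetic above is easy to replay. Here is a sketch of the p,q recipe, checked against the examples in the text:

```python
def triplet(p, q):
    """Pythagorean triplet from integers p < q: (q^2 - p^2, 2pq, q^2 + p^2)."""
    return q*q - p*p, 2*p*q, q*q + p*p

for p, q in [(1, 2), (2, 3), (2, 5)]:
    d, e, f = triplet(p, q)
    assert d*d + e*e == f*f          # always a genuine Pythagorean triplet
    print(p, q, "->", sorted((d, e, f)))

# p = 2, q = 5 gives 20, 21, 29; doubled to 40, 42, 58, with 41 midway between the legs
print(58**2, 2 * 41**2)              # 3364 versus 3362 -- nearly equal, as the Greeks could check
print(58 / 41 - 2**0.5)              # 58/41 overshoots sqrt(2) by about 0.00042
```

The squaring check in the last two lines is exactly the one available to the Greeks: 58² = 3364 against 2 × 41² = 3362.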

While there is no exact fraction equal to √2, we can find fractions that get closer, closer and forever closer. Start by noticing that a 3,4,5 triangle has legs 3,4 which, though not as close in length as 20, 21, are only 1 apart. Double the 3,4,5 triangle to 6,8,10 and consider an “average” leg of 7 relative to the hypotenuse of 10. The fraction 10/7 = 1.429 to 3 places while √2 = 1.414. So, 10/7 is off by only about 1%, remarkably close. Furthermore, squaring 10 and 7, one obtains 100 and 49, and 100/49 is very nearly 2, which is exactly 100/50. The Pythagoreans could easily have found this approximation and might have been impressed though certainly not satisfied.

I discovered these results about a month or so ago when I began to play with an Excel spreadsheet. Playing with numbers for me is relaxing and fun, and is a pure game whether or not I find anything of interest. I suspect that this kind of “playing” is how “real” mathematicians do find genuinely interesting results, and if lucky, may come up with something worthy of a Fields Medal, the equivalent in mathematics of a Nobel Prize in other fields. While my playing is pretty much innocent of any significance, it is still fun, throws some light on the ancient Greek dilemma, and for those of you still reading, shows how a sophisticated idea from modern mathematics is simple enough to be easily understood.

With spreadsheet in hand what I wondered was this: p,q = 1,2 and p,q = 2,5 lead to approximations of √2 via Pythagorean triplets. Are there other p,q’s that lead to even better approximations? To find such I adopted the most powerful method in all of mathematics: trial and error. With a spreadsheet it is easy to try many p,q’s and I found that p = 5, q = 12 led to another, even better, approximation, off by 1 part in 100,000. With 3 p,q’s in hand I could refine my guesswork and soon came up with p = 12, q = 29. I noticed that in the sequence 1,2,5,12,29,… successive pairs gave increasingly better p,q’s. This was an “aha” moment and led to a question. Could I find a rule and extend this sequence indefinitely?

In my life there is a long history of trying to find a rule for sequences of numbers. In elementary school at Hanahauoli, a private school in the Makiki area of Honolulu, I learned elementary arithmetic fairly easily, but found it profoundly uninteresting if not quite boring. Seventh grade at Punahou was not much better, but was interrupted part way through the year by the Pearl Harbor attack of December 7, 1941. The Punahou campus was taken over by the Army Corps of Engineers and our class relocated to an open pavilion on the University of Hawaii campus in lower Manoa Valley. I mostly remember enjoying games of everyone trying to tackle whoever could grab and run with a football even if I was one of the smaller children in the class. Desks were brought in and we had classes in groups while the rain poured down outside the pavilion. Probably, it was during this year that we began to learn how fractions could be expressed as decimals. In the eighth grade we moved into an actual building on the main part of the University campus and had Miss Hall as our math teacher. The math was still pretty boring, but Miss Hall was an inspiring teacher, one of those legendary types with a fierce aspect, but a heart of gold. We learned how to extract square roots, a process I could actually enjoy, and Miss Hall told us about the fascinating things we would learn as we progressed in math. There would be two years of algebra, geometry, trigonometry and if we progressed through all of these, the magic of “calculus”. It was the first time I had heard the word and, of course, I had no idea of what it might be about, but I began to find math interesting. In the ninth grade we moved back to the Punahou campus and our algebra teacher was Mr. Slade, the school principal, who had decided to get back to teaching for a year. At first, we were all put off a bit by having the fearsome principal as a teacher, but we all learned quickly that Mr. 
Slade was actually a gentle person and a gifted teacher. As we learned the manipulations of algebra and how to solve “word problems”, Mr. Slade would, fairly often, write a list of numbers on the board and ask us to find a formula for the sequence. I thoroughly enjoyed this exercise and learned to take differences or even second differences of pairs in a sequence. If the second differences were all the same, the expression would be a quadratic and could easily be found by trial and error. Mr. Slade also tried to make us appreciate the power of algebra by explaining what was meant by the word “abstraction”. I recall that I didn’t have the slightest understanding of what he was driving at, but my intuition could easily deal with an actual abstraction without understanding the general idea: that in place of concrete numbers we were using symbols which could stand for any number. Later, when I did move on to calculus, which involves another step up in abstraction, I at first had difficulty with the notation f(x), called a “function” of x, an abstract notation for any formula, or indeed a representation of a mapping that need not come from a formula at all. I soon got this idea straight and had little trouble later with the next step of abstraction, to the idea used in quantum mechanics of an abstract “operator” that changes one function into another.
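Mr. Slade’s second-difference trick can itself be turned into a little algorithm: if the second differences of a sequence are a constant 2a, the sequence is a quadratic an² + bn + c, and the coefficients fall straight out of the first few terms. A sketch (assuming the sequence is indexed from n = 1):

```python
def fit_quadratic(seq):
    """If seq[n-1] = a*n^2 + b*n + c (n starting at 1), recover (a, b, c).

    Works because a quadratic's second differences are the constant 2a.
    Returns None if the second differences are not all the same.
    """
    first = [y - x for x, y in zip(seq, seq[1:])]
    second = [y - x for x, y in zip(first, first[1:])]
    if len(set(second)) != 1:
        return None
    a = second[0] / 2
    b = first[0] - 3 * a   # from f(2) - f(1) = a*(4 - 1) + b
    c = seq[0] - a - b     # from f(1) = a + b + c
    return a, b, c

print(fit_quadratic([2, 5, 10, 17, 26]))  # recovers n^2 + 1
print(fit_quadratic([1, 2, 5, 12, 29]))   # not quadratic: None
```

The second example is no accident: as we will see, the 1, 2, 5, 12, 29 sequence grows exponentially, so no amount of differencing flattens it out.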

Getting back to the sequence 1,2,5,12,29,… I quickly found that taking differences didn’t work; the differences never seemed to get much smaller because the sequence turns out to have an exponential character. I soon discovered, however, using the spreadsheet, that quotients worked: take 2/1, 5/2, 12/5, 29/12, all of which become more and more similar. Then, multiplying 29 by the last quotient, I got 70.08. Since 29 was odd, I needed an even number for the next q, so 70 looked good, and indeed I confirmed that the triplet resulting from 29, 70 was 4059, 4060, 5741, with an estimate for √2 that was off by only 1 part in 100 million. After 70 I found the next few members of the sequence: 169, 408, 985. The multiplier to try for the next member seemed to be closing in on 2.4142, or 1 + √2. At this point I stopped short of trying for a proof of that possibility, both because I am lazy and because the possible result seemed uninteresting. What is interesting is that the sequence of p,q’s goes on forever and that the approximations for √2 from the resulting triplets converge on √2 as a limit. The idea of a sequence converging to a limit was only rigorously defined in the 19th century. Possibly it might have provided satisfaction to the ancient Greeks. Instead, the idea of irrational numbers beyond the fractions became clear only with the invention by the Hindus in India of our place-based numerical notation and the number 0.
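There is, it turns out, a simpler rule than the multiplier: each member of the sequence is twice the previous member plus the one before that (these are the so-called Pell numbers, a fact not in my spreadsheet experiments but easy to check). A sketch showing the recurrence reproduces the values found above and that the resulting triplets close in on √2:

```python
# Generate the p,q sequence by the recurrence q(n+1) = 2*q(n) + q(n-1)
pell = [1, 2]
while len(pell) < 10:
    pell.append(2 * pell[-1] + pell[-2])
print(pell)  # 1, 2, 5, 12, 29, 70, 169, 408, 985, ...

# Consecutive pairs are the p,q's; compare each estimate with sqrt(2)
sqrt2 = 2 ** 0.5
for p, q in zip(pell, pell[1:]):
    d, e, f = q*q - p*p, 2*p*q, q*q + p*p   # the Pythagorean triplet
    estimate = f / ((d + e) / 2)            # hypotenuse over the "average" leg
    print(p, q, estimate - sqrt2)           # the error shrinks at every step
```

The pair 29, 70 indeed yields the triplet 4059, 4060, 5741 and an estimate good to about 1 part in 100 million, and the ratios of successive Pell numbers converge on 1 + √2, just as the spreadsheet suggested.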

Place-based number notation was developed separately in several places: in ancient Babylon, in the Maya civilization of Central America, in China, and in India. A place-based system with a base of 10 is the one we now use. Somewhere in one’s education one has learned about the 1’s column just to the left of a decimal point, then the 10’s column, the 100’s column, and so forth. When the ancient Hindus and the other civilizations began to develop the idea of a place-based system, there was no concept of zero. Presumably the thinking was that symbols should stand for something. Why would one possibly need a symbol that stood for nothing? So one would begin with symbols 1 through 9 and designate 10 by “1·”. The dot “·” is called a “place holder”. It has no meaning as a numeral, serving instead as a kind of punctuation mark which shows that one has “10”, not 1. Using the place holder in the earlier example of Ionian numbers, the τδ would be 3·4, the dot holding the 10’s place open. The story with place holders is that the Babylonians and Mayans never went beyond them, but the Hindus gradually realized the dot could have a numerical meaning in its own right, and “0” was discovered (invented?). Recently, on September 13th or 14th, 2017, there was a flurry of reports that carbon dating of an ancient Indian document, the Bakhshali manuscript, revealed that some of its birch bark pages were 500 years older than previously estimated, dating to a time between 224 and 383 AD. The place holder symbol occurring ubiquitously in the manuscript was called shunya-bindu in the ancient Sanskrit, translated in the Wikipedia article about the manuscript as “the dot of the empty place”.
(Note that in Buddhism shunyata refers to the “great emptiness”, a mystical concept which we might take as the profound absence of being logically prior to the “big bang”.) A readable reference to the recent discovery is https://www.smithsonianmag.com/smart-news/dating-ancient-indian-text-gives-new-timeline-history-zero-180964896/. According to the Wikipedia article, the Bakhshali manuscript is full of mathematics, including algebraic equations and negative numbers in the form of debts. As a habitual skeptic, I wondered when I first heard about the new dating whether Indian mathematicians, with their brilliant intuition, hadn’t immediately realized the numerical meaning of their place holder. Probably they did not. An easy way to see the necessity of zero as a number is to consider negative numbers as they join the positives. In thinking and teaching about math, I believe that concrete examples are the best road to an abstract understanding, and debts are a compelling case. At first one might consider one’s debts as a list of positive numbers, amounts owed. One would also have another list of positive numbers, one’s assets, amounts owned. The idea might then occur of putting the two lists together, using “-” signs in front of the debts. As income comes in, one’s worth goes from, say, -3 to -2 to -1. Then what? Before going positive, there is a time when one owes nothing and has nothing. The number 0 signifies this time, before the next increment of income sends one’s worth to 1. The combined list would then be …, -3, -2, -1, 0, 1, 2, 3, … . Arithmetic with properly extended rules, combining various sources of debt and income, then becomes completely consistent, but only because 0 is included.
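The debts-and-assets bookkeeping above can be put into a toy calculation. The particular amounts are made up for illustration; the point is simply that merging the two lists, with “-” signs on the debts, gives consistent arithmetic whose balance point is the number 0.

```python
debts = [3, 1]    # amounts owed (positive numbers on their own list)
assets = [2, 2]   # amounts owned

# Merge the lists, tagging debts with a "-" sign as described above.
ledger = [-d for d in debts] + assets

# The sum is one's net worth; here it lands exactly on the moment of
# owing nothing and owning nothing, which only the number 0 can express.
worth = sum(ledger)
print(worth)  # 0
```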

If the above seems as if I’m belaboring the obvious, let me then ask why, when considering dates, the year after 1 BCE is not 0 but 1 AD. Our dating system was devised at an early time, before the West had adopted “0”. Historians have to subtract 1 when calculating intervals in years between BCE and AD, and centuries end in hundreds, not 99’s. This example shows that once one gets locked into a convention, it becomes difficult if not impossible to change. I was quietly amused at the outcry as Y2K, the year 2000, came along, with many insistent voices pointing out the ignorance of those of us who considered the 21st century to have begun. The idea of zero is not obvious, and I hope I’ve shown, in considering the Pythagoreans and their dilemma with square roots, just how crippled one is trying to get along without it. Back to Top
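The historians’ subtract-1 correction mentioned above can be captured in a small helper. The signed-year convention here (BCE as negative numbers, so 10 BCE is -10) is my own choice for the sketch, not anything standard in the post.

```python
def years_between(start, end):
    """Interval in years; BCE years are negative (e.g. -10 for 10 BCE)."""
    interval = end - start
    if start < 0 < end:
        # The calendar jumps from 1 BCE straight to 1 AD with no year 0,
        # so intervals spanning the boundary need 1 subtracted.
        interval -= 1
    return interval

print(years_between(-10, 10))  # 19, not the naive 20
```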