What is Life?


Composite photo source: Wikipedia, "Animal Diversity," from the article "Animal," used under the Creative Commons Attribution-Share Alike 3.0 Unported license.

What is life?

We can describe the characteristics and infinitely varied forms of living things, but what exactly is life itself? In the past, it was assumed that a vital force present in all living things was passed down from life to life. This philosophy was called vitalism. Because it borders on the divine, vitalism has today been replaced by the philosophy of mechanism, which states that all natural phenomena, particularly life, can ultimately be explained through physics and chemistry. The universe is thought to be merely mechanical in nature. Under this philosophy, life is just a process produced by physical laws acting on matter.

Life is assumed to be a given, like gravity, which incidentally is not well explained or understood either. We know gravity exists and how it behaves, but we do not really understand why. There is not a single location in any creature or cell that we can point to and say, this is its life. Definitions of Life usually describe what living things consist of and what they do; they do not actually tell us what Life itself is. We can't collect, isolate or test it, so it appears to be a transcendent quality. What exactly distinguishes a living cell from a dead one, or from a mixture of cellular components? Depending on the source, explanations vary from the biochemical to the functional.

Life only comes from life. Spontaneous generation of living things has been shown over and over to be false. Spoiled food does not beget flies or mold; each only comes from other flies or mold spores. Life as a process requires just the right kind and amount of regulated energy and a fine balance of the right molecules and structures. Science has not been able to create life, or even most biological molecules, without the help of molecules first derived from living systems or of those systems themselves, e.g. bacteria engineered to produce insulin. Even if all of the components of a living organism are blended in the lab in the correct proportions, no life results.

What is it that assembles and winds up the machine or provides the vital spark? Science does not know. Proponents of molecular Evolution believe that non-living molecules, at least once in earth's history, spontaneously became a living system from which all subsequent life descended. They argue (with some merit) that spontaneous generation cannot occur today because living organisms would consume any components before they had time to accumulate and self-organize into a living system. They assume that only in the absence of life could components accumulate sufficiently to form life spontaneously from non-living components.

Never mind that the key molecules, e.g. proteins and nucleic acids, are unstable in water for the length of time that would be necessary to accumulate and assemble the correct mixture into a living system. These molecules are assembled by linking smaller molecules together, with the loss of a water molecule for each link. When excess water is present, e.g. in an ocean or pond, the reactions tend to reverse and the links fall apart. That is why proteins inside cells are constantly being assembled to replace those that have been degraded. Molecular Evolution proponents believe that production of life in the laboratory can be accomplished at some future time, although they have no evidence to support that belief. We will look at some of the more popular origin-of-life theories and the validity of their arguments later.

Life is a continuous process that is constantly working against forces that would end it. It has been said that Life (1) is improbable, (2) defies entropy (the 2nd Law of Thermodynamics), (3) is unstable, and (4) needs a constant supply of raw materials and energy to survive. Let's look at each of these claims.

 

  1. Life is Improbable

Life really is improbable, partly because of the extremely low probabilities of such complex systems forming by random chance even once. It is the ultimate "Infinite Improbability Drive"[1]. Even the simplest known bacterium contains thousands of types of proteins and other unique biological molecules and structures. Metabolic processes necessary for life depend on thousands of different, specific enzymes for facilitation and regulation through feedback, etc. Enzymes are proteins made of chains of amino acids that are folded into useful shapes. If we suppose that an average enzyme is 200 amino acids long[2], using the 20 left-handed amino acids living beings use, the probability of only one specific enzyme sequence forming at random is 1 in 20^200, or about 1 in 10^260. That's a 1 with 260 zeros after it.
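The arithmetic behind this figure is easy to check; working in base-10 logarithms avoids handling the astronomical numbers themselves. This is only a sketch of the calculation, not a claim about biology:

```python
import math

# Odds of one specific 200-amino-acid enzyme sequence assembling at random,
# drawing uniformly from the 20 amino acids used by living things.
length = 200
alphabet = 20

# log10 of 20^200 gives the exponent of the equivalent power of ten.
log10_odds = length * math.log10(alphabet)
print(f"1 in 20^{length} = 1 in 10^{log10_odds:.1f}")  # 1 in 10^260.2
```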

If the universe is 13.7 billion years old, there have been 4.32 × 10^17 seconds since it began. We would need to make about 2.3 × 10^242 attempts each second since the beginning of the universe to make the random assembly of even this one specific protein plausible – that is, to make the number of attempts that are in the same ball park as the odds against it. This all presupposes that all of the amino acids have been pre-assembled and are readily available. Amino acids occur in left- and right-handed forms, and only left-handed forms are used by living things. If we take this into account, the odds would be much steeper. But remember, to form even one cell, all of this must happen in a very confined space so that all of the proteins and other molecules can be collected in one place, not just anywhere in the universe or even anywhere on the earth.

If we assume that life's molecules were assembled on earth, which is thought to be only 4.5 billion years old, and that evidence of life was present 3.8 billion years ago, then the number of attempts needed per second rises to even more impossible levels, by a factor of about 3.4 (13.7 ÷ 4 billion years). And that is just for one protein enzyme assembled from readily available units, excluding interfering molecules, and under ideal conditions for assembly and preservation. Already we are seeing the extreme odds against a specific enzyme being produced. If we look at what it would take to produce by chance the thousands of different specific enzymes necessary for metabolism, the probability of random assembly of the correct mix would be 1 in (20^200)^3000 for a simple bacterium with 3000 enzymes, or about 1 in 10^780,000; that's a 1 with 780,000 zeros after it. The terms "impossible" and "miracle" come to mind.
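Both the attempts-per-second figure and the 3000-enzyme figure follow from the same logarithmic bookkeeping; this block just verifies the arithmetic:

```python
import math

seconds = 13.7e9 * 365.25 * 24 * 3600     # age of the universe in seconds
log10_one_enzyme = 200 * math.log10(20)   # ~260.2, odds exponent for one enzyme

# Attempts needed per second to match the odds within the available time.
log10_per_second = log10_one_enzyme - math.log10(seconds)
print(f"seconds: {seconds:.2e}, attempts per second: 10^{log10_per_second:.1f}")

# Odds for 3000 independent, specific enzymes: (20^200)^3000.
log10_all = 3000 * log10_one_enzyme
print(f"all 3000 enzymes: 1 in 10^{log10_all:,.0f}")
```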

Now let us look at DNA. There are four different molecules that form base pairs, like the rungs of a ladder, along the coiled "double helix" of DNA that encodes for proteins, etc. Bacterial DNA, whose chain forms a circle and is tightly wound around proteins, is 300,000 to 4 million base pairs in length. If we assume that a simple bacterium has DNA that is 500,000 nucleotides long, using 4 types of "bases" (two purines and two pyrimidines), the probability of forming the correct sequence is 1 in 4^500,000, or 1 in 10^301,030 – that's a 1 with 301,030 zeros after it. Even this presumes that each nucleotide has already been pre-formed from one of the four readily available bases, its partner, and a pair of specific phosphorylated sugar (deoxyribose) molecules that form the sides of the "ladder."
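The DNA figure works the same way, converting 4^500,000 to a power of ten (a check of the arithmetic only):

```python
import math

bases = 4
chain_length = 500_000

# log10 of 4^500,000: the exponent of the equivalent power of ten.
log10_odds = chain_length * math.log10(bases)
print(f"1 in 4^{chain_length:,} = 1 in 10^{log10_odds:,.0f}")  # 1 in 10^301,030
```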

It’s even worse than that, however, since each purine must pair with its specific pyrimidine to form each base pair[3], so double the number is needed. Now add the probabilities of assembling, in one place, the DNA and its associated proteins (histones), the thousands of enzymes, and other structures like cell membranes, and it is obvious that the probability of forming even the simplest bacterium is so infinitesimally small that it can only be called either impossible or a miracle. Even if we assume that an earlier form contained a tenth or a hundredth of this number of components, it would still be called impossible or a miracle. For 1% of the components, the odds would be 1 in (10^260)^30, or 1 in 10^7800 (a 1 with 7800 zeros), for the enzymes and 1 in 10^3010 (a 1 with 3010 zeros) for the DNA (or RNA), plus assembly of all the other components as noted above. A reduction of over ten-thousand-fold (to 0.001% of the components) would be required to make the odds meaningful, which would leave precious few components to "live."
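The scaled-down figures for a hypothetical organism with 1% of the components check out the same way:

```python
import math

# 1% of 3000 enzymes = 30 enzymes: (10^260.2)^30, i.e. about 10^7800.
log10_enzymes = 30 * (200 * math.log10(20))
# 1% of 500,000 nucleotides = 5,000 nucleotides: 4^5000, about 10^3010.
log10_dna = 5_000 * math.log10(4)
print(round(log10_enzymes), round(log10_dna))  # 7806 3010
```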

One of the origin-of-life theories proposes that RNA, not DNA, was the original control and inheritance molecule. The difference in the structures of DNA and RNA is that DNA uses the deoxy- form of the sugar ribose, while RNA uses ribose itself. Since DNA now first transcribes its instructions for protein assembly to RNA, this theory skips that extra layer of complexity, making for a more believable scenario. Presently, some viruses use RNA instead of DNA, but viruses are incapable of most life processes on their own and must take over the DNA of host cells to reproduce. They can be thought of as parasitic "seeds," not complete organisms.

Fred Hoyle, a famous astronomer and atheist, stated that the odds of forming a living being at random from lifeless molecules would be like the chance that "a tornado sweeping through a junk-yard might assemble a Boeing 747 from the materials therein." Note that Fred Hoyle and N. C. Wickramasinghe estimated the odds at 1 in 10^40,000 by assuming that numerous structures of enzymes could perform the same functions. That is still pretty steep odds. Others have calculated the odds with various assumptions and outcomes, but all arrive at extremely small probabilities. Many are enthusiastic about the possibility of life or life's building blocks arriving from outer space after being assembled by high-energy processes there. Looking at the extreme odds, pre-assembly elsewhere is like weighing a flea on the back of an elephant; it is not a real answer. Some have even speculated that a more advanced, intelligent life form seeded earth with life, but that only pushes the cause further back in time. How did life come to these advanced civilizations?

All of the extreme improbabilities above don’t even address whether life would spontaneously arise under the right conditions, if all components are available, or whether we would just have the same non-living jumble of molecules we could assemble in a laboratory. In other words, we still haven’t addressed what assembles and winds up the mechanism to start life processes. Clearly, some other unknown process or overarching principle besides random chance has been at work in both assembling the components and in turning them into something alive.

 

  2. Life Defies Entropy (the 2nd Law of Thermodynamics)

Entropy is a measure of the disorder of a closed system, and the Second Law of Thermodynamics states that entropy always increases – that disorder always increases and usable energy always decreases. Life seems to defy entropy because life is very, very organized and uses matter to generate energy and build more and more complex structures. However, living things are never closed systems; they need material and/or energy from outside to survive, so an organism that seems to decrease entropy within itself may continually increase the entropy of its surroundings. Is it enough to result in a net increase in the entropy of the earth or the universe? The answer is unknown, but possibly yes. Note that this assumes the Second Law of Thermodynamics is absolutely true in all cases, which has not been proven either. It is a well-accepted and thoroughly tested theory, and thus a scientific law by that definition.

A planet with abundant life is far more complex and organized than a dead planet, simply because the chemistry of life is far more complex and dynamic than inorganic crystalline structures. It is difficult to see how the net decrease in entropy caused by life on an isolated planet can affect any other planet, much less the universe as a whole. If the planet is considered the "closed system," then there is indeed a net decrease in entropy and a net increase in complexity, order and usable energy, e.g. fossil fuels. Of course, that also depends on your definition of order and disorder. If we define disorder as an increase in the number of states, and order as uniformity of form and function, then the dead planet is not as disordered as a planet with abundant life in all its forms and complexity of functions. However, if disorder is the rule, then the ultimate outcome of continued disordering and loss of energy is a uniform, cold, dead universe in the lowest energy and organizational state possible.

 

  3. Life is Unstable

Life is indeed unstable, because it exists on the edge of destruction, far from equilibrium. Ordinarily, chemical reactions reach a state of lowest energy called equilibrium, where they are stable. At that point the reaction stops, or is stabilized dynamically, so that the net amount of products no longer increases and the net amount of starting materials no longer decreases. Life is never at or near equilibrium and requires input of material and energy to maintain itself in this unstable state. It can only exist under very specific physical circumstances, including temperature, pH, pressure and the presence or absence of oxygen. An aerobic organism requires oxygen, whereas oxygen is deadly to an anaerobic organism. The only time an organism is stable or at equilibrium is when it is dead. This brings us to (4).

 

  4. Life needs a constant supply of raw materials and energy to survive

Life requires a constant or nearly constant supply of materials and energy from outside itself to survive. Ultimately, most of life on earth depends on the products of photosynthesis as a source of energy that is initially derived from the sun. The only exceptions are those living systems present in deep seas and deep interiors that derive energy from bacterial processing of inorganic chemicals such as hydrogen sulfide. In both cases, energy and material from outside the organism are necessary to maintain life.

Since no one knows what life actually is, the best we can do is define what living things must have and must do to live. All living things are more alike than different. An advertising flyer I received a few years ago from a supplier of products for biochemistry stated, "Did you know that humans share about 50% of their DNA with bananas?"[4] All living things use essentially the same basic biochemical processes, such as metabolism, in the everyday business of living, so the DNA that encodes the chemicals used for life processes is necessarily very similar. The differences are relatively minor compared to the similarities. The processes used to accomplish all of life's functions at the molecular or cellular level have to be very similar for all living beings. Because the processes are so complex and similar, the surprising thing is not that the workhorse protein molecules (and thus the DNA that encodes for them) of different living things are so similar, but that they are as different as they are and still function in essentially the same way.

Living things, at a minimum, consume and process food, excrete waste, grow and reproduce. Some evolutionists would add "and, through natural selection, adapt in succeeding generations"[5]. Some living things also move, sense and communicate. Some can even go dormant for long periods and only "come to life" when conditions are right. This is true of many bacteria. Bacteria that had lain dormant for 120,000 years have been found under Greenland's glaciers[6]. Once, I left a closed jar of saturated salt solution, which I had used to treat a sore throat, sitting for a month or so. When I started to throw it out, there was a fuzzy white ball of bacteria floating in the middle of it. These extremophile[7] bacteria, which could grow in this high-salinity environment, probably came from the salt and may have been dormant for thousands of years before awakening[8]. Re-vitalization of dormant organisms is a great mystery. How can life itself be suspended and then restarted spontaneously? Is it really suspended, or is it just slowed to an imperceptible level? And how could it survive for thousands of years?

So, we are only left with questions about what life is and how it came to be. Obviously the odds against life forming spontaneously put it into the realm of miracles, unless there is some as yet undefined and undiscovered process or principle. In a later post, we will examine some of the theories put forth to try to explain life’s origin.

[1] The Hitchhiker's Guide to the Galaxy, Douglas Adams, 1979. A satire in which instantaneous intergalactic travel is possible thanks to an "infinite improbability drive."

[2] Note that the hypothetical numbers given here of amino acids, proteins and DNA nucleosides in a simple bacterium are simplified to make calculations easier.

[3] The purines Adenine (A) and Guanine (G) must pair with the pyrimidines Thymine (T) and Cytosine (C), only as A-T and G-C, to form each nucleotide pair that makes up each "rung" of DNA. RNA substitutes Uracil for Thymine.

[4] Sigma Life Science, part of Sigma Aldrich Company, St. Louis, MO, USA. http://www.sigmaaldrich.com

[5] Life, from Wikipedia

[6] Tiny Frozen Microbe May Hold Clues To Extraterrestrial Life, Science Daily (June 15, 2009) — “A novel bacterium — trapped more than three kilometres under glacial ice in Greenland for over 120,000 years… Dr Jennifer Loveland-Curtze and a team of scientists from Pennsylvania State University report finding the novel microbe, which they have called Herminiimonas glaciei, in the current issue of the International Journal of Systematic and Evolutionary Microbiology. The team showed great patience in coaxing the dormant microbe back to life; first incubating their samples at 2˚C for seven months and then at 5˚C for a further four and a half months, after which colonies of very small purple-brown bacteria were seen. … and it has been shown that ultramicrobacteria are dominant in many soil and marine environments.”

[7] Extremophile – bacteria that thrive in extreme conditions that would kill other organisms. They have been found in boiling hot water, under extreme pressure, at high altitudes, in sulfuric acid rich waters, in oil wells, etc. Almost no place on earth is devoid of life. It is ubiquitous.

[8] Table salt is produced in two ways, by mining or by evaporation of salt water, so it is uncertain whether this was an ancient organism. Ponds used to evaporate sea water are often tinged purple or red by halobacteria and must be purified before sale for food products, so salt with dormant microbes was probably mined from deep underground.

Does the observer determine outcomes in quantum physics?

Solvay Conference on Quantum Mechanics 1927

Quantum Mechanics, or Quantum Theory, which is based on complex mathematics, tries to describe and explain the odd behavior of particles and forces in the atomic and subatomic realm. In this theory, things don't happen in a smooth (analog) manner but in a punctuated (digital) manner. Electrons move around the nucleus at high speeds, so their exact location at any one moment is not known precisely without measurement. The likelihood of finding a given electron at a particular place in its orbital is described by a probability, thus defining the electron "cloud" or "shell." An electron jumps from one allowed orbital to another by absorbing energy (a photon) of a specific energy (wavelength).

The absorbed photon at a specific energy level is called a quantum, thus quantum theory. The electron will also fall from this “excited” state back to its more stable “ground” state orbital by emitting a quantum of energy. Electrons exist or move between one allowed energy state (orbital) and another based on discrete quanta of energy that they absorb, emit or carry. Each element has unique orbital energies so that light interacting with an atom shows absorption and emission lines at specific wavelengths that can be used to identify the element.
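As a concrete illustration of these discrete transitions, hydrogen's emission lines can be computed from the Rydberg formula; this is standard physics, not specific to this article, using the Rydberg constant of about 1.097 × 10^7 per metre:

```python
RYDBERG = 1.097e7  # m^-1, Rydberg constant for hydrogen

def emission_wavelength_nm(n_low: int, n_high: int) -> float:
    """Wavelength of the photon emitted when an electron
    falls from orbital n_high to orbital n_low."""
    inv_wavelength = RYDBERG * (1 / n_low**2 - 1 / n_high**2)
    return 1e9 / inv_wavelength  # metres -> nanometres

# Visible Balmer series (electron falls to the n=2 orbital):
for n in (3, 4, 5):
    print(f"n={n} -> n=2: {emission_wavelength_nm(2, n):.1f} nm")
# H-alpha comes out near 656 nm, the familiar red line of hydrogen.
```

Each wavelength is one of the element's unique spectral lines, which is exactly how elements are identified from absorption and emission spectra.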

Wave-Particle Duality:

In Quantum Theory, subatomic particles are described as both particles and waves simultaneously. This is referred to as wave-particle duality. All types of energy, including subatomic binding forces, are also defined as both particles and waves, so that matter and energy are treated as if they are the same thing. Both subatomic particles and photons sometimes act like waves and sometimes like particles, depending on how they are tested or detected. Two experiments are noted as evidence: the double slit interference patterns and the photoelectric effect in which electrons are emitted when light is shined on a metal surface. Einstein assumed this proved that energy waves were really made of particles that he called photons.

The double slit experiment is said to demonstrate the wave nature of particles and photons, and the photoelectric effect their particle nature. Wave-particle duality rests on the assumption that single photons or particles are being measured. Since all detectors have threshold sensitivities below which nothing is detected, it could be that multiple, not single, photons or particles are really being tested[1]. This would explain the interference patterns seen when either photons or electrons are tested in the double slit experiment. Photoelectric experiments may also be misinterpreted; it is possible that absorbed energy, not photon particles, causes emission of loosely held electrons on the metal surface. Granted, this is speculation at this time, but it calls for more study.

Copenhagen Interpretation:

In the widely accepted Copenhagen interpretation of quantum mechanics, a particle is said not to have a fixed state but to exist in a smeared-out multiplicity of states at once until a measurement is taken, when it "collapses" into one state. The observer (or detector) becomes a part of the quantum system. This is the principle of superposition. Because an electron can be found in any of the probability-allowed "shell" locations, this interpretation assumes that the electron really is at all the locations, or in all the states, at once and only assumes a fixed state when measured. This assumption extends to all of the characteristics of the electron, such as position, spin or momentum, and to all other subatomic particles and photons (energy particles).
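A toy simulation makes the claimed picture concrete: a two-state system with fixed amplitudes, where each "measurement" picks one outcome with the squared-amplitude (Born rule) probability. This is an illustrative sketch, not a statement about which interpretation is right:

```python
import random

def measure(a: float, b: float) -> str:
    """Collapse a two-state superposition with amplitudes (a, b):
    outcome 'up' with probability a^2 / (a^2 + b^2)."""
    p_up = a**2 / (a**2 + b**2)
    return "up" if random.random() < p_up else "down"

random.seed(0)
counts = {"up": 0, "down": 0}
for _ in range(10_000):
    counts[measure(0.6, 0.8)] += 1   # amplitudes 0.6 and 0.8 -> 36% / 64%
print(counts)  # roughly 36% 'up', 64% 'down'
```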

The Copenhagen interpretation of Quantum Theory also says the electron exists in one or the other allowed orbital level but does not exist anywhere between. When a quantum of energy is absorbed, the electron is said to pop out of existence in the original shell and simultaneously pop into existence in the new shell. But since the electron shell defines a probability, and most of the time the electron exists in one of these shells, the probability of finding it anywhere between is statistically infinitesimal. It is said not to exist there, and the region is thus called "forbidden." Is it only an extremely small probability, or are we talking about actual existence? The Copenhagen interpretation of Quantum Theory says it is the latter. Other interpretations of Quantum Theory differ as to what actually happens. See the list below.

Uncertainty Principle: Ontology or Epistemology?

In trying to measure these discrete orbitals and their electron locations and momenta, it became apparent that measurement of any kind disturbed the system, so that only one of two coupled parameters could be determined at any one time, e.g. position and momentum (or speed). This led to the Heisenberg Uncertainty Principle, which states that it is impossible to know both the position and the momentum of any one subatomic particle at the same time. The system is disturbed by measurement because measuring subatomic particle parameters is like administering eye drops with a fire hose. Because subatomic particles are so small compared to any means of measuring their parameters, what is measured is in a disturbed condition.

The Heisenberg Uncertainty Principle was meant to be a statement of experimental limitations, not that location and momentum (or other coupled parameters) did not exist in a fixed state at the same time. However, Bohr and other Copenhagen interpretation proponents interpreted it that way, assuming that atoms or atomic particles were never in a fixed state until measured, and that uncertainty is a fundamental characteristic of subatomic particles, not just an experimental limitation. Thus they have substituted ontology (being) for epistemology (ability to know). Heisenberg never accepted the principle of superposition or non-locality claimed in the Copenhagen interpretation.

Superposition:

Erwin Schrödinger provided the mathematical wave equations used in quantum mechanics[2]. These probabilistic differential wave equations are linear, meaning that any sum of solutions is itself a solution. Superposition is a concept in mathematics stating that, for linear equations, the net effect of all the contributing factors is the sum of the effect of each factor individually. Since Schrödinger's wave equations are linear, it is assumed that their application to subatomic particles is also linear. From there it is a leap of faith to assume that particles don't just have the capability of being in different states, but that they are simultaneously in all possible states at once. Instead of just being a mathematical concept, superposition was now applied directly to subatomic particles in a real physical sense.
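The mathematical sense of superposition, as opposed to the physical reading, is easy to demonstrate: for a linear equation such as y'' + y = 0, any combination of solutions is again a solution. A numerical check with central differences (illustrative only):

```python
import math

def second_derivative(f, x, h=1e-4):
    """Central-difference estimate of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

f = math.sin                            # one solution of y'' + y = 0
g = math.cos                            # another solution
combo = lambda x: 2 * f(x) + 3 * g(x)   # a superposition of the two

for func in (f, g, combo):
    residual = second_derivative(func, 1.0) + func(1.0)
    print(f"{residual:+.2e}")           # all residuals are ~0
```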

However, Schrödinger did not agree with this Copenhagen interpretation of quantum mechanics. He came up with an example within everyone's sphere of experience that illustrated the absurdity of the assumed superposition: the famous Schrödinger's Cat thought experiment. He set up the experiment so that a cat in a closed box could be either alive or dead, depending on whether a radioactive particle spontaneously decayed, setting off a mechanism that released a deadly poison gas. In this thought experiment, under the Copenhagen interpretation of superposition, since we don't know what state the cat is in until the box is opened, the cat is both dead and alive until the box is opened, at which time the cat becomes either dead or alive. The act of observing somehow must cause the cat to assume either a dead or an alive state. In any other realm, this would be called magical thinking. Though meant to point out the weakness or absurdity of superposition, the thought experiment has been used to illustrate the opposite, through convoluted "reasoning," to make it fit the Copenhagen or similar interpretations.

Communication at a distance:

The idea of instantaneous communication and action at a distance is a consequence of this assumed superposition, where particles do not assume a fixed state until observed. By the Pauli Exclusion Principle, no two electrons in the same orbital can be in the same quantum state. Each must differ in some way; for example, they must have opposite spins. The two particles are said to be entangled, since each must be in the opposite state to the other. If one of the electrons is emitted and travels relatively far away, then when either electron is measured (observed), it collapses into a fixed state and simultaneously the other one collapses into the opposite state, which can be confirmed when it is measured. This implies a speed of communication faster than the speed of light, the assumed upper limit of speed[3].

Einstein thought that quantum action at a distance was an illusion based on the assumption of superposition, a.k.a. non-locality. If particles are assumed to have fixed states, although unknown to an observer, the action at a distance is no mystery. It only implies that entangled states, e.g. opposite spins, persist after separation. When one of the particles is measured, you automatically know the state of the other, since they must be opposite of each other, whether together or separated. Einstein spent the latter part of his career trying to prove this.
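The "fixed states" picture Einstein favored can be sketched directly: each pair is created with definite, opposite spins, and measurement merely reveals them, reproducing the perfect anticorrelation described above without any communication. This is a sketch of that idea only, not a full Bell-type analysis:

```python
import random

def make_pair():
    """Create an 'entangled' pair carrying definite opposite spins from birth."""
    spin = random.choice((+1, -1))
    return spin, -spin

random.seed(1)
for _ in range(1_000):
    a, b = make_pair()
    # Measuring one immediately tells you the other; no signal is sent.
    assert a == -b
print("perfect anticorrelation in all 1,000 pairs")
```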

Other Interpretations:

There are more than a dozen other interpretations. The most popular among a long list (see the table following) are the Copenhagen interpretation and its variants, the Many Worlds interpretation and the Ensemble interpretation. Variants of the Copenhagen interpretation involve either the observer or the cat (as observer and participant) being part of the quantum system. The Many Worlds interpretation is even more speculative. In this scenario, each time a subatomic particle collapses and "chooses" a fixed state, reality splits in two and both possible realities still exist, but in different, undetectable dimensions. Think of this as a time series of pictures or a strip of movie film: at the decision point, the one series becomes two, and at the next decision point becomes four, etc., ad infinitum.

The Ensemble interpretation states that Quantum Mechanics can only be applied to statistically significant numbers of particles, not to individual particles. Since the wave equations describe probabilities, it would be meaningless to apply probabilities or statistics to single particles. This is the interpretation favored by Einstein but is discounted by leading QM physicists. Similar realistic interpretations such as those proposed by de Broglie-Bohm and science philosopher Karl Popper assume real particles with real positions and real wave functions that do not need to “collapse” upon measurement. I tend to prefer these theories because of their realism.

“The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems.”

—A. Einstein in Albert Einstein: Philosopher-Scientist

Is the universe really indeterminate?

As a consequence of the probabilistic view of the subatomic world, Quantum Theory leads to a conclusion that events are not deterministic, but rather are indeterminate; that they just happen without actual connections between cause and effect. If deterministic, then events in the past must predict future events as causal antecedents. In the macro or “real” world, everything has a cause or causes, whether known or not. Determinism is the accepted view or apparent state of the real universe because, knowing the mass, position and the momentum of a (larger) body, plus all of the influences on it and the mathematical equations governing its movement, one can (in theory) calculate its position and speed at any other time in the future or the past. This is the basis of celestial mechanics by which planets, etc. are tracked.
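Determinism in this macroscopic sense can be illustrated with the simplest possible case: a projectile under constant gravity, where stepping the equations of motion forward reproduces the closed-form prediction. A minimal sketch; real celestial mechanics integrates the same way, with better methods:

```python
g = 9.81            # gravitational acceleration, m/s^2
x, v = 0.0, 20.0    # initial height (m) and upward velocity (m/s)
dt = 1e-4           # time step, s

# Step the equations of motion forward to t = 2 s (explicit Euler).
t = 0.0
while t < 2.0 - dt / 2:
    x += v * dt
    v -= g * dt
    t += dt

# Closed-form prediction: x(t) = v0*t - g*t^2/2
predicted = 20.0 * 2.0 - 0.5 * g * 2.0**2
print(round(x, 2), round(predicted, 2))   # both come out near 20.38
```

Given the initial state and the force law, the future position follows by calculation, which is exactly the deterministic picture described above.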

The question is: since we don't know for sure what the outcome according to QM will be, is it really indeterminate, or are there certain things we don't or can't know about the system that only make it look indeterminate? If it were possible to know all of the parameters and influences without disturbing the system, could we predict outcomes with certainty? According to the Copenhagen interpretation of quantum theory, the universe is really indeterminate at the atomic level and only LOOKS determinate at the macroscopic level. This eliminates the infinite series of cause and effect, and therefore the question of a first cause.

 

[1] See also Andrew Ancel Gray at http://modelofreality.org/cgi-bin/iet.cgi

[2] Side note: These equations assume massless particles and waves. Since real particles have mass, particle physicists assume there is a particle that gives all other particles mass. The Higgs boson is the assumed particle that creates mass when a particle is in a Higgs field.

[3] It should be noted that many thought experiments and most actual experiments have been done using light, not subatomic particles. The results of these actual experiments depend on your interpretation of Quantum Theory. See other interpretations that follow.

Major QM Interpretations

Carbon Dioxide is plant food and increases growth rates

Animals exhale carbon dioxide (CO2) and breathe oxygen (O2), while plants use CO2 and release O2. Professional greenhouses often add extra CO2 to increase growth rates. Increased plant growth removes much of the CO2 released into the atmosphere. Between pre-industrial and present times, studies show an average 15% increase in plant growth rates, with some species increasing many times that, e.g. young pine trees. Increased plant growth rates, the wider distribution of arable (farmable) land due to warming, and improved farming practices can solve the so-called overpopulation problem. If much of the data used in the climate models is based on proxy data from tree rings, and growth has been increased by CO2, does that mean the data is artificially skewed toward “warmer” results? Hmmm.

Plant Growth Chart 1.png

Figure 1. Comparison of Plant Growth at Pre-industrial CO2 levels (295 ppm in pink), at 383 ppm and 600 ppm (in blue) in Dry Wheat, Wet Wheat, Oranges, Orange Trees and Young Pine Trees. Note Percent increases.
Source: Review article: “Environmental effects of increased atmospheric carbon dioxide,” Willie Soon (1), Sallie L. Baliunas (1), Arthur B. Robinson (2), Zachary W. Robinson (2), Climate Research 13, 149-164 (1999)

(1) Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, Massachusetts 02138

(2) Oregon Institute of Science and Medicine, 2251 Dick George Road, Cave Junction, Oregon 97523

A. Critics created the “progressive nitrogen limitation hypothesis,” which assumes that increased growth rates of trees would deplete poor soils of nitrogen, thus limiting the positive effects of increased CO2. This is a scenario based on theory, not reality, which stubbornly refuses to support the hypothesis. Many studies[1] show that, contrary to the hypothesis, not only do roots grow deeper and produce more fine hairs, but soil and forest floor are enriched in nitrogen from biological sources, i.e. increased root mass and leaf litter supporting beneficial microbes in the soil.

B. One benefit of increased CO2 is that the stomata (openings) of leaves, which take in CO2 and emit water vapor and oxygen, open less widely, leading to less water loss, more efficient water use and improved tolerance of drier conditions. At elevated CO2 levels, stomata do not need to open as far to admit sufficient CO2 for photosynthesis, and as a result less water is lost through transpiration[2]. In controlled studies, an additional benefit of reduced stomatal openings is a reduction in ozone damage.

C. The increased growth rate of plants, from forests to sea algae, results in more of certain cooling aerosols being produced. These include carbonyl sulfide (COS) from soil and seas, which becomes highly reflective sulfate in the stratosphere and reflects more solar radiation back into space; iodo-compounds from sea algae, which nucleate clouds that reflect more solar radiation back into space; dimethyl sulfide (DMS) from the seas, which nucleates clouds; and other aerosols, such as isoprene from trees, with similar effects.

D. Hormesis is a phenomenon, commonly seen in medicine and nutrition, where a low concentration or dose produces a positive effect but a larger dose causes damage. For instance, some salt and water are necessary for good health, but beyond a certain point, ingesting more can be harmful or fatal. The effect of CO2 on plant life appears to be one such system. Increased CO2 obviously benefits plant life, but it is uncertain at what level CO2 might have a detrimental effect on growth. In professional greenhouses and experiments, even ten times the current level is still beneficial.

Hormesis Chart 1

Figure 2.  Illustration of how Carbon Dioxide is beneficial to plants through Hormesis. Horizontal Axis is Increasing CO2 level.

Source: Roy Spencer, “Earth’s Response to Increasing CO2: An Example of Hormesis?” drroyspencer.com, August 11, 2014

[1] Example: Phillips, R.P., Finzi, A.C. and Bernhardt, E.S. 2011. “Enhanced root exudation induces microbial feedbacks to N cycling in a pine forest under long-term CO2 fumigation”. Ecology Letters 14: 187-194.

[2] See review article: Kimball, B.A., Kobayashi, K. and Bindi, M., “Responses of agricultural crops to free-air CO2 enrichment,” Advances in Agronomy 77: 293-368 (2002).

Why CO2 is not the cause of climate change

Does Carbon Dioxide cause climate change?

a) Carbon dioxide is a minor player in any further warming. It is uniformly distributed in the atmosphere but absorbs infrared (heat) only in a very narrow wavelength range. The CO2 absorption range lies outside the range of most of the solar radiance that penetrates our atmosphere; it falls roughly within the wavelength range of heat re-radiated when solar radiation warms the earth’s surface. Atmospheric CO2 already absorbs almost all of the radiation that it can in that range. Most of the warming effect of CO2 has already occurred in the past and is one of the reasons our planet is not a frozen wasteland, so any increase in CO2 will have a very minor effect. With CO2 absorption near saturation, almost all of the re-radiated heat in that wavelength range is already being trapped, so additional CO2 can have little or no effect on future increases in temperature or on the supposed forcing of water vapor. With CO2 essentially eliminated as a source, any increases in temperature must come from some other source.
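The saturation argument can be illustrated with a toy Beer-Lambert calculation (the optical depths below are illustrative placeholders, not measured CO2 values): once a band is nearly opaque, doubling the amount of absorber adds almost nothing to the total absorbed.

```python
import math

def absorbed_fraction(tau):
    """Beer-Lambert: the fraction of band radiation absorbed at
    optical depth tau is 1 - exp(-tau)."""
    return 1.0 - math.exp(-tau)

# Illustrative optical depths only; each doubling adds ever less absorption:
for tau in (1, 2, 4, 8):
    print(tau, round(absorbed_fraction(tau), 4))
# 1 0.6321, 2 0.8647, 4 0.9817, 8 0.9997
```

The diminishing-returns shape is the general point; what the actual optical depth of the CO2 band is, and how band wings behave, is exactly what the debate is about.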

Absorption of gases – note narrow CO2 bands & broad water bands.

http://www.globalwarmingart.com/wiki/File:Atmospheric_Transmission_png

Source: Robert A. Rohde (Dragons flight at English Wikipedia) – This figure was created by Robert A. Rohde from published data and is part of the Global Warming Art project. http://www.atmo.arizona.edu/stud

This figure requires a bit of explaining. The top spectrum shows the wavelengths at which the atmosphere transmits light and heat, as well as the idealized “black body” curves for no absorption. It is a little misleading because it is not based on actual solar and earth data, but on two experimental heat sources: one centered at 5525 K (5252 °C or 9485 °F), the approximate temperature of solar radiation, and one centered in the range of 210 to 310 K (-63 °C to 36.8 °C, or -82 °F to 98 °F), the approximate temperature range of heat re-radiated from the earth. In reality, solar radiation power (Watts/m2/micron), shown in red, is six million times as strong as the power of the heat re-radiated from the earth, shown in blue.
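The two idealized curves are black-body spectra, so where each peaks follows from Wien's displacement law. A quick sketch, taking the stated 5525 K for the solar-like source and, as an assumption here, 288 K as a representative terrestrial temperature within the stated 210-310 K range:

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(T):
    """Wien's displacement law: lambda_max = b / T, returned in microns."""
    return WIEN_B / T * 1e6

print(round(peak_wavelength_um(5525), 2))  # solar-like source: ~0.52 um (visible)
print(round(peak_wavelength_um(288), 1))   # terrestrial source: ~10.1 um (infrared)
```

The solar-like curve peaks in the visible while the terrestrial curve peaks near 10 microns, which is why the CO2 band near 15 microns matters only on the re-radiated side.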

The other spectra are absorption[1] spectra. The top one shows the relative percent absorption by total atmospheric gases at various wavelengths, (note that this spectrum is practically the inverse of the transmission spectrum above it), and the spectra below that show the absorption wavelength ranges of individual atmospheric gases, but not the relative strength of that absorption in reality. As experimental, not real atmospheric, data they can only tell us the wavelength ranges of the absorption, not their relative strengths.

Note that CO2 absorbs in the 15 micron range[2], which lies both within the range of re-radiated heat and within the strong water vapor absorption band, of which the CO2 peak forms a mere shoulder. This is used to claim forcing of water vapor by CO2, without regard to the near-saturation level of CO2. Lesser CO2 peaks in the 2.7 and 4.3 micron ranges also contribute only in a minor way: the first is completely covered by a water vapor absorption peak and the second forms a shoulder on another water vapor peak. These minor peaks occur in a region where both solar radiation and re-radiation are minimal. Methane and nitrous oxide are also shown to be minor players, having narrow absorption ranges and low concentrations. Note too that ozone blocks most of the ultraviolet light from the sun.

b) Water is by far the most important greenhouse gas/liquid, in the form of vapor, high- and low-altitude clouds, rain and snow, which both absorb and reflect sunlight and heat re-radiated from the surface. Water vapor is not uniformly distributed in the atmosphere, being concentrated near the earth, but it strongly absorbs heat across a wide range of wavelengths. More heat means more water vapor evaporating from the oceans. Sounds pretty scary, doesn’t it? Contrary to what is assumed by climate modelers, who use this to claim forcing by CO2, the extra vapor doesn’t remain as vapor. It quickly forms low-altitude clouds that strongly reflect incoming sunlight and heat back into space. Any re-radiated heat from the surface that may be trapped by clouds is a small fraction of the incoming solar radiation, so blocking solar radiance has a net cooling effect that overwhelms any increase in trapped re-radiation. High-altitude clouds tend to keep heat from being re-radiated into space, but they have little effect because the increases in cloud cover due to warming are mostly in low-altitude clouds.

[1] Transmission and absorption are related by the formula A = log10(1/T) = -log10(T), where T is the transmittance.

[2] The horizontal axis is a log scale in microns so that the 1 to 10 range is in units of 1 and the 10 to 70 range is in tens.
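For reference, transmittance and absorbance are conventionally related by A = log10(1/T) = -log10(T); a quick check with illustrative transmittance values:

```python
import math

def absorbance(T):
    """Absorbance from transmittance: A = log10(1/T) = -log10(T)."""
    return -math.log10(T)

print(round(absorbance(0.5), 3))   # half the light transmitted -> A ~ 0.301
print(round(absorbance(0.01), 1))  # 1% transmitted -> A = 2.0
```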

NOTE: Republished from July 22, 2015 Post (media link broken and here restored)


Want to know more about this and other Modern Myths, including climate change, evolution, origin of life, Big Bang cosmology and quantum physics? See related posts on this website, or buy the book Perverted Truth Exposed: How Progressive Philosophy Has Corrupted Science in print or as e-book/Kindle online at the WND Superstore (the publisher) or at Amazon, Books-a-Million or Barnes & Noble.

What about Climate Change?

CLIMATE CHANGE: PHILOSOPHY DISGUISED AS SCIENCE

For a current example of philosophy and/or politics disguised as science, we need look no further than the climate change debate. Regardless of the merits of the case for the Anthropogenic Global Warming (AGW) theory, which holds that manmade carbon dioxide is the cause of climate change, the way it has been advocated is more akin to a political campaign than to a dispassionate search for truth. Political action is advocated that would drastically change our world, crippling industry and technological progress while leaving developing nations to flounder in their poverty. The two-pronged approach of this philosophy is to curtail both technology and population.

Developed countries are said to hog all the resources at the expense of developing countries. People are said to be the problem, and advanced societies must be brought down to near subsistence levels while primitive societies are not raised from their squalor. But is any of this true? Is it science or is it politics? Unfortunately, it is more about politics, philosophy and belief than about science. Are there too many people, and are subsistent societies cleaner and less ecologically harmful? Are developed nations really hogging all of the resources at the expense of developing ones? The answer to each of these questions is NO.

Typical political tactics employed include:

1.  Appeal to authority (a logical fallacy): the “consensus of scientists,” with only a very small, elite group deemed qualified to understand or comment on it.

2.  Appeal to ignorance (a logical fallacy): it must be increasing CO2 (carbon dioxide) because we can’t find any other cause – but we aren’t looking very hard at things like wind and water cycles or solar activity.

3.  Depend on statistics and computer models instead of real historical facts and experimental data. Remember GIGO – Garbage In, Garbage Out. A model is only as good as the data used (or left out), the mathematical calculations based on that data, and the assumptions and conclusions made.

4.  Use fear to sell the agenda: Use warnings of catastrophic consequences if action is not taken immediately by governments, industry and individuals based on models of a poorly understood atmospheric and planetary system. (The Con: “special today just for you but you must call within the next 30 minutes or you’ll miss out.” or “You can save the planet, so call your congressman today before it’s too late.”)

5.  Use guilt and shame to get people, governments and industries to “go green” and curtail the activities that use fossil fuels or otherwise emit CO2.

    1. Do you use incandescent light bulbs? Then you’re killing the planet because you’re consuming power from fossil-fuel-driven power plants. The government must phase out incandescent light bulbs in favor of LEDs or compact fluorescents (which contain mercury, a primary pollutant); we must regulate power plants to ensure maximum efficiency regardless of increased cost to the consumer, which hurts the poor most.
    2. Do you eat beef? You’re killing the planet because of methane from cows. The government must regulate the methane from cows.
    3. Do you fly, drive or use a ferry or train? You’re killing the planet because of fossil fuel consumption. The government must demand more efficient transportation – even if CAFE[1] standards demand lighter, less safe vehicles that are killing people.
    4. Do you use manmade fibers or plastics in any form? You’re killing the planet because it takes fossil fuels to produce them – never mind that most of these things get put in a landfill, which is a form of sequestering carbon. The government must regulate the industries that produce them – and the landfills, too.
    5. Do you use paper products? You’re killing the planet because trees that could consume CO2 are cut down to produce paper. Never mind that trees are farmed and harvested and new trees are planted to more than replace those used. Younger trees consume CO2 at a faster rate per ton than older trees.

6.  Use the press to promote their views and disparage the opposition in the form of propaganda. (TV, radio, internet, movies, books, magazine and journal articles, newspapers)

    1. Present a parade of “experts” and dire predictions as absolute settled facts, not as projections of a computer model.
    2. Sensationalize and exaggerate any “fact” that supports the global warming theme and downplay or fail to report on things that don’t.
    3. Declare that the polar bears are drowning because the sea ice is melting. Never mind that polar bears can swim up to 60 miles between feeding areas, that there is no net loss of sea ice over time and that polar bear numbers are increasing.
    4. Have a storm, flood or drought? Blame it on Climate Change. Make it sound biblical in proportions and the worst in history.
    5. Have a problem with mosquitos because of a particularly wet spring? It must be Global Warming.
    6. Are the seas rising at the same rate they have been rising for centuries[2]? Oh, my God, our cities will soon be underwater and we’re all going to drown!

7.  Demonize anyone who disagrees as a “denier,” with the unspoken implication that they are on a moral level with Holocaust deniers. Climate experts who aren’t on board with the whole global warming scenario, and who have DATA to back up their positions, are dismissed as “just weathermen” unqualified to comment, even though many of them have better credentials than many of the AGW proponents.

8.  Exclude from publication or grants any research that doesn’t agree with their conclusions, and then declare that there are few peer-reviewed papers on the other side. Never mind that government funding is overwhelmingly on one side. Journals such as Science and Nature have become advocates instead of unbiased scientific publications.

9.  Hide raw and analyzed data and analysis methods from other researchers who wish to verify the work. Real science always shares data and methods with other researchers so that results can be verified. This one includes “massaging” the data to make it say something it doesn’t. The Climate-gate scandal was all about hiding the data and massaging it to eliminate the Medieval Warm Period and the Little Ice Age and to create a “hockey stick” that was used to alarm governments into drastic control measures. When other researchers finally got hold of the (massaged) data[3] and analysis methods, it was discovered that any random set of numbers, when plugged into the formula, produced a similar “hockey stick.” This showed that the analysis algorithm on which the computer models were based was worse than worthless.

Recorded temperature throughout history (red) vs. IPCC model (blue)

Comparison of graphs published by earlier and later versions of the IPCC Assessment on Climate Change. Note that the later graph eliminated both the Medieval Warm Period and the Little Ice Age.  Source: http://wattsupwiththat.com/2010/03/10/when-the-ipcc-disappeared-the-medieval-warm-period/  Note also that the vertical temperature scale is less than a degree above and below today’s temperature, which is set near zero on the graph.

  • The data for the line marked Moberg is available through the NOAA website and is from: Anders Moberg (1), Dmitry M. Sonechkin (2), Karin Holmgren (3), Nina M. Datsenko (2) and Wibjörn Karlén (3), “Highly variable Northern Hemisphere temperatures reconstructed from low- and high-resolution proxy data,” Nature, Vol. 433, No. 7026, pp. 613-617, 10 February 2005. (1) Department of Meteorology, Stockholm University, SE-106 91 Stockholm, Sweden; (2) Dynamical-Stochastical Laboratory, Hydrometeorological Research Centre of Russia, Bolshoy Predtechensky Lane 11/13, Moscow 123 242, Russia; (3) Department of Physical Geography and Quaternary Geology, Stockholm University, SE-106 91 Stockholm, Sweden

10.  Ignore other factors that may contribute to or mediate the supposed effects of manmade carbon dioxide (such as water vapor, high and low altitude clouds, methane, increased plant growth, ocean sequestering or release, solar activity cycles, precession of earth’s tilt, Pacific Decadal Oscillation, etc.)

Manmade global warming, or climate change as it has come to be called[4], is said to be an established fact, and the consequences are dire unless global governments act now to mitigate its effects. The polar ice caps and glaciers will melt away; the oceans will rise and drown coastal and island regions; droughts, floods, storms and temperatures will all increase; and millions, dare I say billions, will die. But is any of it true, and is it science? The answer is NO. There has been general warming since the “Little Ice Age” of the 17th and 18th centuries, overlain with periods of lesser heating and cooling, but is the change good or bad? Is it unusually rapid now, and will these alleged trends continue into the future? Is it extreme enough to cause the dire effects predicted, and should global governments act now to prevent the predicted disastrous consequences? It is important to know whether these modeled projections are reliable predictions and whether real science is involved in any meaningful way. What do we really know about it?

NOTE: IPCC temperature chart was reinserted 8/22/2020 due to broken original link.

[1] CAFE Standards are Corporate Average Fuel Economy Standards first mandated by Congress in 1975 during the energy crisis. These standards have been continually tightened for even greater fuel efficiency. The result is lighter, smaller, less protective and less safe cars that are contributing to highway crash deaths.

[2] Seas have been rising since the Little Ice Age in the 17th and 18th centuries at about 7 inches per century.

[3] Raw data was actually destroyed to prevent others from getting it through a Freedom of Information request.

[4] Recently there have been efforts to redefine it as “climate disruption” or “climate catastrophe,” implying that a tipping point is near.

Did Darwin steal his theory of Evolution?

After his trip around the world on the Beagle, Darwin waited 23 years to present his theory of evolution. The myth is that he sat on the theory out of fear of repercussions. However, when Charles Darwin published On the Origin of Species in 1859, evolutionary theories had been around for a long time. At least a dozen evolutionary theories, including one by Erasmus Darwin, Charles’ grandfather, had been openly debated among scientists in the late eighteenth and early nineteenth centuries.

Alfred Russel Wallace 1862

It wasn’t until Alfred Russel Wallace, a naturalist and admirer, sent Darwin his own observations and theory of evolution, while still away on a voyage to the Malay Archipelago and Borneo, that Darwin’s theory was (hurriedly?) presented, acknowledging Wallace as co-discoverer, and published, establishing primacy over Wallace. Was Wallace the true originator of a theory that Darwin had overlooked in his own observations? Did Wallace provide the link that brought all of Darwin’s speculations together? Darwin’s claims were backed by his friends Charles Lyell and Joseph Hooker, so we may never know the truth. What is certain is that Wallace’s scientific reputation declined while Darwin’s grew. It is interesting to note that Wallace later rejected the theory as lacking both mechanism and sufficient evidence.

Is Cosmology Science?




 

Cosmologists tell the following story: 

When the universe began, it all fit into a very tiny volume that then violently “exploded” and began to expand, ultimately creating all of the energy, matter, space and time. Immediately after the Big Bang, when there was only very hot energy, there was an Inflationary Period, caused by a false vacuum with repulsive gravity, in which space expanded faster than the speed of light; then inflation ended. After that the universe continued to expand until it cooled enough for subatomic particles to condense out of energy. Both matter and antimatter particles were created, so that most of the particles annihilated each other, leaving only a small amount of leftover matter. When the universe expanded and cooled further, subatomic particles formed the lightest atoms, mostly hydrogen and helium with a tiny amount of lithium.

Only when atoms of hydrogen dominated the universe did it become transparent to radiation, e.g. light and X-rays. The very uniform Cosmic Microwave Background radiation is the cooled, redshifted remnant of the light from the Surface of Last Scattering, just before the universe became transparent. When objects such as stars formed that could produce ions, the neutral universe became a reionized plasma[1]. Much later, as bodies moved farther apart, expansion began to accelerate due to Dark Energy, a repulsive force counteracting gravity.
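In the standard account, the radiation temperature scales as 1/(1+z); a one-line sketch using the conventional textbook values (~3000 K at recombination, 2.725 K today) recovers the usually quoted redshift of the Surface of Last Scattering:

```python
T_EMIT = 3000.0  # conventional recombination temperature, K (textbook value)
T_NOW = 2.725    # measured CMB temperature today, K

# In an expanding universe the radiation temperature scales as 1/(1 + z):
z = T_EMIT / T_NOW - 1
print(round(z))  # ~1100, the conventionally quoted redshift of last scattering
```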

Ordinary matter and energy make up less than 10% of the universe. Dark Energy and Dark Matter, neither of which has been directly detected yet, make up the other 90-plus percent. Dark Matter, which interacts only through gravity, is held responsible for 1) the formation of large-scale structures, 2) galaxy rotation rates that do not decrease with distance from the center and 3) “closing” the universe to a finite size rather than an “open” universe that is infinite.


But is it science? What is the evidence for this scenario and are there other possible explanations that have been ignored?


Evidence for the Big Bang, Expanding Universe, Inflation, Acceleration, Dark Energy and Dark Matter: 

  • Solutions to Einstein’s general relativity field equations by Georges Lemaître and Alexander Friedmann in the 1920s predicted expansion (or contraction) of the universe. Alternative Possibility: There are many possible solutions to Einstein’s field equations, so choosing this one only fits a preconceived or preferred idea, seemingly confirmed by the redshift data (see below). The field equations are mere mathematical models of mathematically possible universes. Einstein’s own calculations included a Cosmological Constant that resulted in a static, non-expanding universe, which did not fit the desired progressive picture of others. At one point he supposedly renounced the Cosmological Constant, telling George Gamow that “the introduction of the cosmological term was the biggest blunder of his life,” although some others who knew him contended that, if he said it, it must have been a joke. Note that this so-called Einstein quote was only related by Gamow in 1970, not directly by Einstein, who died in 1955. Long after rejecting Einstein’s Cosmological Constant, cosmologists have introduced a new Cosmological Constant, attributed to Dark Energy, to explain an apparent acceleration of expansion.

 

  • Redshift of light increases with distance, indicating, by the Doppler Effect, that objects are receding and the space between is expanding. Alternative Possibility: Redshift could be due to factors other than the Doppler Effect. Longer-wavelength (redshifted) light has lower energy, so redshift could result from loss of energy rather than from being stretched by receding sources. We know that light is affected by gravity and other fields and forces. We also know that farther is older, so forces acting on light have been acting longer the farther the object is from us, causing ever-increasing redshift with distance. Fritz Zwicky proposed that gravitational forces sap energy from light as it passes. His detractors called it “Tired Light” and wrongly attributed it to collisions via the Compton Effect, which Zwicky expressly excluded as causing too much scattering. See the Hubble post.

 

  • Cosmic Microwave Background radiation, interpreted as extremely redshifted light from the Surface of Last Scattering. Alternative Possibility: The Cosmic Microwave Background (CMB) may just be the residual temperature of the universe from the stars within it. It may even be a local feature of our galaxy. The 2.7 K temperature of the CMB was accurately predicted as residual temperature from starlight long before it was discovered. Although the CMB is the strongest, other wavelengths are also present in the cosmic background; see the list and relative power of each wavelength region below. (Figure from Wikimedia, public domain, by user pkisscs.) Ref: “History of the 2.7 K Temperature Prior to Penzias and Wilson,” A. K. T. Assis, Instituto de Física “Gleb Wataghin,” Universidade Estadual de Campinas, 13083-970 Campinas, São Paulo, Brasil, and M. C. D. Neves, Departamento de Física, Universidade Estadual de Maringá, 87020-900 Maringá, PR, Brazil

Extragalactic-background-power-density

CGB = Cosmic Gamma Ray Background

CXB = Cosmic X-Ray Background

CUVOB = Cosmic UV-Visible Background

CIB = Cosmic Infrared Background

CMB = Cosmic Microwave Background

CRB = Cosmic Radio Wave Background

 

  • Large-scale uniformity of the universe as evidence of early inflation. Alternative Possibility: This is a red herring. The CMB is not that uniform, nearby galaxies are excluded, and the visible universe may be a tiny part of an infinite universe that is not expanding, so there is no need to explain the supposed uniformity.

 

  • Mismatch of Type 1a Supernovae standard-candle redshifts, interpreted as an acceleration of expansion and as evidence of Dark Energy, a repulsive force. Alternative Possibility: This is a no-brainer. No real intergalactic distances have ever been measured directly. They are calculated using a series of standard candles, so they may not be the actual distances, and error likely increases with distance. The standard-candle mismatch does not necessarily mean there is a change in speed, only that there are forces we don’t understand that may affect standard candles or redshift.

 

  • Dark Matter, which has never been detected, is proposed on the basis that, on the Big Bang timeline (13.7 billion years), there has not been enough time to form the large structures composed of galaxies without some unseen influence drawing galaxies together. Galaxy rotation is still a mystery, but if a Dark Matter halo is causing it, there must be an extraordinary balance in each galaxy to account for observations. Unlike the solar system, where outer planets move more slowly than inner planets according to standard gravitational calculations, the outer bodies of galaxies appear to revolve in near unison with the inner bodies. Dark Matter is proposed to account for this unsolved mystery. Alternative Possibilities: If the universe is not expanding and is both infinite and very, very old, large-scale structures are not a problem. Galaxy rotation, while still a mystery, may have more to do with galactic plasma magnetic fields than with gravity alone. Work is needed in this area but is not funded by leading cosmologists, who prefer to believe in magic foo-foo dust. It turns out that the universe is nearly flat, not severely curved and finite as first proposed. There is no need to “close” the universe if it is not expanding.
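The rotation puzzle can be stated numerically with a toy point-mass model (arbitrary units, a deliberate simplification of a real mass distribution): Newtonian gravity around a central mass predicts orbital speed falling as 1/sqrt(r), while observed galactic curves stay roughly flat.

```python
import math

def keplerian_speed(r, gm=1.0):
    """Circular-orbit speed around a central point mass: v = sqrt(GM / r)."""
    return math.sqrt(gm / r)

# Doubling the radius should cut the orbital speed by sqrt(2):
ratio = keplerian_speed(1.0) / keplerian_speed(2.0)
print(round(ratio, 3))  # 1.414 predicted, yet galactic curves show ratios near 1
```

That gap between the predicted 1/sqrt(r) falloff and the nearly flat measured curves is the observation that Dark Matter halos, modified gravity and plasma-based proposals all compete to explain.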

“Mathematicians deal with possible worlds, with an infinite number of logically consistent systems.  Observers explore the one particular world we inhabit.  Between the two stands the theorist.  He studies possible worlds but only those which are compatible with the information furnished by the observers.  In other words, theory attempts to segregate the minimum number of possible worlds which must include the actual world we inhabit. Then the observer, with new factual information, attempts to reduce the list still further.  And so it goes, observation and theory advancing together toward a common goal of science, knowledge of the structure and behavior of the physical universe.”

                 —Edwin Hubble, “The Problem of the Expanding Universe,” 1942


Unfortunately, this is not what we see in cosmology, which has become mired in dogma and has not allowed further progress that does not fit its nested set of assumptions. Redshift interpreted as recessional speed, and a preferred mathematical model that predicted expansion, are the basis of modern cosmology. Other views and data are not considered, funded or published. Conclusion: Cosmology as we know it is not science. It is a religiously held philosophy that supports the progressive anti-god agenda.


“I find it quite improbable that such order came out of chaos.  There has to be some organizing principle.  God to me is a mystery but is the explanation for the miracle of existence, why there is something instead of nothing.”

                                                                    —Alan Sandage, Cosmologist


“However, the most unhealthy aspect of cosmology is its unspoken parallel with religion. Both deal with big but probably unanswerable questions. The rapt audience, the media exposure, the big book-sale, tempt priests and rogues, as well as the gullible, like no other subject in science.”

 —Michael Disney, “The Case Against Cosmology,” published in General Relativity and Gravitation, Vol. 32, Issue 6, p. 1125, 2000


 

 

Did Hubble discover the Big Bang?

The Redshift Trap

Shortly after stars were first seen in galaxies, confirming that they are outside our galaxy, Edwin Hubble and others in 1929 discovered that the redshift of light from nearby galaxies was proportional to the distance as calculated from apparent brightness of Cepheid variable stars within the galaxies[1].  This is called Hubble’s Law and the proportionality constant is the Hubble Constant.  Because a redshift had been noted earlier in stars within our galaxy and had been attributed to movement of the source stars away from us, it was natural to assume, based on Hubble’s observations, that redshift of nearby galaxies was also caused by movement away from us.

This phenomenon is known as the Doppler Effect and is attributed to the fact that each wave of light is emitted just a little farther away as the source recedes, thus “stretching” the light to longer (redder) wavelengths.  Since farther is redder, farther must be faster by the Doppler Effect.
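The reasoning chain here - redshift, then Doppler velocity, then distance via Hubble's Law - can be sketched with illustrative numbers (the wavelengths and the H0 = 70 km/s/Mpc value below are assumptions for illustration, not measurements of any particular galaxy):

```python
C_KM_S = 299_792.458  # speed of light, km/s

def redshift(lam_obs, lam_emit):
    """z = (lambda_observed - lambda_emitted) / lambda_emitted."""
    return (lam_obs - lam_emit) / lam_emit

# Hydrogen-alpha emitted at 656.3 nm but observed at 662.9 nm (illustrative):
z = redshift(662.9, 656.3)
v = z * C_KM_S  # non-relativistic Doppler approximation, valid for small z
H0 = 70.0       # an assumed Hubble constant, km/s/Mpc
d = v / H0      # Hubble's Law: v = H0 * d
print(round(v), round(d, 1))  # recession speed (km/s) and distance (Mpc)
```

Note that every step after the measured wavelength shift is interpretation: the same z could, as discussed below, be read as something other than recession.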

Since galaxies are many light years distant, we see them as they appeared in the past. Were galaxies in the past moving faster than those in more recent times? At first it appeared so. Was the effect caused by the universe slowing down with time? If the expansion is slowing down, could it eventually stop and then start to contract? Instead, almost from the beginning, due to preconceived, mathematically based theories postulating a beginning from a much smaller size, the redshift was seen as an expansion of the universe, not as contraction or slowing. But what could explain the acceleration into the past?

After Einstein had redefined space as space-time, astronomers began to think of empty space as a thing, much as the preceding generation had talked about a space-filling aether.  Some theoretical astronomers, i.e. cosmologists, decided that the space between galaxies was itself expanding, making more distant objects only appear to be moving faster.  (Like raisins in rising bread dough: all move apart at the same rate, but the expanding spaces between them add up, so that farther appears faster.)  They never offered to explain the expansion of space; they simply assumed it as a given.
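The raisin-bread analogy can be made concrete with a toy calculation (the numbers are illustrative, not from the article): if every interval of dough stretches by the same fraction in a given time, then each raisin recedes from any chosen raisin at a speed proportional to its distance, without any raisin moving through the dough.

```python
# Raisin-bread sketch: uniform fractional expansion of the loaf makes
# recession speed proportional to distance, reproducing a Hubble-like law
# without any raisin moving "through" the dough.  Units are arbitrary.
positions = [0.0, 1.0, 2.0, 3.0]   # raisin positions along the loaf
rate = 0.1                          # fractional expansion per unit time

# After one time step, every position is stretched by the same factor.
new_positions = [x * (1 + rate) for x in positions]

# Recession speed of each raisin as seen from the raisin at the origin:
speeds = [new - old for new, old in zip(new_positions, positions)]
print(speeds)  # each speed equals rate * distance, so farther = faster
```

The same proportionality holds from the viewpoint of any raisin, not just the one at the origin, which is why the analogy implies no privileged center of expansion.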

After redshifts were found that implied speeds near the speed of light, Hubble doubted that recessional speed was responsible for the redshift of galaxies.  In later years, he speculated that the intergalactic medium interacting with the light, through gravitation, magnetism, or other effects, rather than expansion, was the cause of the redshift.  He is credited with discovering the expanding universe, and thus the Big Bang, but after his early work he spent the rest of his life working to refute it[2].


“[If the redshifts are a Doppler shift] … the observations as they stand lead to the anomaly of a closed universe, curiously small and dense, and, it may be added, suspiciously young. On the other hand, if redshifts are not Doppler effects, these anomalies disappear and the region observed appears as a small, homogeneous, but insignificant portion of a universe extended indefinitely both in space and time.”

                             — E. Hubble, Roy. Astron. Soc. M. N., 17, 506, 1937


Link:  Hubble and red shift by Vincent Sauvé

[1] “A Relation Between Distance and Radial Velocity Among Extra-Galactic Nebulae,” Edwin Hubble, Proceedings of the National Academy of Sciences, Vol. 15, p. 168, 1929.

[2] “The Problem of the Expanding Universe,” Edwin Hubble, American Scientist, Vol. 30, April 1942, No. 2

Darwin’s Problem with Ants

Darwin’s Claims

Leaf Cutter Ants – Worker ants of various castes and two large queens

Darwin thought cells were simple bags of gel.  He knew nothing of DNA or any other cellular structure.  He believed that inheritance worked through “gemmules” that each cell shed and that traveled to the gametes (sperm and egg); since every cell “voted,” the scheme was called pangenesis.  He believed that the life experiences of the parents were passed on to their offspring in this way, and that evolutionary change occurred incrementally as these life experiences accumulated over subsequent generations.

Darwin’s Dilemma

Colony insects were a problem for Darwin.  If life experiences were passed on, how could a queen ant, who has never foraged for food, pass on the behavior of the worker ants who hunt for food and bring it back to the colony?

His theory of evolution taught that use and disuse, along with adaptations to environmental changes experienced by parents, were passed on and were responsible for the differences seen between species: gradual changes over time, coupled with natural selection, a.k.a. survival of the fittest.  How is this any different from J-B Lamarck’s theory of acquired characteristics, which was discredited as having no foundation?  Did the acceptance of Darwin’s theory, and not Lamarck’s, have more to do with politics and marketing than with science?

Modern Evolutionary Biologists’ Dilemma

Understandably, modern evolutionary biologists found pangenesis and the inheritance of acquired traits embarrassing, so in the early 20th century they revised the theory to incorporate genetics, with an emphasis on natural selection, and called it Neo-Darwinism or the Modern Synthesis.  Later, DNA was incorporated as well.  Although Darwin is still revered as if he had everything right, this form of evolutionary theory differs greatly from the original Darwinian theory, except for the assumptions of natural selection and of unlimited gradual change producing new species over time.

 

Why Science and Religion are Compatible

 


“Try and penetrate with our limited means the secrets of nature and you will find that, behind all the discernible concatenations, there remains something subtle, intangible and inexplicable.”                                                    

                                                                                          — Albert Einstein


 


“In view of such harmony in the cosmos which I, with my limited human mind, am able to recognize, there are yet people who say there is no God. But what makes me really angry is that they quote me for support of such views.”

                                                                                          — Albert Einstein