Friday, January 27, 2012

Fick and Diffusion

Adolf Fick

I've been working on homework for a phase transformations class this week, and all of the problems have involved Fick's First and Second Laws of Diffusion, so he seemed an obvious choice of subject.  Adolf Fick actually began his studies as a mathematician and physicist, but switched to medicine and got his PhD in 1851.  His thesis, rather than being on fluids as one might expect, was on astigmatism.  He taught at Zurich and then at Würzburg until his retirement.

I haven't been able to find anything about Fick's personal life except that he had a son, but he was quite active in his scholarly life.  He continued to apply the principles of physics and math to his study of medicine, and published works on how joints are articulated, where muscles get energy, how to measure blood pressure, how to measure how much carbon dioxide we exhale, and how we sense light and color.

I'd like to briefly introduce two of his inventions before returning again to diffusion.  He designed the first tonometer, which measures intraocular pressure (the pressure within the eye).  If you have ever been to an optometrist, they generally measure this now by puffing air into your eye.  Fick's method involved direct contact: using Fick's tonometer, one would press a plate against the eye and measure the force required to flatten the cornea to a specific diameter.  Fick's particular contribution was developing a mathematical way of converting the area of contact between the eye and the plate, together with the force exerted on that plate, into a measure of the pressure within the eye.  This relationship is often called the Imbert-Fick Law, since it was also discovered by Armand Imbert (1850-1922).  The tonometer and Fick's studies of eyes led his nephew, also called Adolf Fick (1852-1937), to develop the first contact lenses in the late 1880s.[1]
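At its core, the Imbert-Fick Law is simple pressure arithmetic: for an idealized thin, dry membrane, the internal pressure equals the flattening force divided by the flattened area.  A minimal sketch, with made-up illustrative numbers rather than Fick's actual measurements:

```python
import math

def intraocular_pressure(force_newtons, flattened_diameter_m):
    """Imbert-Fick idea: pressure = force / area of the flattened circle."""
    area = math.pi * (flattened_diameter_m / 2) ** 2
    return force_newtons / area  # pressure in pascals

# Illustrative numbers only: ~7.5 mN of force flattening a 3 mm circle
p = intraocular_pressure(0.0075, 0.003)  # roughly 1060 Pa, i.e. ~8 mmHg
```

Real tonometry has to correct for corneal rigidity and tear film, which is part of why Fick's mathematical treatment was a genuine contribution.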

Another of Fick's major contributions to medicine is known as Fick's principle or Fick's method.  I had never heard of these before researching this post, but perhaps those of you with a background in medicine will be more familiar with them.  Fick's principle provides a method for measuring cardiac output, or how much blood is being pumped by the heart.  Fick reasoned that the rate at which the body consumes oxygen equals the rate of blood flow times the amount of oxygen each unit of blood picks up in the lungs.  By comparing the amount of oxygen in the blood entering and exiting the lungs, that per-unit uptake can be measured, and from this the rate of blood flow can be deduced as the total rate of oxygen consumption divided by the difference in oxygen content.  This principle can also be applied to blood flow through other organs.  As you can imagine, it is invasive, requiring drawing and analyzing blood as well as measuring oxygen consumption, but despite this, Fick's method is still referenced in the literature today.
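In symbols, Fick's principle says blood flow = oxygen consumption / arteriovenous oxygen difference.  A sketch with textbook-typical resting values (not Fick's own data; the function name is mine):

```python
def cardiac_output(vo2_ml_per_min, arterial_o2_ml_per_l, venous_o2_ml_per_l):
    """Fick's principle: flow (L/min) = O2 uptake / (arterial - venous O2 content)."""
    return vo2_ml_per_min / (arterial_o2_ml_per_l - venous_o2_ml_per_l)

# Typical resting values: ~250 mL O2 consumed per minute,
# ~200 mL O2 per litre of arterial blood, ~150 mL/L in venous blood
co = cardiac_output(250, 200, 150)  # -> 5.0 L/min
```

Five litres per minute is the usual textbook figure for resting cardiac output, which is a nice sanity check on the arithmetic.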

But back to diffusion.  Fick began his studies of diffusion after noticing that Thomas Graham (1805-1869), though he studied diffusion of salts in water and gases, had not developed a fundamental law describing diffusion.  Fick sought to rectify that.  He did not start from scratch, however, but recognized that the diffusion of atoms would be similar to the diffusion of heat.  Equations regarding heat transfer had been formulated in 1811 by Joseph Fourier (1768-1830).  Fick pointed out that these equations had already been applied by Georg Ohm (1789-1854) to the diffusion of electricity in a conductor, so he set out to apply them now to the question of diffusion of liquids.  He published his paper On Liquid Diffusion in 1855 at the age of 26.

[Image: Fick's original statement of his second law (top) and the modern equation from my textbook (bottom)]
He concluded his paper by saying that "such an hypothesis may serve as the foundation of a subsequent theory of these very dark phaenomena."[2]  On the whole, he was right.  The form of the second law has not changed from his initial statement except to be extended into three dimensions and to allow the diffusion coefficient to depend on concentration; his equation and the one found in my textbook today look strikingly similar.
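For reference, in modern notation Fick's first law relates the flux $J$ to the concentration gradient, and the second law follows from it by conservation of mass:

```latex
J = -D\,\frac{\partial c}{\partial x},
\qquad
\frac{\partial c}{\partial t} = D\,\frac{\partial^2 c}{\partial x^2}
```

where $c$ is the concentration and $D$ the diffusion coefficient.  The extended form, valid in three dimensions with a concentration-dependent $D$, is $\partial c/\partial t = \nabla \cdot (D\,\nabla c)$.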

I will leave you with that, and go and finish my diffusion homework.

[1] Mark, "Armand Imbert, Adolf Fick, and their tonometry law," Eye (2012) 26, 13-16.
[2] Fick, On Liquid Diffusion, Philosophical Magazine, 10, no. 63, July 1855.

Sunday, January 22, 2012

Schrödinger's Cat

While I try to write on people, this week I'll delve briefly into the subject of Schrödinger's Cat, which was raised by a comment on the introduction post (Thanks, Janet!).  I regret that I haven't gotten to the bottom of why Schrödinger picked a cat, but hopefully I can make the subject a little bit clearer, without saying something wrong.  I'm still trying to understand the nuances of quantum theory myself.

I usually try to have a picture to make the blog more interesting, but this time I'm going to start with a clip from The Big Bang Theory in which Sheldon explains the phenomenon of Schrödinger's Cat.

As Sheldon correctly states, Erwin Schrödinger (1887-1961) introduced his cat in a paper in 1935.  This was nine years after he had formulated his famous equation outlining the wave formulation of quantum mechanics. (That will undoubtedly be covered in more detail in a later post).  Below is the excerpt from the 1935 paper in which he describes the cat in the box, translated from the German.
One can even set up quite ridiculous cases. A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in a Geiger counter there is a tiny bit of radioactive substance, so small, that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The psi-function of the entire system would express this by having in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts.
Schrödinger introduced this thought experiment to show the "ridiculousness" of the concept of blurring and his discontent with the lack of determinism in quantum mechanics.  These terms require some background knowledge of quantum mechanics.

But first, the question you may be wondering about even if quantum mechanics isn't your cup of tea: why a cat?  Did Schrödinger have a thing against cats, or did he have a pet cat so this was the first thing that came to mind?  I have no idea.  But I do know that the choice of a cat fits the parameters of the experiment very well.  He needed an animal that would fit inside his hypothetical steel box, so elephants are out, but one that would also be quiet while in there so as not to give away its state of being before the box was opened.  I'm sure some would argue that a cat in a box would scratch, but just think how much noisier a bird or dog would be.  And lastly, the animal needs to be killed by the poison and be obviously dead or alive in the end.  The cat fits all of these.  Personally, I think a rabbit might have done better, but as I'm rather fond of them myself, I'm happy to let Schrödinger have his thought experiment cat.

Now back to the details of the experiment.  Quantum theory had introduced the idea that electrons and other particles can only be in certain states, not ones in between.  There is a modification to that, which is that particles can be in a situation called a superposition, where they are in two different states at the same time, though only one can ever be observed in a measurement.  The Copenhagen interpretation says that the wavefunction of the particle (or, using Schrödinger's words, the psi-function) gives the probability that, when measured, the particle will be found in each of the states making up the superposition.

The nature of this superposition is what Schrödinger is addressing in this thought experiment.  If the electron, or in this case the radioactive atom, is in two states at once, undecayed and decayed, the cat, whose life depends on the state of the atom, must also be in two states, corresponding to the two states of the atom.  This is referred to as entanglement (another term with lots of implications).  The idea that the cat is both alive and dead until we look in the box is obviously a problem, and shows Schrödinger's discontent with the probabilistic interpretation at the atomic scale.

One issue with the cat is the question of an observer and how measurements affect the wavefunction.  When measurements are made of quantum systems, they always give a determinate answer: the electron is in one state or the other.  This is called the collapse of the wavefunction.  But an electron prepared in the same state (though defining what counts as the same can be difficult) will give different answers when you measure it multiple times.  If it is in an equal superposition of states A and B, and you measure it numerous times (returning it to the same starting state each time), it will say it is in A half the time and in B the other half.  When you ask the electron which state it is in by measuring it, you become an observer.  So in the case of Schrödinger's cat, whose life is tied to the state of the nucleus, is the cat an observer of the nucleus, such that it forces the nucleus to no longer be in a superposition?  By this logic, though, if you are constantly measuring a radioactive element, will it ever decay?
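The 50/50 measurement statistics of an equal superposition can be mimicked with a plain coin-flip simulation.  To be clear, this is only a classical caricature of the outcome statistics, not a simulation of the wavefunction itself, and the function name is my own invention:

```python
import random

def measure_equal_superposition(n_trials, seed=0):
    """Sample n 'measurements' of an equal superposition of states A and B.

    Each measurement collapses to A or B with probability 1/2,
    mimicking the Copenhagen-interpretation statistics.
    Returns the observed fraction of A outcomes.
    """
    rng = random.Random(seed)
    outcomes = [rng.choice("AB") for _ in range(n_trials)]
    return outcomes.count("A") / n_trials

frac_a = measure_equal_superposition(10000)  # close to 0.5
```

Any single run gives a definite answer, A or B; only over many repeated preparations does the half-and-half probability show up, which is exactly the point being made above.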

So after perhaps raising more questions than I gave answers, that is the general gist of the cat.  I think one of the things that people often overlook is that this was a thought experiment preceded by the phrase "one can even set up quite ridiculous cases..."  Schrödinger was not saying that this is what actually happens to the cat by any means.  He was using it to show that he thought it naive to believe that electrons are smeared out over different states.

Even though few people, and I would not consider myself to be one of them, understand the full implications of what Schrödinger was trying to say, his cat has caught the public imagination.  Here are a few links to more popular and humorous references to the cat.
Viennese Meow, a short story from the point of view of the cat
The story of Schroedinger's cat, an epic poem

And for those with a more scholarly bent, here are a couple of papers on the subject, from least to most scholarly.
Schrödinger's Cat, a better description and certainly better illustrated
How to Create Quantum Superpositions of Living Things
The death of Schrödinger’s cat and of consciousness based quantum wave-function collapse, Carpenter and Anderson, Annales de la Fondation Louis de Broglie, Volume 31, no 1, 2006.

Friday, January 6, 2012

Celsius and the Centigrade

Anders Celsius
There is quite a bit of confusion in the United States about what the temperature is.  Today, for instance, I'd tell someone that it was forty-two degrees.  But if I told that to one of my international friends, she would look at me funny and, perhaps, begin working on a conversion from Fahrenheit.  Some true-blooded Americans also use the Celsius scale to give temperatures, as do most European countries.  But that isn't the end of it.  Some people, when asked which scale they just gave a temperature in, might say "centigrade," which just adds another term to the confusion.
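For reference, the conversion my international friend would have to do in her head is simple arithmetic (the function names below are mine):

```python
def fahrenheit_to_celsius(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

def celsius_to_fahrenheit(c):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

temp = fahrenheit_to_celsius(42)  # 42 F is about 5.6 C
```

The two fixed points line up as you'd expect: 32 F is 0 C (freezing) and 212 F is 100 C (boiling), which is a handy way to remember which scale a temperature was probably given in.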

The first part of this confusion originated in the eighteenth century, when two men, Anders Celsius and Daniel Fahrenheit, both developed thermometers with different scales.  Theirs were not the first, however. Galileo is usually credited with inventing the first thermometer, in 1592, but he did not develop a memorable scale to go with it.  In the seventeenth century, liquid thermometers were developed and could be made quite accurately, but no standard scale had come into use.  In the 1660s, Robert Hooke developed a thermometer scale that went from -7 to 13, and many other scientists also developed temperature scales.

Anders Celsius was a Swedish astronomer.  As a professor at the University of Uppsala, starting in 1730, he spent a lot of time making measurements.  In 1730 he published a paper on a new method of determining the distance of the earth from the sun, and in 1736 he participated in an expedition to Lapland to measure the arc of a meridian.  In conjunction with another expedition to Peru, this measurement confirmed Newton's theory that the earth bulges slightly at the equator.  He also measured the brightness of stars by seeing how many layers of a thin film it took before their light disappeared.  With the strength of these measurements behind him, he persuaded the University of Uppsala to let him build an observatory, which was completed in 1741.  This was the same observatory that Anders Ångström would be in charge of over a hundred years later.

None of those measurements would seem to necessitate having a thermometer, however, and this is in part because his job description as astronomer was different from what we think of today.  Certainly measuring the distance of the earth from the sun falls under astronomy, but back in the eighteenth century, so did measuring distances like the arc of a meridian, the changes in the height of seawater, and more meteorological measurements, including temperature.  Celsius developed his scale by setting the boiling point of water at 0 and the freezing point of water at 100.  He called the units "centigrade", because the distance between those points is divided into one hundred equal steps.  This was not radical, as that was the way most scales were created--by choosing two points and putting a certain number of degrees between them.  Celsius took his study of temperature one step further.  He was not content with just making a thermometer that worked in Uppsala, but wanted to better understand the nature of temperature and make sure that it was independent of location.  By making measurements in many places, along with measuring the atmospheric pressure, he determined that the freezing point, but not the boiling point, was independent of pressure, though neither depends on latitude.  He published a paper reporting his results entitled "Observations on two persistent degrees on a thermometer" in 1742.  He died only two years later of tuberculosis.

If you were paying attention when I mentioned what the two points of his scale were, you will notice that his scale went backwards from what it does today--the boiling point of water was 0 and the freezing point was 100.  The switched scale, as we know it today, was made popular by Carl Linnaeus, the Swedish botanist famous for originating the biological nomenclature used today.  He made measurements of the conditions in which plants grow, and for him, the freezing point of water was vital, since many plants die below that temperature.  In a paper published in 1745, only a year after Celsius's death, he used the same size of degree and the same fixed points as Celsius, but placed 0 at the freezing point of water and 100 at the boiling point.  The modern Celsius scale was born.  Well, not quite.

Celsius had called his scale the centigrade scale, and it continued to be called that for centuries.  It was not until 1948 that the Ninth General Conference on Weights and Measures renamed degrees centigrade to degrees Celsius, causing even more confusion.  So the next time someone says "degrees centigrade," they aren't wrong, per se, just outdated.

Sources and further information:
Temperature Scales from the early days of thermometry to the 21st century
History of the Celsius Temperature Scale
Anders Celsius
Linnaeus' Thermometer

Sunday, January 1, 2012

More than a Flask: Emil Erlenmeyer

An Erlenmeyer Flask
(Photo by Lucasbosch and
used under the CC licence)
When learning one's way about the laboratory and its equipment, it is easy to see how glassware such as the volumetric flask and graduated cylinder got their names.  The Erlenmeyer flask, however, is not at all descriptive, which perhaps stems from the fact that an easy name suggesting its use or shape would be hard to come by.  Maybe a swirling flask, or a narrow-neck flask?  But whatever else it could be called, what has come down to us in the lab today is the name of the flask's inventor, Emil Erlenmeyer.

Emil Erlenmeyer
Erlenmeyer's full name was Richard August Carl Emil Erlenmeyer, so it is easy to see why he went by only Emil.  He began his studies as a chemist, then turned to pharmacy in the 1840s and became an apothecary.  Recall, however, that this was hardly a time of sophisticated medicine.  Florence Nightingale and her pleas for sanitation would not come until the next decade, and the plethora of effective pills we have today was unheard of.

He returned to chemistry, however, and began an academic career in German universities, studying at the University of Giessen and then becoming a professor at the University of Heidelberg.  I haven't been able to find a good story about how he created the Erlenmeyer flask, which he did in 1861, but since he was working in the lab experimenting with chemicals, it is easy to see why he did so.  The shape has two main benefits: the wide bottom makes it easier to swirl liquids without them splashing out, and the narrow neck reduces the amount of air exchange.  He wasn't alone in working on improving laboratory equipment; Robert Bunsen, who had invented the Bunsen burner in the 1850s, was also a professor at the University of Heidelberg.

After the invention of his flask, he had many more years to enjoy the fruits of his labors and the greater ease that the flask gave him in performing his experiments.  He published, according to a German website, at least twenty-seven articles, including several on cinnamic acid and other topics in organic chemistry.[1]  Another notable organic chemist, August Kekulé, the first to write the structure of benzene as alternating double bonds, was also working at Heidelberg when Erlenmeyer was there.  The precise structure of molecules was a question that Erlenmeyer went on to study further.

Erlenmeyer's most memorable contribution to organic chemistry, though not as good for his name recognition as the flask, is "Erlenmeyer's Rule."  He developed this in the 1880s, by which point he was teaching at the Munich Polytechnic School, from which he would later retire.  When I took organic chemistry, I don't recall this principle being called Erlenmeyer's Rule, though a quick literature search revealed that the name is still used.  Erlenmeyer's Rule says that if a hydroxyl group is attached to a double-bonded carbon, the molecule will tautomerize to the ketone or aldehyde form.  I wish I had known it had a name in organic chemistry, because I tended to forget this when writing the result of a reaction, and it would have been much better to blame it on a rule!

[1] DFG on Emil Erlenmeyer.  And if you are in Germany, you can register and read them for yourself!  I, however, do not speak German and am now wishing that I did.