From Cosmos to Chaos: The Science of Unpredictability, by Peter Coles


Author: Peter Coles
Format: epub
Tags: Science
ISBN: 0198567626
Publisher: Oxford UP USA
Published: 2010-09-25T03:00:00+00:00


p(v) = 4\pi \left(\frac{m}{2\pi k T}\right)^{3/2} v^2 \exp\!\left(-\frac{m v^2}{2 k T}\right),

where m is the molecular mass; the mean kinetic energy is just 3kT/2.
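As a quick numerical check (a minimal sketch in Python; the mass and temperature are arbitrary illustrative values, roughly an argon atom at room temperature), the quoted distribution is properly normalised and its mean kinetic energy does come out as 3kT/2:

import numpy as np

# Maxwell-Boltzmann speed distribution p(v) as quoted above.
m = 6.63e-26        # molecular mass in kg (roughly an argon atom; illustrative)
k = 1.380649e-23    # Boltzmann's constant in J/K
T = 300.0           # temperature in K

v = np.linspace(0.0, 5000.0, 20001)    # speed grid in m/s; the tail beyond 5000 m/s is negligible here
p = 4*np.pi * (m / (2*np.pi*k*T))**1.5 * v**2 * np.exp(-m*v**2 / (2*k*T))
dv = v[1] - v[0]

print(np.sum(p) * dv)                          # ~1.0: the distribution is normalised
print(np.sum(0.5*m*v**2 * p) * dv / (k*T))     # ~1.5: the mean kinetic energy is 3kT/2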

This may be ringing vague bells about the way I sneaked in a different definition of entropy in Chapter 4. There I introduced the quantity

S = -\int p(x) \log \frac{p(x)}{m(x)} \, dx


as a form of entropy without any reference to thermodynamics at all. One can actually derive the Maxwell–Boltzmann distribution using this form too: the distribution of velocities in each direction is found by maximising the entropy subject to the constraint that the variance is constant (as the variance determines the mean square velocity and hence the mean kinetic energy). This means that the distribution of each component of the velocity must be Gaussian, and if the system is statistically isotropic each component must be independent of the others. The speed is thus given by

v = \sqrt{v_x^2 + v_y^2 + v_z^2}

and each of the components has a Gaussian distribution with variance kT/m. A straightforward simplification leads to the Maxwell–Boltzmann form.
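To sketch that simplification (a standard calculation, using the symbols already defined above): the three independent Gaussian components combine into the joint distribution

p(v_x, v_y, v_z) = \left(\frac{m}{2\pi k T}\right)^{3/2} \exp\!\left(-\frac{m (v_x^2 + v_y^2 + v_z^2)}{2 k T}\right),

and collecting together all velocities with the same speed v, which occupy a spherical shell of volume 4\pi v^2 \, dv in velocity space, gives

p(v) \, dv = 4\pi \left(\frac{m}{2\pi k T}\right)^{3/2} v^2 \exp\!\left(-\frac{m v^2}{2 k T}\right) dv,

which is exactly the Maxwell–Boltzmann form quoted above.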

But what was behind this earlier definition of entropy? In fact the discrete form (with uniform measure)

S = -I = -\sum_i p_i \log p_i

derives not from physics but from information theory.

Claude Shannon derived the expression for the information content I of a discrete probability distribution in which i runs from 1 to n. Information is sometimes called negentropy because in Shannon's definition entropy is simply negative information: the state of maximum entropy is the state of least information. If one uses logarithms to the base 2, the information entropy is equal to the number of yes-or-no questions required to take our state of knowledge from wherever it is now to one of certainty. If we are certain already we do not need to ask any questions, so the entropy is zero. If we are ignorant then we have to ask a lot; our entropy is maximized.
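To make the counting of questions concrete, here is a small illustration in Python (the helper entropy_bits and the example distributions are chosen purely for illustration):

import numpy as np

def entropy_bits(p):
    # Shannon entropy S = -sum_i p_i log2 p_i, measured in bits (yes-or-no questions).
    p = np.asarray(p, dtype=float)
    p = p[p > 0]            # outcomes with zero probability contribute nothing
    return -np.sum(p * np.log2(p))

print(entropy_bits([1/8] * 8))             # 3.0 bits: complete ignorance over 8 possibilities
print(entropy_bits([1, 0, 0, 0]))          # 0.0 bits: certainty, no questions needed
print(entropy_bits([0.5, 0.25, 0.25]))     # 1.5 bits: partial knowledge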

The similarity of this statement of entropy to that involved in the Gibbs algorithm is not a coincidence. It hints at something of great significance, namely that probability enters into the field of statistical mechanics not as a property of a physical system but as a way of encoding the uncertainty in our knowledge of the system. The missing link in this chain of reasoning was supplied in 1965 by the remarkable and much undervalued physicist Ed Jaynes. He showed that if we set up a system according to the Gibbs algorithm, i.e. so that the starting configuration corresponds to the maximum Gibbs entropy, the subsequent evolution of the Gibbs entropy is numerically identical to the macroscopic definition given by Clausius that I introduced right at the beginning of this Chapter. This is an amazingly beautiful result that remains surprisingly poorly known.
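For reference, the two quantities being identified here, written in a standard notation rather than necessarily that used earlier in the book, are the Clausius entropy change for a reversible transfer of heat \delta Q at temperature T, and the Gibbs entropy of the probability assignment p_i:

dS = \frac{\delta Q_{\mathrm{rev}}}{T}, \qquad S_{\mathrm{Gibbs}} = -k \sum_i p_i \log p_i ,

with k Boltzmann's constant; Jaynes's result is that, for a system prepared in the maximum-entropy state, the two agree numerically as the system evolves.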

This interpretation often causes hostility among physicists who use the word ‘subjective’ to describe its perceived shortcomings. I do not think subjective is really the correct word to use, but there is some sense in which it does apply to thermodynamics.


