













































1 Statistical Mechanics
The classical and quantum dynamics of simple systems with few degrees of freedom is well understood in that we know the equations of motion which govern their time evolution and, in most cases, we can solve these equations exactly or, in any case, to the required degree of accuracy. Even systems with many degrees of freedom can be analyzed; for example, many problems in electromagnetic theory, where the field degrees of freedom are large, can be solved using Maxwell's equations. However, it remains that there are physical systems, typified by possessing a very large number of degrees of freedom, for which analysis of this kind is inappropriate or, in practice, impossible. Whilst these systems are made up of identifiable constituents whose interactions are well known, different concepts are needed to describe the evolution of the system and the way it interacts with an observer. In particular, the concepts of heat, temperature, pressure, entropy, etc. must arise from a proper analysis – they are certainly not evident in the equations of motion, even though knowledge of the equations of motion is indispensable. These are ideas which come from a notion of statistical averaging over the detailed properties of the system. This is because they only make sense when thought of as properties which apply on the large scales typical of the observer, rather than the microscopic scales of the individual constituents. Statistical mechanics was developed to address this conceptual problem. It enables one to answer questions like "how do you calculate the physical properties of a system in thermal contact with a large reservoir of heat (known as a heat bath) at temperature T?"
Statistical mechanics deals with macroscopic† systems with very many degrees of freedom, for example a gas with a large number of particles or a complex quantum system with a large number of quantum states. A thermally isolated or closed system is one whose boundary lets heat neither in nor out. Consider such a system, S, that is not subject to any external forces. No matter how it is prepared, it is an experimental fact that after sufficient time S reaches a steady state in the sense that it can be specified by a set, Σ, of time-independent macroscopic, or thermodynamic, variables which usefully describe all the large-scale properties of the system for all practical purposes. The system is then said to be in a state of thermal equilibrium specified by Σ.
†By macroscopic we mean that the system is large on the scale of the typical lengths associated with the microscopic description, such as the inter-atomic spacing in a solid or mean particle separation in a gas. That being said, it can be the case that systems with as few as a hundred particles can be successfully described by statistical mechanics.
For example, for a gas isolated in this manner, Σ includes the pressure P, volume V, temperature T, energy E, and entropy S. Although the state of thermal equilibrium described by Σ is unique and time-independent in this sense, the individual degrees of freedom such as particle positions and momenta are changing with time. It is certainly not practical to know the details of how each degree of freedom behaves, and it is a basic tenet of statistical mechanics that it is not necessary to know them in order to fully specify the equilibrium state: they are not of any practical importance to calculating the accessible experimental properties of the equilibrium system. The task of statistical mechanics is to use averaging techniques and statistical methods to predict the variables Σ characterizing the equilibrium state. We are generally able to describe the dynamics of such systems at the microscopic level. For instance, we can specify the interaction between neighbouring atoms in a crystal or between pairs of particles in a gas. This description arises ultimately from our knowledge of quantum mechanics (and relativity), but it may also be legitimate and useful to rely on classical mechanics, which we view as derivable from quantum mechanics by the correspondence principle. We shall develop the ideas of statistical mechanics using quantum mechanics in what follows. We define
(a) The microscopic states of S are the stationary quantum states |i〉.
This is the Dirac 'bra' (〈i|) and 'ket' (|i〉) notation for states. The label i stands implicitly for the complete set of quantum numbers which specify the state uniquely. For example, H|i〉 = E_i|i〉. The matrix element of an operator X between states |i〉 and |j〉 is 〈i|X|j〉 which, in wave-function notation, is the same as ∫ dx φ_i^* X φ_j. Then 〈i|X|i〉 is the expectation value, 〈X〉, of X.
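As a simple illustration (an example added here, not in the original notes), for a harmonic oscillator with H|n〉 = ℏω(n + 1/2)|n〉 the expectation value of the energy in the state |n〉 is

〈n|H|n〉 = ∫ dx φ_n^* H φ_n = ℏω(n + 1/2) .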
(b) The macroscopic states of S are the possible states of thermodynamic equilibrium and are described by the corresponding set of thermodynamic variables, Σ. These are not states in the quantum mechanical sense but involve a vast number of microstates.
An important idea is the ergodic hypothesis (which has only been proved for a few systems) which states that:
A system S evolves in time through all accessible microstates compatible with its state of thermodynamic equilibrium.
This means that time averages for a single system S can be replaced by averages at a fixed time over a suitable ensemble E of systems, each member identical to S in its macroscopic properties. An important feature is that the ensemble, E, can be realized in different ways whilst giving the same results for the thermodynamic properties of the original system. The major realizations of E are: the microcanonical ensemble, in which every member has the same fixed energy; the canonical ensemble, in which members may exchange energy, so that each member is effectively in contact with a heat bath made up of the rest; and the grand canonical ensemble, in which members may exchange both energy and particles. In the canonical case, E consists of a very large number A of copies of S. A configuration assigns a_i members of the ensemble to microstate |i〉, subject to the constraints

∑_i a_i = A ,  ∑_i a_i E_i = A E ,  (1.2.1)

where AE is the fixed total energy of the ensemble, and the number of ways of realizing the configuration {a_i} is

W(a) = A! / ∏_i a_i! .  (1.2.2)
The important principle underlying statistical mechanics is that we assign equal a priori probability to each way of realizing each allowed configuration. This means that the probability of finding configuration {a_i} is proportional to W(a). In the light of the earlier discussion, the probability distribution for the {a_i} will be sharply peaked and the fluctuations about the peak value will be very small: ∼ O(1/√A). Since we can take A as large as we like, the average of any variable over the ensemble will be overwhelmingly dominated by the value it takes in the most probable configuration, {ā_i}. For fixed N, V and E, we shall associate the state of thermodynamic equilibrium of S with the most probable configuration of E that is consistent with the constraints (1.2.1). We find {ā_i} by maximizing log W(a) w.r.t. the {a_i}. Using Stirling's formula, log n! = n log n − n + (1/2) log 2πn, we have
log W ∼ A log A − A − ∑_i a_i (log a_i − 1) = A log A − ∑_i a_i log a_i ,  (1.2.3)
since ∑_i a_i = A. We maximize this expression subject to the constraints (1.2.1) by using Lagrange multipliers. Then we have
(∂/∂a_j) [ A log A − ∑_i a_i log a_i − α ∑_i a_i − β ∑_i a_i E_i ] = 0 , ∀j.  (1.2.4)
Thus

log a_j + 1 + α + βE_j = 0 =⇒ a_j = e^{−1−α−βE_j} .  (1.2.5)
We eliminate α using (1.2.1), and define Z, the canonical partition function:
A = ∑_i a_i = e^{−1−α} Z , with Z = ∑_i e^{−βE_i} = ∑_E Ω(E) e^{−βE} ,  (1.2.6)
where Ω(E) is the degeneracy of levels with energy E, and E runs over all distinct values in the spectrum of the system. The fraction of members of E found in microstate |i〉 in the macrostate of thermodynamic equilibrium is
ρ_i = a_i/A = e^{−βE_i}/Z .  (1.2.7)
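As a minimal worked example (added for illustration; the level splitting Δ is notation introduced here), consider a system with just two microstates, of energies 0 and Δ > 0. Then

Z = 1 + e^{−βΔ} , ρ_0 = 1/(1 + e^{−βΔ}) , ρ_1 = e^{−βΔ}/(1 + e^{−βΔ}) .

As β → ∞ (low temperature) the system is certainly in the ground state, ρ_0 → 1, whilst as β → 0 (high temperature) the two states become equally likely, ρ_0, ρ_1 → 1/2.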
This is the Boltzmann distribution and is thought of as the (quantum mechanical) probability of finding |i〉 in the state of thermodynamic equilibrium. The average 〈X〉 of a physical observable is then
〈X〉 = ∑_i 〈i|X|i〉 ρ_i .  (1.2.8)
For example,

〈E〉 = ∑_i 〈i|H|i〉 ρ_i = (1/A) ∑_i a_i E_i = E ,  (1.2.9)
from (1.2.1), as we expect it should. Z is very important since we shall see that it allows us to calculate the thermodynamic and large-scale properties of a system in equilibrium from the quantum mechanical description of the system. An important example is that
〈E〉 = − (∂ log Z/∂β)_V .  (1.2.10)
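To check this (a one-line derivation added here), differentiate Z = ∑_i e^{−βE_i} at fixed V:

− ∂ log Z/∂β = −(1/Z) ∂Z/∂β = (1/Z) ∑_i E_i e^{−βE_i} = ∑_i E_i ρ_i = 〈E〉 .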
Holding V fixed means holding the E_i fixed since these latter depend on V. As emphasized earlier, for the canonical ensemble we can think of each individual system as being in contact with a heat bath made up of the rest of the ensemble. What are the fluctuations in energy of a system? Using the above results, we have
− ∂〈E〉/∂β = ∂² log Z/∂β² = (1/Z) ∂²Z/∂β² − (1/Z²) (∂Z/∂β)² = 〈E²〉 − 〈E〉² = 〈(ΔE)²〉 .  (1.2.11)
For large systems typically 〈E〉 ∼ N and, as 〈E〉 depends smoothly on β, we expect ∂〈E〉/∂β ∼ N as well. The relative size of the fluctuations is then

ΔE_rms/〈E〉 ∼ N^{−1/2} ∝ V^{−1/2} .  (1.2.12)
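To get a feel for the numbers (an illustrative estimate added here), for a macroscopic sample with N ∼ 10^{22} particles this gives ΔE_rms/〈E〉 ∼ 10^{−11}: the energy of a system in contact with a heat bath is, for all practical purposes, sharp.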
We have found that energy plays a central rôle in determining the probability for finding a system in a given state. In principle, the ρ_i can depend on other variables too. For example, the particle number N_i, corresponding to the grand ensemble, or even angular momenta where relevant. In general, such quantities must be conserved and observable on the large scale.
Consider two systems S_a and S_b with volumes V_a, V_b and particle numbers N_a, N_b, respectively. Form a joint ensemble E_ab from the separate ensembles E_a and E_b by allowing all members of both to exchange energy whilst keeping the overall total energy fixed. By allowing S_a and S_b to exchange energy we are allowing for interactions between the particles of the two systems. These will be sufficient to establish equilibrium of S_a with S_b but otherwise be negligible. For instance, they take place across a boundary common to the systems and so they contribute energy corrections which scale with the boundary area and are therefore negligible compared with the energy, which scales with volume. Now form a composite system S_ab by joining S_a and S_b together. Let S_a have microstates |i〉_a with energy E^a_i and S_b have microstates |j〉_b with energy E^b_j. Because the extra interactions can be neglected in the bulk, S_ab will have microstates |ij〉_ab = |i〉_a |j〉_b with energy E^{ab}_{ij} = E^a_i + E^b_j. Now, E_a and E_b are separately in equilibrium characterized by Lagrange multipliers β_a and β_b, respectively. Also, E_ab is an equilibrium ensemble which is characterized by β_ab, and so we must have
ρ^{ab}_{ij}(E^a_i + E^b_j) = ρ^a_i(E^a_i) ρ^b_j(E^b_j) , ∀ E^a_i and E^b_j .  (1.3.1)

Then

e^{−β_ab(E^a_i + E^b_j)}/Z_ab = e^{−(β_a E^a_i + β_b E^b_j)}/(Z_a Z_b) , ∀ E^a_i and E^b_j .  (1.3.2)
So far β and β′ do not necessarily have the same value since they are just Lagrange multipliers imposing the energy constraint in two separate maximizations. However, the relations between {c_ij} and {a_i, b_j} given in (1.3.4) can only be satisfied if β′ = β. It then follows that (c.f. (1.3.1))
Z_ab(β) = Z_a(β) Z_b(β) , and ρ^{ab}_{ij} = ρ^a_i ρ^b_j .  (1.3.10)
The temperature can be defined by

β = 1/kT ,

where k is Boltzmann's constant and T is the absolute temperature in kelvin. Because the analysis applies to any pair of systems, k is a universal constant: k = 1.38 × 10^{−23} J/K. This definition ensures that the average energy increases as T increases. For a dilute gas with N particles this implies Boyle's law, PV = NkT, as we shall see.
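For orientation (a numerical note added here), at room temperature T ≈ 300 K one has kT ≈ 1.38 × 10^{−23} × 300 ≈ 4.1 × 10^{−21} J ≈ 1/40 eV, which sets the typical thermal energy scale per degree of freedom.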
A system undergoes an adiabatic change if
(i) the system is thermally isolated so that no energy in the form of heat can cross the system boundary: δQ = 0,
(ii) the change is caused purely by external forces acting on the system,
(iii) the change takes place arbitrarily slowly.
By heat Q we mean energy that is transferred in a disordered way by random processes. In contrast, work W is energy transferred through the agency of external forces in an ordered way. Work can be done on the system by moving one of the walls (like a piston) with an external force, which corresponds to changing the volume. If the change is adiabatic, no energy is permitted to enter or leave the system through the walls. Because the motion is slow the applied force just balances the pressure force P A (A is the area of the wall), and the work done on the system is then δW = −P AδL = −P δV. If the system is in a given microstate with energy E_i the change is slow enough that it remains in this state – there is no induced quantum-mechanical transition to another level. This is the Ehrenfest principle. The change in energy balances the work done and so

δW_i = δE_i = (∂E_i/∂V) δV .  (1.4.1)
For a system in thermal equilibrium the work done in an adiabatic change is the ensemble average of this result:
−P δV = δW = ∑_i ρ_i (∂E_i/∂V) δV =⇒ P = − ∑_i ρ_i ∂E_i/∂V .  (1.4.2)
This is reasonable since if P exists then it is a thermodynamic variable and so should be expressible as an ensemble average. These ideas are most easily understood in the context of an ideal gas. Consider the quantum mechanics of a gas of N non-interacting particles in a cube of side L. The single-particle energy levels are E_n = ℏ²n²/2mL² with n ∈ Z³. Note that for given n, E_n ∝ V^{−2/3} – generally, we expect E = E(V). Then
∂E_i/∂V = −(2/3) E_i/V =⇒ P = (2/3)(1/V) ∑_i ρ_i E_i = (2/3) ε ,  (1.4.3)

where ε = E/V is the energy density. This is the formula relating the pressure to the energy density for a non-relativistic system of particles. We also deduce the energy conservation equation for an adiabatic change (δQ = 0):
P δV + δE = 0. (1.4.4)
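As an aside (a short derivation added here), combining (1.4.3) and (1.4.4) for the ideal gas gives (2/3)(E/V) δV + δE = 0, so E ∝ V^{−2/3} along an adiabatic change, and hence, using P = (2/3)E/V,

P V^{5/3} = constant ,

the familiar adiabat of a monatomic ideal gas.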
For the ideal gas an adiabatic change does not change the occupation probabilities ρ_i of the energy levels – just the energies of the levels change. This is the Ehrenfest principle. For systems with E_i(V) ∝ V^{−s} the ρ_i are still Boltzmann equilibrium probabilities in V + δV but with a different temperature – s = 2/3 for the ideal gas. If the change is not adiabatic then heat energy can enter or leave the system. Since E = ∑_i ρ_i E_i, in the most general case we have

δE = ∑_i ρ_i δE_i + ∑_i E_i δρ_i ,

or

δE = −P δV + ∑_i E_i δρ_i  (note that ∑_i δρ_i = 0).  (1.4.5)
To deal with this eventuality we introduce the concept of entropy. Define the entropy, S, of a system by

S = (k/A) log W(a) ,  (1.4.6)

where, as before, W(a) is the number of ways of realizing the ensemble configuration {a_i}. Since there are A members in the ensemble this is the entropy per system. Note that the ensemble is not necessarily an equilibrium one. However, with this definition we see that for an equilibrium ensemble

S_eq = (k/A) log W_max ,  (1.4.7)
and from (1.4.6) and (1.4.7) we deduce that S is maximized for an equilibrium ensemble. Using (1.2.2) we get

S = (k/A) ( A log A − ∑_i a_i log a_i ) = (k/A) ( A log A − ∑_i a_i log ρ_i − log A ∑_i a_i )

=⇒  S = − k ∑_i ρ_i log ρ_i ,  (1.4.8)
where we have used ρ_i = a_i/A and ∑_i a_i = A. This is an important formula which can be derived in many different ways from various reasonable requirements.
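For example (an illustration added here), if the system is equally likely to be found in any one of Ω accessible microstates, ρ_i = 1/Ω, then (1.4.8) gives

S = k log Ω ,

which is Boltzmann's expression for the entropy in terms of the number of accessible microstates.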
Note that neither δQ nor δW is an exact derivative even though dE, dS and dV are. For an adiabatic change we have dS = δQ/T = 0. So the definition of an adiabatic change may be taken to be dS = 0. From (1.4.10) we see that
(∂E/∂S)_V = T , (∂E/∂V)_S = −P .
For a system in equilibrium we can derive S ≡ S_eq from the partition function. We have

S = −k ∑_i ρ_i log ρ_i = −k ∑_i ρ_i (−βE_i − log Z) = E/T + k log Z .  (1.4.14)
We define the free energy, F , to be
F = − kT log Z =⇒ F = E − T S. (1.4.15)
In differential form we then have
dF = dE − T dS − SdT ,
or, using the first law in the form (1.4.10) for nearby equilibrium states
dF = −P dV − S dT =⇒ P = − (∂F/∂V)_T , S = − (∂F/∂T)_V .  (1.4.16)
We shall see that the idea of the free energy, F, is a very important one in thermodynamics. From (1.4.16) we deduce that F is a function of the independent variables V and T, and hence

F = F(V, T) =⇒ P = P(V, T) , S = S(V, T) .  (1.4.17)

Note also that E = E(S, V) since S, V are the independent variables. Thus the thermodynamic state of equilibrium characterized by P, V, T, S can be determined from the partition function, Z, through the idea of the free energy, F. The equation which relates P, V and T, in either of the forms shown in (1.4.16) and (1.4.17), is known as the equation of state. Remember, all these quantities depend implicitly on the fixed value of N and hence on the particle density n = N/V.
Suppose an equilibrium system is placed next to an exact copy and the partition be- tween them is withdrawn to make the system twice the size. Variables that double in this process are called extensive and those whose value is the same as the original are called intensive. For example, N, V, S, E are extensive and P, T are intensive.
To elaborate, consider two equilibrium systems at temperature T, S_1 and S_2, which are identical in every way except that they have volumes V_1 and V_2, respectively. Put them next to each other but do not remove the partition. Then for the whole system, S_12, we have V_12 = V_1 + V_2, N_12 = N_1 + N_2. Since Z_12 = Z_1 Z_2, we see that E = E_1 + E_2, S = S_1 + S_2, etc. If these variables are extensive as claimed then these equalities will hold after the partition is removed. We expect this to be the case if S_1, S_2 are homogeneous (which means S_12 will be, too). The particles in each system are of the same nature and are not distinguishable, and so by removing the partition we do not mix the systems in a detectable way. Hence, the entropy S_12 should remain unchanged. Then we expect that V, E, S are proportional to N. Clearly, S_1 and S_2 have the same T, P and so T, P are intensive. This expectation can fail, for example,
(a) if surface energies are not negligible compared with volume energies – the partition matters too much,
(b) there are long-range forces such as gravity. The systems are no longer homogeneous. Think of the atmosphere – different parcels of air at different heights may have the same temperature but the pressure and density are functions of height.
(i) Let S be the sum of two systems, S = S_1 + S_2, neither of which is necessarily in equilibrium. Although the partition function is not useful we can still show that the entropy defined by (1.4.8) is additive. Let the level occupation probabilities for S_1, S_2 be ρ_i, σ_j, respectively. Then
S = −k ∑_{ij} ρ_i σ_j log(ρ_i σ_j)
  = −k ∑_{ij} ρ_i σ_j (log ρ_i + log σ_j)
  = −k ( ∑_i ρ_i log ρ_i + ∑_j σ_j log σ_j )
  = S_1 + S_2 ,  (1.6.1)
where we have used ∑_i ρ_i = ∑_j σ_j = 1.

(ii) From the definition of entropy (1.4.6) we see that the entropy is a maximum for a system in equilibrium. Indeed, it was the entropy that we maximized to determine the equilibrium state. More on this later.
(iii) Of the four variables, P, V, T, S, any two can be treated as independent on which the others depend. We have naturally found above V, T to be independent, but we can define G = F + P V which leads to
dG = dF + P dV + V dP =⇒ dG = V dP − S dT .  (1.6.2)
G is called the Gibbs function. Clearly, G = G(P, T ) and now V and S are functions of P, T. This merely means that the experimenter chooses to control P, T and see how V, S vary as they are changed. The transformation G = F + P V , which changes the set of independent variables, is called a Legendre transformation.
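In the same way (a remark added for completeness), one can define the enthalpy H = E + PV, another Legendre transformation, for which dH = T dS + V dP, so that H = H(S, P); this is the function that appears in (2.2.2.3) below.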
The function g(ε) is the density of states: g(ε)dε counts the single-particle states with energy in ε → ε + dε. A general analysis in D dimensions gives

g(ε) = g_s (V S_D/(2πℏ)^D) p^{D−1} dp/dε ,  (1.7.1.7)
where S_D is the surface area of the unit sphere in D dimensions, and V = L^D. Again for non-relativistic particles we can use (1.7.1.3) to get

D = 1:  g(ε) = g_s (V/2π) (2m/ℏ²)^{1/2} ε^{−1/2} ,
D = 2:  g(ε) = g_s (V/4π) (2m/ℏ²) .  (1.7.1.8)
Note that for D = 2, g(ε) is a constant independent of ε. For relativistic particles of rest mass m the analysis is identical and we can use the general formula (1.7.1.7) with the relativistic dispersion relation ε(p) = √(p²c² + m²c⁴). In particular, photons have m = 0 and g_s = 2 since there are two distinct polarization states. In this case, and for D = 3, we get
g(ε) = V ε²/(π²ℏ³c³) .
Since ε = ℏω we get the density of states in ω −→ ω + dω, g(ω), to be

g(ω) = g(ε) dε/dω = V ω²/(π²c³) .
1.7.2 the partition function for free spinless particles
Consider N particles in volume V = L³. Then the single-particle partition function is

z = ∑_i e^{−βε_i} −→ ∫_0^∞ dε g(ε) e^{−βε} .  (1.7.2.1)
From (1.7.1.6), and setting y = βε, this gives

z = (V/4π²) (2mkT/ℏ²)^{3/2} ∫_0^∞ dy y^{1/2} e^{−y} = V (2πmkT/h²)^{3/2} .  (1.7.2.2)
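To get a sense of the size of z, here is a short numerical sketch (added in Python; the constants are standard SI values, and the choice of helium at room temperature in one litre is purely illustrative):

import math

# Standard SI values
k = 1.380649e-23     # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J s
m = 6.646e-27        # mass of a helium-4 atom, kg (illustrative choice)

T = 300.0            # temperature, K
V = 1.0e-3           # volume, m^3 (one litre)

# Single-particle partition function of eq. (1.7.2.2): z = V (2 pi m k T / h^2)^(3/2)
z = V * (2.0 * math.pi * m * k * T / h**2) ** 1.5
print(z)             # ~ 8e27: an enormous number of thermally accessible states

The huge value of z is what justifies the use of Boltzmann statistics for ordinary gases.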
In D dimensions a similar result holds and we find from (1.7.1.7)

z ∝ (kT)^{D/2} .  (1.7.2.3)
Now, from before, we have Z = z^N and so log Z = N log z. Then

E = −N (∂/∂β) log z = −N (∂/∂β)(−(3/2) log β + const.) = (3/2) N kT .  (1.7.2.4)
This is an important result of classical statistical mechanics and is called the equipartition of energy, which states that

the average energy per particle degree of freedom is (1/2) kT.
Combining this result with PV = (2/3)E from (1.4.3) we get the equation of state for a perfect gas:

PV = NkT  (Boyle's law).  (1.7.2.5)

Note that this result holds in D dimensions since the generalization of (1.4.3) is P = (2/D) ε.
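As a numerical check (an added illustration), one mole (N = 6.02 × 10^{23}) at T = 273 K in V = 22.4 litres gives P = NkT/V ≈ 1.0 × 10^5 Pa, i.e. about one atmosphere, as expected.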
1.7.3 entropy and the Gibbs paradox
Using (1.4.15) and (1.4.16), which state that

F = −kT log Z , S = − (∂F/∂T)_V ,
we get

S = Nk log z + (3/2) Nk = Nk log V + (3/2) Nk log(2πmkT/h²) + (3/2) Nk ,  (1.7.3.1)
which is not extensive since V ∝ N. This is the Gibbs paradox. The resolution is to realize that using Z = z^N treats the particles as distinguishable since it corresponds to summing independently over the states of each particle. In reality the particles are indistinguishable, and we should not count states as different if they can be transformed into each other merely by shuffling the particles. The paradox is resolved by accounting for this fact and taking Z = z^N/N!. Using Stirling's approximation, we find
S = Nk log(V/N) + (3/2) Nk log(2πmkT/h²) + (5/2) Nk ,  (1.7.3.2)
which is extensive. We should note at this stage that dividing by N! correctly removes the overcounting only in the case where the probability that two particles occupy the same level is negligible. This is the situation where Boltzmann statistics is valid, and it certainly holds when T is large enough or the number density is low enough. In other cases we must be more careful, and we shall see that the consequences are very important.
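The extensivity of (1.7.3.2) is easy to verify numerically (a Python sketch added here; the helper entropy and the helium-at-300-K numbers are ours, for illustration only):

import math

k, h = 1.380649e-23, 6.62607015e-34   # Boltzmann and Planck constants (SI)
m, T = 6.646e-27, 300.0               # helium-4 mass (kg) and temperature (K)

def entropy(N, V):
    # Sackur-Tetrode form of eq. (1.7.3.2), in J/K
    return N * k * (math.log(V / N)
                    + 1.5 * math.log(2.0 * math.pi * m * k * T / h**2)
                    + 2.5)

N, V = 1.0e22, 1.0e-3
print(entropy(2 * N, 2 * V) / entropy(N, V))   # -> 2.0: doubling N and V doubles S

Because S depends on V only through V/N, doubling both leaves the entropy per particle unchanged, exactly as extensivity requires.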
We saw in section 1.4 from the definition of entropy (1.4.6) and (1.4.7), that S is maximized for an equilibrium ensemble. This is a general principle which states that
For a system with fixed external conditions the entropy, S, is a maximum in the state of thermodynamic equilibrium.
By "fixed external conditions" we mean variables like V, E, N for an isolated system or V, T, N for a system in contact with a heat bath at temperature T. For example, let S_1 be a perfect gas of volume V_1 with N_1 molecules of species 1, and S_2 be a perfect gas of volume V_2 with N_2 molecules of species 2. Both systems are initially separate, each in equilibrium subject to its own fixed constraints. When they are combined and the partition removed, the entropy of the final equilibrium state is maximized subject to fewer constraints, and so it cannot be smaller than the initial entropy, which was maximized subject to the four constraints (it may be unchanged, c.f. the Gibbs paradox). Hence, the entropy will generally increase when the systems are combined. Mixing just corresponds to removal of some constraints.
(iv) We have not yet discussed how a system reaches equilibrium. In practice, it takes a certain time to evolve to equilibrium and some parts of the system may reach equilibrium faster than others. Indeed, the time constants characterizing the approach to equilibrium may differ markedly. This can mean that although equilibrium will be established it might not occur on time scales relevant to observation. For example, some kinds of naturally occurring sugar molecules are found in only one isomeric form or handedness. In equilibrium there would be equal amounts of each isomer, but this is not relevant in nature. The description of the system at equilibrium need not include those forces which are negligible for most purposes. For example, exactly how a gas molecule interacts with the vessel walls is a surface effect and does not affect any thermodynamic property in the infinite volume limit. However, it is generally assumed that it is such small randomizing interactions which allow energy to be transferred between levels, allowing the most probable state to be achieved. They are vital to the working of the ergodic hypothesis. This is by no means certain to hold, and in one dimension it is quite hard to establish that it does. The point is that if a closed system is not in equilibrium then, given enough time, the most probable consequence is an evolution to more probable macrostates and a consequent steady increase in the entropy. Given ergodicity, the probability of transition to a state of higher entropy is enormously larger than to a state of lower entropy. Chance fluctuations to a state of lower entropy will never be observed.
The law of increase of entropy states that: If at some instant the entropy of a closed system does not have its maximum value, then at subsequent instants the entropy will not decrease. It will increase or at least remain constant.
2 Thermodynamics
Consider a volume V of gas with fixed number N of particles. From the work on statistical mechanics we have found a number of concepts which enable us to describe the thermodynamic properties of systems:
(i) E: the total energy of the system. This arose as the ensemble average energy, which has negligible fluctuations. It is extensive.
(ii) S: the entropy. This is a new idea which measures the number of microstates contributing to the system macrostate. S is a maximum at equilibrium and is an extensive property of the system. It has meaning for systems not in equilibrium.
(iii) For nearby equilibrium states dE = T dS − P dV.
(iv) F : the free energy. It is naturally derived from the partition function and, like E, is a property of the system. For nearby equilibrium states dF = −SdT − P dV. It is extensive.
(v) All thermodynamic variables are functions of two suitably chosen independent variables: F = F(T, V), E = E(S, V). Note, if X is extensive we cannot have X = X(P, T).
(vi) Equations of state express the relationship between thermodynamic variables. For example, P V = N kT for a perfect gas.
It is because E, S, P, V, T, F etc. are properties of the equilibrium state that we are able to write infinitesimal changes in their values as exact derivatives. The changes in their values are independent of the path taken from an initial to final equilibrium state. This is not true of the heat δQ absorbed or emitted in a change between nearby equilibrium states – it does depend on the path: δQ is not an exact derivative. However, dS = δQ/T is an exact derivative, and 1/T is the integrating factor just like in the theory of first-order o.d.e.s. Energy conservation is expressed by the first law:
δE = δQ + δW (infinitesimal change) , ∆E = ∆Q + ∆W (finite change) ,  (2.1)
where δQ (∆Q) is the heat supplied to the system and δW (∆W) is the work done on the system. Suppose the state changes from E(S_1, V_1) to E(S_2, V_2); then in general we expect

∆W ≥ − ∫_{V_1}^{V_2} P dV  (2.2)
because additional work is done against friction, generating turbulence etc. However, dE = T dS − P dV, which implies

∆Q ≤ ∫_{S_1}^{S_2} T dS .  (2.3)
These inequalities hold as equalities for reversible changes. This will hold if the change is non-dissipative and quasistatic. Quasistatic means a change so slow that the system is always arbitrarily close to a state of equilibrium. The path between initial and final states can then be plotted in the space of relevant thermodynamic variables. This is necessary because otherwise thermodynamic variables such as P, T etc. are not defined on the path and we cannot express δW = −P δV, as must be the case for reversibility. In this case, no energy is dissipated in a way that cannot be recovered by reversing the process. For an irreversible change the extra increase in entropy is due to the extra randomizing effects of friction, turbulence etc. Simple examples are rapid piston movement, the mixing of distinguishable gases, and the free expansion of a gas into a vacuum. For an isolated, or closed, system ∆Q = 0 but ∆S ≥ 0, the equality holding for a reversible change. In real life there is always some dissipation and so even if δQ = 0 we expect δS > 0. These ideas are compatible with common sense notions. For reversible transitions the system can be returned to its original state by manipulating the external conditions.
For any three variables x, y, z related by a functional dependence z = z(x, y), we have the identities

(∂z/∂x)_y = 1/(∂x/∂z)_y , and (∂z/∂x)_y = − (∂z/∂y)_x (∂y/∂x)_z .

Multiply the last equation by (∂x/∂z)_y and use the first equation to get

(∂z/∂y)_x (∂y/∂x)_z (∂x/∂z)_y = −1 .
Substitute any three of P, V, T, S for x, y, z to relate the terms in Maxwell's relations. We note that E, F, H, G are all extensive. Note that G explicitly only depends on the intensive quantities T, P and so its extensive character is due to N, and we can write G = μ(P, T) N, where μ is called the chemical potential. More on this later. Consider E = E(V, S) and change variables (V, S) −→ (V, T). Then we have

(∂E/∂V)_T = (∂E/∂V)_S + (∂E/∂S)_V (∂S/∂V)_T .

Using dE = T dS − P dV this gives

(∂E/∂V)_T = −P + T (∂S/∂V)_T ,

and using (2.2.1.6) (from differentiability of F) we have

(∂E/∂V)_T = −P + T (∂P/∂T)_V .

For a perfect gas the equation of state is PV = NkT, which implies

(∂E/∂V)_T = −P + T (Nk/V) = 0 .

Since we only need two independent variables, we have in this case that E = E(T) – a function of T only. This is an example of equipartition where E = (3/2) NkT.
2.2.2 specific heats
Because δQ is not an exact derivative there is no concept of the amount of heat stored in a system. Instead, δQ is the amount of energy absorbed or emitted by the system through random processes, excluding work done by mechanical or other means. If we specify the conditions under which the heat is supplied then we can determine δQ using
δQ = dE + δW , (2.2.2.1)
where dE is an exact derivative. It may be the case that absorption/emission of heat is not accompanied by a change in T. An example, is the latent heat of evaporation of water to a gas. However, this is a phase change which we exclude from the current discussion. We can write δQ = CdT where C is the heat capacity of the system. Clearly, to give C meaning we need to specify the conditions under which the change takes place. Important cases are where the change is reversible and at fixed V or fixed P. Then δQ = T dS and we define
C_V = T (∂S/∂T)_V , C_P = T (∂S/∂T)_P ,  (2.2.2.2)
where C_V is the heat capacity at constant volume and C_P is the heat capacity at constant pressure. Using
dE = T dS − P dV and dH = T dS + V dP. (2.2.2.3)
we have also
C_V = (∂E/∂T)_V , C_P = (∂H/∂T)_P .  (2.2.2.4)
Changing variables from T, P to T, V we have

(∂S/∂T)_P = (∂S/∂T)_V + (∂S/∂V)_T (∂V/∂T)_P ,  (2.2.2.5)
and using the Maxwell relation (2.2.1.6) and the definitions of C_P and C_V we have

C_P = C_V + T (∂S/∂V)_T (∂V/∂T)_P =⇒ C_P = C_V + T (∂V/∂T)_P (∂P/∂T)_V .  (2.2.2.6)
For any physical system we expect

(∂V/∂T)_P (∂P/∂T)_V > 0 ,  (2.2.2.7)

and hence that C_P > C_V. This makes sense because for a given increment dT the system at fixed P expands and does work P dV on its surroundings, whereas the system at fixed V does no work. The extra energy expended in the former case must be supplied by δQ, which is correspondingly larger than in the latter case. An important example is that of a perfect gas. An amount of gas with N_A (Avogadro's number) of molecules is called a mole. The ideal gas law tells us that at fixed T, P a mole of any ideal gas occupies the same volume. The ideal gas law for n moles is

PV = nN_A kT = nRT ,  (2.2.2.8)
where R is the gas constant, and is the same for all ideal gases. Note that at sufficiently low density all gases are essentially ideal. The heat capacity per mole, c, is called the specific heat: c = C/n. Then from (2.2.2.6) we have
cp − cv = R. (2.2.2.9)
We define

γ = c_p/c_v = (c_v + R)/c_v =⇒ γ > 1 .  (2.2.2.10)
If the number of degrees of freedom per molecule is denoted N_F then equipartition of energy gives that the energy of 1 mole is

E = (N_F/2) RT , c_v = (N_F/2) R , c_p = ((N_F + 2)/2) R , γ = (N_F + 2)/N_F .  (2.2.2.11)
For a monatomic gas N_F = 3, for a diatomic gas N_F = 5 (3 position, 2 orientation), etc., and so γ = 5/3, 7/5, . . . .
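Numerically (an added check), c_v = (3/2)R ≈ 12.5 J mol⁻¹ K⁻¹ and γ = 5/3 ≈ 1.67 for a monatomic gas, in good agreement with measured values for the noble gases, while γ = 7/5 = 1.4 matches diatomic gases such as N₂ and O₂ near room temperature.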