Solving the Time-Dependent Schrödinger Equation^a

David G. Robertson
Department of Physics, Otterbein University, Westerville, OH 43081
(Dated: October 10, 2011)

Abstract
Methods for solving the time-dependent Schrödinger equation in one dimension are discussed. Possible simulation projects include the study of scattering by barriers and wells, the analysis of time evolution by expansion in energy eigenstates, and tests of time-dependent perturbation theory.

Keywords: quantum mechanics, Schrödinger equation, time dependence, scattering, perturbation theory, Crank-Nicolson method

^a Work supported by the National Science Foundation under grant CCLI DUE 0618252.
Email: drobertson@otterbein.edu


CONTENTS

I. Module Overview

II. What You Will Need

III. Quantum Mechanics Background

IV. Solving the Time-Dependent Schrödinger Equation

V. Simulation Projects

References

Glossary

I. MODULE OVERVIEW

This module discusses methods for solving the time-dependent Schrödinger equation in one space dimension. This is the central problem in quantum mechanics, and courses in quantum physics typically devote considerable time to developing solutions in analytically tractable cases (e.g., square wells, the harmonic oscillator). The ability to determine the wavefunction numerically allows exploration of a much wider class of potentials, however, as well as the study of, e.g., scattering with actual wavepackets rather than plane waves. One can also test the predictions of time-dependent perturbation theory. Such exercises allow the development of deeper intuition regarding the properties of quantum systems.

II. WHAT YOU WILL NEED

The minimal physics background required includes quantum mechanics at the level of a typical sophomore-level course on modern physics, specifically the basic properties of wavefunctions and the Schrödinger equation. Students will ideally have worked through the free particle in one dimension and be familiar with wave packets, dispersion, and so on. An upper-level course in quantum mechanics will allow a richer exploration of problems with the tools developed here, for example the study of time-dependent perturbations.

Wavefunctions that satisfy this condition are said to be "normalized." An important feature of the time evolution defined by the Schrödinger equation is that if the wavefunction is normalized at one time, then it will be normalized for all other times as well. This property is known as "unitarity." The physical importance of the normalization requirement cannot be over-emphasized; it is this condition that puts the "quantum" in quantum mechanics. Wavefunctions that do not satisfy it fall into two classes: functions for which the normalization integral is finite but not equal to one, and functions for which the normalization integral is infinite. Given a function in the first class, we can easily produce a normalized wavefunction. Assume that we have found a solution of the Schrödinger equation Ψ for which

∫_{-∞}^{∞} |Ψ(x, t)|² dx = A,    (3.5)

with A a finite number. Then the rescaled wavefunction

Ψ′(x, t) = (1/√A) Ψ(x, t)    (3.6)

will be properly normalized, i.e., will satisfy

∫_{-∞}^{∞} |Ψ′(x, t)|² dx = 1.    (3.7)

Note that because the Schrödinger equation is linear in the wavefunction, the rescaled wavefunction remains a solution. Functions for which the normalization integral is infinite, on the other hand, cannot be normalized at all. Such wavefunctions are simply unacceptable for describing real physical systems, and must be discarded despite being solutions to the Schrödinger equation [4]. The problem of finding solutions to the Schrödinger equation is usually approached using the technique of separation of variables. This leads to the "time-independent" Schrödinger equation, which determines the energy eigenstates ψ(x):

−(ℏ²/2m) d²ψ(x)/dx² + V(x)ψ(x) = Eψ(x).    (3.8)

Here E is a constant to be determined by solving the equation [5]. It has the form of an eigenvalue equation,

Ĥψ = Eψ,    (3.9)

where Ĥ is the Hamiltonian operator

Ĥ = −(ℏ²/2m) d²/dx² + V(x)    (3.10)

and E is the eigenvalue. It should be emphasized that the TISE determines both ψ and E; that is, we find that each solution ψ works only with some particular value of E. We can label the solutions by an index n, so that each function ψ_n has a corresponding eigenvalue E_n. We must also be sure that our wavefunctions are normalizable; this means that the ψ_n themselves must be normalizable. It is convenient to require that

∫_{-∞}^{∞} |ψ_n(x)|² dx = 1.    (3.11)

The solution to the Schrödinger equation can then be obtained by expanding the initial wavefunction in terms of the stationary states,

Ψ(x, 0) = Σ_n c_n ψ_n(x)    (3.12)

for some coefficients c_n. The full solution is then

Ψ(x, t) = Σ_n c_n ψ_n(x) e^{−iE_n t/ℏ}    (3.13)

An alternative to this procedure is to develop a numerical approach to solving the Schrödinger equation directly, without expanding in stationary states. It is to this task that we now turn.

IV. SOLVING THE TIME-DEPENDENT SCHRÖDINGER EQUATION

In this section we will develop techniques for solving the full Schrödinger equation numerically. The first step is to introduce a grid of space points, separated by some distance ∆x, on which we will determine the wavefunction. We will also discretize in time, that is, evaluate the wavefunction only for a discrete set of times separated by some ∆t. Hence we replace the function Ψ(x, t) with the discrete values Ψ_i^n, where i labels the space point and n labels the time value. Specifically, the grid points are located at

x_i = x_min + i∆x    (4.1)

a perfectly valid approximation for finite ∆x. The latter is often preferred, however, because it is more accurate for a given value of ∆x. Switching to the grid notation where Ψ(x_i ± ∆x, t) = Ψ_{i±1}^n, the discretized version of the Schrödinger equation thus takes the form

iℏ (Ψ_i^{n+1} − Ψ_i^n)/∆t = −(ℏ²/2m) (Ψ_{i+1}^n − 2Ψ_i^n + Ψ_{i−1}^n)/∆x² + V_i^n Ψ_i^n    (4.9)

Now, imagine that we know the wavefunction at any one time; eq. (4.9) could then be used to evolve it forward to the next time step. One has only to solve for Ψ_i^{n+1}, which will be given in terms of Ψ at the current time step and the potential function. The new Ψ then becomes the basis for a further step, and so on. Hence given some initial wavefunction we can calculate Ψ at any later (or earlier) time. Note that in general the value of Ψ^{n+1} at some grid point will depend on the value of Ψ at that point and its nearest neighbors – this is the effect of the space derivatives in the Hamiltonian. Unfortunately, this approach is totally useless because it is numerically unstable [2]. The basic reason for this can be seen as follows. Let us adopt a matrix notation and write eq. (4.9) as

iℏ (Ψ^{n+1} − Ψ^n)/∆t = ĤΨ^n    (4.10)

Here Ψ^n represents a vector of values (across the space grid) and Ĥ is a (hermitian) matrix defined by the requirement that it reproduce the right-hand side of eq. (4.9). Solving for Ψ^{n+1} then gives

Ψ^{n+1} = (1 − i∆t Ĥ/ℏ) Ψ^n    (4.11)

Now, imagine that we expand Ψ^n in terms of the eigenvectors of Ĥ, that is, the vectors φ_α satisfying

Ĥφ_α = ε_α φ_α    (4.12)

where the eigenvalues ε_α are real (since Ĥ is hermitian). We write

Ψ^n = Σ_α c_α^n φ_α    (4.13)

and in the exact solution to the Schrödinger equation we would have

Ψ^{n+1} = Σ_α c_α^n φ_α e^{−iε_α∆t/ℏ}    (4.14)

The approximate solution eq. (4.11) amounts to replacing the exponential by the first-order expression

e^{−iε_α∆t/ℏ} ≈ 1 − iε_α∆t/ℏ    (4.15)

But this is a complex number with magnitude greater than one:

|1 − iε_α∆t/ℏ|² = 1 + ε_α²∆t²/ℏ²    (4.16)

Hence the solution will be unstable; all modes grow without bound as we iterate. In fact it is "unconditionally unstable" – it is unstable no matter how small we make ∆t. It is just doomed. To fix this we need a new discrete equation. Fortunately we have a great deal of freedom in this regard – the main restriction is that it must reduce to the correct Schrödinger equation in the limits ∆t → 0 and ∆x → 0. For example, we could replace the right-hand side of eq. (4.9) with its value at the future time step, that is,

iℏ (Ψ_i^{n+1} − Ψ_i^n)/∆t = −(ℏ²/2m) (Ψ_{i+1}^{n+1} − 2Ψ_i^{n+1} + Ψ_{i−1}^{n+1})/∆x² + V_i^{n+1} Ψ_i^{n+1}    (4.17)

or, in matrix notation,

iℏ (Ψ^{n+1} − Ψ^n)/∆t = ĤΨ^{n+1}    (4.18)

This clearly has the same continuum limit as the original equation – it is essentially what we obtain by taking the "backwards" difference for the time derivative. It is also implicit, in the sense that Ψ^{n+1} appears on both sides of the equation, and we must solve this system of linear equations to determine it. Still, this could be done (details to follow), so this is another possible discretization we can use. But does it help? Well, yes and no. The evolution is now stable; solving for Ψ^{n+1} we find

Ψ^{n+1} = (1 + i∆t Ĥ/ℏ)^{−1} Ψ^n    (4.19)

Now the energy eigenstates are multiplied at each time step by a complex number of magnitude

|1/(1 + iε_α∆t/ℏ)| = 1/√(1 + ε_α²∆t²/ℏ²)

which is less than one. However, the "time evolution operator" in eq. (4.19) – the matrix that multiplies Ψ^n to give Ψ^{n+1} – is not unitary! (Recall that Ĥ is a hermitian matrix.)

Eq. (4.26) is a set of linear equations that determine Ψ^{n+1} in terms of Ψ^n. To solve these, observe that the system is tri-diagonal, meaning that if we write it as a matrix equation MΨ^{n+1} = B:

[ A_1  1    0    0  ... ] [ Ψ_1^{n+1} ]   [ B_1 ]
[ 1    A_2  1    0  ... ] [ Ψ_2^{n+1} ] = [ B_2 ]    (4.29)
[ 0    1    A_3  1  ... ] [ Ψ_3^{n+1} ]   [ B_3 ]
[ ...                   ] [ ...       ]   [ ... ]

then the only nonzero elements in M are on the central diagonal and the two neighboring ones. This reflects the fact that the derivatives couple grid points to their nearest neighbors only. This allows a straightforward solution by row reduction, as follows. Let us first convert eq. (4.29) to the form

[ 1  U_1  0    0   ... ] [ Ψ_1^{n+1} ]   [ R_1 ]
[ 0  1    U_2  0   ... ] [ Ψ_2^{n+1} ] = [ R_2 ]    (4.30)
[ 0  0    1    U_3 ... ] [ Ψ_3^{n+1} ]   [ R_3 ]
[ ...                  ] [ ...       ]   [ ... ]

by means of the following manipulations. Multiply the first row of eq. (4.29) by 1/A_1 ≡ U_1, and further define R_1 ≡ B_1/A_1 = B_1 U_1. This gives

[ 1  U_1  0    0  ... ] [ Ψ_1^{n+1} ]   [ R_1 ]
[ 1  A_2  1    0  ... ] [ Ψ_2^{n+1} ] = [ B_2 ]
[ 0  1    A_3  1  ... ] [ Ψ_3^{n+1} ]   [ B_3 ]
[ ...                 ] [ ...       ]   [ ... ]

Next, subtract the first row from the second to obtain

[ 1  U_1        0    0  ... ] [ Ψ_1^{n+1} ]   [ R_1 ]
[ 0  A_2 − U_1  1    0  ... ] [ Ψ_2^{n+1} ] = [ B_2 − R_1 ]
[ 0  1          A_3  1  ... ] [ Ψ_3^{n+1} ]   [ B_3 ]
[ ...                       ] [ ...       ]   [ ... ]

Dividing the second row by A_2 − U_1 then gives

[ 1  U_1  0    0  ... ] [ Ψ_1^{n+1} ]   [ R_1 ]
[ 0  1    U_2  0  ... ] [ Ψ_2^{n+1} ] = [ R_2 ]
[ 0  1    A_3  1  ... ] [ Ψ_3^{n+1} ]   [ B_3 ]
[ ...                 ] [ ...       ]   [ ... ]

where

U_2 ≡ 1/(A_2 − U_1)    (4.34)

and

R_2 ≡ (B_2 − R_1)/(A_2 − U_1) = (B_2 − R_1) U_2    (4.35)

The procedure generalizes, so that eq. (4.30) is obtained with

U_1 = 1/A_1,    U_i = 1/(A_i − U_{i−1})   (i > 1)    (4.36)

R_1 = B_1 U_1,    R_i = (B_i − R_{i−1}) U_i   (i > 1)    (4.37)

We can now solve equations (4.30) in reverse order, starting from the last row, which reads simply

Ψ_N^{n+1} = R_N    (4.38)

The next-to-last row reads

Ψ_{N−1}^{n+1} + U_{N−1} Ψ_N^{n+1} = R_{N−1}    (4.39)

so that

Ψ_{N−1}^{n+1} = R_{N−1} − U_{N−1} Ψ_N^{n+1}    (4.40)

and so on. In general we have

Ψ_i^{n+1} = R_i − U_i Ψ_{i+1}^{n+1}    (4.41)

The following sequence of operations summarizes the above steps (written in C; the arrays are 1-indexed as in the equations, the forward sweeps run from i = 2, and the back substitution runs from i = N−1):

U[1] = 1.0/A[1];
for (i = 2; i <= N; i++) U[i] = 1.0/(A[i] - U[i-1]);   /* eq. (4.36) */

R[1] = B[1]*U[1];
for (i = 2; i <= N; i++) R[i] = (B[i] - R[i-1])*U[i];  /* eq. (4.37) */

psi[N] = R[N];
for (i = N-1; i > 0; i--) psi[i] = R[i] - U[i]*psi[i+1];  /* eq. (4.41) */

these items to the standard output. If this code is part of a loop over time steps, then at each time step it generates a new plot with the current values. If the code is then run as, for example,

./a.out | gnuplot

then the result is an animation of the time evolution. A detailed example of this is given in the sample code tdse-gnuplot.c.

1. The Free Particle

The first project should be a simulation of a free particle (V = 0) with a gaussian initial wavefunction. This problem can be solved exactly, so the correctness (or otherwise) of the code can be checked easily. For complete details see any text on quantum mechanics [1]. The normalized initial wavefunction is given by

Ψ(x, 0) = 1/(π^{1/4} √σ) exp[−(x − x_0)²/2σ²] exp[ip_0 x/ℏ]    (5.1)

This represents a particle localized to within ∆x ∼ σ of x_0, and with average momentum p_0. The solution to the Schrödinger equation is then (for V = 0):

Ψ(x, t) = 1/(π^{1/4} √(σ(1 + iℏt/mσ²))) exp[−(x − (x_0 + p_0 t/m))²/(2σ²(1 + iℏt/mσ²))] exp[i(p_0 x − Et)/ℏ]    (5.2)

where E = p_0²/2m. In building the simulation you will need an array to hold the values of the wavefunction at each spatial grid point. Say there are N of these. Recall that to update each grid point we need the values on the neighboring grid points; obviously something will need to be done for the first and last point, which are missing one neighbor each. Here you should just specify Ψ = 0 for all time; this should introduce no problems as long as the wavepacket is well away from the edges. Alternatively, this is equivalent to putting infinitely high potential "walls" at these locations. You will also want to adopt a "reasonable" system of units so that the numerical values of quantities like x, p_0, etc. are neither too large nor too small. One convenient choice

is to use nanometers (1 nm = 10^{−9} m) for distance, femtoseconds (1 fs = 10^{−15} s) for time, and electron volts (1 eV = 1.6 × 10^{−19} J) for energy. In these units we have

ℏ = 0.6582 eV · fs    (5.3)
c = 300 nm/fs    (5.4)

A useful combination of these is ℏc = 197.5 eV · nm. The mass of a particle is conveniently expressed in terms of its rest energy, e.g., for an electron m_e c² = 0.511 MeV, or 511,000 eV. Hence its mass is m_e = 511,000/300² = 5.68 in these units.

A standard test of a program for time evolution is to evolve the solution forward for a while, then evolve backwards by the same amount. The solution should return to its initial value. Note that this does not directly test whether the time evolution is being simulated accurately, only that it is reversible. Compare your result also to the exact solution and see what sorts of values for ∆x and ∆t are useful. In general we expect the difference approximations to be valid when the change in Ψ from one grid point to the next, or one time step to the next, is small compared to Ψ itself.

Observe the spreading of the wavepacket with time. Try other forms for the initial wavefunction, for example the Lorentzian function

Ψ(x, 0) = A²/((x − x_0)² + γ²) e^{ip_0 x}    (5.5)

where A⁴ = 2γ³/π, or a step function or "tent" function multiplied by exp(ip_0 x). Add code that will calculate the expectation value of the energy at any time, and verify that this quantity is conserved. Verify that the evolution is unitary, that is, that the wavefunction remains properly normalized as it evolves in time. Experiment with either of the unsuitable approaches to the discretization, and note what happens.

2. Scattering

An extremely instructive application of the simulation is to study the scattering of a wavepacket from a potential barrier or well. Modify your code to do this.
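The only change needed relative to the free-particle code is the potential array; a square barrier is the simplest choice. A sketch, with the barrier position, width, and height as illustrative parameters:

```c
/* Square barrier of height V0 (eV) between x = a and x = b (nm);
   V = 0 elsewhere.  A well is the same construction with V0 < 0. */
void init_barrier(double *V, int N, double xmin, double dx,
                  double a, double b, double V0) {
    for (int i = 0; i < N; i++) {
        double x = xmin + i*dx;
        V[i] = (x >= a && x <= b) ? V0 : 0.0;
    }
}
```

After the packet has scattered, the integrals of |Ψ|² to the left and right of the barrier give the reflection and transmission probabilities, which can be compared with the textbook plane-wave results.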

where F is a constant. In classical theory this would correspond to a sinusoidal (in time) external driving force. In quantum theory it can be thought of as inducing transitions between the stationary states of the unperturbed harmonic oscillator. Let's assume the particle starts out at t = 0 in its ground state (n = 0). Then according to lowest-order perturbation theory it will be found in the first excited state (n = 1) at time t with probability

P(t) = (F²/2ℏmω_0) (sin[(ω − ω_0)t/2]/(ω − ω_0))²

where ω_0 ≡ √(k/m) is the natural frequency of the oscillator [1]. (The probability for transitions to other states is zero in lowest order.) This is actually an approximation to the full perturbation theory result, valid when we are "near resonance," i.e., when

|ω_0 − ω| ≪ ω_0 + ω    (5.10)

The full expression may be found in the standard texts. Note that the transition probability itself oscillates in time with period 2π/|ω − ω_0|, and has its maximum value at t = π/|ω − ω_0|, where

P_max = F²/(2ℏmω_0 (ω − ω_0)²)    (5.11)

This should be significantly less than one, or perturbation theory is invalid. Modify your harmonic oscillator code from Project 3 to include such a contribution and study the transitions. Compare the "exact" transition probability (i.e., computed using your solution of the full Schrödinger equation) to the result of perturbation theory. Examine the transition probability as a function of time and observe its periodic behavior.

[1] For additional details on background, see a standard introductory text on quantum mechanics, e.g., D.J. Griffiths, Introduction to Quantum Mechanics, 2nd ed. (Prentice Hall, 2005); R.L. Liboff, Introductory Quantum Mechanics, 4th ed. (Addison Wesley, 2003).

[2] For example, S.E. Koonin, Computational Physics (Benjamin/Cummings, 1985); T. Pang, An Introduction to Computational Physics (Cambridge University Press, 2006); L.D. Fosdick, E.R. Jessup, C.J.C. Schauble, and G. Domik, Introduction to High-Performance Scientific Computing (MIT Press, 1996).

[3] It also encodes the probabilities for any other measurement, for example of the momentum.

[4] There is an important case where non-normalizable wavefunctions are useful despite their unphysicality: plane wave solutions for a free particle (V = 0), corresponding to a particle with a definite momentum. In this case the lack of normalizability is connected to the failure of such states to properly respect the uncertainty principle: a particle with a definite p would have ∆p = 0 and hence ∆x = ∞. However, normalizable states can be constructed as linear combinations of these plane waves, and, indeed, if sufficient care is exercised, the plane waves themselves can often be used directly to obtain physical results (e.g., transmission and reflection probabilities for potential scattering).

[5] E can be shown to be a real number, assuming V is real.

separation of variables Technique for separating a partial differential equation into ordinary differential equations. The basic assumption is a product form for the solutions.

stationary state Eigenstate of the Hamiltonian operator for a system. An energy eigenstate.

transition probability The probability that a quantum system initially in some state will be found in another state at some later time.

unitarity Property of quantum systems that ensures the wavefunction normalization remains constant in time; reflects conservation of probability.

wavefunction The entity that describes the quantum state of a system. Its modulus squared gives the probability density for position.

wavepacket A superposition of plane waves to produce a wavefunction that is localized in space.