# 3.2. Linear variational calculations

In this section, we describe two quantum mechanical problems that can be analyzed numerically with a linear variational calculation. In both cases, a generalised matrix eigenvalue problem (3.13) must be solved, which can easily be done using a program like MATLAB.

## 3.2.1 The infinitely deep potential well

The potential well with infinite barriers is given by

$$V(x) = \begin{cases} 0 & \text{for } |x| \leq 1, \\ \infty & \text{for } |x| > 1. \end{cases}$$

It forces the wave function to vanish at the boundaries of the well ($\psi(\pm 1) = 0$). The exact solution for this problem is known and treated in every textbook on quantum mechanics (see for example Griffiths). Here we discuss a linear variational approach to be compared with the exact solution. We take the well on the interval $[-1, 1]$ and use natural units such that $\hbar^2/(2m) = 1$.

As basis functions we take simple polynomials that vanish on the boundaries of the well:

$$\chi_n(x) = x^n (x - 1)(x + 1), \qquad n = 0, 1, \ldots, N-1.$$

The reason for choosing this particular form of basis functions is that the relevant matrix elements can easily be calculated analytically. We start with the matrix elements of the overlap matrix, defined by

$$S_{mn} = \int_{-1}^{1} \chi_m(x)\, \chi_n(x)\, dx.$$

There is no complex conjugate with the $\chi_m$ in the integral because we use real basis functions. Working out the integral gives

$$S_{mn} = \frac{2}{m+n+5} - \frac{4}{m+n+3} + \frac{2}{m+n+1}$$

for $m+n$ even; otherwise $S_{mn} = 0$.

We can also calculate the Hamilton matrix elements -- you can check that they are given by:

$$H_{mn} = \int_{-1}^{1} \chi_m(x) \left( -\frac{d^2}{dx^2} \right) \chi_n(x)\, dx = -8\, \frac{1 - m - n - 2mn}{(m+n+3)(m+n+1)(m+n-1)}$$

for $m+n$ even, else $H_{mn} = 0$.

The exact solutions are given by

$$\psi_n(x) = \begin{cases} \cos(k_n x), & n \text{ odd}, \\ \sin(k_n x), & n \text{ even}, \end{cases}$$

with $k_n = n\pi/2$, $n = 1, 2, 3, \ldots$, with corresponding energies

$$E_n = k_n^2 = \frac{n^2 \pi^2}{4}.$$

For each eigenvector $\mathbf{C} = (C_0, \ldots, C_{N-1})$ found from the generalised eigenvalue problem, the function $\psi(x) = \sum_{n=0}^{N-1} C_n \chi_n(x)$ should approximate one of these eigenfunctions. The variational levels are shown in the table below, together with the analytical results.

The table above lists the energy levels of the infinitely deep potential well. The first four columns show the variational energy levels for various numbers of basis states $N$; the last column shows the exact values. The exact levels are approached from above, as shown in figure 3.1.
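To make the procedure concrete, here is a short numerical sketch that builds the overlap and Hamiltonian matrices from the analytic expressions for this basis and solves the generalised eigenvalue problem. It assumes Python with NumPy and SciPy; the helper name `well_matrices` is ours.

```python
import numpy as np
from scipy.linalg import eigh

def well_matrices(N):
    """Overlap and Hamiltonian matrices for the basis
    chi_n(x) = x^n (x^2 - 1), n = 0..N-1, on the well [-1, 1],
    with hbar^2/(2m) = 1. Elements vanish for odd m + n."""
    S = np.zeros((N, N))
    H = np.zeros((N, N))
    for m in range(N):
        for n in range(N):
            if (m + n) % 2 == 0:
                S[m, n] = (2.0 / (m + n + 5) - 4.0 / (m + n + 3)
                           + 2.0 / (m + n + 1))
                H[m, n] = -8.0 * (1 - m - n - 2 * m * n) / (
                    (m + n + 3) * (m + n + 1) * (m + n - 1))
    return S, H

N = 8
S, H = well_matrices(N)
E = eigh(H, S, eigvals_only=True)      # generalised eigenvalue problem
exact = np.array([(k * np.pi / 2) ** 2 for k in range(1, N + 1)])
for k in range(4):
    print(f"E_{k+1}: variational {E[k]:.9f}   exact {exact[k]:.9f}")
```

Increasing `N` shows the variational levels descending towards the exact values from above, as the variational principle guarantees.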

## 3.2.2 Variational calculation for the hydrogen atom

As we shall see further on in this course, one of the main problems of electronic structure calculations is the treatment of electron--electron interactions. Here we develop a scheme for solving the Schrödinger equation for an electron in a hydrogen atom, for which the many-electron problem does not arise, so that a direct variational treatment of the problem is possible and can be compared to the analytic solution.

The electronic Schrödinger equation for the hydrogen atom reads:

$$\left[ -\frac{\hbar^2}{2m} \nabla^2 - \frac{e^2}{4\pi\varepsilon_0 r} \right] \psi(\mathbf{r}) = E \psi(\mathbf{r}),$$

where the second term in the square brackets is the Coulomb attraction potential of the nucleus. The mass $m$ is the reduced mass of the proton--electron system, which is approximately equal to the electron mass. The ground state is found at energy

$$E_0 = -\frac{e^2}{8\pi\varepsilon_0 a_0} \approx -13.6\ \text{eV},$$

and the wave function is given by

$$\psi(\mathbf{r}) = \frac{1}{\sqrt{\pi a_0^3}}\, e^{-r/a_0},$$

in which $a_0$ is the Bohr radius,

$$a_0 = \frac{4\pi\varepsilon_0 \hbar^2}{m e^2} \approx 0.529\ \text{Å}.$$

**Units**

When performing a calculation for such an equation, it is convenient to
use units such that equations take on a simple form, involving only
coefficients of order 1. Standard units in electronic structure physics
are so-called *atomic units*: the unit of distance is the Bohr radius
$a_0$, masses are expressed in the electron mass $m_e$, and the charge is
measured in unit charges ($e$). The energy is finally given in
'Hartrees' ($E_H$), given by $E_H = \alpha^2 m_e c^2$ ($\alpha$ is
the fine-structure constant and $m_e$ is the electron mass), which is
roughly equal to 27.212 eV. In these units, the Schrödinger
equation for the hydrogen atom assumes the following simple form:

$$\left[ -\tfrac{1}{2} \nabla^2 - \frac{1}{r} \right] \psi(\mathbf{r}) = E \psi(\mathbf{r}).$$

We try to approximate the ground state energy and wave function of the
hydrogen atom in a linear variational procedure. We use *Gaussian basis
functions*. For the ground state, we only need zero angular momentum
functions (s-functions) -- they have the form:

$$\chi_p(\mathbf{r}) = e^{-\alpha_p r^2},$$

centred on the nucleus (which is thus placed at the origin). We have to
specify the values of the exponents $\alpha_p$. A large $\alpha_p$ defines
a basis function which is concentrated near the nucleus, whereas a small
$\alpha_p$ characterises a function with a long tail. Optimal values for
the exponents $\alpha_p$ can be found by solving the *non-linear*
variational problem including the linear coefficients $C_p$ *and* the
exponents $\alpha_p$. Several numerical methods for solving such
non-linear optimisation problems exist, and the solutions can be found in
textbooks or in the documentation of quantum chemical software packages.
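One simple way to carry out such a non-linear optimisation is to treat the lowest generalised eigenvalue as a function of the exponents and hand it to a general-purpose minimiser. The sketch below (assuming Python with NumPy/SciPy; the starting exponents are an arbitrary choice of ours, and optimising over $\log \alpha_p$ is just a trick to keep the exponents positive) uses the Nelder--Mead simplex method from `scipy.optimize`:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

def ground_state_energy(log_alpha):
    """Lowest generalised eigenvalue for s-type Gaussians e^{-alpha r^2}
    in atomic units; optimising over log(alpha) keeps exponents positive."""
    alpha = np.exp(log_alpha)
    a = alpha[:, None] + alpha[None, :]          # alpha_p + alpha_q
    S = (np.pi / a) ** 1.5                       # overlap matrix
    H = (3 * alpha[:, None] * alpha[None, :] * np.pi**1.5 / a**2.5
         - 2 * np.pi / a)                        # kinetic + Coulomb
    return eigh(H, S, eigvals_only=True)[0]

# start from a crude geometric sequence of exponents (our arbitrary guess)
res = minimize(ground_state_energy, np.log([10.0, 1.0, 0.3, 0.1]),
               method="Nelder-Mead")
print(np.exp(res.x))    # optimised exponents
print(res.fun)          # variational ground state energy (Hartree)
```

By the variational principle the result can never drop below the exact ground state energy of $-\tfrac{1}{2}$ Hartree.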

We shall use known, fixed values of the exponents,

$$\alpha_1 = 13.00773, \quad \alpha_2 = 1.962079, \quad \alpha_3 = 0.444529, \quad \alpha_4 = 0.1219492,$$

but relax the values of the coefficients $C_p$. The wave function
therefore has the form

$$\psi(\mathbf{r}) = \sum_{p=1}^{4} C_p\, e^{-\alpha_p r^2},$$

with the $\alpha_p$ listed above. We now discuss how to find the best
values of the linear coefficients $C_p$. To this end, we need the
overlap and Hamiltonian matrix. The advantage of using Gaussian basis
functions is that analytic expressions for these matrices can be found.
In particular, it is not so difficult to show that the elements of the
overlap matrix $S$, the kinetic energy matrix $T$ and the
Coulomb matrix $A$ are given by:

$$S_{pq} = \left( \frac{\pi}{\alpha_p + \alpha_q} \right)^{3/2}, \qquad
T_{pq} = 3\, \frac{\alpha_p \alpha_q\, \pi^{3/2}}{(\alpha_p + \alpha_q)^{5/2}}, \qquad
A_{pq} = -\frac{2\pi}{\alpha_p + \alpha_q}.$$

These expressions can be put into a computer program
which solves the generalised eigenvalue problem. The resulting ground
state energy is $-0.499278$ Hartree, which is amazingly close to the
exact value of $-\tfrac{1}{2}$ Hartree, which is $-13.6058$ eV. We conclude that
four Gaussian functions can be linearly combined into a form which is
surprisingly close to the exact ground state wave function, which is
known to have the so-called *Slater-type* form $e^{-r}$ rather than
a Gaussian! This is shown in figure 3.2 below.
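Such a program fits in a dozen lines. The sketch below assumes Python with NumPy/SciPy and the standard four fixed exponents commonly used for this exercise; substitute your own values if your notes list different ones.

```python
import numpy as np
from scipy.linalg import eigh

# fixed s-type Gaussian exponents (the standard four-term set for this exercise)
alpha = np.array([13.00773, 1.962079, 0.444529, 0.1219492])

a = alpha[:, None] + alpha[None, :]                        # alpha_p + alpha_q
S = (np.pi / a) ** 1.5                                     # overlap matrix
T = 3 * alpha[:, None] * alpha[None, :] * np.pi**1.5 / a**2.5  # kinetic energy
A = -2 * np.pi / a                                         # Coulomb attraction
H = T + A

E = eigh(H, S, eigvals_only=True)    # generalised eigenvalue problem
print(f"ground state energy: {E[0]:.6f} Hartree")   # close to the exact -0.5
```

The corresponding eigenvector of `eigh` contains the coefficients $C_p$ of the best four-Gaussian approximation to the ground state wave function.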

## 3.2.3. Exploiting symmetry

We have seen that the solution of a stationary quantum problem using linear variational calculus, in the end, boils down to solving a (generalised) matrix eigenvalue problem. Finding the eigenvalues (and eigenvectors) of an $N \times N$ matrix, or solving a generalised eigenvalue problem, requires a number of floating point operations in the computer proportional to $N^3$. This means that if we double the size of the basis set used in the variational analysis, the computer time goes up by a factor of 8. As it turns out, we are often interested in problems having some symmetry. We shall now briefly sketch how this can be used to significantly reduce the computer time for variational calculations.

In subsection 3.2.1, we considered a problem having a very simple
symmetry: replacing $x$ by $-x$ does not change the potential, and
therefore the Hamiltonian is insensitive to this transformation. Let us
denote the operation $x \to -x$ by $\mathcal{P}$. Because
flipping the sign of $x$ twice leaves the space invariant, we have
$\mathcal{P}^2 = \mathbb{1}$, where $\mathbb{1}$ is as usual the
identity operator. Let us consider the eigenvalues $\lambda$ of this
operator. From $\mathcal{P} \psi = \lambda \psi$ we have that
$\mathcal{P}^2 \psi = \lambda^2 \psi = \psi$. Therefore, $\lambda = \pm 1$. Furthermore,
$\mathcal{P}$ commutes with the Hamiltonian:
$[\mathcal{P}, H] = 0$, since the Hamiltonian is not affected by $\mathcal{P}$. We know (or
should know!) that if an operator commutes with $H$ we can always find
eigenfunctions of $H$ which are simultaneously eigenfunctions of $H$
*and* of that operator. This
means that we can divide the eigenfunctions of $H$ into two classes: one
of symmetric eigenfunctions (symmetric meaning having eigenvalue $+1$
when acting on it with $\mathcal{P}$) and one of
antisymmetric eigenfunctions ($\lambda = -1$).

Now suppose we construct our variational basis set such that it can be
divided into two classes, that of symmetric and that of antisymmetric
basis functions. Let us calculate the inner product of a symmetric and
an antisymmetric eigenfunction. Using the antisymmetry of the product of
the two functions, you may immediately convince yourself that this inner
product vanishes. To illustrate the more general procedure, we consider
two eigenfunctions, $\psi_1$ and $\psi_2$, with *different* eigenvalues
$\lambda_1$ and $\lambda_2$ for the symmetry operation $\mathcal{P}$.
Then we can write

$$\lambda_1 \langle \psi_1 | \psi_2 \rangle = \langle \mathcal{P} \psi_1 | \psi_2 \rangle = \langle \psi_1 | \mathcal{P} \psi_2 \rangle = \lambda_2 \langle \psi_1 | \psi_2 \rangle,$$

where we first let $\mathcal{P}$ act on the left, and then on the right
function. The fact that $\lambda_1 \neq \lambda_2$ forces
$\langle \psi_1 | \psi_2 \rangle = 0$; this leads to the
well-known theorem saying that two eigenvectors of a Hermitian operator
with *different* eigenvalues are orthogonal (if the eigenvalues are the
same, the wave functions are either identical or they can be chosen
orthogonal).

The key result is that a similar conclusion can be drawn for the matrix elements of the Hamiltonian:

$$\lambda_1 \langle \psi_1 | H \psi_2 \rangle = \langle \mathcal{P} \psi_1 | H \psi_2 \rangle = \langle \psi_1 | \mathcal{P} H \psi_2 \rangle = \langle \psi_1 | H \mathcal{P} \psi_2 \rangle = \lambda_2 \langle \psi_1 | H \psi_2 \rangle,$$

which, as $\mathcal{P} H = H \mathcal{P}$ and $\lambda_1 \neq \lambda_2$, directly gives $\langle \psi_1 | H \psi_2 \rangle = 0$.

For an orthonormal basis set, we see that, if we order the basis
functions in our set with respect to their eigenvalue of $\mathcal{P}$,
the Hamiltonian becomes *block-diagonal*. For our simple
reflection-symmetric example, denoting the symmetric basis functions by
$\chi_s^{(m)}$ and the antisymmetric ones by $\chi_a^{(m)}$,
where $m$ runs from 1 to $N/2$, we have

$$H = \begin{pmatrix} H_s & 0 \\ 0 & H_a \end{pmatrix},$$

with $(H_s)_{mn} = \langle \chi_s^{(m)} | H | \chi_s^{(n)} \rangle$ and
similarly for $H_a$. We can diagonalise the two blocks on the diagonal
independently. This takes $2 (N/2)^3 = N^3/4$ steps (up to a
multiplicative constant), which is 4 times less than the $N^3$ (up to
the same constant) required to diagonalise the full Hamiltonian! If
there are additional symmetries, they can be used to reduce the work
required even further.

If the basis is non-orthogonal, it still holds that basis functions
having different eigenvalues under the symmetry operator
are orthogonal and that the matrix elements of the Hamiltonian between
them vanish. This means that the Hamiltonian matrix *and* the overlap
matrix have the same block-diagonal structure. Therefore, the
generalised eigenvalue problems for the blocks can be dealt with
independently of each other, and we achieve the same speed-up.
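This speed-up is easy to demonstrate for the non-orthogonal polynomial basis of subsection 3.2.1: the basis function $\chi_n(x) = x^n(x^2 - 1)$ is even in $x$ for even $n$ and odd for odd $n$, so the matrix elements vanish whenever $m + n$ is odd, and the even-$n$ and odd-$n$ functions form the two parity blocks. A minimal sketch (assuming Python with NumPy/SciPy; the helper name `elem` is ours) solves each block separately:

```python
import numpy as np
from scipy.linalg import eigh

def elem(m, n):
    """Overlap and Hamiltonian elements for chi_n(x) = x^n (x^2 - 1)
    on the well [-1, 1] with hbar^2/(2m) = 1; zero when m + n is odd."""
    if (m + n) % 2 == 1:
        return 0.0, 0.0
    s = 2 / (m + n + 5) - 4 / (m + n + 3) + 2 / (m + n + 1)
    h = -8 * (1 - m - n - 2 * m * n) / ((m + n + 3) * (m + n + 1) * (m + n - 1))
    return s, h

N = 8
even, odd = range(0, N, 2), range(1, N, 2)   # parity-sorted basis indices

spectra = []
for block in (even, odd):                    # diagonalise each block alone
    idx = list(block)
    S = np.array([[elem(m, n)[0] for n in idx] for m in idx])
    H = np.array([[elem(m, n)[1] for n in idx] for m in idx])
    spectra.append(eigh(H, S, eigvals_only=True))

E = np.sort(np.concatenate(spectra))   # merge the two parity spectra
print(E[:4])                           # same levels as the full N x N problem
```

Each block is only $N/2 \times N/2$, so the two generalised eigenvalue problems together cost about a quarter of the work of the full problem, while reproducing exactly the same spectrum.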

What we have touched upon is an example of the application of group theory in physics, which is an important topic on its own.