Source:
http://mysite.du.edu/~jcalvert/phys/emwaves.htm
Electromagnetic Waves
Contents
- Introduction
- Plane Waves
- Sinusoidal Waves
- Waves in a Conducting Medium
- Reflection of Electromagnetic Waves
- The Skin Effect
- The Parallel-Wire Transmission Line
- Retarded Potentials and Radiation
- References
Recently, I thought it was about time to sharpen my rusty skills in electromagnetism, and reviewed electromagnetic waves in Abraham and Becker (see references). This was so useful and interesting that I thought some remarks on the subject might be generally useful, and that preparing them would be instructive for me, so this article is the result. It is rather long, since I want to include everything necessary to understanding. The reader may find it useful to follow along in his or her electricity and magnetism text, since some previous study of the subject will be very helpful, and to note the correspondence of the results. This material is fundamental for today's electrical engineer, since it bears on almost every current application. The references contain a wealth of exercises.
The topics generally included in an introduction to electromagnetic waves are: (1) plane waves in a nonconducting medium; (2) plane waves in a conducting medium, with absorption and skin depth; (3) the Poynting vector and averages of complex quantities; (4) skin effect in conductors; (5) fields and waves on a parallel-wire transmission line; (6) transmission lines with resistance and leakance; (7) time-dependent scalar and vector potentials, gauge transformations and retarded solutions; (8) fields and radiation from an oscillating dipole. These topics include techniques of general utility, both in physics and engineering, and are fundamental to physical optics as well. Vector calculus is usually introduced to students in an electromagnetism course at intermediate undergraduate level.
There are many texts on this subject, which is as important for engineering as it is for physics (perhaps even more so!), and you should use the one familiar to you. The references show the texts that I generally use, and I do not think there is anything better available. Though I have generally used Giorgi units for the subject of electromagnetic waves, I think they are theoretically objectionable except for the advantage of working directly with practical units. In this review, I shall use Gaussian units instead, which might be useful as an introduction for the reader.
Gaussian units use the absolute electrostatic units (esu or stat-) for electrical quantities and absolute electromagnetic units (emu or ab-) for magnetic quantities, so the factor of c ≈ 3 × 10^10 cm/s which converts between these systems appears explicitly in the formulas. A charge q in esu is the charge q/c in emu, so the large factor c expresses quite well the weakness of magnetic forces in comparison to electric. The Giorgi or MKS units are just emu with assorted factors of 10, plus "rationalization," which is fooling with factors of 4π. Coulomb's law in the form F = qq'/r² is the basis of esu, defining the esu of charge in terms of mass, length and time. Force is in dynes (10^-5 N) and work is in ergs (10^-7 J). Units have been, and still are, the cause of great confusion, but their understanding is the key to understanding electromagnetism.
We shall take Maxwell's equations as postulates. This is a remarkable theory, relativistically correct, based on first-order partial differential equations. Maxwell conceived it around 1865 as a mathematical description of the discoveries of Faraday that implied a mechanical model of electromagnetism. Our present view is more like the "German" school mentioned by Maxwell that talked about potentials and "action at a distance," but the mathematics is exactly the same. Maxwell never was clear about the nature of electric charge, and the relations between matter and the fields, which confused the issue for some time. He did not derive Fresnel's equations, nor treat the sources of electromagnetic waves. Helmholtz took steps after 1870 to reconcile Weber's and Maxwell's theories, and inspired Heinrich Hertz and Hendrik Antoon Lorentz (1853-1928) to look into the matter. Lorentz derived Fresnel's equations on the basis of Maxwell's theory in his doctoral dissertation at Leyden in 1875. Lorentz received the Nobel Prize in Physics in 1902 with his countryman Pieter Zeeman, and was an enthusiastic promoter of popular physics.
The general view was that E and H were fields in the "ether" that produced D and B in matter as stress produced strain, and that the ether freely pervaded matter. Fizeau's observations of the speed of light in moving media complicated the situation with partial ether dragging. Hertz assumed that the ether was fixed to matter and only one set of field vectors was necessary. Lorentz developed the electron theory in 1892-1904 (which was certainly helped by Thomson's discovery of the electron in 1897), including the Lorentz contraction (1895), which accounted for the Michelson-Morley negative result on motion through the ether. All these contradictions and uncertainties were resolved by Einstein's relativity in 1905, and the modern view of electromagnetism slowly developed. The fields were recognized as distinct from matter, determined by potentials that propagated at the speed of light in vacuum. Maxwell's electromagnetism later became the basis of Quantum Electrodynamics, which is extraordinarily correct (possibly the most accurate of any physical theory), but also not easy to use. Where the atomic details of the electrical behavior of matter are not significant, the classical Maxwell theory in its modern interpretation gives the right answer, and is quite sufficient. Lorentz also enunciated the correct expression for the force on a point charge q exerted by the electric and magnetic fields, that usually accompanies statements of Maxwell's equations in texts.
The Maxwell theory we will use describes matter too simply by three "constants," the dielectric constant κ, the permeability μ, and the conductivity σ, which may be functions of position, and what is worse, even functions of direction (in crystals). These "constants" summarize a large amount of complicated physics, first revealed by the electron theory of the years around 1900. These numbers are all time-dependent (and can even become non-local--that is, depending on conditions at more than one point). A Fourier transform makes them functions of the frequency, and in this form they appear usefully in practical applications. Many of the "signals" we encounter have a definite frequency, so this is very convenient.
At the beginning, we will consider κ, μ and σ as constants indeed, which take constant values in uniform, isotropic materials. To begin to understand electromagnetism, there is no need to experience the pain of nonuniform and nonisotropic materials anyway. We will carry μ through the analysis, although in nearly all cases μ = 1, or close enough to it as makes no difference. Only ferromagnetic materials have permeabilities much different from unity, and these seldom carry electromagnetic waves. Their electromagnetic properties may be of interest, but it is a very special subject and won't be discussed here.
I shall use the usual tools of vector analysis, which result in so much economy of expression and make it easy to grasp certain relations that are obscured by writing everything out in components. However, for practical work one must get down to components, and they sometimes make things clearer. Vector symbolism contains nothing that cannot be expressed in components. Browsers do not support the del (inverted triangle) symbol, so I will use grad for del, div for del dot, curl for del cross, and div grad for del squared, the Laplacian. Vectors will be in boldface, magnitudes of vectors and scalars in normal weight, unless otherwise stated. Integrals will be written with ∫ followed by the limits in parentheses where needed, and multiple integrals will not be distinguished from single ones. dV is a scalar element of volume, dS is a vector element of area, directed normally to the area and outwards from dV, and ds a vector element of arc length, directed along the arc in the positive direction that keeps dS to the left.
Plane Waves
Maxwell's equations in the form we shall use are: curl H = (4π/c)j + (1/c)∂D/∂t, curl E = (-1/c)∂B/∂t, div B = 0 and div D = 4πρ, where j is the current density in esu/cm²·s and ρ is the free charge density in esu/cm³. Recall the names of the field vectors and the meaning of each term. A good exercise is to write out the eight equations in components. To use the equations, we must know the relations between the field vectors. These are B = μH, D = κE and j = σE, called the constitutive relations. They are not part of Maxwell's equations, but express the properties of matter. The "vacuum" or free space has κ = μ = 1 and σ = 0. In this case, we can set B = H, D = E, and j = 0. It is the custom to use E and H as the basic vectors, though relativity shows that E and B are actually cognate.
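For reference, here is that exercise carried out: the two curl equations give six scalar equations and the two divergence equations give two more, written in Cartesian components as

```latex
\begin{aligned}
\frac{\partial H_z}{\partial y}-\frac{\partial H_y}{\partial z} &= \frac{4\pi}{c}j_x+\frac{1}{c}\frac{\partial D_x}{\partial t}, &
\frac{\partial E_z}{\partial y}-\frac{\partial E_y}{\partial z} &= -\frac{1}{c}\frac{\partial B_x}{\partial t},\\
\frac{\partial H_x}{\partial z}-\frac{\partial H_z}{\partial x} &= \frac{4\pi}{c}j_y+\frac{1}{c}\frac{\partial D_y}{\partial t}, &
\frac{\partial E_x}{\partial z}-\frac{\partial E_z}{\partial x} &= -\frac{1}{c}\frac{\partial B_y}{\partial t},\\
\frac{\partial H_y}{\partial x}-\frac{\partial H_x}{\partial y} &= \frac{4\pi}{c}j_z+\frac{1}{c}\frac{\partial D_z}{\partial t}, &
\frac{\partial E_y}{\partial x}-\frac{\partial E_x}{\partial y} &= -\frac{1}{c}\frac{\partial B_z}{\partial t},\\
\frac{\partial B_x}{\partial x}+\frac{\partial B_y}{\partial y}+\frac{\partial B_z}{\partial z} &= 0, &
\frac{\partial D_x}{\partial x}+\frac{\partial D_y}{\partial y}+\frac{\partial D_z}{\partial z} &= 4\pi\rho .
\end{aligned}
```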
Take the z-direction as the direction of propagation, and assume that the fields do not depend on the transverse dimensions x,y. Such solutions are called plane waves, and represent a portion of any wavefront, though as a whole they are nonphysical. Express Maxwell's equations in terms of E and H, and then retain only derivatives with respect to t and z. From the divergence equations, we find ∂Ez/∂z = 0 and ∂Hz/∂z = 0, so that the z, or longitudinal, fields are constant with respect to z, and therefore have nothing to do with the wave, and may be set equal to zero. The electric and magnetic fields are, therefore, transverse to the propagation direction. This easy result from Maxwell's equations resolved an old problem, the lack of "longitudinal" light that might be expected on a mechanical (ether) theory.
From the curl equations we find two equations connecting Ex and Hy: -∂Hy/∂z = (κ/c)∂Ex/∂t, ∂Ex/∂z = (-μ/c)∂Hy/∂t. Two similar equations connect Ey and Hx. The problem splits into two, one for each polarization state. This is polarization in the optical sense, not dipole moment per unit volume. When one spoke of the "direction of polarization" of an electromagnetic wave, the direction of the H vector was usually meant in the past, though now it is the direction of the E vector. There is a difference, of course, so it is best to keep whatever convention you use straight. Either the magnetic or the electric field can be eliminated between a pair of equations, and the result is the wave equation, ∂²Ex/∂z² = (κμ/c²)∂²Ex/∂t², with similar equations for each of the other three field vectors. This means that we have solutions f(t - z/v) + g(t + z/v) with f and g arbitrary functions. Substitute this in the wave equation to verify that it is indeed a solution (a sketch of this check follows below). These solutions preserve the wave shape and move at velocities ±v = c/√κμ. We have now found that the index of refraction, n, where v = c/n, is related to the material properties through n² = κμ, called Maxwell's relation (with μ = 1). This is approximately true at lower frequencies, but fails miserably for water: n = 1.334, κ = 81. This fault was eliminated when the frequency dependence of κ was realized.
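The verification just suggested can be done by hand in two lines, or symbolically; here is a minimal sketch using the sympy computer-algebra package (an aid I am adding, not part of the original argument), which substitutes f(t - z/v) + g(t + z/v) into the wave equation and confirms that the residual vanishes for arbitrary f and g.

```python
# Symbolic check that E(z,t) = f(t - z/v) + g(t + z/v) satisfies
# d^2 E/dz^2 = (1/v^2) d^2 E/dt^2 for arbitrary functions f and g.
import sympy as sp

z, t, v = sp.symbols('z t v', positive=True)
f = sp.Function('f')
g = sp.Function('g')
E = f(t - z/v) + g(t + z/v)

residual = sp.diff(E, z, 2) - sp.diff(E, t, 2)/v**2
print(sp.simplify(residual))   # prints 0
```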
Let's suppose that Ex = Ef(t - z/v), and Hy = Hf(t - z/v). It is clear from the equations that the same function f must appear in both. The relation between the real constants E and H is H = √(κ/μ) E. In free space, H = E. The energy flux in the electromagnetic field is given by the Poynting vector, N = (c/4π)(E x H). For this wave, the average power is ⟨N⟩ = (c/4π)⟨ExHy⟩ = (c/4π)√(κ/μ)E²⟨f²⟩, where the angle brackets stand for a time average. This can be written v[κE²⟨f²⟩/4π], using Maxwell's relation. This is the propagation velocity v times twice the average electric energy density in the wave. From the relation between E and H it is easy to show that the electric and magnetic energy densities are equal in the wave, so the power is the propagation velocity times the sum of the electric and magnetic energy densities, a very reasonable result.
In many cases, f(t) is sinusoidal, so ⟨f²⟩ = 1/2 if it is taken with unit amplitude. Then we find N = (c/8π)E² erg/cm²-s. It is a remarkable property of electromagnetic waves that they can carry energy to unlimited distances from the source. In these few paragraphs, we have discovered a lot about electromagnetic waves, using only Maxwell's equations.
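As a numerical illustration of these time averages (a sketch I am adding, with an arbitrarily chosen field amplitude), the following confirms that ⟨sin²⟩ = 1/2 and that the average flux of a free-space wave is (c/8π)E².

```python
# Average Poynting flux of a sinusoidal plane wave in free space (Gaussian units).
# E(t) = E0*f(t), H = E, N(t) = (c/4pi)*E(t)*H(t); check <N> = c*E0^2/(8*pi).
import numpy as np

c = 2.998e10          # cm/s
E0 = 0.01             # statvolt/cm, arbitrary illustrative amplitude
t = np.linspace(0.0, 1.0, 100000, endpoint=False)   # one full period, scaled
f = np.sin(2*np.pi*t)                                # unit-amplitude sinusoid
N = (c/(4*np.pi)) * (E0*f)**2                        # instantaneous flux, erg/cm^2/s

print(np.mean(f**2))                      # ~0.5
print(np.mean(N), c*E0**2/(8*np.pi))      # the two agree
```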
The question of how an electromagnetic wave travelling in vacuum at speed c becomes a wave in a transparent medium travelling at speed c/n is an involved and interesting one, first elucidated by Ewald (1912) and Oseen (1915), as is the question of what happens when an electromagnetic wave suddenly arrives at a transparent medium, a subject studied by Sommerfeld (1914) and Brillouin (1914). These matters are discussed lucidly in Jackson. Briefly stated, the incoming wave causes dipoles to vibrate at the surface of the dielectric, which creates a wave travelling at speed c that interferes with the original wave and extinguishes it in the medium, and also a new wave travelling at speed c/n that propagates in the dielectric as described above. In the other problem, the first precursor of a wave travels at speed c, and is very weak. By the time the new wave arrives at speed c/n, the amplitude has increased to the normal value. Even in space, we do not receive the same wave that was emitted by a distant star, but a new wave created in the interstellar medium. "There are more things in heaven and earth, Horatio, than are dreamed of in your philosophy."
Sinusoidal Waves
Sinusoidal waves are most conveniently described as the real part of a complex exponential, f(t) = Re[A e^jωt] = Re[a e^jφ e^jωt], where A is the complex amplitude, including both magnitude and phase. The angular frequency ω is 2π times the frequency f or ν. Engineers use j instead of i for the imaginary unit to avoid confusion with the current i. Then, f(t ± z/v) = A e^jω(t ± z/v) = A e^j(ωt ± kz), where k is called the wave vector, with magnitude k = ω/v = 2π/λ, where λ is the wavelength. This function represents a plane sinusoidal wave traveling in the -z or the +z direction, as the sign is + or -. All this is probably very familiar to you, and is absolutely basic.
For such waves, differentiation with respect to z or t can be carried out at once, with the result that ∂/∂t → jω, and grad → -jk for a wave varying as e^j(ωt - k·r). This turns Maxwell's equations into algebraic equations that can be solved immediately. We have at once that k·B = 0 and k·D = 0, which shows that B and D are normal to the direction of propagation, or the waves are transverse. The curl equations express E in terms of H, and show that they are perpendicular to each other and in phase in any plane wave. Using k x (k x E) = k(k·E) - k²E, we can eliminate H and find k²E = (ω/c)²κμE, the transformed wave equation, and so k = ω/v. This dispersion relation, giving the wavelength in terms of the frequency, shows that our assumed solution does actually satisfy the wave equation.
When using complex representations, one thing we often need is the average value of the product of the real parts of two signals. If a(t) = Re[A e^jωt] and b(t) = Re[B e^jωt], then ⟨a(t)b(t)⟩ = Re[AB*/2], where B* is the complex conjugate of B. If you are not familiar with this useful expression, write everything in real and imaginary parts and verify that it is correct. Sometimes the imaginary part of AB*/2 may have an interesting interpretation as well. One example of the use of this formula is with the power in an electric circuit. The instantaneous power is i(t)v(t). If i(t) and v(t) are represented by the complex amplitudes I and V, then the average power is Re[VI*/2] = Re[V*I/2]. The imaginary part represents reactive volt-amperes (vars), energy that surges back and forth without representing a power dissipation. It's usually the energy in electric and magnetic fields in this case. The sign of this imaginary part is opposite in the two formulas, so the convention must be specified if you use it. The complex Poynting vector is (c/8π)[E x H*], where the real part is ohmic dissipation, and the imaginary part is the difference between the magnetic and electric field energies (zero for a plane wave in space).
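Here is a quick numerical confirmation of that averaging formula (a sketch with arbitrary complex amplitudes of my own choosing):

```python
# <Re(A e^{jwt}) Re(B e^{jwt})> over one period equals Re(A*conj(B))/2.
import numpy as np

A = 2.0 * np.exp(1j*0.7)      # arbitrary complex amplitudes
B = 0.5 * np.exp(-1j*1.2)
wt = np.linspace(0.0, 2*np.pi, 200000, endpoint=False)

a = np.real(A*np.exp(1j*wt))
b = np.real(B*np.exp(1j*wt))

print(np.mean(a*b))                 # time average of the product
print(np.real(A*np.conj(B))/2)      # the formula; the two agree
```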
Let's consider a cylindrical resistor of length L and radius a. If a current I flows through it, establishing a voltage drop V across it, the electric field at the surface is E = V/L and the magnetic field is H = 2I/ca (from Ampère's law). The Poynting vector is normal to the surface, points inward (check this!), and has a magnitude N = (c/4π)(2VI/caL) = VI/2πaL. Integrating over the surface of area A = 2πaL, we find the total power P = VI. Note that this works if I and V are alternating current phasors, if the complex Poynting vector is used. This power enters the resistor from the field and is dissipated as heat. However, we also know that the heat is produced by inelastic collisions of the electrons with the lattice, the electrons gaining energy when accelerated by the field. Contemplate this for a while, and marvel at the strangeness of physics. The Poynting vector may not give the mechanism of energy transfer, but when integrated over a closed surface gives the correct answer in terms of the field vectors. Energy is certainly stored in a wave propagating in space, but is it stored in a static field? It acts very much as if it were, but the whole picture must be considered.
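A minimal numerical version of this bookkeeping (with made-up values for the resistor) shows the surface integral of the Poynting vector reproducing VI:

```python
# Poynting flux into a cylindrical resistor (Gaussian units): N * area = V * I.
import math

c = 2.998e10                 # cm/s
L, a = 10.0, 0.2             # length and radius, cm (illustrative values)
V = 0.003                    # statvolts across the resistor (illustrative)
I = 2.0e8                    # current, esu/s (illustrative)

E = V / L                    # tangential electric field at the surface
H = 2*I / (c*a)              # magnetic field at the surface, from Ampere's law
N = (c/(4*math.pi)) * E * H  # inward Poynting flux, erg/cm^2/s

print(N * 2*math.pi*a*L)     # power entering through the surface
print(V * I)                 # equals the dissipated power V*I, erg/s
```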
Waves in a Conducting Medium
The curl equation for H has two source terms, the conduction current and the displacement current. So far we have only considered the displacement current, which makes travelling electromagnetic waves possible. It is easy to include the conduction current if we assume a plane sinusoidal wave. Then Maxwell's equations become algebraic equations, and we can find a wave equation just as we did above, but now there are new terms. The dispersion relation becomes k² = ω²κμ/c² - j(4πμσω/c²). What happens is that k becomes complex, k = k' - jk", so that when we put this in the wave expression, we find e^(-k"z) e^j(ωt - k'z), so that k" represents an exponential absorption of the wave, while k' gives the usual wavelength.
The dispersion relation yields (k')² - (k")² = ω²κμ/c² and k'k" = 2πμσω/c² = 1/δ² when we equate real and imaginary parts. The new parameter δ, with the dimensions of cm, is called the skin depth, a name we shall soon justify. Let λ/2π = v/ω, and let r = 4πσ/κω = 2[(λ/2π)/δ]², the ratio of the conduction current to the displacement current. Then (k')² = (2π/λ)²[√(1 + r²) + 1]/2 and (k")² = (2π/λ)²[√(1 + r²) - 1]/2. Note that λ is not quite the wavelength, except when r = 0. Everything depends on the ratio r, which measures how large λ/2π is compared with the skin depth. If r is small, then λ is the actual wavelength, and k" = (2π/λ)(r/2). This is the case of weak damping.
On the other hand, if r is large, then k' = k" = 1/δ. This means the amplitude absorption is e^(-z/δ), so the wave will be absorbed almost completely in the distance of a few skin depths. In this case, the conduction current completely dominates the total current, and the fields diffuse into the material, like heat. This is the case with metals at low and moderate frequencies. Multiply the conductivity in S/m by 9 × 10^9 to find the conductivity in esu. The conductivity of copper is 5.80 × 10^7 S/m, or 5.22 × 10^17 esu. This makes δ = 6.61 ν^(-1/2) cm. At 60 Hz, the skin depth is 0.85 cm. At 1 MHz, it is 66 μm.
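The numbers quoted here are easy to reproduce; the following sketch uses the conversion factor 9 × 10^9 from S/m to esu, as stated above, and computes the skin depth of copper at a few frequencies.

```python
# Skin depth delta = c/sqrt(2*pi*mu*sigma*omega) in Gaussian units.
import math

c = 2.998e10                       # cm/s
sigma_si = 5.80e7                  # S/m, copper
sigma = sigma_si * 9.0e9           # conductivity in esu (s^-1)
mu = 1.0

def skin_depth(freq_hz):
    omega = 2*math.pi*freq_hz
    return c / math.sqrt(2*math.pi*mu*sigma*omega)   # cm

for f in (60.0, 1.0e6, 1.0e10):
    print(f, skin_depth(f))        # ~0.85 cm, ~66 um, ~0.66 um
```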
Reflection of Electromagnetic Waves
When a plane wave strikes a plane interface between media, the wave in general is partly transmitted, partly reflected. Such a problem is solved by assuming an incident and reflected wave in the first medium, and a transmitted wave only in the second medium. The boundary conditions on the fields then determine the relative amplitudes of the three waves. To satisfy the boundary conditions, the waves must satisfy the laws of reflection and refraction as familiar from optics. If they do not, the boundary conditions cannot be satisfied. The angle of incidence is the angle between the normal to the interface and the wave vector of the incident wave. The angle of reflection is the similar angle for the reflected wave. The directions of propagation must lie in the same plane, the plane of incidence, and the angle of incidence equals the angle of reflection. The wave vector of the transmitted, or refracted, wave also lies in the plane of incidence, and the ratio of the sine of the angle of incidence to the sine of the angle of refraction is the same as the ratio of the propagation velocities. In terms of the index of refraction, n sin i = n' sin r, where i and r are the angles in media with indices n and n', respectively. All this should be familiar from optics, but, of course, follows from Maxwell's equations as well.
The general problem for two nonabsorbing media gives Fresnel's equations. We will explain the method through a simpler problem, the normal reflection of electromagnetic waves from a metal, where we neglect the displacement current, an approximation valid for frequencies below those of the ultraviolet. The two possible polarizations are equivalent, so we need consider only one, in which E is along the x-axis, and H along the y-axis. Let the incident wave be E e^j(ωt + kz), the reflected wave E' e^j(ωt - kz), and the transmitted wave E" e^j[ωt + (1 + j)z/δ], where k' = k" = 1/δ in the metal. The corresponding magnetic fields are -(c/ω)kE e^j(ωt + kz), (c/ω)kE' e^j(ωt - kz), and -(c/ω)[(1 + j)/δ]E" e^j[ωt + (1 + j)z/δ]. We have set μ = 1.
At the plane interface, the tangential components of E and H must be continuous. The fields involved are found by letting z = 0 in the above expressions. Factoring out common factors, we find E + E' = E" (electric fields) and k(E - E') = [(1 + j)/δ]E" (magnetic fields). These two equations are sufficient to determine the ratios E'/E and E"/E. We find E'/E = [(kδ - 1) - j]/[(kδ + 1) + j] and E"/E = 2kδ/[(kδ + 1) + j]. As kδ becomes vanishingly small (skin depth much less than a wavelength) we see that E'/E → -1 (total reflection, with reversal of the electric field) and E"/E → 0, as they should.
The intensity reflection coefficient R = |E'/E|² = [(kδ)² - 2kδ + 2]/[(kδ)² + 2kδ + 2]. Since kδ is quite small for most metals, the approximate result R = 1 - 2kδ can be used. For copper, the skin depth δ for 500 nm light is about 2.7 nm, so 2kδ = 0.068, and R = 1 - 0.068 = 0.932, or 93.2%. This presumes that the conductivity is still the static conductivity, which must break down at a frequency not much higher, since copper appears reddish, which implies that the bluer light penetrates the metal and is not reflected. There is actually rough agreement between this formula and experiment if the frequency is not too high, as in the infrared. For accurate results the frequency dependence of the conductivity must be taken into account.
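The estimate can be checked directly; this sketch compares the exact expression for R with the approximation 1 - 2kδ, using the 2.7 nm skin depth quoted above (itself only a rough figure at optical frequencies).

```python
# Normal-incidence reflectance of a metal, R = |E'/E|^2, with kd = k*delta.
import math

lam = 500e-7            # wavelength, cm (500 nm)
delta = 2.7e-7          # skin depth, cm (value quoted in the text)
kd = 2*math.pi*delta/lam

r_exact = (kd**2 - 2*kd + 2) / (kd**2 + 2*kd + 2)
r_approx = 1 - 2*kd

print(kd, r_exact, r_approx)   # kd ~ 0.034, R ~ 0.93 either way
```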
The Skin Effect
Here we consider the classical skin effect that occurs for alternating currents in wires when the radius of the wire is comparable to or larger than the skin depth as we have defined it above. Since the skin depth in copper is 0.85 cm even at 60 Hz, we see that its effects can be felt even at power frequencies for large conductors, such as those used in transmission lines. At radio frequencies, current is restricted to the surface of conductors, and all the copper in the middle is wasted. This analysis also contains some interesting mathematics, so it is worth looking at briefly. We start from Maxwell's equations, in cylindrical coordinates, and neglect the displacement current. The current and the electric field have only z-components, while the magnetic field has only φ-components.
Harmonic variation at angular frequency ω is assumed. Maxwell's equations are curl H = (4πσ/c)E, and curl E = -j(ω/c)H. In the usual way, we find that the z-components of E and i (current density), and the φ-component of H all satisfy the same wave equation, which for i(r) is div grad i(r) = j(4πσω/c²) i = -α²i(r), so that α² = -j(4πσω/c²) = -2j/δ². If you write div grad in cylindrical coordinates, you find (1/r)d(r di/dr)/dr + α²i = 0. This is just Bessel's equation of zero order, and the solution finite at r = 0 is i(r) = CJ0(αr). If i(0) is the current density at the surface of the wire, then i(r) = i(0) J0(αr)/J0(αa), where a is the radius of the wire. Finding the expression for the current is just this easy, but now the fun begins.
Now α = √(-j)·√2/δ, and √(-j) = e^(-jπ/4) = (1 - j)/√2. Bessel functions of this kind of complex argument are not simple. However, J0(j^(-1/2)x) = ber(x) + j bei(x), and the ber() and bei() functions and their derivatives are tabulated. Dwight, in fact, gives a useful table for x = 0 to 10. For greater values of x, an approximation valid when the skin depth is a small fraction of the wire diameter can be used instead of the exact expressions. Definitions for these functions may differ slightly.
The first thing we need to do is to find the total current I in the wire in terms of i(0). Then, if Z is the impedance of a length L of the wire, we have i(0) = σE = σZI/L and so can find the impedance Z. I is simply the integral of i(r) over the cross-section of the wire, I = 2π∫(0,a) i(r)r dr. Now, ∫ rJ0(αr) dr = rJ1(αr)/α, so the integral is easy to do. Since the tables give the derivatives of ber() and bei(), it is best to express J1 as -J0' for numerical evaluation.
The result is I = 2πi(0)aJ1(αa)/[αJ0(αa)]. Substituting the value of i(0) from above, and cancelling I, we find Z = LαJ0(αa)/[2πσaJ1(αa)]. The value of this expression is complex, so we have not only a resistance but an inductive reactance as well. In terms of the dc resistance of the wire, R0 = L/σπa², we have Z/R0 = (αa/2)J0(αa)/J1(αa) = (jq/2)[ber(q) + j bei(q)]/[ber'(q) + j bei'(q)], where q = √2(a/δ). For q = 5.0, the tabulated Kelvin functions (as in Table 1050 of Dwight) give Z/R0 = (j)(2.5)(-6.230 + j0.116)/(-3.845 - j4.354) = 2.68∠40.4° = 2.04 + j1.74, which agrees with the graph in Ramo.
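With SciPy's Kelvin functions the same evaluation can be carried out to full precision; this is a sketch of the closed form just quoted (with q = √2·a/δ), and it reproduces the 2.04 + j1.74 read from Ramo's graph.

```python
# Skin-effect impedance ratio of a round wire:
# Z/R0 = (j*q/2)*(ber(q) + j*bei(q))/(ber'(q) + j*bei'(q)), q = sqrt(2)*a/delta.
from scipy.special import ber, bei, berp, beip

def z_over_r0(q):
    num = ber(q) + 1j*bei(q)
    den = berp(q) + 1j*beip(q)
    return 0.5j * q * num / den

print(z_over_r0(5.0))       # ~ (2.04 + 1.74j)
print(abs(z_over_r0(5.0)))  # ~ 2.68
```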
The Parallel-Wire Transmission Line
Let's study the parallel-wire transmission line, with the geometry and conventions as shown in the figure at the right. There are two cylindrical conductors of diameter 2b, whose axes are parallel and separated by a distance 2d. The potential between the wires is v, positive when the right-hand conductor is positive. The potential is zero on the plane x = 0. The currents i in the conductors are equal and opposite in direction, positive when the current in the right-hand conductor is out of the page. Coordinates x and y are in the transverse plane, while z is out of the page, defining a right-handed triple of unit vectors.
This is not the easiest transmission line to study--the coaxial line is simpler. However, it contains so much good stuff, and is also of practical use, so it is well worth study. Parallel-wire lines are used as tuning elements, and for connecting TV aerials with receivers ("twinlead"). An AC line cord is also a parallel-wire transmission line. They can also be used for measurements on electromagnetic waves (Lecher wires), especially in teaching laboratory experiments. The waves we consider here are guided waves, attached to their guides, the wires of the transmission line, and not propagating in free space. The waveguide we consider here has two conductors. There are also waveguides with only one conductor, familiar from radar engineering, but these will not propagate waves of an arbitrarily low frequency, and exhibit a cutoff frequency below which any waves are strongly attenuated.
We postulate that the fields E(x,y,z) and H(x,y,z) lie in the xy-plane; that is, they are transverse. We will be able to show that the fields satisfy Maxwell's equations, and so specify a transmission mode. This mode is called a TEM mode (transverse electromagnetic), and exists for all frequencies, down to zero or DC. We will search for wave solutions that depend on t and z through t ± z/v, where v is the wave velocity, which will be equal to the speed of electromagnetic waves in the medium surrounding the conductors. Although μ is usually unity, we shall carry it in the equations for completeness.
In the problem we shall study, the current is restricted to the surface of the conductors, as if they were thin-walled tubes. This is effectively the case in practical uses of parallel-wire lines at frequencies above 1 MHz, because of the skin effect. Even at lower frequencies, our analysis will give closely approximate results with a few corrections. An exact analysis in this case is exceedingly difficult unless numerical methods are used, and then it is difficult to see what is going on.
Maxwell's equations show that for zero frequency (DC) the z-dependence of the fields vanishes (as, of course, does the time dependence). The electric and magnetic fields become uncoupled, and can be treated independently. This is what is usually called a two-dimensional problem, but it is really still a three-dimensional problem where the fields do not depend on the third coordinate. Special methods, and special problems, characterize two-dimensional field problems. The most important difficulty is that the unlimited extension in the z-direction is distinctly unphysical, and causes difficulties when we integrate over all space. First, let's consider the electric problem.
The electric field is determined by the field equations curl E = 0 and div D = 4πρ. Here, D = κE, where κ is constant everywhere, so we really have only one vector, which we shall take to be E. The curl equation shows that the tangential component of E is zero at the surface of a conductor (since the field inside must be zero), and the divergence equation shows that the normal component defines a surface charge density σ' = κEn/4π. In terms of the potential, σ' = -(κ/4π)(∂φ/∂n). The partial derivative here is the normal derivative to the surface. The curl equation implies that E can be derived from a scalar potential φ, E = - grad φ. Throughout the medium surrounding the conductors, ρ = 0, so div grad φ = 0, which is Laplace's equation, with the boundary condition that φ = constant over the surface of a conductor. A direct assault on Laplace's equation is fruitless, so we must trust to ingenuity and luck.
Let's try idealizing a charged conductor to a line charge with the same linear charge density λ, charge per unit length or esu/cm. If this line charge is alone in space, the symmetry means that we can find the field by Gauss's Law using a pillbox-shaped volume of radius r and unit length. This gives Er = 2λ/κr, so by integration with respect to r the potential is φ(r) = -(2λ/κ)log r. (The log is the natural logarithm, sometimes written ln.) This is illustrated in the figure at the left, where the equipotentials, cylinders concentric with the line charge, and the lines of force, radial lines, are shown. The electric field E is radial, outward from a positive line charge. The logarithmic potential is infinite not only at the line charge, r → 0, but also at infinity, so we cannot make it vanish there, as we can with the potential of a point charge, that only has a pole at r = 0. The logarithmic potential is zero at r = 1, but its absolute value has no significance, and we could add any constant to it that we like without changing the field.
The magnetic problem can be cast into a very similar form. We have curl H = (4π/c)j, where j is the vector current density, esu/cm²-s, and div B = 0. Since B = μH, where μ is a constant everywhere, we have only one field vector, as in the electric case, which we choose as H. The divergence equation permits us to derive B from a vector potential A, through B = curl A. A must satisfy div A = 0. In this problem, A has only a z-component, which we shall denote simply by A. For a sufficiently high frequency, the magnetic field at the surface of a good conductor is tangential to the surface, and we shall apply this boundary condition here, though it really is not the case for zero frequency when magnetic fields can penetrate into the conductor. From the curl equation, we find that the surface current density ι in esu/cm-s is given by ι = (c/4π)Ht in terms of the tangential component of H. In terms of the vector potential, ι = (c/4πμ)(∂A/∂n). Except for the constants, this is exactly the same way the surface charge density is related to the potential.
The conductor is idealized to a line current of i esu/s. Then H can be found by integrating it around a circle of radius r, and using the curl equation. We find Hθ = 2i/cr. Since Hθ = -(1/μ)∂A/∂r, we integrate with respect to r to find A = -(2μi/c)log r, as shown in the diagram at the right. Now the circles A = const give the direction of H, and the radial lines are orthogonal to them. Comparing with the electric problem, we see that A/φ = κμi/cλ (λ is not the wavelength, and i is not the imaginary unit). That is, the two potentials are proportional, and have the same dependence on x and y. This relation will also hold for any superposition of these potentials.
In terms of the potentials, the rectangular field components are: Ex = -∂φ/∂x, Ey = -∂φ/∂y, Bx = ∂A/∂y, By = -∂A/∂x. Since A and φ are proportional, these relations also show that E and H are orthogonal. This, also, will hold for any superposition of the two potentials.
Now we consider the problem of two equal and opposite line charges ± λ a distance 2a apart, and two equal and opposite line currents ± i superimposed on the line charges. We can find the potentials for this problem by adding the potentials for the separate lines, so that φ(P) = -(2λ/κ)log (r/r') and A(P) = -(2μi/c)log (r/r'), where r is the distance from P to the right-hand line charge, and r' the distance to the left-hand line charge. These two potentials will be proportional, with the same proportionality constant found above.
The ratio r/r' will be denoted by K^(1/2), so that K = [(x - a)² + y²]/[(x + a)² + y²]. The equipotentials are the curves K = constant, which are found by expressing this condition as [x - a(1 + K)/(1 - K)]² + y² = 4Ka²/(1 - K)². This represents a circular cylinder with its center on the x-axis. What luck! The equipotential K' can coincide with one of the conductors, so the boundary conditions are satisfied here. Then, 1/K' gives a cylinder of the same radius, but an equal distance on the other side of the y-axis, which will coincide with the other conductor. Exactly the same holds for the cylinders A = constant. The potential problem is solved.
In terms of the dimensions of the line, d = a(1 + K')/(1 - K') and b = 2a√K'/(1 - K'). First, we note that d² - b² = a², which locates the line charges and currents. Then we can solve for K', finding √K' = d/b - (d²/b² - 1)^(1/2). From the values of the potentials on the surfaces of the conductors, we find v = -(2λ/κ)log K' = (4λ/κ) log(1/√K'). Just as the difference in the potentials gave the potential between the conductors, the difference in the values of A gives the total flux passing between them. If this total flux is Φ, then Φ = (4μi/c) log(1/√K'). These equations relate the line charges and currents to the potential difference and the flux linking the line. Equipotentials and lines of force are shown in the diagram at the left.
The capacitance C per unit length of the line, in cm/cm, is the ratio λ/v, which is C = κ/{4 log[d/b + (d²/b² - 1)^(1/2)]}. The inductance L per unit length, in s²/cm, is the ratio Φ/ci, and comes out as L = (4μ/c²) log[d/b + (d²/b² - 1)^(1/2)]. The product LC = μκ/c², so 1/√LC = v, the phase velocity of electromagnetic waves in the medium. This is a general result for any TEM mode on a transmission line. The ratio Z = √(L/C) = (4/c)√(μ/κ) log[d/b + (d²/b² - 1)^(1/2)] also has an important meaning, that we shall discover later. It is called the characteristic impedance of the transmission line.
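Converted to practical units (one Gaussian unit of impedance, a second per centimetre, is about 9 × 10^11 Ω, so the factor 4/c becomes 120 Ω), the characteristic impedance of an air line is Z ≈ 120·cosh⁻¹(d/b) Ω. The sketch below evaluates this; the spacing ratio chosen for the first example is my own illustrative pick, roughly what gives 300-ohm twin-lead.

```python
# Characteristic impedance of a parallel-wire line with air dielectric:
# Z = (4/c)*acosh(d/b) in Gaussian units (s/cm); ~8.99e11 ohm per (s/cm)
# gives the familiar Z ~ 120*acosh(d/b) ohms.
import math

c = 2.998e10                    # cm/s
ohm_per_gaussian = 8.99e11      # conversion of Gaussian impedance to ohms

def z0_ohms(d_over_b):
    z_gauss = (4.0/c) * math.acosh(d_over_b)    # s/cm
    return z_gauss * ohm_per_gaussian

print(z0_ohms(6.13))    # ~300 ohms, roughly the ratio used for TV twin-lead
print(z0_ohms(2.0))     # ~158 ohms for a closer-spaced pair
```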
There is another more elegant way of finding the fields that uses the theory of complex variables. An analytic function (one whose derivative exists everywhere in a region) of z = x + jy has real and imaginary parts that obey the Cauchy-Riemann conditions and which are harmonic functions (solutions of Laplace's equation) in the xy-plane. If f(z) = f(x + jy) = u(x,y) + jv(x,y), then div grad u = 0 and div grad v = 0, and ∂u/∂x = ∂v/∂y, ∂u/∂y = -∂v/∂x. Compare with the definitions of the rectangular components of E and B, and the resemblance will be striking. The function u(x,y) can serve as the potential or the vector potential, and the other function will give an orthogonal set of lines v(x,y) = const. Or, if u is the electric potential, then v can serve as a scalar potential for the magnetic field. This is a different way of finding the magnetic field that is equally applicable to this problem, since in the dielectric curl H = 0, so that H = - grad ψ, where ψ is a magnetic scalar potential. The only problem is finding a suitable analytic function.
The function f(z) = log [(z - a)/(z + a)] will do the job. It is analytic (except at the points z = ±a) because it is an elementary function of z alone, not involving the complex conjugate or anything else specially complex. It arrived in a flash of brilliance; there is no easy way to find it from the statement of the problem. What people generally do is think up all possible analytic functions, and then see what problems they solve. Substitute z = x + jy, and separate real and imaginary parts. This is most easily done if (z - a)/(z + a) is expressed in polar form, and then the logarithm is taken. The real part is just the function we manufactured above by superimposing line charges, and the imaginary part is v(x,y) = tan⁻¹[y/(x - a)] - tan⁻¹[y/(x + a)]. By defining the two arctangents as the angles A and B, and then using the formula for the tangent of K = A - B (not the same K as before), we find the curves v = const to be x² + (y - a/tan K)² = (a/sin K)². These are again circles with centers on the y-axis, passing through the points (a,0) and (-a,0) which locate the line charges. It is something of a miracle that both the equipotentials and the lines of force are orthogonal systems of circles. Not only does it allow us to solve the problem, it also makes drawing the field rather easy.
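As a small check that this particular function really does what is claimed (a sketch I am adding; the sample point is arbitrary), one can evaluate u and v from the complex logarithm and verify the Cauchy-Riemann conditions by finite differences:

```python
# f(z) = log((z - a)/(z + a)): u = Re f is the two-line-charge potential (up to a
# constant factor) and v = Im f is its conjugate function. Check the
# Cauchy-Riemann conditions du/dx = dv/dy and du/dy = -dv/dx numerically.
import cmath

a = 1.0
f = lambda x, y: cmath.log((complex(x, y) - a)/(complex(x, y) + a))

x0, y0, h = 0.7, 1.3, 1e-6
du_dx = (f(x0+h, y0).real - f(x0-h, y0).real)/(2*h)
du_dy = (f(x0, y0+h).real - f(x0, y0-h).real)/(2*h)
dv_dx = (f(x0+h, y0).imag - f(x0-h, y0).imag)/(2*h)
dv_dy = (f(x0, y0+h).imag - f(x0, y0-h).imag)/(2*h)

print(du_dx, dv_dy)    # equal
print(du_dy, -dv_dx)   # equal
```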
At last we are ready to consider actual waves. The potentials and field components now become functions of z and t, and must satisfy Maxwell's equations and the boundary conditions. Write down Maxwell's equations for the transverse fields only. We find eight scalar equations, which fall into a group of four involving the x and y components of the curl equations, and a group of four involving the z components of the curl equations, and the divergence equations, which equate things to zero. These last four equations separate into electric and magnetic sets that are precisely what we used for the static fields. These fields will serve in the general case. This is remarkable, and should not be considered as trivial. There could be a transverse effect for waves propagating in the z-direction, but there is not.
The first set of four equations connect the electric and magnetic fields, making them dependent on each other. In the static case, the fields could have any relative constant values, determined by the values of λ and i, which were not restricted. Now the potential v between the conductors, and the current i in them, will be related. It happens that in free space, this makes the electric and magnetic fields equal to and in phase with each other, as in an unguided plane wave, so the waves on the wires seem to be an electromagnetic wave whose intensity varies in x,y but propagates down the wire at the speed of light, as if sliding on the wires. This is the reason that the propagation has no effect on the transverse fields. The Poynting vector (c/8π)[E x H*] points in the z-direction, and suggests that the energy flux is purely in this direction. The fields are strong only in the vicinity of the conductors, it should be remembered. We now make these consequences more precise.
At zero frequency, when the electric and magnetic fields are independent, we still have a nonzero Poynting vector, but there is no apparent energy flow in an ideal line. If the line is finite and terminated by a resistor R, the energy flow into a box containing the resistor gives the energy dissipated in the resistance (as we discussed above). The Poynting vector must be interpreted and used with care.
The electric and magnetic fields in the wave are connected by the x and y components of the curl equations, curl E = -(1/c)∂B/∂t, curl H = (1/c)∂D/∂t. The fields are found from the potentials, which we now write φ = φ'λ and A = A'i. The primed functions depend only on x,y, while λ = λ(z,t) and i = i(z,t) contain all the z and t dependence. The ratio A'/φ' = κμ/c, from what has been said above, and the same ratio exists between their partial derivatives. From ∂Hy/∂z = -(κ/c)∂Ex/∂t, using the potentials, we find ∂λ/∂t = -∂i/∂z, cancelling a factor (κμ/c) on both sides of the equation. From -∂Ey/∂z = -(1/c)∂Bx/∂t, we find (κμ/c²)∂i/∂t = -∂λ/∂z. These are coupled first-order differential equations for the functions i(z,t) and λ(z,t) that determine the magnitude of the fields. If we want the potential v between the conductors instead of the charge, then λ = Cv, where C is the capacitance per unit length.
These functions each satisfy the wave equation, ∂²i/∂z² = (1/u²)∂²i/∂t² for i, with a similar equation for λ, as can be seen by eliminating first λ, then i, between the two first-order equations. The propagation velocity is u = c/√κμ, as for unguided plane waves in the medium. Hence, λ(z,t) = f(t - z/u) + g(t + z/u), where f and g are arbitrary functions. Then the corresponding i(z,t) = uf(t - z/u) - ug(t + z/u). Once λ or i has been chosen, the other is automatically determined. Substitute in the first-order equations to verify that this is true.
Now let's consider a sinusoidal disturbance of angular frequency ω. Suppose that v(z,t) = A e^jω(t - z/u) + B e^jω(t + z/u). This represents a wave of amplitude A travelling in the +z direction and a wave of amplitude B travelling in the -z direction. Then, the current is (1/Cu)i(z,t) = A e^jω(t - z/u) - B e^jω(t + z/u). The factor 1/Cu = (1/C)√LC = √(L/C) = Z, which has already been defined as the wave impedance. It is seen to connect the magnitudes of the voltage and current in a sinusoidal wave on the transmission line.
Let us now choose A = a e^j0 and B = b e^jβ, and put 2ωz/u = z/(λ/4π) = ζ, where now λ is the wavelength 2πu/ω. Then we have v = e^jω(t - z/u)[a + b e^j(β + ζ)] and Zi = e^jω(t - z/u)[a - b e^j(β + ζ)]. Except for the common factor that varies sinusoidally with time at any particular position, these quantities can be represented simply in the complex plane, as shown in the diagram. Lay out AC = a and make the radius of the circle BC = b. Then the voltage V is given by the vector AP, and the current I by the vector AQ. As we proceed along the line, ζ changes, and with it the angle β + ζ. The magnitudes of V and I go through maxima and minima, and the phase angle θ = angle PAQ between them changes.
The power is the same at every position. It is given by (AP)(AQ)cos θ. AD = AP cos θ (the angle in a semicircle is a right angle), so the power is the product AD x AQ = AB x AE = constant. (The product of the lengths of a secant from a given external point to the two points on the circumference is constant.) We get a little help from geometry here to prove the assertion, showing how superior a little thought is to dogged grinding.
The ratio of B to A is determined by conditions at the termination of the line. If we send a wave down an infinite line, nothing ever comes back, so B = 0. The diagram reduces to the line AC. If we terminate the line by a resistance R = Z, then B = 0 as well, and the line appears to be infinite from the sending end. If the line is abruptly terminated by an open circuit or a reactance, then |B| = |A| and the wave is reflected with the same amplitude. In this case, the circle passes through A, and we have standing waves from zero amplitude to a maximum. All these things are familiar to those who have worked with transmission lines.
A real transmission line, especially a long one, will exhibit a series resistance R per unit length, and a leakage conductance G per unit length. These introduce an extra series potential difference iR, and a current shunt vG, per unit length, which will modify the equations for v and i as functions of z and t. The equations can be written down without explicit considerations of the fields at all, since the parameters v and i stand in for them here. This is an example of the power of circuit theory, one of the best tools of electrical engineering. R and G may include other effects than a physical resistance or conductance, such as dielectric losses.
Take a short length of line dz, as shown in the figure at the left, and write Kirchhoff's voltage law and current law for it. The result is C(∂v/∂t) + Gv + ∂i/∂z = 0 and L(∂i/∂t) + Ri + ∂v/∂z = 0, which reduce to our previous equations if R = G = 0. If either v or i is eliminated between the two equations, the resulting equation is called the Telegrapher's equation, which first received attention in the theory of the transatlantic cable in the mid-19th century, notably by William Thomson, later Lord Kelvin. If a sinusoidal signal e^j(ωt - kz) is substituted in it, with a complex k = k' - jk", as we did for a plane wave in an absorbing medium, we can obtain the phase constant k' and the absorption coefficient k" by equating real and imaginary parts. Dreary algebra yields k'² = (1/2)(S + ω²LC - GR), k"² = (1/2)(S - ω²LC + GR), where S² = (ω²LC - GR)² + ω²(CR + LG)². It is really difficult to see anything in this complication, but if R and G are small, then k" ≈ (R/Z + GZ)/2, where Z is the wave impedance. Also, k'² = ω²LC + (R/Z - ZG)²/4. This makes things much clearer! It shows immediately that if R/Z = GZ, or Z = √(R/G), then the attenuation is a minimum at k" = R/2Z, and the wave velocity is constant, very desirable conditions indeed.
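To see how much the small-R, small-G approximation simplifies things, here is a sketch comparing it with the exact expressions; the line constants are made-up illustrative values in an arbitrary consistent set of units, since only the comparison matters.

```python
# Telegrapher's equation: exact k' (phase) and k" (attenuation) per unit length,
# versus the small-R, small-G approximation. L, C, R, G are per unit length in
# any one consistent system of units (illustrative values only).
import math

Lc, Cc, R, G = 1.0e-3, 4.0e-3, 2.0e-2, 1.0e-4   # made-up line constants
w = 2*math.pi*10.0                               # angular frequency

S = math.sqrt((w*w*Lc*Cc - G*R)**2 + (w*(Cc*R + Lc*G))**2)
k1 = math.sqrt((S + w*w*Lc*Cc - G*R)/2)          # k', phase constant
k2 = math.sqrt((S - w*w*Lc*Cc + G*R)/2)          # k", attenuation constant

Z = math.sqrt(Lc/Cc)                             # wave impedance
k2_approx = (R/Z + G*Z)/2

print(k1, w*math.sqrt(Lc*Cc))    # k' is close to omega*sqrt(LC)
print(k2, k2_approx)             # attenuation and its approximation agree
```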
In the early days of the telephone, there were no electronic amplifiers, and what amplification there was came from the carbon microphones. By shouting, communication was possible over perhaps twenty miles or so, but long distance was out of the question. The circuits, wires on poles, could be made with relatively small R and G, but the large capacitance of a long circuit was unavoidable. (Any other kind of circuit was worse, incidentally). As on the transatlantic cable, the result was strong attenuation and distortion because of the variation of velocity with frequency. Michael Pupin found that if the series inductance L of the circuit was raised, so was Z, and the circuit was greatly improved. In fact, with a proper choice of L, the condition R/Z = GZ could be obtained, and the distance of communication increased by a factor of 4 or so, making limited long-distance communication possible. This was done by connecting loading coils in series with the line at regular intervals. Transcontinental long distance had to wait for vacuum-tube amplifiers, which finally arrived around 1915. The development of feedback amplifiers in the 1920's completely solved the problem, since the frequency response could be tailored to the line, and distortion was corrected at the same time the signal strength was restored.
As is usual, we have considered infinite lines and signals that have existed for eternity, but neither case is physical. The phenomena that arise when the extent of signal is limited in space, or a line limited in length and terminated in one way or another, or when a signal begins and stops, are very complicated and very interesting, and very seldom treated or mentioned. It is best to keep these considerations in mind, and to be prepared for startling results. There are approximate or empirical methods of handling real situations that give satisfactory results in practice, however.
Retarded Potentials and Radiation
The two most remarkable revelations of physics are that matter behaves differently on a small scale (quantum mechanics) and that there is electromagnetic radiation (electrodynamics and relativity). The idea that energy could be transmitted without the intervention of matter was a very difficult one for physicists to accept, and was never revealed to any religion. Even J. C. Maxwell (1831-1879) could not embrace it, as he states quite definitely at the end of his treatise (see References). In this work, he devotes 20 pages out of 999 to electromagnetic waves, and does not consider their sources. Heinrich Hertz (1857-1894) greatly clarified the subject after his 1886 experiments, and connected radiation with its sources. An outstanding result of Maxwell's and Hertz's studies was that electromagnetic interactions propagate in vacuum at a finite velocity equal to the ratio of electromagnetic to electrostatic absolute units, which we now know as the speed of light, c = 2.998 × 10^10 cm/s, and a fundamental constant in relativity.
The first step is to introduce the scalar and vector potentials φ and A into the full system of Maxwell's equations. For ease of writing, we shall let vectors be denoted by ordinary-weight type where there can be no confusion, and express time derivatives by a prime. That is, ∂f/∂t = f'. Also, we will assume free space where μ = κ = 1, and take the basic field vectors as E and H.
Then, we have curl H = (4π/c)j + (1/c)E', div H = 0, curl E = -(1/c)H', div E = 4πρ. Since div H = 0, we are permitted to put H = curl A. Then curl (E + A'/c) = 0, so we can put E + A'/c = -grad φ, or E = -grad φ - A'/c. Now div curl H = (4π/c)div j + (4π/c)ρ' = 0. Hence div j + ρ' = 0, which expresses the conservation of charge, showing that the equations are consistent with this. Further, -div (grad φ + A'/c) = 4πρ, and curl curl A = (4π/c)j - (1/c)(grad φ' + A"/c), or grad div A - div grad A = (4π/c)j - (1/c)grad φ' - A"/c². This sorts itself out to div grad A - A"/c² - grad(div A + φ'/c) = -(4π/c)j. We will have the wave equation with phase velocity c for A, provided that div A + φ'/c = 0. We are at liberty to redefine A so that this relation holds by making a suitable gauge transformation A → A + grad ψ (together with φ → φ - ψ'/c), which does not change the fields, as can be seen by substituting the new potentials in the formulas for the fields. When this is done, the first equation we found becomes div grad φ - φ"/c² = -4πρ.
Now we have wave equations for A and φ, showing that they propagate at speed c, plus the Lorentz condition div A + φ'/c = 0. Since the equations for the three rectangular components of A and φ are the same, we need only look at the solution for φ, which we can then apply to the components of A. In electrostatics, or if c → ∞, the solution for φ is ∫ ρ(u,v,w)dV/R, where R is the distance from the observation point P(x,y,z) to the source point (u,v,w), and dV = du dv dw. We now generalize this for finite c by φ = ∫ [ρ]dV/R, where [ρ] = ρ(u,v,w,t - R/c), called the retarded value of ρ at P. It is not difficult to show that this integral satisfies the inhomogeneous wave equation for φ. Separate the region of integration into two parts, one near the source, which we assume is small compared to a wavelength, and the other consisting of the rest of space. In the first integral, we can neglect the retardation, and the Laplacian of this part will just be -4πρ. In the second integral, the Laplacian with respect to the observation coordinates is the same as the Laplacian with respect to the source coordinates, so we can take it inside the integral. [ρ] certainly satisfies the wave equation div grad [ρ] - [ρ]"/c² = 0, so we can replace the Laplacian by the second time derivative. Now, adding the two contributions, we have div grad φ = -4πρ + φ"/c², so the retarded integral for φ does satisfy the inhomogeneous wave equation for φ. This analysis can be made much more rigorous with some more powerful mathematics, but this should remove suspicions.
We have just seen something remarkable. The dependence of the potentials on the sources is exactly the same as in the static case, only with the sources replaced by their retarded values. This shows clearly that influences in the electromagnetic field propagate at the finite speed of light, and not instantaneously. There is no "action at a distance" here! We should also note that the fields do not behave in this manner; they are not simply the retarded static fields. If they were, there would be no radiation. All this seems to give the potentials a more basic nature than the fields, and in advanced work we use the potentials only. Even the gauge transformations have a deep significance that is not reflected in classical electrodynamics, but requires quantum field theory for its elucidation.
Now we can consider the radiation from a small source successfully, largely following Hertz. This is not a general analysis, but tells us a lot about electromagnetic radiation anyway, and it is not hard to follow. We start by making a cunning choice for the vector potential A. We assume that it has only a z-component A = [p']/cr, where p'(t) is an arbitrary source function located at the origin, and is not a function of direction. By making this choice, we determine the nature of the source, which we shall investigate later. If we substitute this expression in the homogeneous wave equation (the only sources are near the origin, and we are considering the fields in free space), it is found to satisfy it. Note that ∂[p']/∂r = ∂{p'(t - r/c)}/∂r = -[p"]/c, where now r = R. This is differentiation with respect to the "Maxwell" r in the retarded function. Any other r is called a "Coulomb" r.
The scalar potential can be obtained from the Lorentz condition, div A + φ'/c = 0, where here div A = ∂A/∂z. We find φ' = {[p"]/cr + [p']/r²}(z/r), which integrates to φ = {[p']/cr + [p]/r²}(z/r). Note that ∂A/∂z = (∂A/∂r)(z/r). From A and φ we can determine the fields by differentiation, taking care to include both Maxwell and Coulomb r's. This is a little more complicated than might appear at first sight, but it is straightforward. We choose spherical polar coordinates, with the axis in the z-direction, so that z/r = cos θ. The results (which the reader should verify) are Hφ = {[p"]/c²r + [p']/cr²} sin θ, Eθ = {[p"]/c²r + [p']/cr² + [p]/r³} sin θ, and Er = {2[p']/cr² + 2[p]/r³} cos θ. All other field components are zero. These are the exact fields at any distance r from the source.
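For a sinusoidal moment p(t) = p0 sin ωt the three kinds of terms scale as ω²/c²r, ω/cr² and 1/r³, so their relative sizes depend only on r/λ. A small sketch (with p0 set to 1 purely for comparison) makes the division into zones described in the next paragraph concrete.

```python
# Relative sizes of the radiation ([p'']/c^2 r), induction ([p']/c r^2) and
# Coulomb ([p]/r^3) terms for p(t) = p0*sin(wt), as a function of r/lambda.
import math

c = 2.998e10
lam = 100.0                  # cm, illustrative wavelength
w = 2*math.pi*c/lam
p0 = 1.0                     # dipole-moment amplitude (arbitrary scale)

for r_over_lam in (0.01, 1.0, 100.0):
    r = r_over_lam*lam
    radiation = p0*w*w/(c*c*r)    # amplitude of the [p''] term
    induction = p0*w/(c*r*r)      # amplitude of the [p'] term
    coulomb   = p0/r**3           # amplitude of the [p] term
    print(r_over_lam, radiation, induction, coulomb)
# Near the source the Coulomb term dominates, at r ~ lambda all three are of
# the same order, and far away only the radiation term survives.
```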
The fields are the sum of three contributions, which may be called the radiation, induction and Coulomb fields. Each depends on a different time derivative of the source function p(t), and each has a characteristic dependence on r. The radiation fields dominate for r >> λ, the induction fields for r ≈ λ, and the Coulomb fields for r << λ. For a sinusoidally varying source of angular frequency ω, the wavelength λ = 2πc/ω. The directions of the electric and magnetic radiation fields are shown at the right.
The Coulomb fields, depending on r⁻³, are precisely those of a static electric dipole of moment p(t). This was determined when we chose the form of A, which we now recognize as applying to electric dipole radiation. The radiation fields consist of only two components, E in the meridional direction and H in the latitudinal direction. Their magnitudes are equal, and are proportional to sin θ. The radiation is zero in the polar direction, and a maximum in the equatorial direction. The radiation fields arise entirely from differentiating the Maxwell r. Finally, the induction fields depend on p'(t) and fall off as r⁻². They resemble the Coulomb dipole fields, but are a quarter period out of phase and are still considerable at a greater distance than the Coulomb fields. These fields have been used for short-distance communication links that do not require wires.
The Poynting vector is N = (c/4π)EθHφ = (c/4π)[p"]²sin²θ/c⁴r² erg/cm²-s, pointing radially outward. The total radiated energy U is obtained by integrating over the surface of a large sphere using the surface element 2πr² sin θ dθ, with the result that U = (2/3c³)[p"]² erg/s. This is independent of the radius of the sphere, showing that the energy becomes well and truly detached from its source. If p(t) = p sin ωt, then p" = -ω²p sin ωt, so ⟨[p"]²⟩ = ω⁴p²/2, and the average rate of radiation is U = p²ω⁴/3c³ erg/s.
If p = ql, where l is the distance between the charges -q and +q, then p' = il. Let p' = il sin ωt. Then U = (8π²/3c)(l/λ)²(irms)² = Rr(irms)², where Rr is called the radiation resistance. The power of the antenna current working into this equivalent resistance gives the power radiated.
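In practical units this radiation resistance is the familiar Rr ≈ 80π²(l/λ)² Ω: the Gaussian value 8π²/3c, in seconds per centimetre, times roughly 9 × 10^11 Ω per Gaussian unit of resistance. Here is a sketch evaluating it.

```python
# Radiation resistance of a short dipole: Rr = (8*pi^2/3c)*(l/lambda)^2 in
# Gaussian units, about 80*pi^2*(l/lambda)^2 ohms after conversion
# (~8.99e11 ohm per Gaussian unit of resistance).
import math

c = 2.998e10
ohm_per_gaussian = 8.99e11

def r_rad_ohms(l_over_lambda):
    r_gauss = (8*math.pi**2/(3*c)) * l_over_lambda**2    # s/cm
    return r_gauss * ohm_per_gaussian

print(r_rad_ohms(0.1))           # ~7.9 ohms for l = lambda/10
print(80*math.pi**2 * 0.1**2)    # the familiar engineering formula agrees
```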
If we let H → E and E → -H in the source-free field equations curl E = -H'/c, curl H = E'/c, div H = 0 and div E = 0, we see that the same equations are recovered. This means that if we interchange E and H in the fields we have found for an electric dipole, the resulting fields are still possible radiation fields. By examining the Coulomb terms, we see that the source is now a magnetic dipole of magnitude m = iS/c, where i is the current in a circuit of area S. Note that m has the same dimensions as p, esu-cm. H is now meridional, E is latitudinal, and the Poynting vector is the same. If the current is i(t) = i sin ωt, then the average power radiated is U = (S²ω⁴/3c⁵)i². This can also be put in the form U = (8π²/3c)(2πS/λ²)²(irms)², for comparison with the electric dipole result. A magnetic dipole of area S is equivalent to an electric dipole of length l = 2πS/λ. Our analysis only applies for sources whose maximum linear extent is small compared to a wavelength, so that retardation across the source is negligible.
It was impossible to construct a model of the atom that radiated as described by Maxwell's theory. A moving electron would simply spiral in to the nucleus, and normal matter could not exist. The trouble was in the atom model, not in Maxwell's theory. When a proper expression for the fluctuating dipole moment was obtained, it worked when substituted in the equations we have derived above. The vibrations are concerned with the transitions between two states, and there is no fluctuating dipole moment in a stationary state. This explained the many frequencies observed, and also the stability of atoms. Classical theories were used successfully in quantum mechanics through the correspondence principle before the proper methods of quantum calculation were devised.
Note: if you spot any errors or misprints, I would as usual be grateful for notification via e-mail. If you consider the HTML, you may forgive me a few misprints.
References
M. Abraham and R. Becker, Classical Electricity and Magnetism, 2nd ed. (New York: Hafner, 1949). Chapter X is an excellent review of electromagnetic waves at an intermediate level, and the source that prompted this article. It also contains a good introduction to vector calculus, using hydrodynamic analogies. A later edition has been reprinted by Dover.
J. D. Jackson, Classical Electrodynamics, 2nd ed. (New York: John Wiley & Sons, 1975). Chapter 4. The third edition, I understand, uses Giorgi units, a regrettable concession to fashion. Ewald and Oseen are on p. 512. The precursors are on p. 313ff. There is an excellent bibliography.
M. Born and E. Wolf, Principles of Optics (London: Pergamon Press, 1959). Chapters 1 and 2. A good source for Fresnel's equations. The Ewald-Oseen extinction theorem is treated on pp 99ff.
S. Ramo, J. R. Whinnery and T. van Duzer, Fields and Waves in Communication Electronics (New York: John Wiley & Sons, 1965). An unsurpassed engineering introduction to the subject, which, naturally, uses Giorgi units. The graph of the skin effect in wires is on page 297.
H. B. Dwight, Tables of Integrals and Other Mathematical Data, 4th ed. (New York: Macmillan, 1961). Table 1050, pp 324-325, and Chapter 10. It is extremely regrettable that this useful handbook is out of print.
J. C. Maxwell, A Treatise on Electricity and Magnetism, 3rd ed., 2 Vols. (New York: Dover, 1954). Reprint of the 1891 edition. Vol II, Chapter XX, and pp 492-493.
Composed by J. B. Calvert
Created 2 October 2002
Last revised 13 October 2002