A blog managed by 湯偉晉

Saturday, January 5, 2013

Science.gov - Your Gateway to U.S. Federal Science

http://www.science.gov/
Science.gov: USA.gov for Science - Government Science Portal



crystal structure (晶體的結構)

The Norton Operational Amplifier, LM3900

Source:
http://mysite.du.edu/~etuttle/electron/elect21.htm


The LM3900 is a 14-pin DIP containing four identical op-amps, each with inverting and noninverting inputs and an output. However, these op-amps are very different from the usual op-amp, and must be used in a completely different way. The usual op-amp responds to a differential voltage at its inputs, but the LM3900 responds to a differential current. Instead of a differential amplifier, the input stage is the current-differencing circuit shown at the right. The current mirror at the noninverting input subtracts the current at that input, I+, from the current at the inverting input, I-, to form the difference current I- - I+ that is furnished to the amplifier with overall gain of 70 dB. If I- is greater than I+, the output saturates low, and if I- is less, the output saturates high. Feedback from the output to the inverting input acts to reduce the difference current, which in normal operation is very small. This is just like the usual op-amp, except with current instead of voltage. The output, however, is a voltage as with the usual op-amp. The label "Norton" refers to the Norton equivalent circuit for a current source.
Another difference that is immediately obvious is that the inputs are one diode drop, about 0.55 V in the LM3900, above ground, and vary little from that voltage. There is no common-mode voltage range at all! It has been replaced by a common-mode current range by the current differencing. Equal currents supplied to the two inputs are not amplified. The LM3900 is especially useful when only a single power supply is available. Its output swings from very near ground to a diode drop below the positive supply. In these experiments, we'll use a +12 V supply. The LM3900 can use from 4 to 36 V total supply voltage. There is one set of power connections for the four amplifiers, but they are otherwise independent. Connections for the LM3900 are shown at the left.
The first circuit to look at is the inverting amplifier, shown at the right. Note the symbol for the Norton op-amp, which has a current source between the inputs and an arrow on the noninverting input. This is an AC amplifier, with coupling capacitors at input and output, so there is no worry about DC bias levels, which can be chosen as necessary. The input and feedback resistors are as in the familiar circuit, and the gain in this case should be -2. The difference is the 39k resistor at the noninverting input. It supplies a current (12 - 0.55)/39 mA to this input, and the output endeavors to supply an equal current to the inverting input, which requires a DC output voltage of (20/39)(12 - 0.55) + 0.55 = 6.4 V, a convenient bias. The bias is set by the current I+, not by a voltage divider, as it would be with the usual op-amp. The LM3900 is very convenient for AC amplifiers. When you test this circuit, the scope traces for input and output can be superimposed with proper setting of the gain, and the inversion is obvious.
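The bias calculation above can be checked numerically. A minimal sketch in Python, using the component values quoted in the text (the 20k feedback resistor is inferred from the (20/39) factor):

```python
# DC bias point of the LM3900 inverting AC amplifier described above.
V_SUPPLY = 12.0   # V
V_DIODE = 0.55    # V, input diode drop
R_BIAS = 39e3     # ohms, resistor from +12 V to the noninverting input
R_FB = 20e3       # ohms, feedback resistor (inferred from the text)

# Current mirrored in at the noninverting input:
i_plus = (V_SUPPLY - V_DIODE) / R_BIAS   # ~0.294 mA

# The output settles so the feedback resistor supplies an equal current
# to the inverting input, which sits one diode drop above ground:
v_out_bias = i_plus * R_FB + V_DIODE

print(f"I+ = {i_plus * 1e3:.3f} mA, DC output bias = {v_out_bias:.2f} V")
```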
A unity-gain buffer is shown at the left. It looks like the same circuit for the usual op-amp, but here the resistors are necessary. The input resistor turns the applied voltage into a current, while the feedback resistor causes the output to rise to a value supplying an equal current to the inverting input. The circuit is not as precise as the usual one, with small shifts due to the diode drops at the input, but the circuit is useful and furnishes a reasonable input resistance of 1M. The extremely high input resistance of the voltage or FET-input op-amp is not obtained, however.
The unity-gain buffer is a special case of the noninverting amplifier, shown at the right with biasing by means of a voltage divider. The divider supplies a current into what is effectively a unity-gain buffer to reproduce its voltage at the output. This amplifier has a gain of +10, of course, or 20 dB. Again, the high input impedance of the voltage op-amp is not obtained.
Another biasing scheme for the inverting amplifier is shown at the left. It relies on the diode drop at the inverting input to supply a current proportional to Vbe, which is then amplified by the feedback resistor. In this case, the bias is about 10 x 0.55 = 5.5V. Actual measurement gave 5.7 V. Here, the current supplied by the output is returned to the inverting input, where it flows out the 120k bias resistor, so the bias is positive. In this case, the bias does not depend on the supply voltage.
A voltage regulator with a Zener in the feedback loop is shown at the right. The 510Ω resistor supplies a bias current of 0.55/0.51 = 1 mA to the Zener to reduce noise and improve stability. This voltage is added to the Zener voltage at the output. A 4.2 V Zener (1N5230) gave 4.76 V, a 2.3 V Zener (1N5226) gave 2.84 V. A pass transistor can be added at the output for additional current capacity (the LM3900 can be depended on only for about 20 mA when sinking current).
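The regulator output is just the diode drop plus the Zener voltage, which can be verified against the measured values quoted above:

```python
# Predicted output of the Norton-op-amp Zener regulator above: the
# inverting input sits one diode drop (~0.55 V) above ground, and the
# Zener in the feedback loop adds its voltage on top of that.
V_DIODE = 0.55   # V, LM3900 input diode drop
R_BIAS = 510.0   # ohms, supplies the Zener bias current

i_bias = V_DIODE / R_BIAS   # ~1.1 mA, rounded to 1 mA in the text

# Zener voltages and measured outputs quoted in the text:
for name, v_zener, measured in [("1N5230", 4.2, 4.76), ("1N5226", 2.3, 2.84)]:
    v_out = V_DIODE + v_zener
    print(f"{name}: predicted {v_out:.2f} V, measured {measured:.2f} V")
```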
A relaxation oscillator is shown at the left. With the values shown, this circuit oscillated at 3840 Hz, the output swinging from near zero to 11.3 V. The slew rate of the LM3900 was evident in the output waveform. To understand this circuit, assume for simplicity that the output swings from 0 to the supply voltage V. When the output is 0, a current I+ = V/10 μA is supplied to the noninverting input. When the output is V, a larger current I+ = V/5 is supplied. These are the trip levels expressed in currents. When the capacitor is discharged, I- = 0, so the output is high, and the trip level is 3(V/5) = 0.6V. The capacitor charges until its voltage reaches this level, but then the output goes low and it begins to discharge again. The lower trip level is 3(V/10) = 0.3V. When the capacitor voltage reaches this level, the output again goes high, and the cycle repeats. For a 12V supply, these levels are about 7.2 and 3.6 volts. This is a very rough calculation, so we expect only general agreement. The measured values (from the scope) were 3.0 V and 8.4 V, and the capacitor voltage waveform was quite rounded on top and bottom--in fact, it would be a passable sine wave if you were not too particular. Buffered, this would be an easy way to get a sine wave.
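The rough trip-level estimate above can be written out directly, keeping the text's idealizing assumption that the output swings rail-to-rail:

```python
# Rough trip levels of the relaxation oscillator above, assuming (as the
# text does) that the output swings all the way from 0 to the supply V.
V_SUPPLY = 12.0   # V

# The text's trip levels, expressed back as capacitor voltages:
v_trip_high = 0.6 * V_SUPPLY   # output goes low when the cap charges to here
v_trip_low = 0.3 * V_SUPPLY    # output goes high when it discharges to here

print(f"upper trip ~ {v_trip_high:.1f} V, lower trip ~ {v_trip_low:.1f} V")
```

The measured 8.4 V and 3.0 V differ from these 7.2 V and 3.6 V estimates, which is the "only general agreement" the text warns about.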
The relaxation oscillator makes use of a Schmitt trigger, with positive feedback. An inverting Schmitt trigger is shown at the right. The input and supply voltages can be exchanged to give a noninverting circuit. The output goes from high to low when the input reaches 7 V, and from low to high when it goes below 6 V, with a hysteresis of 1 V. The hysteresis can be displayed directly on the scope, using the X-Y display. Connect Ch 1 to the input voltage, and Ch 2 to the output. A 100 Hz triangle wave from a function generator makes a suitable input. Alternatively, use 0.5 Hz and watch the dot move. This is a very graphic illustration of Schmitt trigger operation.
The Norton op-amp can also be used to make active filters and oscillators; see the National Semiconductor application note AN-72 for details.

References

The National Semiconductor application note AN-72 (Sept. 1972, revised June 1986) is a good introduction to the LM3900, with a great variety of applications.

Composed by J. B. Calvert
Created 10 August 2001
Last revised 19 September 2008

Operational Amplifier (運算放大器; OP Amplifier)

Source:
http://mysite.du.edu/~etuttle/electron/elect3.htm

Operational Amplifiers--DC


Op-Amp Fundamentals

Operational amplifiers, or op-amps for short, got their name from the modules used in analog computers to perform "operations" such as adding, multiplying and so forth. Now they are integrated circuits for application as general feedback amplifiers. They seem easy to use, but the many types available and the great variety of ratings hint that their use requires considerable knowledge and skill, which is very true. In this page, I will present a dozen or so circuits that will give a good understanding of how to use op-amps in DC applications, where an oscilloscope is not needed. Most of the really interesting uses of op-amps involve changing signals, but this topic will be reserved for later.
The power supply for an op-amp is normally bipolar, with voltages above and below ground, called +V and -V. Most common op-amps can stand up to 36V, or ±18V. It is convenient for our experiments to use ±12V, which is usually available from multi-output power supplies. You can always arrange a bipolar supply from two ordinary supplies. Ground in this case is merely a voltage between the supply rails, as they are called, of no special significance. Op-amps have no ground terminal, since this reference is unnecessary. If you have trouble remembering polarity, have lots of op-amps around, since they are instantly destroyed by any mistake.
The pinouts of several common op-amps are shown in the figure. These are the usual DIP packages, as seen from the top, with pins numbered from upper left, down one side and up the other to upper right. The 411 is a JFET-input op-amp quite suitable for our purposes. The 741 is a bipolar op-amp, long a standard. These two op-amps can be used interchangeably in most circuits. The dual 411 is the 412, and the dual 741 is the 1458. The 351 is another popular JFET-input op-amp. The 324 comes four to a package, and two to a package in the 358. The LM10 includes a voltage reference and an op-amp to buffer it, in addition to the main op-amp. The cheapest LM10 is the LM10CL version, which has a 7V limitation on the power supply. It is useful when only +5 is available. Don't use it with the usual ±12V supply!
The connections marked + and - are the inputs to the op-amp, and the connection from the point of the triangle is the output. The output can go from a value near +V to a value near -V. When the output is near one of these limits and can go no farther, it is said to be saturated. You can short-circuit an output if you want, since it is internally protected against too much current. On the other hand, the output will handle only up to about 20mA at best. Op-amps are not for power applications, but can drive a power amplifier (usually transistors) if power is needed. The output is proportional to the difference in voltage v+ - v- between the two inputs, where v+ is the voltage at the + or noninverting input, and v- the voltage at the - or inverting input. The voltage gain of the amplifier is perhaps 100,000 or 100 dB at low frequencies. With such a gain, the voltage between the inputs must be very small if the output voltage is not to be at saturation. This amounts to a rule: the voltages at the inputs are equal when a circuit is working properly.
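The rule at the end of this paragraph follows from a one-line estimate; a quick sketch of the arithmetic:

```python
# Why the two inputs sit at (very nearly) the same voltage: with
# ~100 dB of open-loop gain, any appreciable input difference would
# slam the output into saturation.
A_OL = 100_000   # open-loop voltage gain (~100 dB)
V_SAT = 12.0     # V, output saturates near the supply rail

max_diff = V_SAT / A_OL   # largest input difference before saturation
print(f"|v+ - v-| stays below {max_diff * 1e6:.0f} uV in linear operation")
```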
In order to make the voltages at the inputs equal to each other, it is necessary to arrange this by feedback. All op-amp circuits use feedback, and the properties of the circuit are determined by the feedback, not by the properties of the op-amp. It's best to study op-amp circuits with no reliance on feedback theory, and to use the results to understand and appreciate feedback theory instead. Then one can come back with greater knowledge to handle more difficult cases.
The common-mode input signal is the average of the potentials of the two input connections. Since they are usually at the same voltage, this voltage is the common-mode input voltage. The op-amp ignores the common-mode input, and determines its output only by the difference signal. Nevertheless, it is important to look at the common-mode input voltage and see that it does not leave its permissible range. The common-mode range of an op-amp is less than from +V to -V, and the op-amp usually does something unpleasant when the range is exceeded (the 411, for example, goes from a large negative output suddenly to a large positive output when this happens). Some early op-amps had a very limited common-mode range. In the LM10 and the 324, the common-mode range goes to -V (as does the output voltage), which makes these op-amps suitable for a +V to 0 supply.
That the inputs are usually at the same voltage does not mean that they can be connected to each other. If you do this, the output usually saturates. The voltages must be held equal by the active participation of the output, acting through the feedback network. The inputs also carry a small dc bias or leakage current that must have a route to the power supply. With bipolar op-amps, this current is actually the base bias current for the input transistors, and sometimes has to be considered in the circuit design. JFET's, on the other hand, have a much smaller input current that is largely leakage, and does not affect the circuit much--except that it has to have a route to ground. In ordinary circuit analysis, the bias currents can be neglected, and it can be assumed that the inputs carry no current. Don't forget that this is only approximate!
The most important factor hidden from the casual user of op-amps is the question of stability. Stability is always important with high-gain amplifiers, and when feedback is applied. The feedback loop can become the route for a signal to be fed back to the input in the proper phase to cause oscillation, called instability. Without some care, feedback always results in instability, which is always fatal. The oscillation can occur either at a higher or a lower frequency than that for which the circuit is designed, usually higher (like the feedback with a microphone and speaker). With the ordinary op-amps, stability is guaranteed by making the gain fall off at 20 dB per decade of frequency, beginning at about 10 Hz, so that the gain of the amplifier falls to unity at around 1 MHz. Unless you have capacitors in unfortunate places, this guarantees that the circuits you put together will be stable, no matter what you do. What you pay for this is a severe restriction on the bandwidth of op-amp circuits, and overcoming it is advanced work.
It may be useful to restate here the assumptions that make the analysis of op-amp circuits easy. First, the inputs carry no current. Second, proper feedback acts to make the voltages at the inputs equal. To check that the feedback is proper, suppose the noninverting (+) input to rise slightly in potential. This makes the output voltage increase; if this increase acts to raise the potential of the inverting (-) input, then the feedback is correct (negative). Finally, there must always be a conductive path from either input to ground.

Circuits

You will need, in addition to the bipolar power supply, three op-amps (more if you cannot keep power polarity straight), two 10k pots, and resistors of 100, 1k, 10k and 100k values. A couple of the circuits involve transistors, and a 100Ω, 1W resistor is needed as a load in one circuit. Most of the circuits are simple, but a few can be a challenge. There are two ways to construct a fairly complex circuit on the breadboard. One way, the one I use, is to wire things in logically, looking up the proper pins at the time. The other way is to sit down with the pinouts and the circuit diagram and label each connection with its pin number, then make the connections accordingly, not worrying about what you are connecting. The first method is faster and more intuitive, while the second can avoid mistakes, especially when you are dealing with integrated circuits with lots of pins. In either case, the circuit diagram is the basic reference.
The first example is a precision voltage divider. The voltage output of a divider depends on its load, and if the slider of the potentiometer happens to be grounded, the pot will be destroyed should the slider inadvertently (and inevitably) be moved to the wrong end. This circuit overcomes both these problems, and you may want to use it as a voltage source for the later examples. The inverting input is tied to the output, and the noninverting input is tied to the pot slider. The output will be driven to a voltage equal to that of the slider as it strives to keep the inputs at the same voltage. If it goes a little too high, the - input will drive it lower; if it goes too low, the opposite will happen. The feedback is negative and stable. No current is drawn by the pot slider (the tiny bias currents leave through the potentiometer resistance, causing negligible voltage drops). The common-mode input is equal to the pot setting. If you try to drive it too far, the op-amp will go wild. With a ±12V supply, the circuit is good for ±10V in any case.
This is the familiar inverting amplifier circuit. The output must strive to keep the - input at ground whatever voltage you apply at the input. The circuit is easily analyzed to give a gain equal to the negative of the ratio of the feedback resistors. In this case, G = -1. Try a few values of the input, positive and negative, and measure the output voltage. As usual, two meters make this work much more pleasant. The - input is at ground (which can be verified with your meter), but connecting it with ground would destroy the functioning. This is called a virtual ground. If you connect more inputs to it, each with its resistor, you will get a summing circuit. What is the common-mode input voltage in this circuit?
The feedback is again from the output to the - input, through a voltage divider. The output strives to hold the - input equal in voltage to the + input, but here the voltage of the + input is the input to the circuit. This is exactly the same circuit that we considered in the preceding paragraph, except that the input is now at the terminal that was grounded there, and the old input is now grounded. This circuit is as easily analyzed, with the result that the gain is positive, and is 1 plus the ratio of the resistors. So, here we expect a gain of G = +2. Make some measurements to verify this. What is the common-mode voltage here?
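The two gain formulas in these paragraphs can be collected in a short sketch, using the equal-resistor values of the examples:

```python
# Ideal gains of the two basic feedback circuits discussed above,
# with the equal-resistor values used in the text.
def inverting_gain(r_f, r_in):
    # G = -Rf/Rin: the output holds the inverting input at virtual ground
    return -r_f / r_in

def noninverting_gain(r_f, r_g):
    # G = 1 + Rf/Rg: the divider feeds a fraction of the output back
    return 1 + r_f / r_g

print(inverting_gain(10e3, 10e3))      # -1, as in the inverting example
print(noninverting_gain(10e3, 10e3))   # +2, as in the noninverting example
```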
The input resistance of a circuit is the ratio of an applied voltage to the current that results. In the inverting amplifier, it is equal to the input resistor, while in the noninverting amplifier, it is very large. However, bias currents must still have a way out (they do not affect the input resistance) even if they are not appreciable. The output resistance of an amplifier is the ratio of the voltage drop when the amplifier is loaded, to the current drawn from it. Attach a load of 1k to 10k to the amplifiers above, and see how much the output voltage drops. The output resistances are very small in either case. The drawback of the inverting amplifier is the low input resistance, while the drawback of the noninverting amplifier is the common-mode input.
To get a large voltage gain, a large feedback resistor is needed. These large resistors can be troublesome, and it is possible to use much smaller resistors in the T-network shown in the figure. Here we have used rather small resistors so that the gain will not become inconveniently large for us to measure, but the equivalent resistance R3(1 + R5/R4) = 110k, though the largest resistor used is 10k. The idea is simply to make the output less effective at the - input. Check that the gain of this amplifier is G = -12. Using this trick allows us to use a larger input resistor, making the input resistance larger.
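The text's T-network figure is not reproduced here, so the sketch below assumes values consistent with the numbers it quotes:

```python
# Equivalent feedback resistance of the T-network above. The text gives
# R3*(1 + R5/R4) = 110k with no resistor larger than 10k; R3 = 10k,
# R5 = 10k, R4 = 1k are consistent values (an assumption, since the
# figure with the actual labels is not available).
R3, R4, R5 = 10e3, 1e3, 10e3   # ohms

r_equiv = R3 * (1 + R5 / R4)
print(f"equivalent feedback resistance = {r_equiv / 1e3:.0f}k")
```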
A voltage standard, whose potential does not vary with load or temperature, is easily achieved with an op-amp. In this case, a noninverting amplifier with a gain of +2 is used, but a simple unity gain buffer is also possible. The voltage reference is an LM336, shown as a Zener diode. By means of the adjustment terminal, the voltage can be trimmed to any desired close-by value. The exact value is usually not as important as the stability of whatever value occurs. The pinout of the LM336, which is furnished in a TO-92 package, is shown at the left. By means of the 10k pot, the output voltage of the circuit was varied from 4.69 to 5.25V. The LM10 has an internal voltage reference of 200 mV, which is very convenient. Zener diodes can also be used as voltage references, but the integrated circuits designed for this purpose are much more stable and accurate.
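The nominal output of this voltage standard follows directly from the reference voltage and the gain. A sketch, assuming the 2.5 V version of the LM336 (the text does not name the variant):

```python
# Nominal output of the voltage-standard circuit above: an LM336
# reference (the 2.5 V part is assumed here) buffered by a
# noninverting amplifier with a gain of +2.
V_REF = 2.5   # V, LM336-2.5 nominal reference voltage
GAIN = 2.0    # 1 + Rf/Rg with equal feedback resistors

v_nominal = GAIN * V_REF
print(f"nominal output = {v_nominal:.2f} V")
# The text's trim range of 4.69 to 5.25 V brackets this nominal value.
```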
A potentiometer is used in the feedback loop to give a variable gain to the amplifier. The gain does not vary linearly with potentiometer position, since it is equal to 1/K, where K is the fraction of the pot resistance to the left of the slider. Hence, the gain varies from 1 to infinity. It would possibly be a good idea to add a fixed resistance to the left of the pot to limit the gain to some large value. Measure the gain for some position of the slider, and then use your meter to find KR, and hence K, and compare the measured gain with 1/K.
This circuit is another kind of gain control, where the gain can vary from -1 to +1. The input impedance is rather low, but the control could be preceded by a buffer. Also, an amplifier of fixed gain could follow, so that the gain could be varied from -G to +G.

Sometimes we want a voltage to current converter, that will turn a voltage into a proportional current. Of course, a simple resistor can do this, but then the current will depend on the load resistance. Ideally, a current source should have an infinite output resistance (so that any load resistance would be a negligible fraction, keeping the current constant), and a simple resistor does not satisfy this requirement very well. The circuit shown here has a nearly infinite output resistance, since the output current is equal to the current through resistor R, which is v1/R. The output of the op-amp changes its voltage so that this current flows, whatever the load resistance. Use your meter on a current scale for the load. Connect a resistance (say, 1k) in series with the meter to see that you get the same current.
The previous circuit had the disadvantage that neither terminal of the load was at ground. Such a load is said to float, and must be allowed to vary as the circuit requires. The circuit shown at the right has a load with one terminal grounded, which is usually much more convenient. This circuit is called a Howland current source. It is not really a very good current source; even trimming the resistors does not improve it very much. I found a Norton resistance of about 5kΩ. However, it is an interesting circuit. The analysis of this circuit can start by assuming some unknown voltage v at nodes a and b. The feedback will act to keep these nodes at the same potential, in the usual fashion. Now the currents in the branches to the left of these nodes can be written down, and the voltage at the third node, c, can be determined. Now all currents flowing into node b can be found, and the load current is found to be the input voltage divided by the resistance in the other input, 1k in this case, provided the ratios of the resistances are equal as shown. Test this circuit to see that it produces the advertised current in response to the input voltage.
The circuit shown in the figure at the left is for determining the beta of transistors, consisting of a voltage-to-current converter on the left, and a current-to-voltage converter on the right. A resistor is also a current to voltage converter, but we would like a zero input resistance so that the voltage at the input will not change if we vary the current. The circuit shown to the right of the transistor in this figure is a current-to-voltage converter, simply an inverting amplifier with no input resistor. The current flows into a virtual ground, whose potential is not affected by the current, while the output voltage is proportional to the input current.
The circuit to the left of the transistor is a voltage-to-current converter that supplies emitter current to the transistor, while base current is supplied by the current-to-voltage converter on the right. The base of the transistor is held at ground, so the output of the emitter current source must be at -Vbe. The emitter current is set by the voltage input to the V-to-I converter, and the resulting base current measured by the voltage output by the I-to-V converter. The beta of the transistor is the ratio of the emitter current to the base current, minus one (since the emitter current is the collector current plus the base current). It is really simple and quick to measure the beta of transistors with this circuit, and you might try it on several examples. The dependence of beta on collector current is easy to observe with this circuit.
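Turning the two measured currents into beta is one line of arithmetic; a sketch with hypothetical readings (the currents shown are for illustration only, not from the text):

```python
# Computing beta from the currents measured with the circuit above.
# Since Ie = Ic + Ib, beta = Ic/Ib = Ie/Ib - 1.
def beta(i_emitter, i_base):
    return i_emitter / i_base - 1

# Hypothetical readings: 1 mA of emitter current, 5 uA of base current.
print(beta(1e-3, 5e-6))
```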
The output of an op-amp will source or sink only about 20 mA, not enough really to provide any kind of power. However, transistors can be used to help, and to provide as much current as is necessary. The circuit shown uses an npn and a complementary pnp transistor to supply current for both polarities. Only one of the transistors is conducting at a time. If the supply is unipolar, only one transistor is necessary. It is good to be careful that the base-emitter junction is not broken down by too large a reverse voltage (greater than about 5V). In this case, the op-amp could at most create a voltage drop of 2V in the 100Ω resistor, so the emitter junctions are safe in this circuit. Actually, even if the resistor were not there, the reverse voltage would not exceed the base to emitter drop of the other transistor.
The important thing to note in this circuit is that the feedback loop is closed around the whole circuit, so that the output voltage is accurate. The outputs must supply an additional voltage to turn the transistors on. When the output is 10V, the op-amp is supplying 10.7V (close to saturation). The load in this circuit is a 100Ω resistor. With 10V across it, the current will be 100 mA, and it will dissipate 1W. Therefore, use at least a 1W resistor for the load, not the usual 1/4W resistors, which will certainly get too hot. Test the circuit, observing that you can get more than 20 mA output, and that the output can swing both positive and negative.
Take a 324 op-amp and connect +5 and ground as power. The connections are rather unexpected, so take care. The +5 seems to me to be on the wrong side! Note how the power supply is represented in a circuit diagram when necessary. Usually, the power supply is not shown explicitly. Construct a unity-gain buffer as shown, and feed it with a potentiometer also connected between +5 and ground. Find the range of the output voltage (or common mode range). I found 0 to 3.89V. The range 0 to V - 1.5V is guaranteed in the specifications. It is quite remarkable that the op-amp will operate on such a small voltage, and that both the common mode and the output voltage range go to zero.
Here is another test circuit, while you have a 324 connected to +5. It is a voltage to current converter driving an LED. Measure the voltages at nodes a, b and c and consider them. I did not get the voltages at nodes a and b quite equal; this is probably a result of an input voltage offset, or bias currents. The 324 has a bias current of about 1 μA, which I determined by experiment on a sample.
This is an important circuit, the so-called three-op-amp "instrumentation" amplifier. Before building it, change back to a bipolar ±12V supply for convenience. It requires seven resistors, here all 10k, which gives a gain of 3. The gain is changed by changing R1. For example, if R1 = 1k, the gain becomes 21. It is best to leave R3 = R4 in any case, since this gives the best common-mode rejection. It is a differential amplifier, with inverting and noninverting inputs of very high impedance, so they will not load the circuit to which they are connected. The output should not change if both inputs are connected to the same voltage and this voltage varies--this is what is meant by common-mode rejection. If you use 1% resistors, this circuit gives very good results, and is quite practical. 1% resistors are not only precise in value, but are much more stable than the usual 5% resistors. It is best not to make R1 a potentiometer, since potentiometers are much less reliable than simple resistors. There should be no need to trim the gain to some exact value. Of course, this can be done if desired. With the 324, there are enough op-amps in a single 14-pin DIP, with one left over for other duties.
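The two gain cases quoted above can be checked with the standard instrumentation-amplifier gain formula. The resistor labels are assumed to follow the usual three-op-amp topology, since the figure is not reproduced here:

```python
# Gain of the three-op-amp instrumentation amplifier described above,
# using the standard-topology formula G = (1 + 2*R2/R1) * (R4/R3).
def inamp_gain(r1, r2, r3, r4):
    return (1 + 2 * r2 / r1) * (r4 / r3)

print(inamp_gain(10e3, 10e3, 10e3, 10e3))  # all resistors 10k: gain 3
print(inamp_gain(1e3, 10e3, 10e3, 10e3))   # R1 changed to 1k: gain 21
```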


Composed by J. B. Calvert
Created 4 July 2001
Last revised 19 September 2008


electromagnetic wave (電磁波; EM wave)


Source:
http://mysite.du.edu/~jcalvert/phys/emwaves.htm

Electromagnetic Waves


Contents

  1. Introduction
  2. Plane Waves
  3. Sinusoidal Waves
  4. Waves in a Conducting Medium
  5. Reflection of Electromagnetic Waves
  6. The Skin Effect
  7. The Parallel-Wire Transmission Line
  8. Retarded Potentials and Radiation
  9. References

Introduction

Recently, I thought it was about time to sharpen my rusty skills in electromagnetism, and reviewed electromagnetic waves in Abraham and Becker (see references). This was so useful and interesting that I thought some remarks on the subject might be generally useful, and preparing them instructive for me, so this article is the result. It is rather long, since I want to include everything necessary to understanding. The reader may find it useful to follow along in his or her electricity and magnetism text, since some previous study of the subject will be very helpful, and note the correspondence of the results. This material is fundamental for today's electrical engineer, since it bears on almost every current application. The references contain a wealth of exercises.
The topics generally included in an introduction to electromagnetic waves are: (1) plane waves in a nonconducting medium; (2) plane waves in a conducting medium, with absorption and skin depth; (3) the Poynting vector and averages of complex quantities; (4) skin effect in conductors; (5) fields and waves on a parallel-wire transmission line; (6) transmission lines with resistance and leakance; (7) time-dependent scalar and vector potentials, gauge transformations and retarded solutions; (8) fields and radiation from an oscillating dipole. These topics include techniques of general utility, both in physics and engineering, and are fundamental to physical optics as well. Vector calculus is usually introduced to students in an electromagnetism course at intermediate undergraduate level.
There are many texts on this subject, which is as important for engineering as it is for physics (perhaps even more so!), and you should use the one familiar to you. The references show the texts that I generally use, and I do not think there is anything better available. Though I have generally used Giorgi units for the subject of electromagnetic waves, I think they are theoretically objectionable except for the advantage of working directly with practical units. In this review, I shall use Gaussian units instead, which might be useful as an introduction for the reader.
Gaussian units use the absolute electrostatic units (esu or stat-) for electrical quantities and absolute electromagnetic units (emu or ab-) for magnetic quantities, so the factor of c ≈ 3 × 10^10 cm/s which converts between these systems appears explicitly in the formulas. A charge q in esu is the charge q/c in emu, so the large factor c expresses quite well the weakness of magnetic forces in comparison to electric. The Giorgi or MKS units are just emu with assorted factors of 10, plus "rationalization," which is fooling with factors of 4π. Coulomb's law in the form F = qq'/r^2 is the basis of esu, defining the esu of charge in terms of mass, length and time. Force is in dynes (10^-5 N) and work is in ergs (10^-7 J). Units have been, and still are, the cause of great confusion, but their understanding is the key to understanding electromagnetism.
We shall take Maxwell's equations as postulates. This is a remarkable theory, relativistically correct, based on first-order partial differential equations. Maxwell conceived it around 1865 as a mathematical description of the discoveries of Faraday that implied a mechanical model of electromagnetism. Our present view is more like the "German" school mentioned by Maxwell that talked about potentials and "action at a distance," but the mathematics is exactly the same. Maxwell never was clear about the nature of electric charge, and the relations between matter and the fields, which confused the issue for some time. He did not derive Fresnel's equations, nor treat the sources of electromagnetic waves. Helmholtz took steps after 1870 to reconcile Weber's and Maxwell's theories, and inspired Heinrich Hertz and Hendrik Antoon Lorentz (1853-1928) to look into the matter. Lorentz derived Fresnel's equations on the basis of Maxwell's theory in his doctoral dissertation at Leyden in 1875. Lorentz received the Nobel Prize in Physics in 1902 with his countryman Pieter Zeeman, and was an enthusiastic promoter of popular physics.
The general view was that E and H were fields in the "ether" that produced D and B in matter as stress produces strain, and that the ether freely pervaded matter. Fizeau's observations of the speed of light in moving media complicated the situation with partial ether dragging. Hertz assumed that the ether was fixed to matter and only one set of field vectors was necessary. Lorentz developed the electron theory in 1892-1904 (which was certainly helped by Thomson's discovery of the electron in 1897), including the Lorentz contraction (1895), which accounted for the Michelson-Morley negative result on motion through the ether. All these contradictions and uncertainties were resolved by Einstein's relativity in 1905, and the modern view of electromagnetism slowly developed. The fields were recognized as distinct from matter, determined by potentials that propagated at the speed of light in vacuum. Maxwell's electromagnetism later became the basis of quantum electrodynamics, which is extraordinarily accurate (possibly the most accurate of any physical theory), but also not easy to use. Where the atomic details of the electrical behavior of matter are not significant, the classical Maxwell theory in its modern interpretation gives the right answer, and is quite sufficient. Lorentz also enunciated the correct expression for the force on a point charge q exerted by the electric and magnetic fields, which usually accompanies statements of Maxwell's equations in texts.
The Maxwell theory we will use describes matter too simply by three "constants," the dielectric constant κ, the permeability μ, and the conductivity σ, which may be functions of position, and what is worse, even functions of direction (in crystals). These "constants" summarize a large amount of complicated physics, first revealed by the electron theory of the years around 1900. These numbers are all time-dependent (and can even become non-local--that is, depending on conditions at more than one point). A Fourier transform makes them functions of the frequency, and in this form they appear usefully in practical applications. Many of the "signals" we encounter have a definite frequency, so this is very convenient.
At the beginning, we will consider κ, μ and σ as constants indeed, which take constant values in uniform, isotropic materials. To begin to understand electromagnetism, there is no need to experience the pain of nonuniform and nonisotropic materials anyway. We will carry μ through the analysis, although in nearly all cases μ = 1, or close enough to it as makes no difference. Only ferromagnetic materials have permeabilities much different from unity, and these seldom carry electromagnetic waves. Their electromagnetic properties may be of interest, but it is a very special subject and won't be discussed here.
I shall use the usual tools of vector analysis, which result in so much economy of expression and make it easy to grasp certain relations that are obscured by writing everything out in components. However, for practical work one must get down to components, and they sometimes make things clearer. Vector symbolism contains nothing that cannot be expressed in components. Browsers do not support the del (inverted triangle) symbol, so I will use grad for del, div for del dot, curl for del cross, and div grad for del squared, the Laplacian. Vectors will be in boldface, magnitudes of vectors and scalars in normal weight, unless otherwise stated. Integrals will be written with ∫ followed by the limits in parentheses where needed, and multiple integrals will not be distinguished from single ones. dV is a scalar element of volume, dS is a vector element of area, directed normally to the area and outwards from dV, and ds a vector element of arc length, directed along the arc in the positive direction that keeps dS to the left.

Plane Waves

Maxwell's equations in the form we shall use are: curl H = (4π/c)j + (1/c)∂D/∂t, curl E = (-1/c)∂B/∂t, div B = 0 and div D = 4πρ, where j is the current density in esu/cm²-s and ρ is the free charge density in esu/cm³. Recall the names of the field vectors and the meaning of each term. A good exercise is to write out the eight equations in components. To use the equations, we must know the relations between the field vectors. These are B = μH, D = κE and j = σE, called the constitutive relations. They are not part of Maxwell's equations, but express the properties of matter. The "vacuum" or free space has κ = μ = 1 and σ = 0. In this case, we can set B = H, D = E, and j = 0. It is the custom to use E and H as the basic vectors, though relativity shows that E and B are actually cognate.
Take the z-direction as the direction of propagation, and assume that the fields do not depend on the transverse dimensions x,y. Such solutions are called plane waves, and represent a portion of any wavefront, though as a whole they are nonphysical. Express Maxwell's equations in terms of E and H, and then retain only derivatives with respect to t and z. From the divergence equations, we find ∂Ez/∂z = 0 and ∂Hz/∂z = 0, so that the z, or longitudinal, fields are constant with respect to z, and therefore have nothing to do with the wave, and may be set equal to zero. The electric and magnetic fields are, therefore, transverse to the propagation direction. This easy result from Maxwell's equations resolved an old problem, the lack of "longitudinal" light that might be expected on a mechanical (ether) theory.
From the curl equations we find two equations connecting Ex and Hy: -∂Hy/∂z = (κ/c)∂Ex/∂t, ∂Ex/∂z = (-μ/c)∂Hy/∂t. Two similar equations connect Ey and Hx. The problem splits into two, one for each polarization state. This is polarization in the optical sense, not dipole moment per unit volume. When one spoke of the "direction of polarization" of an electromagnetic wave, the direction of the H vector was usually meant in the past, though now it is the direction of the E vector. There is a difference, of course, so it is best to keep whatever convention you use straight. Either the magnetic or the electric field can be eliminated between a pair of equations, and the result is the wave equation, ∂²Ex/∂z² = (κμ/c²)∂²Ex/∂t², with similar equations for each of the other three field vectors. This means that we have solutions f(t - z/v) + g(t + z/v) with f and g arbitrary functions. Substitute this in the wave equation to verify that it is indeed a solution. These solutions preserve the wave shape and move at velocities ±v = c/√(κμ). We have now found that the index of refraction, n, where v = c/n, is related to the material properties through n² = κμ, called Maxwell's relation (with μ = 1). This is approximately true at lower frequencies, but fails miserably for water: n = 1.334, κ = 81. This fault was eliminated when the frequency dependence of κ was realized.
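Maxwell's relation and its failure for water can be checked in a couple of lines (a sketch; the numerical values are those quoted above):

```python
import math

# Maxwell's relation n^2 = kappa*mu (mu = 1), evaluated with the static dielectric constant.
kappa_static = 81.0                  # water at low frequency, as quoted in the text
n_maxwell = math.sqrt(kappa_static)  # predicts n = 9.0
n_optical = 1.334                    # measured index at visible frequencies

print(n_maxwell, n_optical)
# The discrepancy disappears once kappa is evaluated at optical frequencies,
# where it is about n^2 ~ 1.78 rather than 81.
```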
Let's suppose that Ex = Ef(t - z/v), and Hy = Hf(t - z/v). It is clear from the equations that the same function f must appear in both. The relation between the real constants E and H is H = √(κ/μ) E. In free space, H = E. The energy flux in the electromagnetic field is given by the Poynting vector, N = (c/4π)(E x H). For this wave, the average power is ⟨N⟩ = (c/4π)⟨ExHy⟩ = (c/4π)√(κ/μ)E²⟨f²⟩, where the angle brackets stand for a time average. This can be written v[κE²⟨f²⟩/4π], using Maxwell's relation. This is the propagation velocity v times twice the average electric energy density in the wave. From the relation between E and H it is easy to show that the electric and magnetic energy densities are equal in the wave, so the power is the propagation velocity times the sum of the electric and magnetic energy densities, a very reasonable result.
In many cases, f(t) is sinusoidal, so ⟨f²⟩ = 1/2 if it is taken with unit amplitude. Then we find ⟨N⟩ = (c/8π)√(κ/μ)E² erg/cm²-s, which in free space is just (c/8π)E². It is a remarkable property of electromagnetic waves that they can carry energy to unlimited distances from the source. In these few paragraphs, we have discovered a lot about electromagnetic waves, using only Maxwell's equations.
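As an illustration of ⟨N⟩ = (c/8π)E² in free space, here is a sketch estimating the peak electric field in sunlight; the solar-constant figure of 1.36 × 10⁶ erg/cm²-s and the statvolt conversion are assumed example values, not figures from the text:

```python
import math

c = 3.0e10        # speed of light, cm/s
N_avg = 1.36e6    # solar constant ~1.36 kW/m^2 = 1.36e6 erg/cm^2-s (assumed figure)

# <N> = (c/8*pi) E^2 in free space, so the peak field amplitude is:
E_statvolt_per_cm = math.sqrt(8 * math.pi * N_avg / c)
E_volt_per_cm = E_statvolt_per_cm * 299.79   # 1 statvolt/cm ~ 300 V/cm

print(E_statvolt_per_cm, E_volt_per_cm)      # ~0.034 statvolt/cm, ~10 V/cm
```

A field of order 10 V/cm is tiny on an atomic scale, which is why ordinary sunlight does not ionize air.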
The question of how an electromagnetic wave travelling in vacuum at speed c becomes a wave in a transparent medium travelling at speed c/n is an involved and interesting one, first elucidated by Ewald (1912) and Oseen (1915), as is the question of what happens when an electromagnetic wave suddenly arrives at a transparent medium, a subject studied by Sommerfeld (1914) and Brillouin (1914). These matters are discussed lucidly in Jackson. Briefly stated, the incoming wave causes dipoles to vibrate at the surface of the dielectric, which creates a wave travelling at speed c that interferes with the original wave and extinguishes it in the medium, and also a new wave travelling at speed c/n that propagates in the dielectric as described above. In the other problem, the first precursor of a wave travels at speed c, and is very weak. By the time the new wave arrives at speed c/n, the amplitude has increased to the normal value. Even in space, we do not receive the same wave that was emitted by a distant star, but a new wave created in the interstellar medium. "There are more things in heaven and earth, Horatio, than are dreamed of in your philosophy."

Sinusoidal Waves


Sinusoidal waves are most conveniently described as the real part of a complex exponential, f(t) = Re[A e^jωt] = Re[a e^jθ e^jωt], where A = a e^jθ is the complex amplitude, including both magnitude a and phase θ. The angular frequency ω is 2π times the frequency f or ν. Engineers use j instead of i for the imaginary unit to avoid confusion with the current i. Then, f(t ± z/v) = A e^jω(t ± z/v) = A e^j(ωt ± kz), where k is called the wave vector, with magnitude k = ω/v = 2π/λ, where λ is the wavelength. This function represents a plane sinusoidal wave traveling in the -z or the +z direction, as the sign is + or -. All this is probably very familiar to you, and is absolutely basic.
For such waves, differentiation with respect to z or t can be carried out at once, with the result that ∂/∂t → jω and ∂/∂z → ±jk. This turns Maxwell's equations into algebraic equations that can be solved immediately. We have at once that k·B = 0 and k·D = 0, which shows that B and D are normal to the direction of propagation, or the waves are transverse. The curl equations express E in terms of H, and show that they are perpendicular to each other and in phase in any plane wave. Using k x (k x E) = k(k·E) - k²E, we can eliminate H and find k²E = (ω/c)²κμE, the transformed wave equation, and so k = ω/v. This dispersion relation, giving the wavelength in terms of the frequency, shows that our assumed solution does actually satisfy the wave equation.
When using complex representations, one thing we often need is the average value of the product of the real parts of two signals. If a(t) = Re[A e^jωt] and b(t) = Re[B e^jωt], then ⟨a(t)b(t)⟩ = Re[AB*/2], where B* is the complex conjugate of B. If you are not familiar with this useful expression, write everything in real and imaginary parts and verify that it is correct. Sometimes the imaginary part of AB*/2 may have an interesting interpretation as well. One example of the use of this formula is with the power in an electric circuit. The instantaneous power is i(t)v(t). If i(t) and v(t) are represented by the complex amplitudes I and V, then the average power is Re[VI*/2] = Re[V*I/2]. The imaginary part represents reactive volt-amperes (vars), energy that surges back and forth without representing a power dissipation. It's usually the energy in electric and magnetic fields in this case. The sign of this imaginary part is opposite in the two formulas, so the convention must be specified if you use it. The complex Poynting vector is (c/8π)[E x H*], where the real part is ohmic dissipation, and the imaginary part is the difference between the magnetic and electric field energies (zero for a plane wave in space).
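The time-average rule ⟨Re[A e^jωt] Re[B e^jωt]⟩ = Re[AB*]/2 is easy to verify numerically (a sketch with arbitrary assumed amplitudes):

```python
import cmath, math

# Verify <Re[A e^{jwt}] Re[B e^{jwt}]> = Re[A B*]/2 by averaging over one full period.
A = 2.0 * cmath.exp(1j * 0.7)    # arbitrary complex amplitudes (assumed example values)
B = 1.5 * cmath.exp(-1j * 1.2)
w = 2 * math.pi                  # angular frequency; one period = 1 s

n = 10000                        # uniform samples over the period
avg = sum((A * cmath.exp(1j * w * t)).real * (B * cmath.exp(1j * w * t)).real
          for t in (k / n for k in range(n))) / n

print(avg, (A * B.conjugate()).real / 2)   # the two numbers agree
```

The uniform sum over a full period kills the double-frequency term exactly, so the agreement is to machine precision, not just approximate.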

Let's consider a cylindrical resistor of length L and radius a. If a current I flows through it, establishing a voltage drop V across it, the electric field at the surface is E = V/L and the magnetic field is H = 2I/ca (from Ampère's law). The Poynting vector is normal to the surface, points inward (check this!), and has a magnitude N = (c/4π)(2VI/caL) = VI/2πaL. Integrating over the surface of area A = 2πaL, we find the total power P = VI. Note that this works if I and V are alternating current phasors, if the complex Poynting vector is used. This power enters the resistor from the field and is dissipated as heat. However, we also know that the heat is produced by inelastic collisions of the electrons with the lattice, the electrons gaining energy when accelerated by the field. Contemplate this for a while, and marvel at the strangeness of physics. The Poynting vector may not give the mechanism of energy transfer, but when integrated over a closed surface gives the correct answer in terms of the field vectors. Energy is certainly stored in a wave propagating in space, but is it stored in a static field? It acts very much as if it were, but the whole picture must be considered.
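Here is a sketch of the resistor calculation in Gaussian units; the particular voltage, current and dimensions are assumed example values:

```python
import math

# Energy flow into a cylindrical resistor via the Poynting vector (Gaussian units).
c = 2.998e10                     # cm/s
V_si, I_si = 10.0, 2.0           # assumed example values: 10 V, 2 A
V = V_si / 299.8                 # statvolts (1 statvolt ~ 299.8 V)
I = I_si * c / 10.0              # esu/s     (1 A = c/10 esu/s)

L, a = 5.0, 0.1                  # resistor length and radius in cm (assumed)
E = V / L                        # tangential electric field at the surface
H = 2 * I / (c * a)              # magnetic field at the surface, from Ampere's law
N = (c / (4 * math.pi)) * E * H  # Poynting flux, erg/cm^2-s, directed inward

P_erg = N * 2 * math.pi * a * L  # integrate over the lateral surface
print(P_erg * 1e-7, V_si * I_si) # both 20 W
```

Note that a and L cancel in the product, which is why the answer is exactly VI regardless of the resistor's dimensions.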

Waves in a Conducting Medium


The curl equation for H has two source terms, the conduction current and the displacement current. So far we have only considered the displacement current, which makes travelling electromagnetic waves possible. It is easy to include the conduction current if we assume a plane sinusoidal wave. Then Maxwell's equations become algebraic equations, and we can find a wave equation just as we did above, but now there are new terms. The dispersion relation becomes k² = ω²κμ/c² + j(4πμσω/c²). What happens is that k becomes complex, k = k' + jk", so that when we put this in the wave expression, we find e^-k"z e^j(ωt - k'z), so that k" represents an exponential absorption of the wave, while k' gives the usual wavelength.
The dispersion relation yields (k')² - (k")² = ω²κμ/c² and k'k" = 2πμσω/c² = 1/δ² when we equate real and imaginary parts. The new parameter δ, with the dimensions of cm, is called the skin depth, a name we shall soon justify. If we write v/ω = λ/2π and define r = 2[(λ/2π)/δ]², then (k')² = (2π/λ)²[√(1 + r²) + 1]/2 and (k")² = (2π/λ)²[√(1 + r²) - 1]/2. Note that λ is not quite the wavelength, except when r = 0. Everything depends on the parameter r, which is essentially the square of the ratio of λ/2π to the skin depth. If r is small, then λ is the actual wavelength, and k" = (2π/λ)(r/2). This is the case of weak damping.
On the other hand, if r is large, then k' = k" = 1/δ. This means the amplitude absorption is e^-z/δ, so the wave will be absorbed almost completely in the distance of a few skin depths. In this case, the conduction current completely dominates the total current, and the fields diffuse into the material, like heat. This is the case with metals at low and moderate frequencies. Multiply the conductivity in S/m by 9 × 10⁹ to find the conductivity in esu. The conductivity of copper is 5.80 × 10⁷ S/m, or 5.22 × 10¹⁷ esu. This makes δ = 6.61 ν^-1/2 cm. At 60 Hz, the skin depth is 0.85 cm. At 1 MHz, it is 66 μm.
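The copper numbers can be reproduced directly from δ = c/√(2πμσω) (a sketch using the unit conversion given above):

```python
import math

# Skin depth in Gaussian units: delta = c / sqrt(2*pi*mu*sigma*omega).
c = 2.998e10             # cm/s
sigma_si = 5.80e7        # conductivity of copper, S/m
sigma = sigma_si * 9e9   # esu (multiply S/m by 9e9, per the text)

def skin_depth_cm(freq_hz, mu=1.0):
    omega = 2 * math.pi * freq_hz
    return c / math.sqrt(2 * math.pi * mu * sigma * omega)

print(skin_depth_cm(60))    # ~0.85 cm at power frequency
print(skin_depth_cm(1e6))   # ~6.6e-3 cm = 66 um at 1 MHz
```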

Reflection of Electromagnetic Waves


When a plane wave strikes a plane interface between media, the wave in general is partly transmitted, partly reflected. Such a problem is solved by assuming an incident and reflected wave in the first medium, and a transmitted wave only in the second medium. The boundary conditions on the fields then determine the relative amplitudes of the three waves. To satisfy the boundary conditions, the waves must satisfy the laws of reflection and refraction familiar from optics. If they do not, the boundary conditions cannot be satisfied. The angle of incidence is the angle between the normal to the interface and the wave vector of the incident wave. The angle of reflection is the similar angle for the reflected wave. The directions of propagation must lie in the same plane, the plane of incidence, and the angle of incidence equals the angle of reflection. The wave vector of the transmitted, or refracted, wave also lies in the plane of incidence, and the ratio of the sine of the angle of incidence to the sine of the angle of refraction is the same as the ratio of the propagation velocities. In terms of the index of refraction, n sin i = n' sin r, where i and r are the angles in media with indices n and n', respectively. All this should be familiar from optics, but, of course, follows from Maxwell's equations as well.
The general problem for two nonabsorbing media gives Fresnel's equations. We will explain the method through a simpler problem, the normal reflection of electromagnetic waves from a metal, where we neglect the displacement current, an approximation valid for frequencies below those of the ultraviolet. The two possible polarizations are equivalent, so we need consider only one, in which E is along the x-axis, and H along the y-axis. Let the incident wave be E e^j(ωt + kz), the reflected wave E' e^j(ωt - kz), and the transmitted wave E" e^j[ωt + (1 + j)z/δ], where k' = k" = 1/δ in the metal. The corresponding magnetic fields are -(c/ω)kE e^j(ωt + kz), (c/ω)kE' e^j(ωt - kz), and -(c/ω)[(1 + j)/δ]E" e^j[ωt + (1 + j)z/δ]. We have set μ = 1.
At the plane interface, the tangential components of E and H must be continuous. The fields involved are found by letting z = 0 in the above expressions. Factoring out common factors, we find E + E' = E" (electric fields) and k(E - E') = [(1 + j)/δ]E" (magnetic fields). These two equations are sufficient to determine the ratios E'/E and E"/E. We find E'/E = [(kδ - 1) - j]/[(kδ + 1) + j] and E"/E = 2kδ/[(kδ + 1) + j]. As kδ becomes vanishingly small (skin depth much less than a wavelength) we see that E'/E → -1 and E"/E → 0, as they should: the wave is reflected with a phase reversal, and almost nothing penetrates the metal.
The intensity reflection coefficient R = |E'/E|² = [(kδ)² - 2kδ + 2]/[(kδ)² + 2kδ + 2]. Since kδ is quite small for most metals, the approximate result R = 1 - 2kδ can be used. For copper, the skin depth δ for 500 nm light is about 2.7 nm, so 2kδ = 0.068, and R = 1 - 0.068 = 0.932, or 93.2%. This presumes that the conductivity is still the static conductivity, which must break down at a frequency not much higher, since copper appears reddish, which implies that the bluer light penetrates the metal and is not reflected. There is actually rough agreement between this formula and experiment if the frequency is not too high, as in the infrared. For accurate results the frequency dependence of the conductivity must be taken into account.
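A sketch putting the last two sections together for copper at 500 nm; it uses the static conductivity, exactly as the caveat above warns:

```python
import math

# Normal-incidence reflectivity of a metal, R = [(kd)^2 - 2kd + 2]/[(kd)^2 + 2kd + 2],
# evaluated for copper at 500 nm with the *static* conductivity.
c = 2.998e10
sigma = 5.80e7 * 9e9             # copper conductivity, esu
lam_cm = 500e-7                  # 500 nm in cm
freq = c / lam_cm

omega = 2 * math.pi * freq
delta = c / math.sqrt(2 * math.pi * sigma * omega)   # skin depth at this frequency
k = 2 * math.pi / lam_cm
kd = k * delta

R_exact = (kd**2 - 2 * kd + 2) / (kd**2 + 2 * kd + 2)
R_approx = 1 - 2 * kd
print(delta * 1e7, kd, R_exact, R_approx)            # delta ~2.7 nm, R ~0.93
```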

The Skin Effect


Here we consider the classical skin effect that occurs for alternating currents in wires when the diameter of the wire is larger than the skin depth as we have defined it above. Since the skin depth in copper is 0.85 cm even at 60 Hz, we see that its effects can be felt even at power frequencies for large conductors, such as those used in transmission lines. At radio frequencies, current is restricted to the surface of conductors, and all the copper in the middle is wasted. This analysis also contains some interesting mathematics, so it is worth looking at briefly. We start from Maxwell's equations, in cylindrical coordinates, and neglect the displacement current. The current and the electric field have only z-components, while the magnetic field has only φ-components.
Harmonic variation at angular frequency ω is assumed. Maxwell's equations are curl H = (4πσ/c)E, and curl E = -j(ω/c)H. In the usual way, we find that the z-components of E and i (current density), and the φ-component of H all satisfy the same wave equation, which for i(r) is div grad i(r) = j(4πσω/c²) i = -α²i(r). If you write div grad in cylindrical coordinates, you find (1/r)d(r di/dr)/dr + α²i = 0. This is just Bessel's equation of zero order, and the solution finite at r = 0 is i(r) = CJ0(αr). If i(0) denotes the current density at the surface r = a, then i(r) = i(0)J0(αr)/J0(αa), where a is the radius of the wire. Finding the expression for the current is just this easy, but now the fun begins.
Now α = √(-j)√2/δ, and √(-j) = e^-jπ/4 = (1 - j)/√2. Bessel functions of this kind of complex argument are not simple. However, J0(j^-1/2 x) = ber(x) + j bei(x), and the ber() and bei() functions and their derivatives are tabulated. Dwight, in fact, gives a useful table for x = 0 to 10. For greater values of x, an approximation valid when the skin depth is a small fraction of the wire diameter can be used instead of the exact expressions. Definitions for these functions may differ slightly.
The first thing we need to do is to find the total current I in the wire in terms of i(0). Then, if Z is the impedance of a length L of the wire, we have i(0) = σE = σZI/L and so can find the impedance Z. I is simply the integral of i(r) over the cross-section of the wire, I = 2π∫(0,a) i(r)rdr. Now, ∫rJ0(αr)dr = rJ1(αr)/α, so the integral is easy to do. Since the tables give the derivatives of ber() and bei(), it is best to express the integral as -rJ0'(αr)/α instead for numerical evaluation.
The result is I = 2πi(0)aJ1(αa)/[αJ0(αa)]. Substituting the value of i(0) from above, and cancelling I, we find Z = (αL/2πσa)J0(αa)/J1(αa). The value of this expression is complex, so we have not only a resistance R, but an inductive reactance X as well. In terms of the dc resistance of the wire, R0 = L/σπa², we have Z/R0 = (αa/2)J0(αa)/J1(αa). Since J1 = -J0', this can be written in terms of the tabulated functions as Z/R0 = j(x/2)[ber(x) + j bei(x)]/[ber'(x) + j bei'(x)], where x = √2(a/δ). For x = 5.0, ber(5) = -6.23, bei(5) = 0.116, ber'(5) = -3.845 and bei'(5) = -4.354 (Table 1050 in Dwight), so Z/R0 = j(2.5)(-6.23 + j0.116)/(-3.845 - j4.354) = 2.682∠40.38° = 2.04 + j1.74, which agrees with the graph in Ramo.
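The table lookup can be checked in pure Python by summing the defining power series for ber, bei and their derivatives (a sketch; it assumes the convention J0(√(-j)x) = ber x + j bei x and evaluates the impedance ratio as Z/R0 = j(x/2)(ber + j bei)/(ber' + j bei'), which follows from J1 = -J0'; tabulated values elsewhere may differ in sign convention):

```python
import math

# Kelvin functions from their power series (converges quickly for x ~ 5).
def kelvin(x, terms=20):
    ber = sum((-1)**k * (x/2)**(4*k) / math.factorial(2*k)**2 for k in range(terms))
    bei = sum((-1)**k * (x/2)**(4*k+2) / math.factorial(2*k+1)**2 for k in range(terms))
    berp = sum((-1)**k * 2*k * (x/2)**(4*k-1) / math.factorial(2*k)**2
               for k in range(1, terms))
    beip = sum((-1)**k * (2*k+1) * (x/2)**(4*k+1) / math.factorial(2*k+1)**2
               for k in range(terms))
    return ber, bei, berp, beip

x = 5.0                          # sqrt(2)*(a/delta) = 5, the example in the text
ber, bei, berp, beip = kelvin(x)
z_ratio = 1j * (x/2) * (ber + 1j*bei) / (berp + 1j*beip)

print(ber, bei, berp, beip)      # ~ -6.23, 0.116, -3.845, -4.354
print(z_ratio)                   # ~ 2.04 + 1.74j, as in Dwight/Ramo
```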

The Parallel-Wire Transmission Line


Let's study the parallel-wire transmission line, with the geometry and conventions as shown in the figure at the right. There are two cylindrical conductors of diameter 2b, whose axes are parallel and separated by a distance 2d. The potential between the wires is v, positive when the right-hand conductor is positive. The potential is zero on the plane x = 0. The currents i in the conductors are equal and opposite in direction, positive when the current in the right-hand conductor is out of the page. Coordinates x and y are in the transverse plane, while z is out of the page, defining a right-handed triple of unit vectors.
This is not the easiest transmission line to study--the coaxial line is simpler. However, it contains so much good stuff, and is also of practical use, that it is well worth study. Parallel-wire lines are used as tuning elements, and for connecting TV aerials with receivers ("twinlead"). An AC line cord is also a parallel-wire transmission line. They can also be used for measurements on electromagnetic waves (Lecher wires), especially in teaching laboratory experiments. The waves we consider here are guided waves, attached to their guides, the wires of the transmission line, and not propagating in free space. The waveguide we consider here has two conductors. There are also waveguides with only one conductor, familiar from radar engineering, but these will not propagate waves of an arbitrarily low frequency, and exhibit a cutoff frequency below which any waves are strongly attenuated.
We postulate that the fields E(x,y,z) and H(x,y,z) lie in the xy-plane; that is, they are transverse. We will be able to show that the fields satisfy Maxwell's equations, and so specify a transmission mode. This mode is called a TEM mode (transverse electric and magnetic), and exists for all frequencies, down to zero or DC. We will search for wave solutions that depend on t and z through t ± z/v, where v is the wave velocity, which will be equal to the speed of electromagnetic waves in the medium surrounding the conductors. Although μ is usually unity, we shall carry it in the equations for completeness.
In the problem we shall study, the current is restricted to the surface of the conductors, as if they were thin-walled tubes. This is the case in most practical uses of parallel-wire lines at frequencies above 1 MHz, because of the skin effect. Even at lower frequencies, our analysis will give closely approximate results with a few corrections. An exact analysis in this case is exceedingly difficult unless numerical methods are used, and then it is difficult to see what is going on.
Maxwell's equations show that for zero frequency (DC) the z-dependence of the fields vanishes (as, of course, does the time dependence). The electric and magnetic fields become uncoupled, and can be treated independently. This is what is usually called a two-dimensional problem, but it is really still a three-dimensional problem where the fields do not depend on the third coordinate. Special methods, and special problems, characterize two-dimensional field problems. The most important difficulty is that the unlimited extension in the z-direction is distinctly unphysical, and causes difficulties when we integrate over all space. First, let's consider the electric problem.
The electric field is determined by the field equations curl E = 0 and div D = 4πρ. Here, D = κE, where κ is constant everywhere, so we really have only one vector, which we shall take to be E. The curl equation implies that E can be derived from a scalar potential φ, E = - grad φ. It also shows that the tangential component of E is zero at the surface of a conductor (since the field inside must be zero), while the divergence equation shows that the normal component defines a surface charge density σ' = κEn/4π. In terms of the potential, σ' = -(κ/4π)(∂φ/∂n), where the partial derivative is the normal derivative at the surface. Throughout the medium surrounding the conductors, ρ = 0, so div grad φ = 0, which is Laplace's equation, with the boundary condition that φ = constant over the surface of a conductor. A direct assault on Laplace's equation is fruitless, so we must trust to ingenuity and luck.
Let's try idealizing a charged conductor to a line charge with the same linear charge density λ, charge per unit length or esu/cm. If this line charge is alone in space, the symmetry means that we can find the field by Gauss's Law using a pillbox-shaped volume of radius r and unit length. This gives Er = 2λ/κr, so by integration with respect to r the potential is φ(r) = -(2λ/κ)log r. (The log is the natural logarithm, sometimes written ln.) This is illustrated in the figure at the left, where the equipotentials, cylinders concentric with the line charge, and the lines of force, radial lines, are shown. The electric field E is radial, outward from a positive line charge. The logarithmic potential is infinite not only at the line charge, r → 0, but also at infinity, so we cannot make it vanish there, as we can with the potential of a point charge, that only has a pole at r = 0. The logarithmic potential is zero at r = 1, but its absolute value has no significance, and we could add any constant to it that we like without changing the field.
The magnetic problem can be cast into a very similar form. We have curl H = (4π/c)j, where j is the vector current density, esu/cm²-s, and div B = 0. Since B = μH, where μ is a constant everywhere, we have only one field vector, as in the electric case, which we choose as H. The divergence equation permits us to derive B from a vector potential A, through B = curl A, where A must satisfy div A = 0. In this problem, A has only a z-component, which we shall denote simply by A. For a sufficiently high frequency, the magnetic field at the surface of a good conductor is tangential to the surface, and we shall apply this boundary condition here, though it really is not the case for zero frequency when magnetic fields can penetrate into the conductor. From the curl equation, we find that the surface current density ι in esu/cm-s is given by ι = (c/4π)Ht in terms of the tangential component of H. In terms of the vector potential, ι = (c/4πμ)(∂A/∂n). Except for the constants, this is exactly the same way the surface charge density is related to the potential.
The conductor is idealized to a line current of i esu/s. Then H can be found by integrating it around a circle of radius r, and using the curl equation. We find Hθ = 2i/cr. Since Hθ = -(1/μ)∂A/∂r, we integrate with respect to r to find A = -(2μi/c)log r, as shown in the diagram at the right. Now the circles A = const give the direction of H, and the radial lines are orthogonal to them. Comparing with the electric problem, we see that A/φ = κμi/cλ (here λ is not the wavelength, and i is not the imaginary unit). That is, the two potentials are proportional, and have the same dependence on x and y. This relation will also hold for any superposition of these potentials.
In terms of the potentials, the rectangular field components are: Ex = -∂φ/∂x, Ey = -∂φ/∂y, Bx = ∂A/∂y, By = -∂A/∂x. Since A and φ are proportional, these relations also show that E and H are orthogonal. This, also, will hold for any superposition of the two potentials.
Now we consider the problem of two equal and opposite line charges ± λ a distance 2a apart, and two equal and opposite line currents ± i superimposed on the line charges. We can find the potentials for this problem by adding the potentials for the separate lines, so that φ(P) = -(2λ/κ)log (r/r') and A(P) = -(2μi/c)log (r/r'), where r is the distance from P to the right-hand line charge, and r' the distance to the left-hand line charge. These two potentials will be proportional, with the same proportionality constant found above.
The ratio r/r' will be denoted by K^1/2, so that K = [(x - a)² + y²]/[(x + a)² + y²]. The equipotentials are the curves K = constant, which are found by expressing this condition as [x - a(1 + K)/(1 - K)]² + y² = 4Ka²/(1 - K)². This represents a circular cylinder with its center on the x-axis. What luck! The equipotential K' can coincide with one of the conductors, so the boundary conditions are satisfied here. Then, 1/K' gives a cylinder of the same radius, but an equal distance on the other side of the y-axis, which will coincide with the other conductor. Exactly the same holds for the cylinders A = constant. The potential problem is solved.
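That the curves K = constant really are circles of constant r/r' is easy to confirm numerically (a sketch with assumed values of a and K):

```python
import math

# Check that the circle K = const is a curve of constant r/r' = sqrt(K):
# [x - a(1+K)/(1-K)]^2 + y^2 = 4K a^2/(1-K)^2, line charges at (+a, 0) and (-a, 0).
a, K = 1.0, 0.25                       # assumed example values
x0 = a * (1 + K) / (1 - K)             # center of the equipotential circle
R = 2 * a * math.sqrt(K) / (1 - K)     # its radius

ratios = []
for t in (0.0, 0.7, 2.0, 3.1, 5.5):    # sample points around the circle
    x, y = x0 + R * math.cos(t), R * math.sin(t)
    r = math.hypot(x - a, y)            # distance to the right-hand line charge
    rp = math.hypot(x + a, y)           # distance to the left-hand line charge
    ratios.append(r / rp)

print(ratios)                           # every entry is sqrt(K) = 0.5
```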
In terms of the dimensions of the line, d = a(1 + K')/(1 - K') and b = 2a√K'/(1 - K'). First, we note that d² - b² = a², which locates the line charges and currents. Then we can solve for K', finding √K' = d/b - (d²/b² - 1)^1/2. From the values of the potentials on the surfaces of the conductors, we find v = -(2λ/κ)log K' = (4λ/κ) log (1/√K'). Just as the difference in the potentials gave the potential between the conductors, the difference in the values of A gives the total flux passing between them. If this total flux is Φ, then Φ = (4μi/c) log (1/√K'). Note that 1/√K' = d/b + (d²/b² - 1)^1/2, which is greater than one, so v and Φ come out positive. These equations relate the line charges and currents to the potential difference and the flux linking the line. Equipotentials and lines of force are shown in the diagram at the left.
The capacitance C per unit length of the line, in cm/cm, is the ratio λ/v, which is C = κ/{4 log [d/b + (d²/b² - 1)^1/2]}. The inductance L per unit length, in s²/cm, is the ratio Φ/ci, and comes out as L = (4μ/c²) log [d/b + (d²/b² - 1)^1/2]. The product LC = μκ/c², so 1/√(LC) = v, the phase velocity of electromagnetic waves in the medium. This is a general result for any TEM mode on a transmission line. The ratio Z = √(L/C) = (4/c)√(μ/κ) log [d/b + (d²/b² - 1)^1/2] also has an important meaning, that we shall discover later. It is called the characteristic impedance of the transmission line.
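Here is a sketch evaluating C, L and Z for an assumed geometry, converting the Gaussian impedance to ohms with 1 s/cm ≈ 8.988 × 10¹¹ Ω (both the conversion factor and the d/b ratio are assumptions supplied for the example):

```python
import math

# Per-unit-length C and L of the parallel-wire line (Gaussian units), and the
# characteristic impedance converted to ohms.
c = 2.998e10
kappa, mu = 1.0, 1.0               # air dielectric assumed

def line_params(d, b):
    g = math.log(d/b + math.sqrt((d/b)**2 - 1))   # = arccosh(d/b)
    C = kappa / (4 * g)            # cm/cm
    L = (4 * mu / c**2) * g        # s^2/cm
    Z = math.sqrt(L / C) * 8.988e11  # s/cm converted to ohms
    return C, L, Z

C, L, Z = line_params(d=6.13, b=1.0)   # half-separation/radius ratio d/b ~ 6.13 (assumed)
print(1 / math.sqrt(L * C) / c)        # 1/sqrt(LC) equals c: prints ~1.0
print(Z)                               # ~300 ohm, a standard "twinlead" value
```

Notice that the geometry enters C and L only through the single factor arccosh(d/b), so LC = κμ/c² regardless of the spacing, as the text asserts for any TEM mode.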
There is another more elegant way of finding the fields that uses the theory of complex variables. An analytic function (one whose derivative exists everywhere in a region) of z = x + jy has real and imaginary parts that obey the Cauchy-Riemann conditions and which are harmonic functions (solutions of Laplace's equation) in the xy-plane. If f(z) = f(x + jy) = u(x,y) + jv(x,y), then div grad u = 0 and div grad v = 0, and ∂u/∂x = ∂v/∂y, ∂u/∂y = -∂v/∂x. Compare with the definitions of the rectangular components of E and B, and the resemblance is striking. The function u(x,y) can serve as the potential or the vector potential, and the other function will give an orthogonal set of lines v(x,y) = const. Or, if u is the electric potential, then v can serve as a scalar potential for the magnetic field. This is a different way of finding the magnetic field that is equally applicable to this problem, since in the dielectric curl H = 0, so that H = -grad ψ, where ψ is a magnetic scalar potential. The only problem is finding a suitable analytic function.
The function f(z) = log [(z - a)/(z + a)] will do the job. It is analytic because it is an elementary function of z that does not involve complex conjugates or other specially complex things, but could be written as a real function of x. It arrived in a flash of brilliance; there is no easy way to find it from the statement of the problem. What people generally do is think up all possible analytic functions, and then see what problems they solve. Substitute z = x + jy, and separate real and imaginary parts. This is most easily done if (z - a)/(z + a) is expressed in polar form, and then the logarithm is taken. The real part is just the function we manufactured above by superimposing line charges, and the imaginary part is v(x,y) = tan^(-1)[y/(x - a)] - tan^(-1)[y/(x + a)]. By defining the two arctangents as the angles A and B, and then using the formula for the tangent of K = A - B (not the same K as before), we find the curves v = const to be x^2 + (y - a/tan K)^2 = (a/sin K)^2. These are again circles with centers on the y-axis, and passing through the points (a,0) and (-a,0) which locate the line charges. It is something of a miracle that both the equipotentials and the lines of force are orthogonal systems of circles. Not only does it allow us to solve the problem, it also makes drawing the field rather easy.
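The identification of the real and imaginary parts, and the circle for v = const, can be confirmed numerically (the field point below is an arbitrary assumed value):

```python
import cmath, math

a = 1.0
x, y = 0.7, 1.3            # arbitrary field point (assumed)
z = complex(x, y)

f = cmath.log((z - a)/(z + a))

# real part = log(r/r'), distances to the line charges at (a,0) and (-a,0)
r, rp = abs(z - a), abs(z + a)
print(f.real - math.log(r/rp))   # ~ 0

# imaginary part = tan^-1[y/(x - a)] - tan^-1[y/(x + a)]  (atan2 form)
K = math.atan2(y, x - a) - math.atan2(y, x + a)
print(f.imag - K)                # ~ 0

# the point lies on the circle x^2 + (y - a/tan K)^2 = (a/sin K)^2
lhs = x**2 + (y - a/math.tan(K))**2
rhs = (a/math.sin(K))**2
print(lhs - rhs)                 # ~ 0
```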
At last we are ready to consider actual waves. The potentials and field components now become functions of z and t, and must satisfy Maxwell's equations and the boundary conditions. Write down Maxwell's equations for the transverse fields only. We find eight scalar equations, which fall into a group of four involving the x and y components of the curl equations, and a group of four involving the z components of the curl equations, and the divergence equations, which equate things to zero. These last four equations separate into electric and magnetic sets that are precisely what we used for the static fields. These fields will serve in the general case. This is remarkable, and should not be considered as trivial. There could be a transverse effect for waves propagating in the z-direction, but there is not.
The first set of four equations connect the electric and magnetic fields, making them dependent on each other. In the static case, the fields could have any relative constant values, determined by the values of λ and i, which were not restricted. Now the potential v between the conductors, and the current i in them, will be related. It happens that in free space, this makes the electric and magnetic fields equal to and in phase with each other, as in an unguided plane wave, so the waves on the wires seem to be an electromagnetic wave whose intensity varies in x,y but propagates down the wire at the speed of light, as if sliding on the wires. This is the reason that the propagation has no effect on the transverse fields. The Poynting vector (c/8π)[E x H*] points in the z-direction, and suggests that the energy flux is purely in this direction. The fields are strong only in the vicinity of the conductors, it should be remembered. We now make these consequences more precise.
At zero frequency, when the electric and magnetic fields are independent, we still have a nonzero Poynting vector, but there is no apparent energy flow in an ideal line. If the line is finite and terminated by a resistor R, the energy flow into a box containing the resistor gives the energy dissipated in the resistance (as we discussed above). The Poynting vector must be interpreted and used with care.
The electric and magnetic fields in the wave are connected by the x and y components of the curl equations, curl E = -(1/c)∂B/∂t, curl H = (1/c)∂D/∂t. The fields are found from the potentials, which we now write φ = φ'λ and A = A'i. The primed functions depend only on x,y, while λ = λ(z,t) and i = i(z,t) contain all the z and t dependence. The ratio A'/φ' = κμ/c, from what has been said above, and the same ratio exists between their partial derivatives. From ∂H_y/∂z = -(κ/c)∂E_x/∂t, using the potentials, we find ∂λ/∂t = -∂i/∂z, cancelling a factor (κμ/c) on both sides of the equation. From -∂E_y/∂z = -(1/c)∂B_x/∂t, we find (κμ/c^2)∂i/∂t = -∂λ/∂z. These are coupled first-order differential equations for the functions i(z,t) and λ(z,t) that determine the magnitude of the fields. If we want the potential v between the conductors instead of the charge, then λ = Cv, where C is the capacitance per unit length.
These functions each satisfy the wave equation, div grad i = (1/u^2)∂^2i/∂t^2 for i, with a similar equation for λ, as can be seen by eliminating first λ, then i, between the two first-order equations. The propagation velocity is u = c/√(κμ), as for unguided plane waves in the medium. Hence, λ(z,t) = f(t - z/u) + g(t + z/u), where f and g are arbitrary functions. Then the corresponding i(z,t) = uf(t - z/u) - ug(t + z/u). Once λ or i has been chosen, the other is automatically determined. Substitute in the first-order equations to verify that this is true.
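That the paired forms of λ and i satisfy the first of the coupled first-order equations can be checked by finite differences, for an arbitrary pulse shape (units normalized and u = 1 assumed):

```python
import math

u = 1.0                          # propagation velocity (normalized)
f = lambda s: math.exp(-s**2)    # arbitrary right-moving pulse shape (assumed)

lam = lambda z, t: f(t - z/u)    # charge per unit length
cur = lambda z, t: u*f(t - z/u)  # the matching current wave

# check d(lam)/dt = -d(cur)/dz by central differences at an arbitrary point
z0, t0, h = 0.3, 0.5, 1e-5
dlam_dt = (lam(z0, t0 + h) - lam(z0, t0 - h))/(2*h)
dcur_dz = (cur(z0 + h, t0) - cur(z0 - h, t0))/(2*h)
print(dlam_dt + dcur_dz)   # ~ 0
```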
Now let's consider a sinusoidal disturbance of angular frequency ω. Suppose that v(z,t) = Ae^jω(t - z/u) + Be^jω(t + z/u). This represents a wave of amplitude A travelling in the +z direction and a wave of amplitude B travelling in the -z direction. Then, the current is (1/Cu)i(z,t) = Ae^jω(t - z/u) - Be^jω(t + z/u). The factor 1/Cu = (1/C)√(LC) = √(L/C) = Z, which has already been defined as the wave impedance. It is seen to connect the magnitudes of the voltage and current in a sinusoidal wave on the transmission line.
Let us now choose A = ae^j0 = a and B = be^jβ, and put 2ωz/u = z/(λ/4π) = ζ, where now λ is the wavelength 2πu/ω. Then we have v = e^jω(t - z/u)[a + be^j(β + ζ)] and Zi = e^jω(t - z/u)[a - be^j(β + ζ)]. Except for the common factor that varies sinusoidally with time at any particular position, these quantities can be represented simply in the complex plane, as shown in the diagram. Lay out AC = a and make the radius of the circle BC = b. Then the voltage V is given by the vector AP, and the current I by the vector AQ. As we proceed along the line, ζ changes, and with it the angle β + ζ. The magnitudes of V and I go through maxima and minima, and the phase angle θ = angle PAQ between them changes.
The power is the same at every position. It is given by (AP)(AQ)cos θ. AD = AP cos θ (the angle in a semicircle is a right angle), so the power is the product AD x AQ = AB x AE = constant. (The product of the lengths of a secant from a given external point to the two points on the circumference is constant.) We get a little help from geometry here to prove the assertion, showing how superior a little thought is to dogged grinding.
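The same result follows algebraically: with V = a + be^j(β + ζ) and ZI = a - be^j(β + ζ), the average power goes as Re(V I*) = (a^2 - b^2)/Z, independent of ζ. A quick numerical confirmation (the amplitudes, phase, and normalized impedance are assumed values):

```python
import cmath

a_amp, b_amp, beta = 1.0, 0.4, 0.7   # assumed amplitudes and reflection phase
Z = 1.0                               # normalized wave impedance (assumed)

powers = []
for zeta in [0.0, 1.0, 2.5, 4.0]:    # positions along the line
    V = a_amp + b_amp*cmath.exp(1j*(beta + zeta))
    I = (a_amp - b_amp*cmath.exp(1j*(beta + zeta)))/Z
    powers.append((V*I.conjugate()).real)
print(powers)   # every entry equals a^2 - b^2 = 0.84
```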
The ratio of B to A is determined by conditions at the termination of the line. If we send a wave down an infinite line, nothing ever comes back, so B = 0. The diagram reduces to the line AC. If we terminate the line by a resistance R = Z, then B = 0 as well, and the line appears to be infinite from the sending end. If the line is abruptly terminated by an open circuit or a reactance, then |B| = |A| and the wave is reflected with the same amplitude. In this case, the circle passes through A, and we have standing waves from zero amplitude to a maximum. All these things are familiar to those who have worked with transmission lines.
A real transmission line, especially a long one, will exhibit a series resistance R per unit length, and a leakage conductance G per unit length. These introduce an extra series potential difference iR, and a current shunt vG, per unit length, which will modify the equations for v and i as functions of z and t. The equations can be written down without explicit consideration of the fields at all, since the parameters v and i stand in for them here. This is an example of the power of circuit theory, one of the best tools of electrical engineering. R and G may include other effects than a physical resistance or conductance, such as dielectric losses.
Take a short length of line dz, as shown in the figure at the left, and write Kirchhoff's voltage law and current law for it. The result is C(∂v/∂t) + Gv + ∂i/∂z = 0 and L(∂i/∂t) + Ri + ∂v/∂z = 0, which reduce to our previous equations if R = G = 0. If either v or i is eliminated between the two equations, the resulting equation is called the Telegrapher's equation, which first received attention in the theory of the transatlantic cable in the mid-19th century, notably by William Thomson, later Lord Kelvin. If a sinusoidal signal e^j(ωt - kz) is substituted in it, with a complex k = k' - jk", as we did for a plane wave in an absorbing medium, we can obtain the phase constant k' and the absorption coefficient k" by equating real and imaginary parts. Dreary algebra yields k'^2 = (1/2)(S + ω^2LC - GR), k"^2 = (1/2)(S - ω^2LC + GR), where S^2 = (ω^2LC - GR)^2 + ω^2(CR + LG)^2. It is really difficult to see anything in this complication, but if R and G are small, then k" ≈ (R/Z + GZ)/2, where Z is the wave impedance. Also, k'^2 ≈ ω^2LC + (R/Z - ZG)^2/4. This makes things much clearer! It shows immediately that if R/Z = GZ, or Z = √(R/G), then the attenuation is a minimum at k" = R/2Z, and the wave velocity is constant, very desirable conditions indeed.
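A sketch comparing the exact propagation constant with the small-loss approximation, using the standard form jk = √((R + jωL)(G + jωC)) for the Telegrapher's equation with sinusoidal excitation (the per-unit-length line constants below are assumed, illustrative values, not from the text):

```python
import math, cmath

# assumed per-unit-length constants (illustrative values)
Lv, Cv, Rv, Gv = 1e-6, 4e-11, 0.005, 1e-9
w = 2*math.pi*1e4          # angular frequency (assumed)

# exact: gamma = jk = k'' + j k'  for a signal e^{j(wt - kz)}, k = k' - jk''
gamma = cmath.sqrt((Rv + 1j*w*Lv)*(Gv + 1j*w*Cv))
k2, k1 = gamma.real, gamma.imag    # attenuation k'' and phase constant k'

# small-loss approximation from the text: k'' ~ (R/Z + GZ)/2
Z = math.sqrt(Lv/Cv)
k2_approx = (Rv/Z + Gv*Z)/2
print(k2, k2_approx)   # agree to better than 1 percent for these values
```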
In the early days of the telephone, there were no electronic amplifiers, and what amplification there was came from the carbon microphones. By shouting, communication was possible over perhaps twenty miles or so, but long distance was out of the question. The circuits, wires on poles, could be made with relatively small R and G, but the large capacitance of a long circuit was unavoidable. (Any other kind of circuit was worse, incidentally). As on the transatlantic cable, the result was strong attenuation and distortion because of the variation of velocity with frequency. Michael Pupin found that if the series inductance L of the circuit was raised, so was Z, and the circuit was greatly improved. In fact, with a proper choice of L, the condition R/Z = GZ could be obtained, and the distance of communication increased by a factor of 4 or so, making limited long-distance communication possible. This was done by connecting loading coils in series with the line at regular intervals. Transcontinental long distance had to wait for vacuum-tube amplifiers, which finally arrived around 1915. The development of feedback amplifiers in the 1920's completely solved the problem, since the frequency response could be tailored to the line, and distortion was corrected at the same time the signal strength was restored.
As is usual, we have considered infinite lines and signals that have existed for eternity, but neither case is physical. The phenomena that arise when the extent of signal is limited in space, or a line limited in length and terminated in one way or another, or when a signal begins and stops, are very complicated and very interesting, and very seldom treated or mentioned. It is best to keep these considerations in mind, and to be prepared for startling results. There are approximate or empirical methods of handling real situations that give satisfactory results in practice, however.

Retarded Potentials and Radiation


The two most remarkable revelations of physics are that matter behaves differently on a small scale (quantum mechanics) and that there is electromagnetic radiation (electrodynamics and relativity). The idea that energy could be transmitted without the intervention of matter was a very difficult one for physicists to accept, and was never revealed to any religion. Even J. C. Maxwell (1831-1879) could not embrace it, as he states quite definitely at the end of his treatise (see References). In this work, he devotes 20 pages out of 999 to electromagnetic waves, and does not consider their sources. Heinrich Hertz (1857-1894) greatly clarified the subject after his 1886 experiments, and connected radiation with its sources. An outstanding result of Maxwell's and Hertz's studies was that electromagnetic interactions propagate in vacuum at a finite velocity equal to the ratio of electromagnetic to electrostatic absolute units, which we now know as the speed of light, c = 2.998 x 10^10 cm/s, a fundamental constant in relativity.
The first step is to introduce the scalar and vector potentials φ and A into the full system of Maxwell's equations. For ease of writing, we shall let vectors be denoted by ordinary-weight type where there can be no confusion, and express time derivatives by a prime. That is, ∂f/∂t = f'. Also, we will assume free space where μ = κ = 1, and take the basic field vectors as E and H.
Then, we have curl H = (4π/c)j + (1/c)E', div H = 0, curl E = -(1/c)H', div E = 4πρ. Since div H = 0, we are permitted to put H = curl A. Then curl (E + A'/c) = 0, so we can put E + A'/c = -grad φ, or E = -grad φ - A'/c. Now div curl H = (4π/c)div j + (4π/c)ρ' = 0. Hence div j + ρ' = 0, which expresses the conservation of charge, showing that the equations are consistent with this. Further, -div (grad φ + A'/c) = 4πρ, and curl curl A = (4π/c)j - (1/c)(grad φ' + A"/c), or grad div A - div grad A = (4π/c)j - (1/c)grad φ' - A"/c^2. This sorts itself out to div grad A - A"/c^2 - grad(div A + φ'/c) = -(4π/c)j. We will have the wave equation with phase velocity c for A, provided that div A + φ'/c = 0. We are at liberty to redefine A so that this relation holds by making a suitable gauge transformation A → A + grad ψ, which does not change the fields, as can be seen by substituting the new A in the formulas for the fields in terms of A. When this is done, the first equation we found becomes div grad φ - φ"/c^2 = -4πρ.
Now we have wave equations for A and φ, showing that they propagate at speed c, plus the Lorentz condition div A + φ'/c = 0. Since the equations for the three rectangular components of A and φ are the same, we need only look at the solution for φ, which we can then apply to the components of A. In electrostatics, or if c → ∞, the solution for φ is ∫ ρ(u,v,w)dV/R, where R is the distance from the observation point P(x,y,z) to the source point (u,v,w), and dV = dudvdw. We now generalize this for finite c by φ = ∫ [ρ]dV/R, where [ρ] = ρ(u,v,w,t - R/c), called the retarded value of ρ at P. It is not difficult to show that this integral satisfies the inhomogeneous wave equation for φ. Separate the region of integration into two parts, one near the source, which we assume is small compared to a wavelength, and the other consisting of the rest of space. In the first integral, we can neglect the retardation, and the Laplacian of this part will just be -4πρ. In the second integral, the Laplacian with respect to the observation coordinates is the same as the Laplacian with respect to the source coordinates, so we can take it inside the integral. [ρ] certainly satisfies the wave equation div grad [ρ] - [ρ]"/c^2 = 0, so we can replace the Laplacian by the second time derivative. Now, adding the two contributions, we have div grad φ = -4πρ + φ"/c^2, so the retarded integral for φ does satisfy the inhomogeneous wave equation for φ. This analysis can be made much more rigorous with some more powerful mathematics, but this should remove suspicions.
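The key fact used here, that a retarded point-source potential φ = f(t - r/c)/r satisfies the wave equation away from the source, is easy to confirm with finite differences (the pulse shape and units are assumed):

```python
import math

c = 1.0                          # wave speed (normalized)
f = lambda s: math.exp(-s**2)    # arbitrary source function (assumed)

phi = lambda r, t: f(t - r/c)/r  # retarded potential of a point source

# radial Laplacian (1/r) d^2(r phi)/dr^2  vs  (1/c^2) d^2 phi/dt^2
r0, t0, h = 2.0, 0.5, 1e-4
lap = ((r0 + h)*phi(r0 + h, t0) - 2*r0*phi(r0, t0)
       + (r0 - h)*phi(r0 - h, t0))/(h**2*r0)
ptt = (phi(r0, t0 + h) - 2*phi(r0, t0) + phi(r0, t0 - h))/h**2
print(lap - ptt/c**2)   # ~ 0
```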
We have just seen something remarkable. The dependence of the potentials on the sources is exactly the same as in the static case, only with the sources replaced by their retarded values. This shows clearly that influences in the electromagnetic field propagate at the finite speed of light, and not instantaneously. There is no "action at a distance" here! We should also note that the fields do not behave in this manner; they are not simply the retarded static fields. If they were, there would be no radiation. All this seems to give the potentials a more basic nature than the fields, and in advanced work we use the potentials only. Even the gauge transformations have a deep significance that is not reflected in classical electrodynamics, but requires quantum field theory for its elucidation.
Now we can consider the radiation from a small source successfully, largely following Hertz. This is not a general analysis, but tells us a lot about electromagnetic radiation anyway, and it is not hard to follow. We start by making a cunning choice for the vector potential A. We assume that it has only a z-component A = [p']/cr, where p'(t) is an arbitrary source function located at the origin, and is not a function of direction. By making this choice, we determine the nature of the source, which we shall investigate later. If we substitute this expression in the homogeneous wave equation (the only sources are near the origin, and we are considering the fields in free space), it is found to satisfy it. Note that ∂[p']/∂r = ∂{p'(t - r/c)}/∂r = -[p"]/c, where now r = R. This is differentiation with respect to the "Maxwell" r in the retarded function. Any other r is called a "Coulomb" r.
The scalar potential can be obtained from div A + φ'/c = ∂A/∂z + φ'/c = 0. We find φ' = -{[p"]/cr + [p']/r^2}(z/r), which integrates to φ = -{[p']/cr + [p]/r^2}(z/r). Note that ∂A/∂z = (∂A/∂r)(z/r). From A and φ we can determine the fields by differentiation, taking care to include both Maxwell and Coulomb r's. This is a little more complicated than might appear at first sight, but it is straightforward. We choose spherical polar coordinates, with the axis in the z-direction, so that z/r = cos θ. The results (which the reader should verify) are H_φ = {[p"]/c^2r + [p']/cr^2} sin θ, E_θ = {[p"]/c^2r + [p']/cr^2 + [p]/r^3} sin θ, and E_r = {2[p']/cr^2 + 2[p]/r^3} cos θ. All other field components are zero. These are the exact fields at any distance r from the source.
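A short sketch comparing the magnitudes of the three kinds of terms for a sinusoidal source p(t) = p sin ωt, for which |[p"]| ~ ω^2 p, |[p']| ~ ωp, |[p]| ~ p (the frequency is assumed, and p = 1 is taken for simplicity):

```python
import math

c = 2.998e10
w = 2*math.pi*1e8            # 100 MHz (assumed)
lam = 2*math.pi*c/w          # wavelength, cm

for r in [0.01*lam, lam/(2*math.pi), 10*lam]:
    radiation = w**2/(c**2*r)    # the [p'']/c^2 r term (p = 1)
    induction = w/(c*r**2)       # the [p']/c r^2 term
    coulomb   = 1.0/r**3         # the [p]/r^3 term
    print(r/lam, radiation, induction, coulomb)
```

All three terms coincide at r = λ/2π, the conventional boundary between the near and far zones; well beyond it the radiation term dominates, well inside it the Coulomb term does.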
The fields are the sum of three contributions, which may be called the radiation, induction and Coulomb fields. Each depends on a different time derivative of the source function p(t), and each has a characteristic dependence on r. The radiation fields dominate for r >> λ, induction for r ≈ λ, and the Coulomb fields for r << λ. For a sinusoidally varying source of angular frequency ω, the wavelength λ = 2πc/ω. The directions of the electric and magnetic radiation fields are shown at the right.
The Coulomb fields, depending on r^(-3), are precisely those of a static electric dipole of moment p(t). This was determined when we chose the form of A, which we now recognize as applying to electric dipole radiation. The radiation fields consist of only two components, E in the meridional direction and H in the latitudinal direction. Their magnitudes are equal, and are proportional to sin θ. The radiation is zero in the polar direction, and a maximum in the equatorial direction. The radiation fields arise entirely from differentiating the Maxwell r. Finally, the induction fields depend on p'(t) and fall off as r^(-2). They resemble the Coulomb dipole fields, but are a quarter-wavelength out of phase and are still considerable at a greater distance than the Coulomb fields. These fields have been used for short-distance communication links that do not require wires.
The Poynting vector is N = (c/4π)EH = (c/4π)[p"]^2 sin^2θ/c^4r^2 erg/cm^2/s, pointing radially outward. The total radiated energy U is obtained by integrating over the surface of a large sphere using the surface element 2πr^2 sin θ dθ, with the result that U = (2/3c^3)[p"]^2 erg/s. This is independent of the radius of the sphere, showing that the energy becomes well and truly detached from its source. If p(t) = p sin ωt, then p" = -ω^2p sin ωt, <p"^2> = ω^4p^2/2, and the average rate of radiation is U = p^2ω^4/3c^3 erg/s.
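The cycle average can be verified numerically from the instantaneous rate (2/3c^3)[p"]^2, since the mean of sin^2 over a full cycle is 1/2 (the dipole amplitude and frequency below are assumed):

```python
import math

c = 2.998e10
p0 = 1e-18                 # dipole moment amplitude, esu-cm (assumed)
w = 2*math.pi*1e9          # angular frequency (assumed)

# average (2/3c^3) p''^2 over one cycle, with p'' = -w^2 p0 sin(wt)
N = 1000
T = 2*math.pi/w
avg = sum((w**2*p0*math.sin(w*k*T/N))**2 for k in range(N))/N
U_num = (2.0/(3*c**3))*avg

U_formula = p0**2*w**4/(3*c**3)
print(U_num/U_formula)   # ~ 1.0
```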
If p = ql, where l is the distance between the charges -q and +q, then p' = il. Let p' = il sin ωt. Then U = (8π^2/3c)(l/λ)^2(i_rms)^2 = R_r(i_rms)^2, where R_r is called the radiation resistance. The power of the antenna current working into this equivalent resistance gives the power radiated.
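Converting R_r = (8π^2/3c)(l/λ)^2 from Gaussian units (s/cm) to ohms, using the conversion 1 s/cm = 9 x 10^11 ohm with the rounded c = 3 x 10^10 cm/s, recovers the familiar engineering formula R_r = 80π^2(l/λ)^2 ohms for a short dipole with uniform current (the l/λ value below is assumed):

```python
import math

c = 3e10                 # cm/s, rounded so the unit conversion is exact
l_over_lam = 0.05        # assumed short antenna, l = lambda/20

Rr_gaussian = (8*math.pi**2/(3*c))*l_over_lam**2    # s/cm
Rr_ohms = Rr_gaussian*9e11                          # ohms

print(Rr_ohms, 80*math.pi**2*l_over_lam**2)   # both ~ 1.97 ohm
```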
If we let H → E and E → -H in the source-free field equations curl E = -H'/c, curl H = E'/c, div H = 0 and div E = 0, we see that the same equations are recovered. This means that if we interchange E and H in the fields we have found for an electric dipole, the resulting fields are still possible radiation fields. By examining the Coulomb terms, we see that the source is now a magnetic dipole of magnitude m = iS/c, where i is the current in a circuit of area S. Note that m has the same dimensions as p, esu-cm. H is now meridional, E is latitudinal, and the Poynting vector is the same. If the current is i(t) = i sin ωt, then the average power radiated is U = (S^2ω^4/3c^5)i^2. This can also be put in the form U = (8π^2/3c)(2πS/λ^2)^2(i_rms)^2, for comparison with the electric dipole result. A magnetic dipole of area S is equivalent to an electric dipole of length l = 2πS/λ. Our analysis only applies for sources whose maximum linear extent is small compared to a wavelength, so that retardation across the source is negligible.
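The two forms of the magnetic-dipole power agree numerically if i in the first form is read as the peak amplitude, so that i^2 = 2(i_rms)^2 (the loop area, wavelength, and current below are assumed values):

```python
import math

c = 2.998e10
lam = 100.0                 # wavelength, cm (assumed)
w = 2*math.pi*c/lam
S = 50.0                    # loop area, cm^2 (assumed)
i_pk = 1.0                  # peak current (assumed, statamperes)
i_rms = i_pk/math.sqrt(2)

U1 = (S**2*w**4/(3*c**5))*i_pk**2
U2 = (8*math.pi**2/(3*c))*(2*math.pi*S/lam**2)**2*i_rms**2
print(U1/U2)   # ~ 1.0
```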
It was impossible to construct a model of the atom that radiated as described by Maxwell's theory. A moving electron would simply spiral in to the nucleus, and normal matter could not exist. The trouble was in the atom model, not in Maxwell's theory. When a proper expression for the fluctuating dipole moment was obtained, it worked when substituted in the equations we have derived above. The vibrations are concerned with the transitions between two states, and there is no fluctuating dipole moment in a stationary state. This explained the many frequencies observed, and also the stability of atoms. Classical theories were used successfully in quantum mechanics through the correspondence principle before the proper methods of quantum calculation were devised.

References


Note: if you spot any errors or misprints, I would as usual be grateful for notification via e-mail. If you consider the HTML, you may forgive me a few misprints.
M. Abraham and R. Becker, Classical Electricity and Magnetism 2nd ed.(New York: Hafner, 1949). Chapter X is an excellent review of electromagnetic waves at an intermediate level, and the source that prompted this article. It also contains a good introduction to vector calculus, using hydrodynamic analogies. A later edition has been reprinted by Dover.
J. D. Jackson, Classical Electrodynamics, 2nd ed. (New York: John Wiley & Sons, 1975). Chapter 4. The third edition, I understand, uses Giorgi units, a regrettable concession to fashion. Ewald and Oseen are on p. 512. The precursors are on p. 313ff. There is an excellent bibliography.
M. Born and E. Wolf, Principles of Optics (London: Pergamon Press, 1959). Chapters 1 and 2. A good source for Fresnel's equations. The Ewald-Oseen extinction theorem is treated on pp 99ff.
S. Ramo, J. R. Whinnery and T. van Duzer, Fields and Waves in Communication Electronics (New York: John Wiley & Sons, 1965). An unsurpassed engineering introduction to the subject, which, naturally, uses Giorgi units. The graph of the skin effect in wires is on page 297.
H. B. Dwight, Tables of Integrals and Other Mathematical Data, 4th ed. (New York: Macmillan, 1961). Table 1050, pp 324-325, and Chapter 10. It is extremely regrettable that this useful handbook is out of print.
J. C. Maxwell, A Treatise on Electricity and Magnetism, 3rd ed., 2 Vols. (New York: Dover, 1954). Reprint of the 1891 edition. Vol II, Chapter XX, and pp 492-493.


Composed by J. B. Calvert
Created 2 October 2002
Last revised 13 October 2002