(E10) Alternating Current (AC)
In a flashlight the electric current always flows in the same direction. By an old convention dating back to Ben Franklin, the flow is viewed as going from the positive (+) end of the battery, through the electrical load (e.g. a lightbulb) whose resistance limits the intensity of the current, to the negative (–) end; the circuit is closed by a return current inside the battery, which also generates the voltage by a chemical process. We call such a single-direction flow direct current, or DC for short.
The current flowing from outlets in the home is of a completely different kind. In the US it reverses direction 120 times each second, or 60 back-and-forth cycles each second (50 in Europe). That sort is known as alternating current or AC. It is not any harder for power stations to produce AC (indeed, it may be easier) and, as will be explained, it is much more economical to send AC over great distances than to send DC. We will try to describe AC here using the water flow analogy. The analogy may be somewhat strained (skip this part if you wish), but then again, it is just a help to understanding.
The flow of DC in a wire resembles a one-directional flow of water in a pipe. To simulate AC, imagine that the pump wheel which drives the flow, instead of rotating in a constant direction, swirls back and forth like the agitator in a top-loading washing machine (fluid and pump are both just an analogy; the sloshing of this imaginary fluid can be very fast, even at 60 cycles a second, without encountering inertia). The fluid will accordingly slosh back and forth at the same frequency (drawing).
That is definitely NOT a good way to deliver water to users, but with electricity what is being delivered is not the fluid but its energy. If, in the location where we want to use that energy, we place a wheel driven back and forth by the flow, then a ratchet wheel can extract constant rotation from the back-and-forth motion--a wheel with sloping gears which only turns in one direction, being stopped by a lever ("pawl") from turning in the opposite direction.
AC flow is somewhat like this. If you are comfortable with trigonometric functions, you might note that the voltage V and the current I both vary in wave-like fashion, like

    I = I0 sin(360° f t)        V = V0 sin(360° f t)

with f = 60 cycles per second and t the time in seconds. The above relation shows that in each 1/60 second the angle (360° f t) grows by a complete cycle of 360°, and I and V both complete a full wave cycle, like this:
The value of I varies between a maximum I0 (the "amplitude" of the AC current) and –I0, a maximum current in the opposite direction. At the same time the voltage V varies between V0 (the "amplitude" of the AC voltage) and –V0. Ohm's law still holds, at least if the circuit only includes resistances (when capacitors and magnetic coils are also included, or machinery which uses them, the calculation gets more complex). At any instant

    V = I R

with R the same value as the one you would obtain with DC. However, because both I and V fluctuate rapidly, so does the power delivered:

    W = I V = I0 V0 sin²(360° f t)
What is more significant (and is measured by the wattmeters which determine your electric bill) is not the instantaneous power, which can be anywhere between zero and I0V0, but the average power delivered, which turns out to be in this case 0.5 I0V0.
The peak of the voltage V obtained from wall outlets is not 110 volts, but is larger by a factor equal to the square root of 2, about 1.4142..., which makes it about

    V0 = 1.4142... × 110 = 155.6 volts

Similarly, if we connect to the outlet a resistance R, the peak current is I0 = V0/R = 155.6/R. The average power however is

    W(average) = 0.5 I0 V0 = 0.5 (155.6)²/R = (110)²/R
because the factor (1.4142...)² = 2 cancels exactly the factor 0.5. In technical terms, if 155.6 volts is the peak voltage, then 110 volts is the r.m.s. ("root mean square") voltage, the DC voltage which gives the same amount of power. To remember:

    V(r.m.s.) = V0 / √2 = 0.7071... V0
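If you like to check such claims numerically, the averaging is easy to verify with a short computation. This is only an illustrative sketch: the load resistance R = 100 ohms is an arbitrary assumption, and the angle 360° f t is converted to radians for the sine function.

```python
import math

f = 60.0           # cycles per second, as in US household AC
V_rms = 110.0      # the r.m.s. outlet voltage discussed above
R = 100.0          # an assumed load resistance, in ohms

V0 = V_rms * math.sqrt(2)   # peak voltage, about 155.6 volts
I0 = V0 / R                 # peak current, by Ohm's law

# Sample one full 1/60-second cycle and average the instantaneous power I*V
N = 100000
total = 0.0
for k in range(N):
    t = k / (N * f)                     # time within one cycle
    angle = 2 * math.pi * f * t         # the angle 360° f t, in radians
    total += (V0 * math.sin(angle)) * (I0 * math.sin(angle))
W_avg = total / N

print(round(V0, 1))              # peak voltage: 155.6
print(round(W_avg, 2))           # average power: 121.0 watts
print(round(0.5 * I0 * V0, 2))   # the predicted 0.5 I0 V0: also 121.0
print(round(V_rms**2 / R, 2))    # and V(r.m.s.)²/R gives the same: 121.0
```

The agreement of the last three numbers is exactly the point: the factor 0.5 from averaging sin² cancels the factor 2 from squaring the √2.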
(E11) The Reason for Using Alternating Current
The use of AC makes the transmission of electrical power to great distances much more economical.
Suppose we need to transmit power W (watts, or more likely megawatts, millions of watts) from the power station where it is produced (by steam turbines or water turbines) to distant users. To make things simple, assume that all currents or voltages are DC (or else, if this is AC, replace them by their r.m.s. values, which leads to the same expressions for power) and assume all circuits are closed.
If the distance (drawing below) is great, the cables connecting the two may be quite long, and may have a fair amount of electrical resistance R.
The total power delivered by the power station is

    W = I V

Of this, the power lost as heat in the resistance R of the cables is

    W1 = I² R

("twinkle, twinkle..."). That power is a net loss to the customer, whose voltage (by Ohm's law, applied to that part of the circuit) is now diminished by

    V1 = I R
and the user does not get the full supply voltage V, only V – V1.
Obviously, the power company would like to make W1 as small as possible. One obvious way is to reduce R--make the power wire thicker. But there exists an economic limit: thicker cables use more copper and cost more, and they are also heavier, making it hard to suspend them from slender towers, widely spaced.
But suppose one could somehow change voltage V and current I without changing the rate W at which electric power is provided--say, increase V until it is 1000 times larger, while I becomes 1000 times smaller. The power IV delivered is then the same as before, but the power loss W1 = I²R is reduced a million times!
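The arithmetic of that trade-off is easy to check. In the sketch below, the numbers (10 megawatts carried through a cable of 5 ohms resistance) are assumptions chosen only for illustration:

```python
# Power loss in a long transmission cable, at two different supply voltages.
W = 10e6          # power to transmit, in watts (assumed: 10 megawatts)
R_cable = 5.0     # resistance of the long cable, in ohms (assumed)

losses = {}
for V in (1000.0, 1_000_000.0):   # a supply voltage, and one 1000 times higher
    I = W / V                     # current needed to carry power W = I * V
    losses[V] = I**2 * R_cable    # power lost as heat: W1 = I²R ("twinkle, twinkle")

print(losses[1000.0])       # 500,000,000 W lost -- hopeless at low voltage
print(losses[1_000_000.0])  # 500 W lost -- a million times smaller
```

At the low voltage the cable would dissipate far more power than it delivers; raising the voltage a thousandfold cuts the loss by the factor 1000² = 1,000,000, just as the text states.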
True, the power lines may find it harder to prevent leaks from an "electric pressure" V that is 1000 times greater. Such a problem could well exist with water pipes! However, as already noted, air is an excellent insulator, and even voltages of 110,000 volts and more can be transmitted along power lines hanging in the open. Only the insulators which hold those wires need to resist the higher voltage--e.g. by shaping them as a long stack of ceramic plates which sheds rainwater.
It is also true that the voltage must be "stepped down" back to 110 volts before entering the home: wires hanging high above the ground may hold back very high voltages, but wires in homes and factories are wrapped in thin plastic and run inside walls, safe at 110 volts but not at voltages many times larger. Still, it remains an attractive idea.
Can it be done? Not with DC. However, it turns out that AC allows such exchanges--raise the voltage, lower the current--by means of devices known as electric transformers. Power companies all over the world use this trick. Transformers at the power station step up the voltage to 100,000 or 200,000 volts (sometimes even more), and well-insulated "high voltage lines" carry the power across the country, with high V but small I.
Then at the destination the voltage is again stepped down, usually in several steps; transformers do so quite efficiently and consume very little power (which is just as well--if they absorbed much energy, they would get too hot). The process ends with fairly small transformers, each supplying a block of apartments, a small group of dwellings, or perhaps a single factory or farm; some can be seen attached (with input and output wires isolated by ceramic insulators) to the top of the poles that carry electric supply lines (image here is from "Principles of Physics" by Millikan and Gale, 1927). Often they are encased in (electrically grounded) metal containers filled with oil, which insulates electrically and also helps carry away whatever heat is generated.
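The step-down chain can be sketched as repeated applications of the ideal-transformer rule: voltage scales with the turns ratio while current scales inversely, so the power V × I passes through (almost) unchanged. The turns ratios and line values below are assumptions for illustration only.

```python
def step(V_in, I_in, turns_ratio):
    """Ideal transformer: voltage scales by the turns ratio, current inversely."""
    return V_in * turns_ratio, I_in / turns_ratio

V, I = 110_000.0, 10.0            # an assumed high-voltage line: 110 kV at 10 A
for ratio in (1/10, 1/25, 1/4):   # three assumed step-down stages
    V, I = step(V, I, ratio)
    print(V, I, V * I)            # the power V*I stays 1,100,000 W throughout
```

The chain ends at 110 volts with the current grown a thousandfold; only the last, smallest transformer in the chain needs to supply household wiring directly.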
On the History of AC
The first system distributing electricity to the general public was devised by Thomas Alva Edison in the 1880s and used direct current. Electricity had few uses then--mainly, for light (using Edison's lightbulb) and primarily in cities where distances were small. Long-distance telegraph used batteries.
George Westinghouse, aided by the Serbian engineer Nikola Tesla (reluctantly followed later by Charles Proteus Steinmetz at General Electric, which Edison helped start), realized that with AC, long distance transmission lines could operate at higher voltage, transmitting energy much more economically. Tesla also introduced polyphase AC, in which the same generator creates several related AC waves, a more efficient scheme. Around that time electric motors were entering wide use--in industry and on trolley cars--and ultimately AC prevailed, after a nasty publicity "war of currents" between Edison and Westinghouse. You may also look up the web page "Who Invented AC?"
(E12) Electricity and Chemistry
On the scale of atoms and molecules, electric forces rule the behavior of matter.
Thus, as may be expected, chemistry is quite relevant to the study of electricity.
This is not a course on chemistry, and it will therefore assume the user has some previous acquaintance with fundamental chemical facts, without going into many details. One such fact is that matter consists of tiny atoms, which come in about 90 stable kinds or "elements" (not 92, since technetium and promethium are unstable), plus some unstable ones. Each kind of atom has its own distinct chemical behavior.
Some materials consist of a single variety of atoms, but many more are built of molecules, each combining several (different or identical) atoms ("chemical compounds"). The type of atom or molecule determines the properties of the material, which can also depend on temperature and other factors.
Each element has a name, and chemical formulas label their atoms by one or two letters (first one always a capital letter). For example:
O for Oxygen
N for Nitrogen
C for Carbon
He for Helium
Ar for Argon
Fe for Iron (Ferrum in Latin)
Na for Sodium (Natrium)
Cl for Chlorine
K for Potassium (Kalium)
P for Phosphorus
Molecules are denoted by combinations of letters, identifying the atoms which make them up. If the molecule contains more than a single atom of a certain kind, a subscript in the formula tells how many. Examples:
NaCl for table salt ("sodium chloride")
H2O for water (no one calls it dihydrogen oxide--though one could)
CO2 carbon dioxide (a gas produced by burning and by breathing)
HCl Hydrochloric acid
H2SO4 Sulfuric acid (S represents the sulfur atom)
NaOH Sodium hydroxide, also known as lye. Lye and fats combine to make soap.
Ca(OH)2 Calcium hydroxide, from quicklime and water--quicklime being the powder left after roasting limestone in a furnace ("kiln"). Important in mortar and building materials.
CaSO4 Gypsum, used in walls ("drywall" sheets)
N2 Nitrogen, a gas whose atoms combine in pairs.
O2 Oxygen, another such gas. Air is about 78% nitrogen, 21% oxygen, and most of the rest is argon (Ar), whose atoms do not combine but stay single.
H2 Hydrogen, the lightest gas of all.
CH4 Methane, which burns and is found in natural gas of oil wells.
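The subscript rules just listed are simple enough to capture in a few lines of code. Below is a small sketch (the helper name atom_counts is our own invention) which counts the atoms in formulas like those above, including parenthesized groups such as Ca(OH)2:

```python
import re

def atom_counts(formula):
    """Count atoms in a simple formula such as H2SO4 or Ca(OH)2.

    An element symbol is one capital letter, optionally followed by a
    lowercase letter; a number after a symbol or a parenthesized group
    tells how many times it appears (no number means one).
    """
    # First expand parenthesized groups, e.g. Ca(OH)2 -> CaOHOH
    while "(" in formula:
        formula = re.sub(r"\(([A-Za-z0-9]+)\)(\d*)",
                         lambda m: m.group(1) * int(m.group(2) or 1),
                         formula)
    counts = {}
    for symbol, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[symbol] = counts.get(symbol, 0) + int(number or 1)
    return counts

print(atom_counts("H2SO4"))    # {'H': 2, 'S': 1, 'O': 4}
print(atom_counts("Ca(OH)2"))  # {'Ca': 1, 'O': 2, 'H': 2}
```

It handles only the simple notation described here--one- or two-letter symbols with optional counts--not the full richness of chemical nomenclature.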
It should be realized that the above statements distill the work of many generations. We also won't go into the history of those discoveries, because the goal here is to explain how electric cells and batteries were discovered, and how electricity was harnessed to chemical processes.
Water dissolves a wide class of compounds. Mixing different water solutions (sometimes adding heat--as in the cooking of food) was one easy way of creating new compounds, and three groups of solutions turned out to be good conductors of electricity--acids, bases (or alkalis) and salts.
Since the early 1900s much more is known about matter, especially that (a discovery by Rutherford, 1911) each atom consists of a compact positively charged nucleus, surrounded by negative electrons. The electrons are relatively lightweight, and most of the mass or weight of matter is in the nuclei of its atoms. Atoms are held together by the electrical attraction between their (+) and (–) charges, a situation which has sometimes been compared to the way planets are held by the gravity of the Sun.
This analogy is not really accurate, because on the atomic scale, certain new properties ("quantum effects") begin dominating the laws of physics. The location of electrons in atoms is not predictable: all we have is the likelihood of an electron being found at various points, given by a wave function defining (for atoms at rest) certain standing waves.
The atoms are stable only if the wave function takes one of a number of symmetric patterns, each with its energy. When atoms combine to molecules, such patterns also exist, and when atoms or molecules combine to form solids, such laws often determine crystal structure, electric conductivity and other properties.
The lightest atom is hydrogen, whose positive nucleus is a distinct particle, the proton, about 1840 times heavier than the electron. Heavier atoms contain many protons, surrounded by the same number of electrons, so that electrically the atom (usually) is neither positive nor negative. Actually, the weight of atoms (atomic weight, in units of the proton) is generally double the weight of their protons, or more than double. It turned out (1932) that nuclei also contain neutrons, particles somewhat similar to protons but with no electric charge, in numbers equal to the number of protons or, in heavy atoms, slightly larger.
Again, this has been the work of many generations. Over the 1800s, scientists generated electricity by chemical means, separated compounds in solutions by electrical means and formulated laws governing such processes, but did not fully understand their reasons. After all, the electron was only discovered in 1897!
(the next few paragraphs follow parts of "Positive Ions--History")
The unique chemistry associated with water was explained in 1884 by Svante Arrhenius (1859-1927), a many-talented Swede who received the 1903 Nobel prize for chemistry and who (among his many achievements) first suggested the "greenhouse effect". Arrhenius proposed that when a compound such as table salt NaCl (sodium chloride) was dissolved in water, it broke up into electrically charged "ions" (Greek for "the ones that move") Na+ and Cl-. Electric forces made Na+ ions move in one direction, Cl- ions in the opposite one, and that was how an electric current could be carried.
Although at first this seemed like a strange idea, today it is quite well understood. Many chemical molecules are formed when atoms share electrons (the "covalent" chemical bond), but molecules such as those of NaCl are different. There, the sodium atom (Na) gives up an electron to the chlorine (Cl), creating ions Na+ and Cl-, which in solid salt are held together by their electric attraction ("ionic bond"). Water, however, greatly weakens electric forces (on the atomic scale), allowing some ions to drift free whenever salt is dissolved in water, and allowing the water to conduct electricity.
The smallest atomic positive ion is the proton, the nucleus of hydrogen. Substances which when dissolved in water produce ions of hydrogen are known as acids, and any such solution (e.g. of HCl, H2SO4 or vinegar) has a sour taste. Of course, the fraction of acid molecules which actually breaks up into ions in a water solution can vary--it is large in "strong" acids and small in "weak" acids. Even in pure water a tiny fraction of the molecules is split up at any time into H+ ions and (OH)– ("hydroxyl") ions. The degree of "sourness" depends on the concentration of the acid in the water and on its strength; carbonated water (for instance) has a slightly sour taste, because some CO2 is dissolved in it, creating the weak "carbonic acid" H2CO3.
In addition, many other ionic compounds exist, dissolving in water at least partially. Metals are attacked by acids, which replace their hydrogen atoms with metal ones--e.g. copper and sulfuric acid give CuSO4 (green-blue crystals), and sulfuric acid from burning fuel creates "acid rain", which turns the surface of marble to gypsum and thus erodes buildings and outdoor art.
Any solution of an ionic compound contains equal charges of positive and negative ions, and therefore carries no net electric charge, but it can still conduct an electric current by moving ions across the solution. As ions arrive at the electrodes, they are sometimes deposited there (for instance, "electroplating" an electrode with silver or copper), and in general further chemical reactions may occur.
It is worth noting that water can also dissolve some non-ionic substances whose atoms contain weak bonds--sugar or alcohol, for instance. And of course, many compounds are formed by sharing of electrons in a molecule ("covalent bond") and not by the ionic bond, and these (e.g. compounds in glass or in plastics) are usually not dissolved in water.
(E13) Where Electricity and Chemistry Meet
The link between electricity and chemistry began with a strange discovery by Luigi Galvani, a professor of anatomy in Bologna, Italy, possibly in 1786 (though he had made related observations earlier).
Previous experiments with static electricity (e.g. by passing a charge through a chain of people holding hands) showed that it caused sudden muscle contractions, probably by triggering nerves which control those muscles. Galvani was dissecting frogs, and when his assistant touched a frog leg with a metal scalpel, it contracted in a similar way. Touching the frog with different metals also initiated contractions, depending on the metal, giving us today's English phrase "galvanize into action". Galvani studied the phenomenon and was convinced the electricity came from biological processes.
Alessandro Volta was professor of science in Pavia, not far from Bologna. Volta had considerable experience with static electricity (in 1775 he introduced an improved version of the "electrophorus", to be discussed in a later section). Volta suggested that Galvani's electricity was not of biological origin but arose from the contact between metal and wet, salty material such as organic tissue.
Volta dispensed with the frog, experimented with metal dipped in a conducting solution, and showed that chemistry alone could produce a steady flow of electric current.
Imagine first a single plate of zinc dipped into a dilute solution of hydrochloric acid: zinc atoms tend to enter the solution as positive ions, leaving their electrons behind, so the plate becomes slightly negative relative to the fluid. Let now a second plate be inserted into the solution, separated from the first and made of copper, and let the two plates be connected by an outside wire. Copper is less reactive, and for simplicity we ignore completely the interaction between it and the acid (though it does exist). The connecting wire makes the second plate, too, slightly negative relative to the fluid, so positive hydrogen ions will flow to it and will be neutralized by electrons arriving through the wire. This allows new zinc ions to enter the solution, displacing hydrogen which bubbles up near the copper.
(If the second plate were also of zinc, the set-up would be symmetric, with no reason for a current to prefer flowing in either direction in the wire.)
The end result is that some of the zinc ends up as dissolved zinc chloride, replacing hydrogen which bubbles up, and that an electric current flows in the wire from the copper to the zinc--the copper being the electrically positive terminal (the "cathode" of the cell) relative to the zinc (the "anode").
The energy driving the current comes from the chemical reaction of the zinc with the solution. Many different variations of such "electric cells" have been devised, using different metals dipping either into acids or into alkalis. (Popularly they are called "electric batteries," but at least originally, "battery" meant a combination of more than one cell.)
The technology of cells can be complicated. "Dry cells" used in flashlights (for instance) are not really dry--they contain liquid, too, but it is soaked into granular or fibrous material which holds it in place. Cells also contain chemicals to absorb the hydrogen H2 produced on the electrodes, whose bubbles may block the flow of current.
Compared to static electricity, which can easily reach hundreds and thousands of volts and create visible sparks (though the charge itself is small), cells typically produce about one volt, though they give a continuous flow of electric charge ("electric current"). Volta in 1800 connected a large number of cells, with the (+) side of each connected by wire to the (–) side of the next, so the voltage across this entire "voltaic pile" was the sum of the contributions of all its cells. This was the first electric battery, and it was used extensively in electric experiments.
In some cells the chemical reaction can be reversed: driving a current through the cell in the opposite direction re-deposits the dissolved metal on its electrode, so that the cell stores energy which can be drawn again later. This way a storage battery may be obtained, like the ones used in motor vehicles or in portable computers. Of course, the devil is in the details: the re-deposition must restore the orderly structure of the electrodes, otherwise the storage is inefficient. Again, this calls for sophisticated (and sometimes expensive) technology: ordinary dry cells are not suitable for recharging, but rechargeable cells exist using nickel and cadmium, or other metals. Automotive batteries contain liquid sulfuric acid and lead (making them quite heavy), and are nowadays sealed.
It was quickly found that when electric currents from a battery are driven through an ionic solution ("electrolyte"), they can produce "electrolysis", a breakdown of the chemical compound in liquid solution. Drive a current between two electrodes dipping into a solution of table salt NaCl in water: the current will split the salt ions Na+ and Cl– apart. However the experiment will not yield either sodium or chlorine. Instead the products react with the water in solution, and what comes out are hydrogen and oxygen gases, the result of splitting up the H2O molecule of water, while the sodium and chlorine recombine.
Sir Humphry Davy, an early leader in the study of electrolysis, prevented this secondary reaction by heating table salt until it melted. Molten salt is also an ionic fluid, and in 1807 Davy drove a current through it and separated sodium, a soft metal which decomposed water on contact (it can be preserved immersed in kerosene). Earlier that year he decomposed potassium chloride KCl (very similar) and obtained potassium, an even more reactive soft metal which not only decomposed water but also ignited the hydrogen and oxygen thus produced.
Davy made many other discoveries and was a popular lecturer, though his fame was later eclipsed by his gifted assistant and successor, Michael Faraday, who discovered the basic law of electrolysis. Expressed in modern terms, what Faraday showed was that to separate an amount of material, one always needed to pass through it a total electric charge equal to the total charge of its chemically active electrons. Look up a chemistry text for the details!
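Faraday's law turns electrolysis into simple bookkeeping: the charge passed, divided by the charge of one mole of electrons (the Faraday constant, about 96,485 coulombs), gives the moles of electrons delivered. The silver-plating numbers below (one ampere flowing for one hour) are an assumed example, not taken from the text:

```python
F = 96485.0        # Faraday constant: charge of one mole of electrons, in coulombs
M_silver = 107.87  # molar mass of silver, in grams per mole
z = 1              # each Ag+ ion needs one electron to be deposited

current = 1.0                  # amperes (assumed)
time = 3600.0                  # seconds, i.e. one hour (assumed)
charge = current * time        # total charge passed, in coulombs

moles_electrons = charge / F
mass_deposited = moles_electrons * M_silver / z
print(round(mass_deposited, 2))   # about 4.02 grams of silver plated out
```

Doubling either the current or the time doubles the deposited mass; depositing a divalent metal such as copper (z = 2) would require twice the charge per atom.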
Electrolysis today is the standard method of producing aluminum, by a process invented in 1886 by Charles Martin Hall, a student at Oberlin College in Ohio, and simultaneously by Paul Héroult in France. Before that, aluminum was extracted using metallic potassium, a very expensive process. The Hall-Héroult process uses electrolysis of aluminum oxide (obtained from the ore bauxite), dissolved in molten cryolite. Because a great amount of electric energy is needed to separate aluminum, recycling it saves considerable energy.
Corrosion--the rusting of iron, for instance--is often an electrochemical process as well. In many cases the circuit involved is not clear--some have claimed that small variations in composition have a role, as does contact with the ground. Rusting is certainly more pronounced in the presence of salt water--which is why the US Navy has anchored reserve ships in river estuaries, and why ships on the Great Lakes of the US tend to outlast those on the ocean. Salty spray on bridges and other structures located near seawater enhances corrosion.
Stainless steel contains nickel and resists rusting by forming a tough oxide layer where it contacts the air, and aluminum resists corrosion in a similar way. Because metal corrodes, in the era of sailing ships wooden hulls were joined not by nails, which rust, but by wooden pegs or "trunnels" ("tree nails").
The effect is much more pronounced where two different metals come in contact. One popular way of protecting iron is by galvanizing it--dipping it in molten zinc to give it a zinc coating. A scratch in the coating will promote corrosion--but it is the more reactive zinc coating which wears away, while the iron stays protected (since the two metals have similar color, many users hardly notice such corrosion). Only after most of the zinc is worn off does rusting suddenly accelerate. Similarly, home water heaters contain "sacrificial electrodes" of zinc (not connected to any source of electricity!) to draw corrosion away from vulnerable metal parts.
An opposite effect occurs when iron is protected by coating it with tin, as is often done with preserved ("tinned") foods. Tin is less reactive than iron, so a tinned iron can looks shiny as long as the coating is intact, but scratches in it quickly tend to rust.
Other examples exist. For some time, manufacturers of electric wiring for residences promoted the use of aluminum wiring. Weight for weight, aluminum wires are better conductors of electricity than copper. The problem is that the ends of wires are attached to outlets, switches and other devices, often made of copper or brass (an alloy of copper and zinc). At those spots the more reactive aluminum tends to corrode and become covered by an insulating oxide layer, creating the possibility of a high electrical resistance at the contact. That resistance promotes local heating, and in rare extreme cases could cause fires. Remedies exist, but most home builders play it safe and continue to use copper.
The Statue of Liberty in New York's harbor was assembled from carefully shaped sheets of copper, held in place by an iron tower (designed by Gustave Eiffel, creator of another famous tower). The plates were held by iron brackets, and though paint was used to separate iron from copper, over the century during which the statue has stood on Liberty Island the paint wore off and contact between iron and copper began. Luckily, copper is the less reactive of the two, so when the statue was renovated, heavily rusted iron brackets could be replaced (and insulated), while the original copper, though covered with soft green patina, was intact.
Author and Curator: Dr. David P. Stern