History Lesson

Pop quiz illustrates how far measurement methods have come

by Christopher L. Grachanen

During my 30-plus years working in the metrology field, I have been frequently reminded of how technological advancements affect the design and functionality of inspection, measurement and test equipment (IM&TE) and the methods used for making measurements.

Outwardly, the biggest changes to IM&TE have been to displays and controls. With humble beginnings in D’Arsonval-type meter movements, IM&TE displays evolved from Nixie tubes to discrete liquid crystal displays (LCDs) to flat-screen LCDs. The same can be said of IM&TE controls, beginning with single-pole, single-throw switches, which were replaced by auto-increment rocker switches and eventually advanced to touch controls.

Internally, the impact of technological advances on IM&TE has been much more profound. Hardwired logic was replaced by complete computer systems, and large measurement transducers were reduced to integrated circuit chips. To make a long story short, the digital revolution had a major impact on IM&TE design from a hardware perspective.

Technological advancements in IM&TE capabilities and functionality have often required adopting measurement methods completely different from those previously used. As a collector of antique IM&TE, I can appreciate the time savings and improved accuracy and reliability afforded by today’s IM&TE.

This recently hit home when I came across an old book that contains many state-of-the-art methods—given the technologies of the day—for performing precision physical measurements. The book, Handbook of Physics Measurements, Vol. II: Vibratory Motion, Sound, Heat, Electricity and Magnetism, second edition, was published in 1924.

The book describes itself as: "A self-contained manual of the theory and manipulation of those measurements in physics, which experience has shown to be most available for college and industrial laboratories."1

Of the many procedures describing how to measure various physical phenomena, I came across one procedure for which I could not easily determine what phenomenon was being measured. I shared this procedure with several measurement colleagues, many of whom also could not correctly identify what the procedure was actually measuring. Let’s see how you do:

The cathode shall take the form of a platinum bowl not less than 10 cm. in diameter and 5 cm. deep. The anode shall be a plate of pure silver of 30 sq. cm. in area and 2 to 3 mm. thick. This is to be supported horizontally near the upper edge of the bowl by platinum wires.

This anode must be wrapped with filter paper to prevent silver from falling into the bowl. The electrolyte shall consist of 15 parts of pure silver nitrate to 85 parts of distilled water by weight. The resistance of the metallic circuit shall not be less than 10 ohms.

The method of taking the measurement is as follows: Wash the bowl first with nitric acid and then with distilled water. Dry by heating. Afterward, cool in a desiccator.

When at room temperature, weigh and fill nearly with the electrolyte. Connect to the circuit by placing the bowl on a clean copper ring to which a binding post is soldered. Immerse the anode in the solution, close the circuit, noting the hour, minute and second.

In not less than one-half hour, break the circuit, again noting the hour, minute and second. Empty the bowl, rinse with distilled water, and allow the bowl to soak in the distilled water for at least six hours. Rinse with absolute alcohol, dry in a hot air bath, cool in a desiccator and weigh. The gain in mass is the amount of silver deposited.2

That’s it. From start to finish, I estimated this procedure would have easily taken eight hours to make a single measurement. What do you think is being measured? Take a few moments to ponder the solution before reading further.

Still don’t know? I’ll give you a hint: The uncertainties for this measurement were deemed low enough, given the technologies of the day, that this measurement procedure was used internationally for measuring (realizing) a fundamental measurement unit.

At this point, some readers may have already surmised that the procedure describes the measurement of constant current. The formula for determining the procedure’s current in amperes, in which the mass gain in grams is expressed as m and the elapsed time in seconds as t, is:

I = m / (0.001118 x t).

In 1924, the definition for the international ampere was "the unvarying current, which when passed through an aqueous solution of silver nitrate in accordance with the specifications in the following paragraph (the aforementioned measurement procedure), deposits silver at a rate of 0.00111800 gram per second."3
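The arithmetic behind this definition can be sketched in a few lines of Python. The deposition rate of 0.00111800 gram per ampere-second comes straight from the definition above; the 30-minute run and its mass gain are hypothetical values chosen only to illustrate the calculation.

```python
# Estimating the average current from a silver-coulometer run, per the
# 1924 international ampere definition: a constant current of one ampere
# deposits silver at 0.00111800 gram per second.

DEPOSITION_RATE = 0.00111800  # grams of silver per ampere-second


def current_amperes(mass_gain_g: float, elapsed_s: float) -> float:
    """Average current that deposited mass_gain_g grams over elapsed_s seconds."""
    return mass_gain_g / (DEPOSITION_RATE * elapsed_s)


# Hypothetical run: a 30-minute deposition adding 2.0124 g of silver
# works out to roughly one ampere.
print(round(current_amperes(2.0124, 30 * 60), 4))
```

Note that the weigh-rinse-dry cycle yields only the average current over the run, which is why the definition insists the current be unvarying.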

Contrast this with the current definition of the ampere: "The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross section and placed 1 meter apart in vacuum, would produce between these conductors a force equal to 2 x 10^−7 newtons per meter of length."4

You may want to try this "What is being measured?" challenge on some of your measurement-savvy colleagues and see if it doesn’t draw a few puzzled looks.


  1. E.S. Ferry, O.W. Silvey, G.W. Sherman Jr. and D.C. Duncan, Handbook of Physics Measurements, Vol. II: Vibratory Motion, Sound, Heat, Electricity and Magnetism, second edition, 1926.
  2. Ibid.
  3. Ibid.
  4. National Institute of Standards and Technology, NIST Special Publication 330, 2008 Edition, "The International System of Units," www.nist.gov/pml/div684/fcdc/upload/sp330-2.pdf.

Christopher L. Grachanen is a master engineer and operations manager at Hewlett-Packard Co. in Houston. He earned an MBA from Regis University in Denver. Grachanen is a co-author of The Metrology Handbook (ASQ Quality Press), a senior member of ASQ, an ASQ-certified calibration technician and the treasurer of the Measurement Quality Division.
