Chapter 7 Sound underwater and underground

Second sight

If our world had always been blanketed in fog, we would probably make little use of sight, and rely on our ears instead. If the fog proved opaque to radio waves, our telecommunication systems would no doubt have taken an acoustic turn, too. The ocean is a world of just this kind: visibility is poor due to the many suspended particles in the water, and radio waves quickly die away. And the water is full of sound: fish, marine mammals, human divers, submarines, and underwater robots all use it for subtle and complex forms of sensation, communication, and sometimes attack.

All this would be news to a scientist living only a century ago. It’s true that Leonardo da Vinci pointed out around 1490 that ‘If you cause your ship to stop, and place the head of a long tube in the water, and place the other extremity to your ear, you will hear ships at a great distance from you’, but no one took much notice of him.

It was not until World War I that listening to the underwater sounds of vessels was attempted, and not until World War II that the richness of the underwater soundscape became clear. This happened partly through two bizarre events. In 1942, acoustic buoys in Chesapeake Bay, which had been deployed to detect German submarines, all alerted at once. A flotilla of destroyers enthusiastically depth-charged the area—but no tell-tale oil slicks appeared and the only victims were thousands of fish. Later that same year, all along the western coasts of the US, every acoustic mine (set to protect ports from anything with a propeller, in fear of a Japanese invasion) detonated at once—and again, the sole result was a multitude of dead marine life.

If the US military had known a little more about a fish called the croaker, much embarrassment and many dead fish could have been avoided. For all its undistinguished appearance, this small brownish creature has a very loud voice indeed—something like a magnified woodpecker—and croaker colonies give voice every dawn, all together, much like birds.

Noisy fish were an almost complete surprise to biologists. There was a hitherto unquestioned belief that the depths were as silent as they were proverbially said to be. In fact, while the oceans had been routinely exploited for coastal and international travel and as a source of food, what lay beneath the surface was largely unknown and apparently rarely even speculated upon, beyond wild tales of sea monsters and legends of sunken cities. The dark underwater world seems to have been regarded as wholly alien and no place for people—even sailors regarded the depths as forbidden territory and most of them did not even learn to swim. While artists and writers—and psychologists too—often viewed the sea as the wild domain of raw nature, their attention was confined to its rough surface alone. Beneath that surface was a negative region, assumed to be as devoid of sound as it was of light.

However, from the early 1940s, listening with hydrophones (underwater microphones) revealed just how noisy the ‘silent deep’ really is, with a cacophony of sounds extending well above and below the human frequency range, from a vast multitude of sources, most then unidentifiable. The loudest source in some areas turned out to be colonies of snapping shrimp. In fact, the upper layer of the sea is never quiet, with a background generated by breaking waves, rain, and lightning as well as by marine life. In shallow areas, swirling sediment is an additional, omnipresent source of sound.

Hearing underwater

There are two reasons for the apparent silence of the sea: one physical, the other biological. The physical one is the impedance mismatch between air and water, in consequence of which the surface acts as an acoustic mirror, reflecting back almost all sound from below, so that land-dwellers hear no more than the breaking of the waves.

Submerge your head, and the biological reason manifests itself: underwater, the eardrum has water on one side and air on the other, and so impedance mismatching once more prevents most sound from entering. If we had no eardrums (nor air-filled middle ears) we would probably hear very well underwater.

Underwater animals don’t need such complicated ears as ours: since the water around them is a similar density to their flesh, sound enters and passes through their whole bodies easily, without the need of pinnae to coax it in, or of drums and windows to transfer it from one medium to another. Fish do have ear bones, called otoliths. They are made of calcium carbonate, the high density of which provides a sufficient impedance difference to allow vibration by sound waves. This motion is transferred to stereocilia growing on hair cells, which, like ours, send nerve signals to the brain. Other patches of hair cells, called neuromasts, are distributed on the skins of fish.

Two other structures augment the hearing of some fish. The first, the swim bladder, is an air-filled sac which functions like the ballast tanks of a submarine, changing the buoyancy of its owner as required, so that the fish can sink or rise without muscular effort. The swim bladder vibrates readily to sound waves, and functions as a fairly sensitive hearing organ up to about 3 kHz. It has, however, a great disadvantage: being a single, symmetrical organ, the swim bladder provides no information about the direction from which sounds arrive.

It is supplemented by a second structure: the lateral line, a fluid-filled tube that runs along the sides of the fish and functions as a direction-sensitive sound detector at low frequencies (~160 Hz to 200 Hz). Unlike the stereocilia in our ears, which wobble when sound waves induce vibrations in the basilar membrane to which they are attached, lateral line stereocilia are pushed and pulled directly by incoming sound waves, which means that the direction of the sounds is sensed directly (and also means that fish sense the motion of water molecules, rather than the pressure changes that we sense). This makes fish very hard to creep up on.

Techniques and transducers

The earliest serious technological use of sound underwater was the sounding bell system, in which underwater bells placed near ports could be detected by ships fitted with primitive hydrophones in the form of carbon microphones in waterproof casings. A watchman listened on the ship in stereo and could guide the ship to port even in poor visibility. This system was fitted to many ships in the period 1875–1930, including the Titanic and Lusitania. By 1923, there were thirty bells around the UK coast. From the 1910s, the system was gradually replaced by echo sounding, which involves making a sound underwater, timing its echo, and working out the distance from a knowledge of the velocity of underwater sound: another example of the pulse-echo technique.
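The arithmetic of echo sounding is simple enough to sketch in a few lines of Python. The 1,500 m/s figure is a nominal value for seawater, not a universal constant; as Box 11 later shows, the true velocity varies with temperature, salinity, and depth.

```python
# Echo sounding: infer depth from the round-trip time of a ping.
# 1,500 m/s is a nominal speed of sound in seawater.
SOUND_SPEED_WATER = 1500.0  # m/s

def depth_from_echo(round_trip_s: float, c: float = SOUND_SPEED_WATER) -> float:
    """The pulse travels down and back, so halve the round-trip distance."""
    return c * round_trip_s / 2.0

# A ping whose echo returns after 0.4 s implies a depth of 300 m.
print(depth_from_echo(0.4))  # 300.0
```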

From echo sounding, modern sonar (sound navigation and ranging) systems developed. In an active sonar system, short-duration sound impulses (‘pings’) are projected from a vessel to echo from objects in their path. As well as determining distance, changes in the frequency of the received pulses are used to calculate the relative velocity of the echoing object (using the Doppler effect).
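The velocity calculation can be sketched as follows; this is the standard two-way Doppler approximation, valid for speeds far below that of sound, with illustrative numbers rather than values from any particular sonar.

```python
SOUND_SPEED_WATER = 1500.0  # m/s, nominal

def closing_speed(f_sent_hz: float, f_received_hz: float,
                  c: float = SOUND_SPEED_WATER) -> float:
    """Two-way Doppler: an echo from a target closing at speed v is shifted
    up in frequency by roughly (2 * v / c) times the transmitted frequency,
    because the shift happens once on the way out and once on the way back."""
    return c * (f_received_hz - f_sent_hz) / (2.0 * f_sent_hz)

# A 10 kHz ping returning at 10,040 Hz implies the target is closing at 3 m/s.
print(closing_speed(10_000.0, 10_040.0))  # 3.0
```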

Passive sonar systems simply listen for underwater sounds, especially those of shipping. Automatic recognition techniques allow different types of shipping to be identified from their engine sounds or even from the humming of their electrical systems: each vessel, in fact, has its unique signature or acoustic fingerprint. This technique became very important in the Cold War for recognizing and tracking enemy ships and submarines. Active and passive sonar systems are sometimes deployed on buoys (sonobuoys), which are equipped with radio systems to report their findings.

The hydrophone is the key instrument for underwater acoustic work; almost all those in use today are piezoelectric, usually based on a synthetic ceramic called PZT (lead zirconate titanate). Unlike microphones, hydrophones must sometimes be very large, in order to achieve directionality at low frequencies. For this reason, the sides of some submarines are almost completely plated with hydrophones. These flank arrays, as they are called, are usually made of polyvinylidene difluoride (PVDF).

The underwater equivalent of the loudspeaker is known as a projector. Projectors have a limitation that does not apply to loudspeakers: when the surface moves inwards to create the rarefaction phase of a sound wave, cavitation results if the rarefactional pressure is lower than that of the surrounding water. The bubbles scatter and absorb the sound, silencing the projector. The greater the depth, the higher the water pressure becomes, so the more sound power the projector can produce before the onset of cavitation.
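The depth dependence follows from hydrostatic pressure, which rises by roughly one atmosphere for every 10 metres of seawater. A rough sketch, ignoring vapour pressure and the tensile strength of water (both of which shift the real cavitation threshold):

```python
RHO_SEAWATER = 1025.0  # kg/m^3, nominal
G = 9.81               # m/s^2
P_ATM = 101_325.0      # Pa

def ambient_pressure_pa(depth_m: float) -> float:
    """Hydrostatic pressure: atmospheric pressure plus rho * g * depth."""
    return P_ATM + RHO_SEAWATER * G * depth_m

# Crudely, the projector's peak rarefaction cannot exceed the ambient
# pressure without cavitating, so the usable amplitude grows with depth.
for depth in (0, 10, 100):
    print(depth, round(ambient_pressure_pa(depth)))
```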

For high-power applications, such as geophysical surveying for oil and gas exploration, airguns are used to produce underwater sound. In these, a small cavity is charged with compressed air and a relay releases the pressure suddenly, rapidly forming a large air bubble and an accompanying burst of sound. The pulse frequency is around 20 Hz to 200 Hz and the amplitude is very high (probably the ‘loudest’ man-made source in the ocean excluding large explosions). The sound travels through the seabed and reflects from the interfaces between the rock layers below. Very long (up to 10 km) arrays of hydrophones are towed near the surface to image the reflected sound, and the results are processed by computers to provide a three-dimensional map.

Though projectors are in principle just hydrophones in reverse, their physical design is often different. One of the most widely used types is the Tonpilz (German for ‘singing mushroom’) transducer, in which several piezoelectric PZT discs are sandwiched between electrodes to make a stack which is terminated by a conical or cylindrical headmass, the business end of the projector. Tonpilz transducers can generate frequencies in the 2 kHz to 50 kHz range.

For many underwater applications, including signalling and distance sensing, directional sounds are required. Just as in air, a sound source will be naturally directional if the sound waves it produces have smaller wavelengths than the transducer face. But because the velocity of sound is about five times greater in water than in air, the wavelength corresponding to a particular frequency is also about five times greater than its airborne equivalent, so directionality is harder to come by.
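The effect is easy to quantify: for the same frequency, the underwater wavelength is larger by the ratio of the two sound velocities. A sketch with nominal velocities (343 m/s in air, 1,500 m/s in seawater):

```python
C_AIR = 343.0     # m/s, nominal
C_WATER = 1500.0  # m/s, nominal

def wavelength_m(c: float, frequency_hz: float) -> float:
    """Wavelength = velocity / frequency."""
    return c / frequency_hz

# At 10 kHz the underwater wavelength is over four times the airborne one,
# so a transducer face must be correspondingly larger to form a beam.
print(wavelength_m(C_AIR, 10_000))    # prints 0.0343
print(wavelength_m(C_WATER, 10_000))  # prints 0.15
```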

An elegant way to make a directional sound source is the parametric array. If two sound sources generate waves which differ just a little in frequency, then waves with that difference frequency will be produced, along with others whose frequency is the sum of those of the sources. The wavelength of the difference wave can be as long as required, but it maintains the directionality of its parent waves.

Parametric arrays exploit the fact that sound velocity depends on density. At high sound powers, the pressure in the compressions becomes very large, increasing density significantly and therefore briefly speeding up the sound wave; the reverse happens in the rarefactions. The effect of these velocity changes is to distort the wave from its usual sinusoidal form.

This is a common effect in high-power ultrasound, too. As Joseph Fourier showed, a non-sinusoidal wave is equivalent to a sum of component sinusoids. In the case of the parametric array, these components include the original waves, together with the sum and difference waves: the difference wave being the one of interest. Parametric arrays can also be used in air, to make audio-frequency sound more directional.
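The mixing can be verified with nothing more than trigonometry: squaring the sum of two sinusoids (a stand-in for the quadratic part of the water's nonlinearity) produces exactly the sum and difference tones described above. The frequencies here are illustrative, not taken from any real system.

```python
import math

F1, F2 = 40_000.0, 41_000.0  # two hypothetical primary frequencies, Hz

def quadratic_output(t: float) -> float:
    """A squaring nonlinearity applied to the sum of the two primaries."""
    s = math.cos(2 * math.pi * F1 * t) + math.cos(2 * math.pi * F2 * t)
    return s * s

def mixed_components(t: float) -> float:
    """Expanding the square via product-to-sum identities gives a constant,
    second harmonics at 2*F1 and 2*F2, a sum tone at F1 + F2, and the
    difference tone at F2 - F1 = 1 kHz: the directional low frequency."""
    return (1.0
            + 0.5 * math.cos(2 * math.pi * 2 * F1 * t)
            + 0.5 * math.cos(2 * math.pi * 2 * F2 * t)
            + math.cos(2 * math.pi * (F1 + F2) * t)
            + math.cos(2 * math.pi * (F2 - F1) * t))

# The two expressions agree at any instant, confirming that the 1 kHz
# difference component really is present in the distorted wave.
for t in (0.0, 1.3e-5, 7.7e-5):
    assert abs(quadratic_output(t) - mixed_components(t)) < 1e-9
```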

Although there is little that electromagnetic radiation does above water that sound cannot do below it, sound has one unavoidable disadvantage: its velocity in water is much lower than that of electromagnetic radiation in air, which means that scanning takes far longer. Also, when waves are used to send data, the rate of that data transmission is directly proportional to the wave frequency—and audio sound waves are around 1,000 times lower in frequency than radio waves. For this reason ultrasound is used instead, since its frequencies can match those of radio waves. Another advantage is that it is easier to produce directional beams at ultrasonic frequencies to send the signal in only the direction you want. However, a disadvantage is that absorption increases with sound frequency, so the range is limited.

Sounds worldwide

The distances over which sound can travel underwater are amazing. It is believed that before the proliferation of engine-powered vessels, Antarctic whales could be heard by their Arctic cousins. Such vast ranges are possible partly because sound waves are absorbed far less in water than in air. At 1 kHz, absorption is about 5 dB/km in air (at 30 per cent humidity) but only 0.06 dB/km in seawater. Also, underwater sound waves are much more confined; a noise made in mid-air spreads in all directions, but in the sea the bed and the surface limit vertical spreading.
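Those per-kilometre figures compound dramatically over distance; a quick sketch using the values quoted above:

```python
AIR_DB_PER_KM = 5.0   # absorption at 1 kHz in air (30 per cent humidity)
SEA_DB_PER_KM = 0.06  # absorption at 1 kHz in seawater

def absorption_loss_db(db_per_km: float, range_km: float) -> float:
    """Total absorption loss is simply rate times distance."""
    return db_per_km * range_km

# Over 100 km, absorption alone costs 500 dB in air (utter silence long
# before that), but only about 6 dB in seawater.
print(absorption_loss_db(AIR_DB_PER_KM, 100))  # 500.0
print(absorption_loss_db(SEA_DB_PER_KM, 100))  # roughly 6
```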

Box 11

Sound velocity in water: the most accurate (to 0.2 m/s) equation is the Leroy/Robinson formula:

c = 1402.5 + 5T − 0.0544T² + 0.00021T³ + 1.33S − 0.0123ST + 0.000087ST² + 0.0156Z + 0.000000255Z² − 0.0000000000073Z³ + 0.0000012(Φ − 45) − 0.00000000000095TZ³ + 0.0000003T²Z + 0.0000143SZ

where T is temperature (°C), S is salinity (parts per thousand), Z is depth (metres), and Φ is latitude (degrees).
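As a check on the formula, here is a direct Python transcription of Box 11; the input values (20°C, salinity 35, at the surface on the equator) are illustrative, and give a velocity close to the often-quoted figure of just over 1,520 m/s for warm surface seawater.

```python
def sound_speed_leroy(T: float, S: float, Z: float, phi: float) -> float:
    """Leroy/Robinson sound velocity in m/s, transcribed from Box 11.
    T: temperature (deg C), S: salinity, Z: depth (m), phi: latitude (deg)."""
    return (1402.5 + 5*T - 0.0544*T**2 + 0.00021*T**3
            + 1.33*S - 0.0123*S*T + 0.000087*S*T**2
            + 0.0156*Z + 0.000000255*Z**2 - 0.0000000000073*Z**3
            + 0.0000012*(phi - 45)
            - 0.00000000000095*T*Z**3
            + 0.0000003*T**2*Z
            + 0.0000143*S*Z)

# Warm surface water at the equator:
print(round(sound_speed_leroy(20.0, 35.0, 0.0, 0.0), 1))  # 1521.6
```

Note that in the deep isothermal layer the depth terms dominate, so (as the text goes on to describe) the computed velocity increases with depth.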

The range of sound velocities underwater is far larger than in air, because of the enormous variations in density, which is affected by temperature, pressure, and salinity (Box 11). Layers in which temperature, and hence sound velocity, falls rapidly with depth are called thermoclines, and they follow the same pattern in most seas: when the weather is calm, the uppermost layer of the sea is characterized by a rapid fall in temperature, and hence sound velocity, with depth. Because calm conditions are more common in summer, this is known as the seasonal thermocline. Below this is the main thermocline, where the temperature and sound velocity continue to fall with depth, independent of season. At the base of the main thermocline (the depth of which varies greatly with latitude), a steady temperature of around 4°C is reached and hardly changes at deeper levels. In this deep isothermal layer, pressure becomes the dominant factor in determining sound velocity, which consequently increases with depth, as shown in Figure 22.

22. Underwater sound velocity profile.

So somewhere under all oceans there is a layer at which sound velocity is low, sandwiched between regions in which it is higher. By refraction, sound waves from both above and below are diverted towards the region of minimum sound velocity, and are trapped there. This is the deep sound channel, a thin spherical shell extending through the world’s oceans.

Since sound waves in the deep sound channel can move only horizontally, their intensity falls in proportion only to the distance they travel, rather than to the square of the distance, as they would in air or in water at a single temperature (in other words, they spread out in circles, not spheres). Sound absorption in the deep sound channel is very low (it is strongly frequency dependent, but around 0.2 dB per km for 4 kHz waves), and sound waves in the deep channel can readily circumnavigate the Earth.
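The difference between the two spreading laws can be made concrete using their decibel forms: 20·log10(r) for spherical spreading and 10·log10(r) for cylindrical spreading (both relative to 1 m), with the ~0.2 dB/km absorption figure quoted above as an additional term.

```python
import math

def spherical_spreading_db(range_m: float) -> float:
    """Free-field loss: intensity falls as 1/r^2, i.e. 20*log10(r) dB re 1 m."""
    return 20 * math.log10(range_m)

def channel_spreading_db(range_m: float) -> float:
    """Deep-sound-channel loss: cylindrical spreading, 10*log10(r) dB re 1 m."""
    return 10 * math.log10(range_m)

# Over 1,000 km, spherical spreading costs 120 dB but cylindrical only 60 dB.
# (At 4 kHz, absorption of ~0.2 dB/km would still add 200 dB over that range,
# which is why the longest-range channel signals are much lower in frequency.)
print(spherical_spreading_db(1_000_000))  # 120.0
print(channel_spreading_db(1_000_000))    # 60.0
```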

The deep sound channel was exploited to set up the SOFAR (sound fixing and ranging) system, which was initiated in 1960 by the Australia-Bermuda Sound Transmission Experiment, in which explosions were set off near Heard Island in the Indian Ocean off the coast of Australia. They were detected in Bermuda, at a distance of 20,000 km. A new, unexpected sound was discovered by the SOFAR researchers, and later identified as the calls of fin whales (Balaenoptera physalus), who long ago discovered the existence and properties of the deep sound channel and regularly visit it to signal their distant kin.

SOFAR opened the door to the ATOC (Acoustical Thermometry of Ocean Climate) system, which calculates global sea temperatures by measuring average sound velocity over large ranges, thus helping to quantify climate changes.

Changing weather causes changing sea conditions, which lead to a range of temporary acoustic states of the ocean, including shadow zones from which most sound is excluded, and temporary sound channels, which allow long-distance propagation. Sounds which travel down the latter, like those which travel around the deep sound channel, are highly distorted en route by the many changes of velocity along their paths, caused in turn by variations in temperature and salinity. In the 1990s, various strange sounds of unknown origin (the Bloop is perhaps the best known) led to a range of imaginative interpretations, including some literally monstrous ones. However, the most likely source is the greatly modified sounds of distant icebergs calving.

Earth sounds

Sound waves travel readily in solids (see Box 12), as do pressure waves of other kinds. But not all seismic waves are sound waves. P (for primary) waves are longitudinal: they are sequences of compressions and rarefactions whose velocity is determined by the density and elasticity of the ground, and therefore are sound waves.

However, S (secondary) waves, being transverse, are not sound waves. Both P and S waves are body waves—they travel through the Earth, and their refraction by the layers underground provides us with information about our planet’s structure. There are also a variety of surface seismic waves, but none are sound waves.

Many large animals make and hear low-frequency sounds. The sounds made by African elephants are low simply because their vocal cords are so large and hence relatively slow-moving—so much so that some of their calls are infrasonic. This is of considerable advantage to them, since infrasound travels far with little attenuation (in air, very roughly, a 10 Hz signal travels one hundred times further than a 100 Hz one, and 10,000 times further than one at 1,000 Hz).
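Those rough figures follow from atmospheric absorption rising approximately as the square of frequency, so that the range available for a given tolerable loss scales inversely with frequency squared. A sketch of that scaling:

```python
def range_ratio(f_low_hz: float, f_high_hz: float) -> float:
    """If absorption per km grows as frequency squared, the distance over
    which a fixed loss accrues scales as (f_high / f_low) squared."""
    return (f_high_hz / f_low_hz) ** 2

# A 10 Hz call outlasts a 100 Hz one by a factor of 100 in range,
# and a 1,000 Hz one by a factor of 10,000, matching the text's figures.
print(range_ratio(10, 100))    # 100.0
print(range_ratio(10, 1000))   # 10000.0
```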

Underground, attenuation is highly variable, but usually much lower than in air: female elephants use infrasounds to attract males (over 3 km away by air or over 10 km underground) and to contact their young. Elephants also use infrasound to detect thunderstorms (a useful water source) over 500 km away. In 2004, Sri Lankan elephants fled the coast, probably because they detected the infrasounds of the oncoming tsunami. Elephants generate their signals either by making rumbling sounds or by stamping their feet; their feet also detect ground-borne sound, using vibration receptors called Pacinian corpuscles.