|photo taken from Gray's Anatomy of the Human Body (public domain)|
While it may at first seem out of place in a noise blog, musicians, sound artists, and listeners alike often overlook the most important part of any musical experience: the human ear. An incredible piece of machinery, this complex, multipart system creates the perception we know as sound from nothing more than vibrations in the air. But how do we get from air vibrations to the visceral emotion elicited by a favorite piece of music or sound art? The answer lies in both physical conduction and neural sensation.
Though this information can be found in many textbooks, Peter W. Alberti’s World Health Organization (WHO) article, “The Anatomy and Physiology of the Ear,” provides an in-depth look at how hearing works.
The human ear can be divided into three parts: the outer ear, the middle ear, and the inner ear. The outer and middle ear deal primarily with physically conducting sound, carrying the vibrations of the air forward, while the inner ear converts these vibrations into neural information. Essentially, the ear works like a pickup or a microphone, transforming physical waves into electrical information. It is this electrical information that is organized by the brain into “sound.” (If we define “sound” this way, then the answer to the old, tired question “If a tree falls in the forest and no one is around to hear it, does it make a sound?” is a hard “no,” although the fall would still produce sound waves…)
The outer ear consists of the part of the ear that can be observed outside the body, along with the auditory canal that channels sound inward. The visible ear cups to capture as many waves at as high an intensity as possible, so as to provide the best description of the external environment; evolutionarily, this served to sense danger from afar. The middle ear, not visible from outside the body, continues the conduction begun in the auditory canal. Its important parts are the tympanic membrane (eardrum), which vibrates physically when struck by sound waves, and the group of three tiny bones it sets vibrating in turn: the malleus, the incus, and the stapes. It is at this point that sound conduction becomes incredibly interesting, changing from the simple vibration of bones and membranes to fluid vibration and electrical stimulation.
The inner ear, the mysterious converter of the physical to the neural, consists mainly of the cochlea, a conch-shaped bony structure filled with fluid (perilymph) that further transmits sound vibration; the neighboring vestibular system uses similar fluid-filled chambers for balance. When this perilymph vibrates as a result of the stapes pressing on the oval window of the cochlea, it moves hair cells on the basilar membrane that resonate at different frequencies. Essentially, different notes and frequencies are just “on” or “off” signals at each different section of the membrane, like keys on a piano. If a hair cell tuned to 440 Hz is moved by the vibration, your brain perceives this “switch” as an “A” note, and of course this works for all different frequencies and tones.
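The piano-key analogy can be made literal. In twelve-tone equal temperament, a frequency maps to a note name with a simple logarithm around the A = 440 Hz reference. As a minimal sketch (the function name and rounding to the *nearest* note are my own choices; real hair-cell tuning is continuous, not keyed):

```python
import math

# The twelve pitch classes of one octave.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_for_frequency(freq_hz: float) -> str:
    """Return the nearest equal-tempered note name for a frequency,
    using A4 = 440 Hz as the reference (MIDI note number 69)."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    octave = midi // 12 - 1  # MIDI convention: note 60 is middle C (C4)
    return f"{NOTE_NAMES[midi % 12]}{octave}"

print(note_for_frequency(440.0))   # A4
print(note_for_frequency(261.63))  # C4 (middle C)
```

So a cochlear region resonating near 440 Hz corresponds to the “A” the text describes; doubling the frequency moves the percept up one octave to A5.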
The physiology and anatomy of the ear are much more complicated than described above, but this certainly serves as an introduction to the process of hearing, and it opens the door to some other, more interesting topics pertaining to noise and music alike. For instance, human hearing ranges from roughly 16–32 Hz on the low end to about 18,000 Hz on the high end, and, according to WHO, the auditory canal transmits best at about 3,000–4,000 Hz. This means the ear is especially sensitive to sounds in this range, which can even damage it. Many smoke detectors and other alarms emit sound at these frequencies, which accounts for their irritating sound and their effectiveness at demanding attention or waking sleepers. Perhaps this is why certain frequencies of feedback can be so painful.
Other interesting frequencies are those used to induce different processes in the brain. The brain operates in different states: beta, alpha, theta, and delta. Ned Herrmann’s article in Scientific American, “What is the function of the various brainwaves?” provides a detailed look at how these states of brain activity work, but essentially, beta waves are emitted in productive brain processes, alpha in more relaxed states, and theta and delta occur in deep meditation and sleep. This is worth mentioning because these brain states can be elicited through specific activities such as meditation and yoga, or by specific frequencies of sound. In simple terms, a sound artist could potentially lull the audience into a deep trance state with the proper frequencies.
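For an artist who wants to work with a precise frequency rather than an approximate one, synthesizing a pure tone is a good starting point. Here is a small sketch using only Python’s standard library (the filename, duration, and amplitude are my own illustrative choices, not anything from the articles above):

```python
import math
import struct
import wave

def write_tone(path: str, freq_hz: float, seconds: float = 2.0,
               rate: int = 44100, amplitude: float = 0.5) -> None:
    """Write a mono, 16-bit WAV file containing a pure sine tone."""
    n_samples = int(rate * seconds)
    frames = bytearray()
    for i in range(n_samples):
        sample = amplitude * math.sin(2 * math.pi * freq_hz * i / rate)
        frames += struct.pack("<h", int(sample * 32767))  # 16-bit little-endian
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)   # mono
        wav.setsampwidth(2)   # 2 bytes = 16-bit samples
        wav.setframerate(rate)
        wav.writeframes(bytes(frames))

# The 440 Hz "A" discussed earlier -- swap in any frequency you like.
write_tone("a440.wav", 440.0)
```

From here one could layer or slowly modulate such tones; whether any particular frequency actually shifts a listener’s brain state is, of course, the speculative part.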
By contrast, Science Magazine reports that exposure to some sounds below the human perception range can still damage hearing; these can be the low oscillations of jet engines and wind turbines, or even arena crowds and the vibration of an open car window on the highway. Referring to a study by Markus Drexl, Sarah C. P. Williams’ article, “Sounds You Can’t Hear Can Still Hurt Your Ears,” discusses how these low sounds can unknowingly damage a person’s ear.
Human hearing is an incredibly complex system, often overlooked by artists and listeners of both noise and music, and it can serve as a great source of inspiration in the creation of sound. Some sounds damage, some nurture, and some are even used as weapons. Perhaps careful attention to the science of hearing can produce more interesting music and sound that stimulates personal pleasure and biological satisfaction. While this topic can be addressed ad nauseam, hopefully this serves as an interesting first look into the science of sound. Future “Sound Science” posts will investigate deeper.
Written by Jesse James Kaufman