Audio Glossary

Every industry has its own slang and the audio industry is no different. The Danley team has put together this robust list of audio terms and their definitions to help you along your audio and sound career.


A-Type Plug

An A-Type plug is a type of electrical plug commonly used in North America, Central America, and parts of South America, Asia, and the Caribbean. It’s characterized by two flat parallel pins and no grounding pin; the closely related B-type plug adds a round grounding pin. The A-type plug is designed for use with corresponding type A electrical outlets. It’s typically used for low-power devices like smartphones, laptops, and small appliances.


A-Weighting

A-Weighting is a type of frequency weighting commonly used in audio measurements and equipment. It’s a method to adjust or filter the measured sound levels to approximate the sensitivity of the human ear to different frequencies. The A-weighting curve emphasizes frequencies in the midrange while reducing the contribution of low and high frequencies, reflecting the fact that human hearing is less sensitive to very low and very high frequencies.

This weighting is often used in sound level meters and other audio equipment to provide measurements that better correspond to how humans perceive sound. It’s particularly useful in environments where the noise contains a mix of frequencies and you want to assess its impact on human hearing accurately. A-weighted measurements are denoted with the unit “dBA” (A-weighted decibels).
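
The A-weighting curve has a closed-form approximation, given in the IEC 61672 sound level meter standard. The sketch below evaluates that curve; the +2.00 dB offset normalizes the response to 0 dB at 1 kHz.

```python
import math

def a_weighting_db(f: float) -> float:
    """A-weighting gain in dB at frequency f (Hz), using the analytic
    curve from IEC 61672; the +2.00 dB term normalizes 1 kHz to 0 dB."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00

print(round(a_weighting_db(1000.0), 2))  # ~0 dB: the 1 kHz reference
print(round(a_weighting_db(100.0), 1))   # ~-19 dB: lows are de-emphasized
```

Adding the weighting in dB to an unweighted band level gives the A-weighted level for that band, which is how dBA figures are built up from a spectrum.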



A/D – Analog to Digital Conversion

A/D stands for analog-to-digital conversion. The A/D conversion process involves taking samples of the analog signal at regular intervals and assigning numerical values to represent the amplitude of the signal at each sample point. These numerical values are then stored digitally for processing, storage, or transmission.

A/D conversion is a fundamental process in digital audio recording, playback, and processing. It allows analog audio signals to be stored, manipulated, and transmitted using digital technology, offering benefits such as improved fidelity, easier editing, and the ability to apply various digital audio effects.
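
The two steps described above — sampling at regular intervals and assigning numerical codes — can be sketched in a few lines. The sample rate and bit depth below are illustrative choices, not any particular converter’s specification:

```python
import math

SAMPLE_RATE = 8000  # samples per second (illustrative)
BIT_DEPTH = 8       # bits per sample -> 256 quantization levels

def sample_and_quantize(freq_hz: float, duration_s: float) -> list:
    """Sample a sine wave at regular intervals and map each amplitude
    to the nearest integer code, as an A/D converter does."""
    levels = 2 ** (BIT_DEPTH - 1)  # signed range: -128..127
    n_samples = int(SAMPLE_RATE * duration_s)
    codes = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE  # time of this sample point
        amplitude = math.sin(2 * math.pi * freq_hz * t)  # analog value in [-1, 1]
        code = max(-levels, min(levels - 1, round(amplitude * (levels - 1))))
        codes.append(code)
    return codes

codes = sample_and_quantize(440.0, 0.001)  # 1 ms of a 440 Hz tone -> 8 samples
print(codes)
```

Raising the bit depth shrinks the rounding (quantization) error; raising the sample rate lets higher frequencies be represented.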


Absolute Phase

Absolute phase, also called absolute polarity, refers to whether an audio signal’s waveform preserves the polarity of the original sound: whether a compression of air at the microphone ends up reproduced as a compression (an outward cone movement) at the loudspeaker.

Simply put, absolute phase determines whether the waveform of an audio signal is upright or inverted relative to the original event. It is distinct from relative phase, which describes the timing relationship between two signals.

It is important to note that absolute phase is a topic of debate among some audio engineers. While some argue that it’s critical for preserving the integrity of the audio signal and ensuring accurate reproduction, others contend that it’s less important than factors like frequency response, distortion, and dynamic range.

Practically speaking, absolute phase discrepancies are typically only noticeable in certain situations, such as when mixing or mastering audio, or when using stereo equipment with highly accurate reproduction capabilities. In everyday listening environments, the impact of absolute phase is often negligible compared to other factors influencing audio quality.



Absorption

Absorption refers to the process by which sound energy is converted into other forms of energy, typically heat, within a material. This conversion reduces the reflection of sound waves and helps to dampen or attenuate sound within a space.

Absorption materials are commonly used in acoustic treatments for rooms, studios, theaters, and other environments where controlling reverberation and echo is important. These materials are designed to absorb sound waves rather than reflect them back into the space, thereby improving the acoustics and reducing unwanted noise.

Various materials can be used for sound absorption, including acoustic foam, fiberglass panels, fabric-wrapped panels, perforated panels, and specialized acoustic tiles. The effectiveness of an absorption material depends on factors such as its thickness, density, porosity, and surface characteristics, as well as the frequency range of the sound it’s intended to absorb.
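
One common way to quantify the effect of absorption on a room is Sabine’s reverberation-time formula, RT60 ≈ 0.161·V/A, where V is the room volume in m³ and A is the total absorption in metric sabins (surface area times absorption coefficient, summed over all surfaces). The room dimensions and coefficients below are illustrative assumptions:

```python
def rt60_sabine(volume_m3: float, surfaces: list) -> float:
    """Sabine reverberation time: RT60 = 0.161 * V / A, where A is the
    total absorption (sum of area * absorption coefficient) in sabins."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 5 m x 4 m x 3 m room: bare surfaces vs. partially treated
room_volume = 5 * 4 * 3  # 60 m^3
bare = [(94.0, 0.02)]                   # ~94 m^2 of painted drywall, alpha ~0.02
treated = [(74.0, 0.02), (20.0, 0.9)]   # 20 m^2 swapped for absorptive panels
print(round(rt60_sabine(room_volume, bare), 2))     # several seconds of reverb
print(round(rt60_sabine(room_volume, treated), 2))  # well under a second
```

The example shows why even modest coverage with high-coefficient material shortens reverberation dramatically: the treated panels dominate the absorption total.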


AC – Alternating Current

“AC” stands for “alternating current.” In the context of audio equipment, it refers to the type of electrical current used to power devices. Alternating current is characterized by a continuous and cyclical change in the direction of the flow of electric charge.

Most audio equipment, such as amplifiers, loudspeakers, subwoofers, and other electronic devices, operates on AC power. This power is typically supplied by electrical outlets in homes and buildings, where AC voltage is the standard.

In contrast, “DC” stands for “direct current,” which has a constant flow of electric charge in one direction. While some audio equipment may use DC power internally (such as batteries or power supplies converting AC to DC), the majority of audio devices are designed to operate with AC power sources.


AC Coupling

AC coupling in audio refers to a method of removing or blocking the DC (direct current) component of an audio signal while allowing the AC (alternating current) component to pass through unaffected. This is typically achieved using capacitors in the signal path.

In audio systems, AC coupling is often used in various stages of signal processing or transmission to eliminate DC offsets and ensure proper operation of the equipment. DC offsets can occur due to various reasons, such as imperfections in electronic components or biases in signal sources, and can lead to undesirable effects such as distortion or instability.

By employing AC coupling, the DC offset is blocked or attenuated, while the audio signal’s varying voltage levels, representing the audio waveform, are allowed to pass through. This ensures that the audio signal remains centered around zero volts, which is often necessary for compatibility with other audio equipment and for proper performance throughout the signal chain.

AC coupling is commonly used in audio preamplifiers, amplifiers, mixers, and other signal processing devices to maintain signal integrity and prevent any DC-related issues from affecting audio quality.
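
The digital counterpart of AC coupling is a DC-blocking filter. A minimal sketch of the standard one-pole form y[n] = x[n] − x[n−1] + r·y[n−1] (the test signal and coefficient are illustrative):

```python
import math

def dc_block(samples, r=0.995):
    """One-pole DC-blocking filter: y[n] = x[n] - x[n-1] + r * y[n-1].
    It passes audio-band content while attenuating the 0 Hz component."""
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = x - prev_x + r * prev_y
        out.append(y)
        prev_x, prev_y = x, y
    return out

# A 100 Hz tone riding on a +0.5 V DC offset, sampled at 8 kHz
signal = [0.5 + math.sin(2 * math.pi * 100 * n / 8000) for n in range(8000)]
filtered = dc_block(signal)
mean_in = sum(signal) / len(signal)
mean_out = sum(filtered[4000:]) / 4000  # average after the filter settles
print(round(mean_in, 3), round(mean_out, 3))  # offset in, near zero out
```

The coefficient r sets the cutoff: values close to 1 block only frequencies near DC, mirroring how a large series capacitor passes the audio band untouched.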


Accelerometer

An accelerometer is a sensor used to measure acceleration forces, including the force of gravity. It’s commonly used in various applications to detect changes in velocity, orientation, or vibration.

Accelerometers typically consist of one or more small, sensitive elements that respond to changes in acceleration by generating electrical signals proportional to the acceleration they experience. These signals can then be processed and used to determine the device’s movement, tilt, or vibration.

In audio, accelerometers are not typically used directly to capture or process sound signals like microphones or other audio sensors. Instead, accelerometers can be employed in audio equipment and devices in various ancillary roles. Here are a few examples:

  1. Vibration Monitoring: In high-end audio equipment, accelerometers might be used to monitor and mitigate vibration. Vibrations can negatively impact audio quality by causing mechanical noise or interfering with delicate components. By using accelerometers to detect and analyze vibrations, audio equipment can adjust internal components or activate damping mechanisms to reduce unwanted noise and vibrations, thus improving audio performance.
  2. Motion-Activated Controls: Some audio devices, particularly portable or wearable ones like headphones or smart speakers, may incorporate motion-activated controls. Accelerometers can be used to detect gestures or movements, allowing users to control playback, adjust volume, or perform other functions by tapping, shaking, or tilting the device.
  3. Spatial Audio Processing: In virtual reality (VR) or augmented reality (AR) audio systems, accelerometers can be used in conjunction with other sensors to track the movement and orientation of the user’s head or body. This information is then used to adjust the spatial audio processing, providing a more immersive audio experience that corresponds to the user’s movements within the virtual environment.

Accent Mic

This refers to a microphone used to capture specific sounds or instruments in a recording or performance, adding emphasis or detail to the overall audio mix.

In audio engineering and recording, different microphones are often employed to capture various elements of a performance or sound environment. While some microphones, such as those used for vocals or as overheads for drums, may capture the primary elements of the sound, accent mics are used to capture additional detail, specific instruments, or particular sonic characteristics.

For example, in a live music recording, accent mics might be used to capture the sound of individual instruments like a guitar amplifier, a specific percussion instrument, or a solo instrument within a larger ensemble. In a studio recording setting, accent mics might be used to capture unique sounds, effects, or textures that complement the main elements of the mix.

The choice of accent mic and its placement can significantly influence the overall sound and texture of a recording or performance, allowing audio engineers and producers to tailor the mix to achieve the desired artistic or sonic goals.

Acorn Tube

An “acorn tube,” also known as a “button tube” or “thimble tube,” refers to a type of vacuum tube used in electronic circuits. It gets its name from its distinctive shape, which resembles an acorn or a button.

Acorn tubes were primarily used in early electronic equipment, particularly in radio and television receivers manufactured during the mid-20th century. Their compact construction and short leads reduced inter-electrode capacitance and lead inductance, which made them well suited to high-frequency (VHF and UHF) circuits, and their small size also made them practical for compact or portable devices.

While acorn tubes were once popular, they have largely been replaced by more modern semiconductor technologies such as transistors and integrated circuits. However, they still hold historical significance and are sometimes used in vintage electronics restoration projects or by enthusiasts interested in early electronic technology. There is a niche market for vintage audio equipment and tube-based amplifiers, where acorn tubes might occasionally be encountered in restoration projects or custom-built tube amplifiers designed to replicate vintage sound characteristics.

Acorn tubes typically contain the same basic components as other vacuum tubes, including an anode (plate), cathode, and control grid, housed within a glass envelope. They function by controlling the flow of electrons through a vacuum, allowing them to amplify or switch electrical signals within a circuit.


Acoustics

Acoustics is the branch of physics that deals with the study of sound, including its production, transmission, propagation, and reception. It encompasses a wide range of phenomena related to the behavior of sound waves in various mediums, such as air, water, and solids.

Key aspects of acoustics include:

  1. Sound Waves: Acoustics examines the physical properties of sound waves, including their frequency, wavelength, amplitude, and velocity. Sound waves are mechanical vibrations that propagate through a medium, such as air, as variations in pressure.
  2. Sound Sources: Acoustics studies the generation of sound by vibrating objects or sources, such as musical instruments, speakers, and vocal cords. It explores how different types of sources produce sound waves with distinct characteristics.
  3. Propagation: Acoustics investigates how sound waves travel through different mediums and environments, including the effects of reflection, refraction, diffraction, and absorption. Understanding sound propagation is essential for predicting how sound behaves in architectural spaces, outdoor environments, and underwater.
  4. Room Acoustics: Room acoustics focuses on the interaction between sound waves and enclosed spaces, such as concert halls, recording studios, and classrooms. It examines factors like reverberation, resonance, and diffusion, which affect the quality and clarity of sound within a room.
  5. Noise Control: Acoustics addresses the mitigation and control of unwanted noise, including noise pollution from sources such as transportation, industrial machinery, and HVAC systems. It involves techniques such as sound insulation, soundproofing, and noise barriers to reduce the impact of noise on human health and the environment.
  6. Psychoacoustics: Psychoacoustics explores the psychological and physiological aspects of sound perception, including how humans perceive pitch, loudness, timbre, and spatial location. It investigates factors like auditory masking, sound localization, and the perception of musical harmony and rhythm.

Acoustics has applications in various fields, including engineering, architecture, music, telecommunications, medicine, and environmental science. By understanding the principles of acoustics, researchers and engineers can design better sound systems, improve building designs, enhance communication technologies, and create more pleasant and comfortable environments for human activities.

Acoustic Amplifier

An acoustic amplifier or acoustic amp is a type of amplifier designed specifically to amplify the sound of acoustic instruments, such as acoustic guitars, violins, mandolins, and acoustic-electric guitars.

Unlike electric guitar amplifiers, which are optimized for amplifying the signal from electric guitars and typically include features like distortion and overdrive, acoustic amplifiers are tailored to preserve the natural tone and characteristics of acoustic instruments and typically have a flatter frequency response.

Acoustic Echo Chamber

An acoustic echo chamber is a space designed to produce reverberation and echo effects for audio recordings or live performances. It’s typically a room with hard, reflective surfaces like walls, floors, and ceilings, which reflect sound waves, creating a rich and prolonged reverberation effect. Musicians, audio engineers, and producers use acoustic echo chambers to add depth and spaciousness to recordings, particularly for vocals and instruments. These chambers were more prevalent in earlier recording techniques but are still used today, either physically or emulated digitally.

Acoustic Envelope

An acoustic envelope refers to the overall characteristics of the sound produced by a musical instrument or a sound source over time. It encompasses aspects such as the attack (the initial transient when a note is played), decay (the gradual decrease in level after the initial attack), sustain (the period during which the sound holds at a steady level), and release (the way the sound ends or fades out).

Understanding the acoustic envelope of a sound is crucial in music production, sound design, and instrument manufacturing. It allows musicians, engineers, and designers to manipulate and shape the sound to achieve specific artistic or technical goals. For example, adjusting the attack of a note on a synthesizer can make it sound sharper or softer, while modifying the release can make it fade out quickly or linger longer.

Acoustic Foam

Acoustic foam, also known as soundproofing foam or sound-absorbing foam, is a material designed to reduce the reflection of sound waves. It is typically made from open-cell polyurethane foam or melamine foam, which are porous materials that trap and absorb sound energy.

Acoustic foam is commonly used in recording studios, home theaters, offices, and other spaces where controlling sound reflections and reverberations is important. By absorbing sound waves instead of reflecting them, acoustic foam helps to reduce echoes, improve clarity, and create a more acoustically controlled environment.

The foam is often shaped into panels or tiles that can be easily mounted on walls, ceilings, or other surfaces. It comes in various thicknesses, densities, and designs, allowing for customization based on the specific acoustic needs of the space.

Acoustic Treatment

Acoustic treatment refers to the process of improving the sound quality within a room by controlling factors such as reflection, absorption, and diffusion of sound waves. It involves the strategic placement of materials and structures to optimize the acoustics of a space for a particular purpose, such as recording, mixing, listening, or performing music.

Acoustic treatment aims to address issues such as:

  1. Reflection: Minimizing the amount of sound waves bouncing off hard surfaces, which can cause echoes and reverberation.
  2. Absorption: Using materials that absorb sound energy to reduce unwanted noise and reverberation. This can include acoustic panels, bass traps, and curtains made from materials like foam, fiberglass, or fabric.
  3. Diffusion: Scattering sound waves to create a more even distribution of sound throughout the space, reducing hot spots and dead spots.

Acoustic treatment is essential in environments like recording studios, home theaters, concert halls, and conference rooms to ensure optimal sound quality for recording, mixing, listening, or communication. Proper acoustic treatment can enhance clarity, reduce unwanted noise, and create a more immersive and enjoyable listening experience.


Active

When referring to a speaker system, “active” describes systems with built-in amplification, which eliminates the need for a separate amplifier unit but requires an external power source to operate.

When referring to guitars, some electric guitars and basses use active pickups, which require a power source (often a battery) to operate. Active pickups offer benefits such as higher output levels, improved signal-to-noise ratio, and tonal versatility compared to passive pickups.

In recording studios, active studio monitors are speakers with built-in amplifiers that are used for accurate audio monitoring during recording, mixing, and mastering processes.

Active Circuitry

Active circuitry typically refers to electronic components or systems that actively manipulate audio signals to achieve desired effects or functions. These circuits are often found in devices such as audio processors, amplifiers, equalizers, and effects units. Unlike passive components, which do not require an external power source and primarily modify the signal without additional energy, active circuitry relies on external power to operate and can actively boost, filter, or shape the audio signal.

Active Device

Active device typically refers to any electronic component or equipment that requires an external power source to operate and actively manipulates audio signals. Active devices are contrasted with passive devices, which do not require power and mainly interact with signals through passive means like resistance, capacitance, or inductance.

Active Loudspeaker

An active loudspeaker, also known as a powered speaker, is a speaker system that incorporates built-in amplification and active signal processing. Unlike passive speakers, which require an external amplifier to drive them, active loudspeakers have amplifiers built directly into their enclosures.

Active Sensing

Active sensing refers to a technique used in electronic musical instruments and MIDI (Musical Instrument Digital Interface) devices to check the status of connected equipment and ensure reliable communication between devices.

In MIDI terminology, active sensing is a system of messages exchanged between MIDI devices to confirm that they are still connected and functioning properly. It involves the transmission of periodic messages, typically at short intervals, from one MIDI device to another. If a device stops receiving active sensing messages from a connected device, it can assume that the connection has been interrupted or the device has malfunctioned.

Active sensing helps prevent MIDI devices from misinterpreting data or entering an error state due to a loss of connection or communication failure. It provides a mechanism for devices to continually monitor the status of their connections and react accordingly if there are any issues.

While active sensing is a valuable feature for maintaining the reliability of MIDI communication, it is worth noting that not all MIDI devices support active sensing. Additionally, some MIDI implementations may use alternative methods for detecting communication errors or device status, depending on the specific requirements of the application.
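
A receiver-side sketch of the idea: the MIDI Active Sensing status byte (0xFE) arrives roughly every 300 ms, and a receiver that has seen it at least once treats a longer silence as a broken connection. The class name and timeout margin here are illustrative, not any particular device’s implementation:

```python
import time

ACTIVE_SENSING = 0xFE  # MIDI status byte for Active Sensing
TIMEOUT_S = 0.330      # senders transmit ~every 300 ms; allow some margin

class ActiveSensingWatchdog:
    """Tracks the last time an Active Sensing byte arrived; if the sender
    goes quiet past the timeout, the link is presumed broken."""
    def __init__(self):
        self.last_seen = None

    def on_byte(self, status: int) -> None:
        if status == ACTIVE_SENSING:
            self.last_seen = time.monotonic()

    def link_alive(self) -> bool:
        if self.last_seen is None:
            return True  # sender never used active sensing; don't enforce it
        return (time.monotonic() - self.last_seen) < TIMEOUT_S

wd = ActiveSensingWatchdog()
wd.on_byte(ACTIVE_SENSING)
print(wd.link_alive())  # True right after a message arrives
```

Note the “never seen” case: per the MIDI convention, a receiver only enforces the timeout once the sender has demonstrated that it transmits active sensing at all.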


Actuator

An actuator typically refers to a device or component that converts electrical signals into physical movement or mechanical action. Actuators play various roles in music production, performance, and instrument design, often contributing to the manipulation of sound or the interaction between musicians and their instruments.


ADAT – Alesis Digital Audio Tape

ADAT stands for Alesis Digital Audio Tape. It’s a digital audio recording and transmission format developed by Alesis, a company known for its audio equipment. ADAT was introduced in the early 1990s as a means of recording multiple channels of digital audio onto magnetic tape.

The ADAT format was originally designed for use with dedicated ADAT tape machines, which could record up to eight tracks of digital audio onto a standard S-VHS videotape. Each track was encoded using a sample rate of 48 kHz and a resolution of 16 bits. Multiple ADAT machines could be synchronized together to record even more tracks simultaneously.

ADAT quickly became popular in recording studios and music production facilities due to its affordability, ease of use, and ability to record multiple tracks in a compact format. However, with the advent of computer-based recording systems and digital audio workstations (DAWs), ADAT tape machines have largely been replaced by computer-based recording solutions.

In addition to the physical tape format, the term “ADAT” is also used to refer to the digital audio interface standard developed by Alesis. ADAT interfaces allow for the transmission of multiple channels of digital audio over optical fiber cables using the ADAT Lightpipe protocol. This interface is commonly found on audio interfaces, mixers, and other professional audio equipment, allowing for the expansion of input and output channels using external ADAT-compatible devices.

ADAT Lightpipe

ADAT Lightpipe is a digital audio interface protocol that allows for the transmission of multiple channels of digital audio over optical fiber cables. It was developed by Alesis as part of the ADAT (Alesis Digital Audio Tape) format and has since become a widely used standard in professional audio equipment.

The ADAT Lightpipe protocol uses a single optical fiber cable to carry up to eight channels of digital audio at standard sample rates of 44.1 or 48 kHz, with word lengths up to 24 bits per channel. At higher sample rates, the S/MUX extension trades channel count for bandwidth (for example, four channels at 96 kHz).

ADAT Lightpipe interfaces are commonly found on audio interfaces, digital mixers, and other professional audio equipment. They allow users to expand the number of input and output channels on their audio systems by connecting external devices that support the ADAT Lightpipe protocol.

One of the key advantages of ADAT Lightpipe is its simplicity and ease of use. It provides a convenient way to transmit multiple channels of digital audio over a single cable, making it ideal for applications where space and cable management are concerns. Additionally, because it uses optical fiber cables, ADAT Lightpipe connections are immune to electromagnetic interference, ensuring reliable and high-quality audio transmission.

Additive Synthesis

Additive synthesis is a method of sound synthesis that builds complex sounds by combining multiple individual sine waves, known as partials or harmonics. In additive synthesis, each partial is generated at a specific frequency and amplitude, and the combination of these partials creates a rich and diverse spectrum of timbres.
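
A minimal sketch of the idea: summing harmonics with amplitudes 1/k approximates a sawtooth wave. The fundamental frequency, number of partials, and sample rate below are arbitrary choices:

```python
import math

def additive_partial_sum(t: float, partials: list) -> float:
    """Sum sine partials, given as (frequency_hz, amplitude) pairs, at time t."""
    return sum(a * math.sin(2 * math.pi * f * t) for f, a in partials)

# Approximate a sawtooth at 220 Hz: harmonic k at amplitude 1/k
saw_partials = [(220.0 * k, 1.0 / k) for k in range(1, 9)]
sample_rate = 44100
one_cycle = [additive_partial_sum(n / sample_rate, saw_partials)
             for n in range(sample_rate // 220)]
print(len(one_cycle))  # samples in one cycle at 44.1 kHz
```

Changing the per-partial amplitudes (and their evolution over time) is what lets additive synthesis morph between timbres: the same machinery yields square-like tones with odd harmonics only, or bell-like tones with inharmonic partials.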

ADSR – Attack, Decay, Sustain, Release

ADSR, which stands for Attack, Decay, Sustain, Release, is a fundamental concept in sound synthesis and audio envelope shaping. It refers to the four distinct stages that characterize the change in volume (or amplitude) of a sound over time.

  1. Attack: The Attack stage is the initial phase of a sound where the volume gradually increases from zero to its maximum level after a key is pressed or a sound is triggered. The duration of the Attack stage determines how quickly the sound reaches its peak volume. A shorter attack time creates a more immediate onset, while a longer attack time results in a gradual buildup.
  2. Decay: After the Attack stage, the sound enters the Decay stage, during which the volume decreases from its peak level to a predefined sustain level. The decay time parameter controls how quickly the sound decreases in volume. A shorter decay time produces a faster decay, while a longer decay time results in a slower fade.
  3. Sustain: The Sustain stage occurs after the Decay stage and represents the period during which the sound maintains a constant volume as long as the key or trigger is held. The sustain level parameter determines the amplitude level at which the sound remains constant during this stage.
  4. Release: Finally, when the key is released or the trigger ends, the sound enters the Release stage, during which the volume gradually decreases from the sustain level to zero. The release time parameter controls how quickly the sound fades out after the key is released or the trigger ends. A shorter release time produces a quicker fade, while a longer release time results in a more prolonged decay.
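
The four stages above can be sketched as a piecewise-linear envelope generator. The linear segments and the parameter values are simplifying assumptions; real synthesizers often use exponential curves:

```python
def adsr_envelope(attack_s, decay_s, sustain_level, release_s,
                  hold_s, sample_rate=1000):
    """Piecewise-linear ADSR amplitude envelope: rise to 1.0 over the
    attack, fall to sustain_level over the decay, hold, then fall to 0."""
    env = []
    for i in range(int(attack_s * sample_rate)):
        env.append(i / (attack_s * sample_rate))          # attack: 0 -> 1
    for i in range(int(decay_s * sample_rate)):
        frac = i / (decay_s * sample_rate)
        env.append(1.0 - frac * (1.0 - sustain_level))    # decay: 1 -> sustain
    env.extend([sustain_level] * int(hold_s * sample_rate))  # sustain hold
    for i in range(int(release_s * sample_rate)):
        frac = i / (release_s * sample_rate)
        env.append(sustain_level * (1.0 - frac))          # release: sustain -> 0
    return env

env = adsr_envelope(0.01, 0.05, 0.7, 0.2, hold_s=0.1)
print(len(env), max(env))  # total samples and the peak at the end of the attack
```

Multiplying an oscillator’s output sample-by-sample with this envelope produces the volume contour described above; `hold_s` stands in for however long the key is held.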

AES – Audio Engineering Society

AES commonly refers to the Audio Engineering Society. The Audio Engineering Society (AES) is an international professional organization dedicated to the advancement of audio technology and the science of sound.


AES10 – MADI

AES10, also known as MADI (Multichannel Audio Digital Interface), is a standard for the digital transmission of multiple channels of audio over a single cable. Developed by the Audio Engineering Society (AES) and standardized as AES10, MADI provides a means of transmitting up to 64 channels of uncompressed digital audio between audio devices such as mixing consoles, digital audio workstations (DAWs), routers, and other professional audio equipment.


AES11

AES11 is a standard defined by the Audio Engineering Society (AES) that specifies the synchronization of digital audio equipment using embedded audio signals. Specifically, AES11 addresses the synchronization of digital audio clocks through the use of embedded digital audio signals, such as those found in AES3 (also known as AES/EBU) or S/PDIF formats.


AES17

AES17 is a standard set by the Audio Engineering Society (AES) that specifies the measurement of digital audio equipment. Specifically, AES17 provides guidelines and recommendations for conducting measurements of the analog-to-digital (ADC) and digital-to-analog (DAC) converters used in digital audio devices.


AES42

AES42 is a standard for digital audio transmission over XLR connectors using the AES/EBU protocol. It was developed by the Audio Engineering Society (AES) and is primarily used for the transmission of digital audio between microphones and digital audio equipment.

AES42 allows for the transmission of both audio data and control signals over a single XLR cable. This enables features such as remote control of microphone parameters, such as gain and polar pattern, as well as powering the microphone through the same cable using digital phantom power (a DC supply carried on the signal conductors, distinct from the 48 V phantom power used with analog microphones).


AES59

AES59 is a standard developed by the Audio Engineering Society (AES) that specifies the pin assignments for the connectivity of various audio devices. Specifically, AES59 defines the pinout for a common connector type known as the “DB-25” or “D-Sub 25” connector. This connector has 25 pins arranged in two rows, with the pins numbered from 1 to 25.

The AES59 standard provides a consistent and standardized way to connect audio equipment using DB-25 connectors, ensuring interoperability between devices from different manufacturers. This standardization is particularly useful in professional audio environments where multiple pieces of equipment need to be interconnected reliably.

AES59 pin assignments cover a range of audio signals, including analog audio inputs and outputs, digital audio inputs and outputs, synchronization signals, and control signals. By following the AES59 standard, audio engineers and technicians can easily configure and troubleshoot audio systems that utilize DB-25 connectors, leading to more efficient workflows and improved reliability.


AES/EBU

AES/EBU stands for Audio Engineering Society/European Broadcasting Union. It is a digital audio interface standard developed jointly by the Audio Engineering Society (AES) in the United States and the European Broadcasting Union (EBU). The AES/EBU standard specifies the format for transmitting digital audio signals between professional audio equipment.

AES/EBU typically uses balanced, shielded twisted-pair cables terminated with XLR connectors for transmission. It supports both PCM (Pulse Code Modulation) and non-PCM formats, making it versatile for various audio applications.

AFL – After Fade Listen

After Fade Listen (AFL) is a feature commonly found in audio mixing consoles and digital audio workstations (DAWs). It allows an engineer or operator to monitor a specific audio signal from a channel strip after it has been processed by the channel’s fader.


Aftertouch

Aftertouch typically refers to a feature found in electronic musical instruments, particularly keyboards and synthesizers. Aftertouch, also known as pressure sensitivity or pressure response, allows a performer to modulate the sound of a note after it has been played by varying the pressure applied to the keys or other control surfaces.


Algorithm

An algorithm refers to a set of instructions or procedures used to analyze, synthesize, process, or manipulate audio signals. These algorithms can be implemented in various types of digital signal processing (DSP) systems, software applications, or electronic devices to achieve specific audio-related tasks.


Aliasing

Aliasing refers to a phenomenon where higher frequencies in a signal are incorrectly represented as lower frequencies due to undersampling or improper sampling rates during digitization or signal processing.
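
A quick numerical demonstration: sampled at 8 kHz, a 7 kHz sine produces exactly the same sample values as an inverted 1 kHz sine, because 7 kHz lies above the 4 kHz Nyquist limit and folds down to 8000 − 7000 = 1000 Hz:

```python
import math

def sampled_tone(freq_hz, sample_rate, n_samples):
    """Sample a unit-amplitude sine of freq_hz at the given sample rate."""
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate)
            for n in range(n_samples)]

fs = 8000  # sampling rate; the Nyquist limit is fs / 2 = 4000 Hz
a = sampled_tone(1000.0, fs, 16)  # 1 kHz: below Nyquist, represented correctly
b = sampled_tone(7000.0, fs, 16)  # 7 kHz: above Nyquist, aliases to 1 kHz
# The 7 kHz tone yields the same samples as a 1 kHz tone with inverted sign:
print(all(abs(x + y) < 1e-9 for x, y in zip(a, b)))  # True
```

This is why A/D converters place an anti-aliasing low-pass filter before the sampler: once two frequencies produce identical samples, no later processing can tell them apart.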

All-Pass Filter

An all-pass filter is a type of signal processing filter that maintains the amplitude of the input signal while altering its phase response across different frequencies. In acoustics, all-pass filters are often used to create phase shifts without affecting the magnitude of audio signals, allowing for the manipulation of spatial imaging and timbre in audio processing applications.
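
For example, a first-order digital all-pass section H(z) = (−a + z⁻¹)/(1 − a·z⁻¹) has unit magnitude at every frequency while its phase response varies. The sketch below checks that numerically; the coefficient value 0.5 is an arbitrary choice:

```python
import cmath
import math

def allpass_magnitude(a: float, freq_norm: float) -> float:
    """Magnitude of the first-order all-pass H(z) = (-a + z^-1)/(1 - a*z^-1)
    evaluated at normalized frequency freq_norm (cycles/sample)."""
    z_inv = cmath.exp(-2j * math.pi * freq_norm)
    h = (-a + z_inv) / (1 - a * z_inv)
    return abs(h)

for f in (0.05, 0.1, 0.25, 0.4):
    print(round(allpass_magnitude(0.5, f), 6))  # 1.0 at every frequency
```

Cascades of such sections are the building blocks of phaser effects and of phase-correction networks, precisely because they reshape timing without touching level.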


Ambience

Ambience refers to the characteristic sound environment or acoustic atmosphere of a particular space, often influenced by factors such as room size, shape, surface materials, and sound reflections. It encompasses the reverberation, reflections, and background noise present in a space, contributing to the overall sonic character and perceived spaciousness of a sound recording or live performance.

Ambient Field

The ambient field refers to the total sound environment surrounding a given point in space, composed of both direct sound and diffuse sound reflections. It includes sound waves arriving from various directions and interacting with the surrounding environment, such as walls, ceilings, and other surfaces. Understanding the ambient field is essential for assessing the acoustic properties of a space, designing sound systems, and creating immersive audio experiences.

Ambient Miking

Ambient miking is a technique used in audio recording where microphones are strategically placed to capture the natural ambient sound of a space, rather than focusing solely on the direct sound source. It aims to capture the reverberation, room tone, and spatial characteristics of the environment, adding depth and realism to the recorded audio.

Amp (Ampere)

An ampere, commonly abbreviated as “amp,” is the SI unit of electric current, measuring the rate of flow of electric charge through a conductor. One ampere corresponds to one coulomb of charge passing through the conductor per second; by Ohm’s law, one volt applied across a one-ohm resistance drives a current of one ampere.
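
As a rough back-of-the-envelope illustration (treating the speaker as a purely resistive load, which real loudspeaker impedances are not, since they vary with frequency):

```python
# Ohm's law (I = V / R) applied to a nominal 8-ohm loudspeaker load
volts = 20.0                    # amplifier output voltage
ohms = 8.0                      # nominal speaker impedance
amps = volts / ohms             # current through the voice coil: 2.5 A
watts = volts ** 2 / ohms       # power delivered to the load: 50.0 W
print(amps, watts)              # 2.5 50.0
```

Calculations like this are why halving the load impedance (say, wiring two 8-ohm cabinets in parallel) doubles the current an amplifier must supply at the same voltage.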

Amp / Amplifier

An amplifier, often abbreviated as “amp,” is an electronic device used to increase the amplitude or power of an electrical signal. It takes a weak input signal and outputs a stronger version of that signal, typically to drive speakers, headphones, or other transducers. Amplifiers are fundamental components in audio systems, ranging from small headphone amplifiers to large power amplifiers used in concert sound reinforcement.


Amplitude

Amplitude refers to the magnitude of the fluctuations in air pressure produced by a vibrating object, such as a musical instrument or vocal cords. Larger amplitudes are generally perceived as louder sounds, and amplitude-related levels are commonly expressed in decibels (dB).
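
Because amplitude levels are usually quoted in decibels, a small helper (the function name here is ours) shows the standard 20·log10 conversion for amplitude ratios:

```python
import math

def amplitude_to_db(amplitude, reference=1.0):
    """Express an amplitude ratio in decibels: 20 * log10(a / a_ref)."""
    return 20 * math.log10(amplitude / reference)

print(round(amplitude_to_db(2.0), 2))    # 6.02  (doubling amplitude adds ~6 dB)
print(round(amplitude_to_db(0.5), 2))    # -6.02 (halving amplitude subtracts ~6 dB)
print(amplitude_to_db(10.0))             # 20.0  (a tenfold amplitude increase is 20 dB)
```

Note the factor of 20 rather than 10: decibels are defined on power ratios, and power is proportional to amplitude squared.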


Analog

Analog refers to the use of continuous electrical signals to capture, process, or reproduce sound, mimicking the natural variations in air pressure caused by musical vibrations. Analog music equipment, such as analog synthesizers or record players, relies on analog signal processing to create or play back musical sounds with a characteristic warmth and organic quality.

Analog Recording

Analog recording involves the process of capturing and storing sound using analog technology, typically on magnetic tape or vinyl records. It relies on the continuous variation of electrical signals that directly correspond to the fluctuations in air pressure generated by musical vibrations, preserving the nuances and warmth of the original sound.

Analog Synthesis

Analog synthesis involves the creation of sound using electronic circuits that generate continuously varying electrical signals, mimicking the natural fluctuations of acoustic instruments. It relies on voltage-controlled oscillators, filters, and amplifiers to shape and modulate the analog signals, allowing for the production of a wide range of sounds with rich timbres and expressive qualities.


Anharmonic

Anharmonic refers to the deviation from the harmonic series, where the frequencies of overtones are not integer multiples of the fundamental frequency. Anharmonic phenomena are often encountered in complex vibrating systems or non-linear oscillators, where the relationship between frequency components is not strictly harmonic. These deviations from harmonicity can lead to the production of dissonant or irregular sounds, contributing to the richness and complexity of acoustic phenomena.

Anharmonic Distortion

Anharmonic distortion refers to the generation of frequencies that are not integer multiples of the fundamental frequency in a sound signal, typically resulting from non-linear behavior in audio equipment or systems. This distortion can introduce harmonic components that are not naturally present in the original signal, altering its timbre and potentially introducing unwanted artifacts into the sound.

Anti-alias Filter

An anti-alias filter is a type of low-pass filter used in digital audio systems to prevent aliasing artifacts during analog-to-digital conversion. It attenuates frequencies above the Nyquist frequency (half the sampling rate) to ensure that only signals within the desired frequency range are accurately sampled. By removing high-frequency components that could fold back into the audible range as aliases, anti-alias filters help maintain the fidelity and integrity of the digitized audio signal.
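
The idea can be sketched with a windowed-sinc FIR low-pass in plain Python (the tap count, cutoff, and tone frequencies are illustrative, and a production converter would use a far more refined filter): a 20 kHz tone is removed before a notional decimation from 48 kHz to 24 kHz, where it would otherwise fold back as an 4 kHz alias.

```python
import math

def sinc_lowpass(cutoff_hz, fs, taps=101):
    """Hamming-windowed sinc FIR low-pass, normalized for unity gain at DC."""
    fc = cutoff_hz / fs                  # normalized cutoff (cycles per sample)
    m = taps - 1
    h = []
    for i in range(taps):
        x = i - m / 2
        v = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
        v *= 0.54 - 0.46 * math.cos(2 * math.pi * i / m)   # Hamming window
        h.append(v)
    g = sum(h)
    return [v / g for v in h]

def convolve(x, h):
    """Direct-form FIR filtering (output same length as the input)."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x))]

fs = 48000
h = sinc_lowpass(11000, fs)              # cutoff safely below the new Nyquist of 12 kHz
tone = [math.sin(2 * math.pi * 20000 * n / fs) for n in range(2000)]
filtered = convolve(tone, h)
peak = max(abs(v) for v in filtered[200:])   # ignore the filter's start-up region
print(peak < 0.01)                       # the 20 kHz tone is heavily attenuated: True
```

After filtering, every other sample could be discarded to halve the rate without the 20 kHz content masquerading as an in-band tone.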


AoIP

See “Audio over IP”

App (Application)

An app refers to a software application designed to analyze, manipulate, or simulate sound waves and their properties. These apps often incorporate features such as spectral analysis, sound pressure level measurement, and room acoustic modeling to aid professionals in various tasks like sound engineering, architectural acoustics, or noise pollution assessment. They serve as convenient tools for both researchers and practitioners in the field of acoustics, offering versatile solutions for sound-related challenges.

Arming (Arm)

Arming, most commonly “record-arming,” is the act of enabling a track, channel, or device so that it is ready to record. In a digital audio workstation or on a multitrack recorder, arming a track routes the selected input to it and allows recording to begin the moment the transport starts. More broadly, arming a system means preparing microphones, preamplifiers, and other equipment so that audio can be captured or reproduced accurately, whether in a recording studio, live performance venue, or other acoustic environment.


Arpeggiator

An arpeggiator is a feature commonly found in electronic musical instruments and synthesizers that automatically plays the notes of a chord one at a time in a rhythmic pattern. It enables musicians to create intricate and dynamic melodic sequences by triggering the individual notes of a chord sequentially.
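
The core behavior can be sketched in a few lines of Python (the `arpeggiate` helper and the simple “up”/“down” modes are our illustration; hardware arpeggiators add gate timing, octave range, and more patterns):

```python
def arpeggiate(chord, steps, mode="up"):
    """Cycle through a chord's notes one at a time; 'up' plays lowest to highest."""
    order = sorted(chord, reverse=(mode == "down"))
    return [order[i % len(order)] for i in range(steps)]

c_major = [60, 64, 67]                 # MIDI note numbers for C4, E4, G4
print(arpeggiate(c_major, 8))          # [60, 64, 67, 60, 64, 67, 60, 64]
```

Held as a chord, the three notes sound together; arpeggiated, they emerge one per rhythmic step, which is the characteristic sequenced texture heard in much electronic music.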


ASCII

ASCII stands for “American Standard Code for Information Interchange,” a character-encoding standard that assigns numeric codes to letters, digits, punctuation, and control characters. In audio work, ASCII appears in text-based file formats, metadata fields, and the keyboard shortcuts and control protocols used by digital audio workstations and other audio software.


ATL

ATL stands for “Acoustic Transmission Line,” a design concept used in the construction of loudspeaker enclosures. Unlike traditional box enclosures, ATLs utilize a labyrinth-like pathway within the enclosure to control and manipulate sound waves, reducing unwanted resonances and improving bass response. This approach allows for more efficient and accurate reproduction of low-frequency sound, resulting in clearer and more immersive audio experiences.


Attack

Attack refers to the initial onset or rise in amplitude of a sound wave, typically at the beginning of a musical note or sound event. It is a crucial aspect of sound perception, influencing the timbre, intensity, and articulation of the sound, and is often characterized by its sharpness, speed, and amplitude envelope.
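
As a toy illustration of the attack segment of an amplitude envelope (the `attack_envelope` helper, sample rate, and 10 ms attack time are our own choices; synthesizers typically use exponential rather than linear ramps):

```python
def attack_envelope(num_samples, fs=48000, attack_ms=10.0):
    """Linear gain ramp from 0 to 1 over the attack time, then full level."""
    ramp = int(fs * attack_ms / 1000)           # 480 samples at 48 kHz
    return [min(1.0, n / ramp) for n in range(num_samples)]

env = attack_envelope(1000)
print(env[0], env[240], env[480])               # 0.0 0.5 1.0
```

Multiplying a raw oscillator signal by such an envelope replaces an abrupt click at note onset with a controlled rise, and shortening or lengthening the ramp is what makes a sound percussive or swelling.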


Attenuate

Attenuate refers to the process of reducing the intensity or amplitude of sound waves as they propagate through a medium or encounter obstacles. This reduction in amplitude can occur due to factors such as absorption, scattering, or reflection, leading to a decrease in sound energy. Attenuation is crucial in various applications, including soundproofing, noise control, and telecommunications, where minimizing unwanted sound transmission or signal loss is essential.


Audio

Audio refers to the reproduction or transmission of sound waves, typically in the form of electrical signals, to convey auditory information. It encompasses a broad range of applications, including music playback, speech communication, sound recording, and broadcasting. Advances in audio technology have led to the development of various devices and systems, such as speakers, microphones, amplifiers, and digital audio processors, facilitating the creation, distribution, and enjoyment of sound content.

Audio Chain

An audio chain is a sequence of interconnected audio devices or components that work together to capture, process, and reproduce sound. It typically includes elements such as microphones, preamplifiers, mixers, signal processors, amplifiers, and speakers, each contributing to the overall audio signal path. The quality and characteristics of each component in the audio chain profoundly influence the fidelity and tonal characteristics of the final sound output.

Audio Data Reduction

Audio data reduction is the process of compressing audio signals to reduce their file size while retaining perceptual quality. This compression is achieved through various algorithms and techniques, such as lossy or lossless compression, to remove redundant or less essential information from the audio stream.

Audio Frequency

Audio frequency refers to the range of frequencies within the audible spectrum, typically perceived by the human ear, spanning from approximately 20 Hz to 20,000 Hz. These frequencies correspond to the pitch or tone of sound waves and are essential for discerning various aspects of auditory perception, including melody, rhythm, and timbre.

Audio Interface

An audio interface serves as a bridge between audio input and output devices, facilitating the transfer of audio signals between analog and digital domains. It typically connects to a computer or recording device via USB, Thunderbolt, or other interfaces, allowing musicians, producers, and sound engineers to record, process, and play back audio with high quality and low latency. Audio interfaces often feature microphone preamps, line inputs/outputs, headphone outputs, and digital converters to accommodate various recording and playback needs.

Audio Over IP

Audio over IP (AoIP) refers to the transmission of digital audio signals over an Internet Protocol (IP) network, such as the internet or local area networks (LANs). This technology enables efficient and flexible distribution of audio content, allowing for remote broadcasting, collaborative audio production, and integration with networked audio devices.

Audio Random Access (ARA)

Audio Random Access (ARA) is a technology that facilitates seamless integration between digital audio workstations (DAWs) and audio plugin software. It allows plugins to communicate directly with the DAW, enabling features such as real-time audio editing, automatic tempo detection, and instant access to audio regions for processing.

Audio Scrubbing

Audio scrubbing is a technique used in digital audio editing to navigate and preview audio recordings by manually moving through the waveform at variable speeds. This process allows users to locate specific sections or fine-tune edits with precision by listening to the audio playback in real-time. Audio scrubbing is commonly used in audio editing software for tasks such as identifying errors, synchronizing sound effects, or aligning musical elements.

Audio Video Bridging (AVB)

Audio Video Bridging (AVB) is a set of standards for transmitting audio and video data over Ethernet networks with guaranteed quality of service (QoS). It enables synchronized, low-latency streaming of multimedia content, making it suitable for applications such as live performances, conferencing systems, and professional audio/video production environments.


Audiophile

An audiophile is an individual with a passionate and discerning appreciation for high-quality audio, one who pursues the highest possible fidelity in playback systems and recordings.

Auditory Area

An auditory area refers to a specific region or section of the brain responsible for processing auditory information, including sound perception and interpretation. These areas, such as the primary auditory cortex and associated auditory regions, play crucial roles in recognizing speech, detecting sound patterns, and distinguishing between different frequencies and timbres.


Auto-Tune

Auto-Tune is a pitch-correction software used in music production to adjust the intonation of vocal performances. It works by analyzing and modifying the pitch of individual notes, helping singers achieve a more polished and in-tune sound, though its distinct effect can also be used creatively for stylistic purposes.


Autolocator

An autolocator is a device used in audio recording and post-production to mark and navigate specific points in a recording session or project. It allows users to quickly locate and access desired sections of audio material for editing, mixing, or playback, enhancing workflow efficiency and organization.

Automatic Dialogue Replacement (ADR)

Automatic Dialogue Replacement (ADR) is a technique used in filmmaking and audio post-production to re-record dialogue spoken during filming. It involves actors watching the original footage and syncing their speech to match the lip movements of the characters on-screen, ensuring seamless integration of dialogue with the visual content.

Automatic Gain Control

Automatic Gain Control (AGC) is a dynamic processing technique used in audio systems to automatically adjust the gain or volume of an audio signal to maintain a consistent output level. It is commonly employed in devices such as amplifiers, mixers, and recording equipment to prevent signal clipping during periods of high input levels and to compensate for variations in signal strength.
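
A bare-bones feed-forward AGC can be sketched as an envelope follower driving a gain stage (the `agc` helper, target level, smoothing constant, and gain cap below are all illustrative choices, not a production design):

```python
import math

def agc(samples, target=0.3, smooth=0.001, max_gain=10.0):
    """One-pole envelope follower driving the gain toward target / level."""
    level, out = 0.0, []
    for s in samples:
        level = (1 - smooth) * level + smooth * abs(s)   # smoothed level estimate
        gain = min(max_gain, target / max(level, 1e-6))  # cap gain to avoid boosting silence
        out.append(s * gain)
    return out

fs = 48000
quiet = [0.05 * math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]
loud = [0.8 * math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]
out = agc(quiet + loud)

# After the detector settles, the quiet and loud passages emerge at similar levels:
print(round(max(abs(v) for v in out[40000:48000]), 2))
print(round(max(abs(v) for v in out[88000:96000]), 2))
```

A 16 dB difference between the two input passages is flattened to nearly nothing at the output; the gain cap is what prevents the circuit from amplifying noise to full level during silence.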


Automation

Automation in audio production refers to the process of controlling various parameters within a digital audio workstation (DAW) or hardware equipment over time without manual intervention. It allows users to program changes in settings such as volume, panning, effects, and plugin parameters to create dynamic and evolving soundscapes, enhancing the expressiveness and precision of audio projects.

Aux Return

An Aux Return, short for auxiliary return, is an input channel on a mixing console or audio interface designed to receive signals from auxiliary sends or effects processors. It allows users to blend processed audio signals back into the main mix, providing control over the level of effects such as reverb, delay, or chorus in the overall sound mix.

Auxiliary Equipment

Auxiliary equipment in audio refers to additional devices or tools used to complement primary audio systems or processes, often enhancing functionality or providing specific features. This category encompasses a wide range of equipment, including signal processors like compressors and equalizers, effects units such as reverbs and delays, as well as utility devices like DI boxes and headphone amplifiers.

Auxiliary Sends (Auxes)

Auxiliary sends, often abbreviated as auxes, are dedicated output channels on a mixing console or audio interface used to route signals from individual channels to external devices or effects processors. They enable users to create separate mix buses for effects such as reverb, delay, or chorus, providing control over the amount of processed signal blended back into the main mix.


AVB

See “Audio Video Bridging (AVB)”


Axis

In audio engineering, the axis of a microphone or loudspeaker is the imaginary line extending from the front of the device along its direction of greatest sensitivity or output. Sound arriving or radiating “on-axis” is captured or reproduced most directly, while “off-axis” sound is typically attenuated or colored. Understanding the orientation of the axis relative to sound sources is crucial for achieving the desired stereo imaging, spatial balance, and recording perspective.


Azimuth

Azimuth refers to the horizontal angle between a reference point and a given direction, commonly used in audio to describe the alignment of a recording or playback device, such as a microphone or a tape head, with respect to the source of sound. Proper adjustment of azimuth ensures accurate sound pickup or playback, minimizing phase discrepancies and optimizing stereo imaging and channel separation.