Sample Rate and Bit Depth Explained
For Artists
Sample rate determines how many times per second audio is captured. Bit depth determines the resolution of each sample. For most music production, 44.1 kHz or 48 kHz at 24-bit is the standard. Recording at higher sample rates offers marginal benefits during processing at the cost of larger file sizes. Bit depth matters more than sample rate for recording quality, because it directly affects your noise floor and dynamic range.
These numbers show up every time you create a new session, bounce a mix, or export a master. Most producers pick the default and never think about it again. That usually works fine. But understanding what these settings actually do helps you make informed decisions when they matter: recording, mixing, exporting stems, and delivering masters for distribution.
For where sample rate and bit depth fit in the overall production workflow, see Music Production Basics.
Sample Rate: How Often the Sound Is Measured
Digital audio works by measuring (sampling) the amplitude of a sound wave thousands of times per second. The sample rate is how many measurements happen each second.
Why 44.1 kHz Is the Standard
Human hearing tops out around 20 kHz. The Nyquist theorem says the sample rate must be at least twice the highest frequency you want to capture. Sampling at 44.1 kHz therefore captures frequencies up to 22.05 kHz, which covers everything a human can hear. This is why CD audio has been 44.1 kHz since 1982. It is not a compromise; it is a solved problem for playback.
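The arithmetic is simple enough to check yourself. A minimal sketch (the function name is just illustrative, not from any audio library):

```python
def min_sample_rate(highest_freq_hz: float) -> float:
    """Nyquist: the sample rate must be at least twice the
    highest frequency you want to capture."""
    return 2 * highest_freq_hz

# Human hearing tops out around 20,000 Hz.
print(min_sample_rate(20_000))  # 40000
# 44.1 kHz gives a little margin above that, capturing up to:
print(44_100 / 2)               # 22050.0
```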
Common Sample Rates
| Sample Rate | Where It Is Used | File Size vs. 44.1 kHz |
|---|---|---|
| 44.1 kHz | CD, streaming distribution, most final masters | Baseline |
| 48 kHz | Video, film, broadcast, many DAW defaults | ~9% larger |
| 88.2 kHz | Some high-resolution production sessions | ~2x larger |
| 96 kHz | High-resolution audio, archival recording | ~2.2x larger |
| 192 kHz | Niche audiophile applications | ~4.3x larger |
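The file-size column follows directly from the numbers: uncompressed PCM grows linearly with sample rate (and with bit depth and channel count). A quick sketch, assuming stereo 24-bit files:

```python
def pcm_bytes_per_second(sample_rate: int, bit_depth: int, channels: int = 2) -> int:
    """Uncompressed PCM data rate: samples/sec * bytes/sample * channels."""
    return sample_rate * (bit_depth // 8) * channels

baseline = pcm_bytes_per_second(44_100, 24)
for rate in (48_000, 88_200, 96_000, 192_000):
    ratio = pcm_bytes_per_second(rate, 24) / baseline
    print(f"{rate:>6} Hz: {ratio:.2f}x the size of 44.1 kHz")
```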
Does a Higher Sample Rate Sound Better?
In blind tests comparing 44.1 kHz and 96 kHz recordings, no study has reliably shown that listeners can hear a difference in the final playback. The frequencies above 22 kHz that higher sample rates capture are inaudible to humans.
Where higher sample rates can matter is during processing. Some plugins perform mathematical operations that benefit from more data points, particularly time-stretching, pitch-shifting, and certain saturation algorithms. Recording at 48 kHz or 96 kHz and converting to 44.1 kHz for the final master is a reasonable workflow. But recording at 192 kHz doubles your session's storage demands and CPU load for a difference that is, at best, theoretical.
Practical recommendation: Start sessions at 48 kHz. It is compatible with video workflows, handles processing well, and converts cleanly to 44.1 kHz for distribution. If your computer struggles with CPU load, 44.1 kHz is perfectly fine.
Bit Depth: The Resolution of Each Sample
Bit depth determines how precisely each sample is measured. Think of it as the number of possible volume levels each measurement can represent.
16-Bit vs. 24-Bit
| Bit Depth | Dynamic Range | Noise Floor | Used For |
|---|---|---|---|
| 16-bit | 96 dB | Audible in quiet recordings | CD audio, streaming distribution, final masters |
| 24-bit | 144 dB | Well below audible threshold | Recording, mixing, production sessions |
| 32-bit float | ~1,528 dB | Virtually unlimited (internal processing) | DAW internal processing, plugin calculations |
The practical difference: 16-bit audio has 65,536 possible amplitude levels per sample. 24-bit has 16,777,216. The extra resolution means 24-bit recordings have a much lower noise floor, which gives you more headroom and cleaner quiet passages.
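Both figures fall out of two textbook formulas: an n-bit sample can represent 2^n amplitude levels, and each bit adds roughly 6.02 dB of dynamic range. A sketch:

```python
def amplitude_levels(bits: int) -> int:
    """Number of discrete amplitude values an n-bit sample can hold."""
    return 2 ** bits

def dynamic_range_db(bits: int) -> float:
    """Approximate dynamic range of integer PCM: ~6.02 dB per bit."""
    return 6.02 * bits

print(amplitude_levels(16))         # 65536
print(amplitude_levels(24))         # 16777216
print(round(dynamic_range_db(16)))  # 96
print(round(dynamic_range_db(24)))  # 144
```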
Why Bit Depth Matters More Than Sample Rate
When you record at 24-bit, you can track at conservative levels (peaks around -12 dBFS) and still have roughly 132 dB of theoretical dynamic range below your peaks, plenty of room above the noise floor. At 16-bit with the same conservative recording level, you are working with noticeably less resolution in the quiet parts. This is why gain staging and bit depth work together: 24-bit recording lets you leave headroom without sacrificing quality.
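The headroom arithmetic is just subtraction (these are theoretical figures; real converters deliver somewhat less):

```python
def range_below_peak_db(bit_depth_range_db: float, peak_dbfs: float) -> float:
    """Dynamic range left between your recording peak and the noise floor."""
    return bit_depth_range_db + peak_dbfs  # peak_dbfs is negative

print(range_below_peak_db(144, -12))  # 132.0 at 24-bit
print(range_below_peak_db(96, -12))   # 84.0 at 16-bit
```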
Always record and mix at 24-bit. Convert to 16-bit only when bouncing the final master for CD or distribution. Your DAW handles the conversion internally when you export.
Dithering: The Conversion Step Nobody Explains
When you convert from 24-bit to 16-bit (which you do every time you bounce a final master), you lose 8 bits of resolution. Dithering adds an extremely low-level noise signal during this conversion to smooth out the quantization errors that would otherwise cause subtle distortion in quiet passages.
Use dithering when:
- Exporting a final master at 16-bit from a 24-bit session
- Bouncing to any lower bit depth
Do not use dithering when:
- Exporting at the same bit depth as your session
- Exporting stems (keep them at 24-bit)
- Bouncing within your session for further processing
Most mastering plugins and DAW export settings include a dither option. Enable it and choose a noise-shaping algorithm (POW-r Type 1 is a safe default for music). Apply dither once, on the final export. Applying it multiple times in the chain adds cumulative noise.
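As a rough illustration of what happens under the hood, here is a minimal TPDF (triangular probability density function) dither sketch. It is a toy that handles one sample at a time, not a substitute for your DAW's dither or noise-shaping algorithms:

```python
import random

def dither_to_16bit(sample_24bit: int) -> int:
    """Quantize one 24-bit integer sample to 16-bit with TPDF dither."""
    value = sample_24bit / 256  # dropping 8 bits divides the scale by 2^8
    # The sum of two uniforms gives triangular noise of +/-1 LSB at 16-bit
    # scale, turning correlated quantization distortion into benign hiss.
    noise = random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    quantized = round(value + noise)
    return max(-32768, min(32767, quantized))  # clamp to the 16-bit range
```

Without the noise term, quiet passages would round to the same few values on every cycle, producing audible distortion; with it, the error becomes a steady, far less objectionable low-level noise.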
What Settings to Use at Each Stage
| Stage | Sample Rate | Bit Depth | Reasoning |
|---|---|---|---|
| Recording | 48 kHz | 24-bit | Clean capture with headroom and processing flexibility |
| Mixing | Match recording session | 24-bit or 32-bit float | Do not convert mid-project |
| Stem exports | Match session | 24-bit | Full resolution for remixes, sync, Atmos |
| Master for distribution | 44.1 kHz | 16-bit (with dither) | Industry standard for streaming and CD |
| Archival master | 48 kHz or higher | 24-bit | Full-resolution backup of your finished mix |
For specific delivery requirements by distributor, see Music Distribution Guide. Platform loudness standards and format requirements are covered in Mastering for Streaming.
The Audiophile Debate
You will encounter discussions online about whether high-resolution audio (96 kHz/24-bit or higher) sounds meaningfully better for listeners. The scientific consensus: for playback, 44.1 kHz/16-bit reproduces everything the human ear can detect. Higher-resolution playback formats are a marketing category more than an audible improvement.
Where higher resolution genuinely helps is in the production process, where extra headroom and processing precision make your work easier. The listener never hears your session files. They hear the final master, and that master is almost always 44.1 kHz/16-bit.
If you are an independent artist working within a budget, spend your money on better recordings and mixing, not on higher sample rates for distribution.
Frequently Asked Questions
Should I record at 44.1 kHz or 48 kHz?
48 kHz is a good default. It works for both music and video projects, handles processing well, and converts cleanly to 44.1 kHz for distribution.
Does 24-bit recording make my music sound better?
It gives you more headroom and a lower noise floor during recording and mixing. The final 16-bit master will sound the same, but the process of getting there is cleaner.
What is 32-bit float and do I need it?
32-bit float is the internal processing format in most modern DAWs. It provides nearly unlimited headroom inside the DAW. You do not need to record at 32-bit float, but your DAW is already using it for calculations.
Do streaming platforms support high-resolution audio?
Some platforms offer lossless or high-resolution tiers, but most listeners stream in compressed formats. Your distributor will specify accepted delivery formats, typically 44.1 kHz/16-bit WAV.
Read Next:
From Session to Release:
Knowing your session settings is the technical foundation. Orphiq handles the organizational side so your productions move from finished master to distributed release without files getting lost in translation.
