TECH DETAILS

Chip Mixer - Day 4

Advanced Level

01 / Web Audio API Basics

Web Audio API uses a node-based graph. Audio flows from source nodes through processing nodes to the destination (speakers).

// Create audio context - the main hub
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

// Audio graph: Source -> Gain -> Analyser -> Destination
const masterGain = audioCtx.createGain();
const analyser = audioCtx.createAnalyser();
masterGain.connect(analyser);
analyser.connect(audioCtx.destination);

// Set master volume
masterGain.gain.value = 0.75;
iOS Audio Restriction

iOS requires a user gesture before playing audio. Call audioCtx.resume() inside a click/touch handler.
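A minimal unlock pattern looks like this (a sketch: `installAudioUnlock` is an illustrative helper name, not from the original, and `audioCtx` is the context created above):

```javascript
// Sketch: resume the audio context on the first user gesture.
function installAudioUnlock() {
  function unlockAudio() {
    if (audioCtx.state === 'suspended') {
      audioCtx.resume();
    }
    // Only needed once - remove the listeners after the first gesture
    document.removeEventListener('click', unlockAudio);
    document.removeEventListener('touchstart', unlockAudio);
  }
  document.addEventListener('click', unlockAudio);
  document.addEventListener('touchstart', unlockAudio);
}
```

Call installAudioUnlock() once during page setup; desktop browsers with autoplay policies (e.g. Chrome) also start the context suspended, so the same pattern covers them.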

02 / Oscillator Types

Oscillators generate periodic waveforms. Each type has a distinct sound character - the foundation of synthesis.

// Available waveforms in Web Audio
const waveforms = [
  'sine',     // Pure tone, soft - like a flute
  'square',   // Hollow, buzzy - classic 8-bit sound
  'sawtooth', // Bright, harsh - good for leads
  'triangle'  // Between sine/square - softer 8-bit
];

function playNote(frequency, type, duration) {
  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();

  osc.type = type; // 'square', 'sawtooth', etc.
  osc.frequency.setValueAtTime(frequency, audioCtx.currentTime);

  // Simple envelope: instant attack, exponential decay to near silence
  const now = audioCtx.currentTime;
  gain.gain.setValueAtTime(0.3, now);
  gain.gain.exponentialRampToValueAtTime(0.001, now + duration * 0.9);

  osc.connect(gain);
  gain.connect(masterGain);
  osc.start(now);
  osc.stop(now + duration);
}
Why exponentialRampToValueAtTime?

Human loudness perception is roughly logarithmic, so an exponential decay sounds more natural than a linear one. Note that exponentialRampToValueAtTime cannot ramp to exactly 0 (the spec forbids a zero target), which is why the envelopes above ramp to 0.001 instead.
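The difference is easy to see numerically. The formulas below are the interpolation curves the Web Audio spec defines for exponentialRampToValueAtTime and linearRampToValueAtTime, written out as plain functions for comparison:

```javascript
// Exponential ramp: v(t) = v0 * (v1/v0)^(t/T)
function expRamp(v0, v1, t, T) {
  return v0 * Math.pow(v1 / v0, t / T);
}

// Linear ramp: v(t) = v0 + (v1 - v0) * (t/T)
function linRamp(v0, v1, t, T) {
  return v0 + (v1 - v0) * (t / T);
}

// Ramping gain from 0.3 down to 0.001 over 0.5s: halfway through,
// the exponential curve has already dropped to ~0.017 while the
// linear one is still at ~0.15 - the fast early drop matches how
// we perceive loudness.
```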

03 / Noise Generation

White noise is essential for percussion (hi-hats, snares). We create it by filling a buffer with random values.

function createNoiseBuffer() {
  // Short buffer (100ms of noise)
  const bufferSize = audioCtx.sampleRate * 0.1;
  const buffer = audioCtx.createBuffer(1, bufferSize, audioCtx.sampleRate);
  const output = buffer.getChannelData(0);

  // Fill with random values between -1 and 1
  for (let i = 0; i < bufferSize; i++) {
    output[i] = Math.random() * 2 - 1;
  }
  return buffer;
}

function playNoise(duration) {
  const noise = audioCtx.createBufferSource();
  noise.buffer = createNoiseBuffer();

  const gain = audioCtx.createGain();
  gain.gain.setValueAtTime(0.3, audioCtx.currentTime);
  gain.gain.exponentialRampToValueAtTime(
    0.001,
    audioCtx.currentTime + duration * 0.8
  );

  noise.connect(gain);
  gain.connect(masterGain);
  noise.start();
  noise.stop(audioCtx.currentTime + duration);
}

04 / Step Sequencer Pattern

A step sequencer loops through a pattern, triggering notes at each step. Classic chiptune technique.

// Pattern: 16 steps, frequency in Hz for each (0 = silent)
const pattern = {
  ch1: [262, 0, 330, 0, 392, 0, 330, 0, 262, 0, 392, 0, 330, 0, 262, 0],
  ch2: [131, 131, 0, 131, 131, 0, 131, 131, 0, 131, 131, 0, 131, 131, 0, 0],
  noise: [1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0]
};

let currentStep = 0;
let bpm = 120;

function sequencerStep() {
  const stepDuration = 60 / bpm / 4; // 16th-note duration in seconds

  // Play each channel if a note exists at this step
  if (pattern.ch1[currentStep] > 0) {
    playNote(pattern.ch1[currentStep], 'square', stepDuration);
  }
  if (pattern.ch2[currentStep] > 0) {
    playNote(pattern.ch2[currentStep], 'sawtooth', stepDuration);
  }
  if (pattern.noise[currentStep] > 0) {
    playNoise(stepDuration);
  }

  // Advance to the next step (loop at 16)
  currentStep = (currentStep + 1) % 16;
}

// Start the sequencer (keep the id so the tempo can be changed later)
const intervalMs = (60 / bpm / 4) * 1000;
let sequencerInterval = setInterval(sequencerStep, intervalMs);

05 / Audio Visualizer

The AnalyserNode provides frequency data we can visualize as bars - classic equalizer effect.

// Configure the analyser created in section 01
analyser.fftSize = 256; // Determines frequency resolution
const bufferLength = analyser.frequencyBinCount; // Half of fftSize (128)
const dataArray = new Uint8Array(bufferLength);

function drawVisualizer() {
  // Get current frequency data (one byte per bin)
  analyser.getByteFrequencyData(dataArray);

  const barCount = 32;
  const step = Math.floor(bufferLength / barCount);

  for (let i = 0; i < barCount; i++) {
    // Get frequency value (0-255)
    const value = dataArray[i * step];
    // Map to bar height (55px max)
    const height = (value / 255) * 55;
    // Draw bar - `bars` is an array of DOM elements
    bars[i].style.height = height + 'px';
  }
  requestAnimationFrame(drawVisualizer);
}
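The loop above assumes `bars` is an array of DOM elements. A sketch of that setup (the names createBars and #visualizer are illustrative, not from the original):

```javascript
// Create `count` bar <div>s inside a container and return them as an array.
function createBars(container, count) {
  const bars = [];
  for (let i = 0; i < count; i++) {
    const bar = document.createElement('div');
    bar.className = 'bar';
    bar.style.height = '0px'; // drawVisualizer updates this each frame
    container.appendChild(bar);
    bars.push(bar);
  }
  return bars;
}

// const bars = createBars(document.querySelector('#visualizer'), 32);
```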

06 / BPM & Timing

BPM (beats per minute) determines the tempo. We calculate step duration from BPM.

// BPM to milliseconds conversion
function getStepDuration(bpm) {
  const beatsPerSecond = bpm / 60;
  const secondsPerBeat = 1 / beatsPerSecond;
  const secondsPer16th = secondsPerBeat / 4; // 4 steps per beat
  return secondsPer16th * 1000; // Convert to ms
}

// Example: 120 BPM
//   120 / 60 = 2 beats/sec
//   1 / 2    = 0.5 sec/beat
//   0.5 / 4  = 0.125 sec per 16th note = 125ms

// Update tempo dynamically
function updateBPM(newBPM) {
  bpm = newBPM;
  clearInterval(sequencerInterval);
  sequencerInterval = setInterval(sequencerStep, getStepDuration(bpm));
}
Timing Accuracy

setInterval isn't precise enough for music: callbacks can drift by several milliseconds and are heavily throttled in background tabs. For pro-level timing, schedule notes ahead on the sample-accurate audioCtx.currentTime clock.
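One way to do that is the standard "lookahead" pattern sketched below: a coarse setInterval tick fires frequently, and each tick schedules, on the precise audio clock, every step that falls inside a small window. The constant names are illustrative, and playNoteAt stands in for a hypothetical variant of playNote that accepts an explicit start time instead of starting at currentTime.

```javascript
const LOOKAHEAD_MS = 25;     // how often the scheduler tick runs (ms)
const SCHEDULE_AHEAD = 0.1;  // how far ahead to schedule audio (seconds)

let nextNoteTime = 0;        // audio-clock time of the next step
let schedStep = 0;

// Pure helper: advance one 16th note
function advanceStep(step, noteTime, bpm) {
  return { step: (step + 1) % 16, noteTime: noteTime + 60 / bpm / 4 };
}

function schedulerTick() {
  // Schedule every step that falls inside the lookahead window
  while (nextNoteTime < audioCtx.currentTime + SCHEDULE_AHEAD) {
    const freq = pattern.ch1[schedStep];
    if (freq > 0) {
      // playNoteAt: hypothetical playNote variant with a start time
      playNoteAt(freq, 'square', 60 / bpm / 4, nextNoteTime);
    }
    ({ step: schedStep, noteTime: nextNoteTime } =
      advanceStep(schedStep, nextNoteTime, bpm));
  }
}

function startScheduler() {
  nextNoteTime = audioCtx.currentTime;
  setInterval(schedulerTick, LOOKAHEAD_MS);
}
```

Even if a tick arrives a few milliseconds late, the notes it schedules still start at exact audio-clock times, so the groove stays tight.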