Recently I heard a beautiful song, "Flowers All the Way", so I wanted to use Three.js to visualize its music spectrum. The final effect is as follows:

The code is here: https://github.com/QuarkGluonPlasma/threejs-exercize
Implementing this effect is a chance to learn two things:
- using AudioContext for audio decoding and processing
- 3D scene rendering with Three.js
What are you waiting for? Let's start.
Approach
To visualize the music spectrum, you first need the spectrum data, which is obtained with the AudioContext API.
The AudioContext API can decode audio and run it through a series of processing steps, each of which is represented by a node.
After decoding, we pass the data through an analyser node to read the spectrum data, and then on to the destination for playback. So there are three processing nodes: the source, the analyser, and the destination.
const audioCtx = new AudioContext();
const source = audioCtx.createBufferSource();
const analyser = audioCtx.createAnalyser();

audioCtx.decodeAudioData(audioData /* the audio's binary data (ArrayBuffer) */, function(decodedData) {
    source.buffer = decodedData;
    source.connect(analyser);
    analyser.connect(audioCtx.destination);
});
First the audio is decoded, and a buffer source node holds the decoded data; it is connected to the analyser to obtain the spectrum data, and the analyser is connected to the destination for playback.
Calling source.start() starts feeding audio data through the graph, so the analyser can read the spectrum and the destination plays the sound.
The analyser API for reading spectrum data looks like this:
const frequencyData = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(frequencyData);
Each read returns 1024 values. We split them into groups of 50 and average each group, which leaves only Math.ceil(1024 / 50) = 21 spectrum values.
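Where does 1024 come from? The analyser's frequencyBinCount is always half of its fftSize, and fftSize defaults to 2048, so there are 1024 bins by default. A minimal standalone check, just to illustrate the numbers:

const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();

// frequencyBinCount is always fftSize / 2
console.log(analyser.fftSize);           // 2048 by default
console.log(analyser.frequencyBinCount); // 1024 spectrum bins

// Lowering fftSize gives fewer, coarser bins, e.g.:
// analyser.fftSize = 512; // -> 256 bins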
Then you can draw these spectrum data with Three.js.
The 21 values can be drawn as 21 cubes using BoxGeometry. For the material we use MeshPhongMaterial (named after Bui Tuong Phong, who proposed this lighting calculation). Its feature is that the surface reflects light; MeshBasicMaterial, by contrast, does not react to light at all.
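To make the difference concrete, here is a minimal sketch comparing the two materials (the size and color are arbitrary, and THREE is assumed to be the global loaded by three.js):

// Two cubes sharing one geometry but using different materials:
// the Phong one is shaded by lights, the Basic one is a flat unlit color.
const geometry = new THREE.BoxGeometry(10, 10, 10);

const phongCube = new THREE.Mesh(
    geometry,
    new THREE.MeshPhongMaterial({ color: 'yellowgreen' }) // reacts to lights, shows highlights
);

const basicCube = new THREE.Mesh(
    geometry,
    new THREE.MeshBasicMaterial({ color: 'yellowgreen' }) // ignores lights entirely
);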
Then add the petal rain effect, which we implemented before: use Sprites (planes that always face the camera) with petal textures, and change their positions frame by frame.
(See the earlier article "Start Three.js with flowers and rain all over the sky".)
Then set the light and camera respectively:
For the light we use a point light, which shines from a single position. Combined with the Phong material, it gives the reflective effect.
For the camera we use a perspective camera, which views the scene from a single point so that near objects look large and far objects look small, giving a sense of depth. An orthographic camera is a parallel projection, so it has no such effect: objects are the same size no matter how far away they are.
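For reference, the two camera constructors look roughly like this (the numbers here are placeholder values, not part of the project code):

const width = window.innerWidth;
const height = window.innerHeight;

// Perspective camera: field of view, aspect ratio, near and far clipping planes.
// Near objects look large, far objects look small.
const perspectiveCamera = new THREE.PerspectiveCamera(45, width / height, 0.1, 1000);

// Orthographic camera: left, right, top, bottom, near, far.
// Parallel projection, so size does not change with distance.
const orthographicCamera = new THREE.OrthographicCamera(
    -width / 2, width / 2, height / 2, -height / 2, 0.1, 1000
);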
Then render the scene with the renderer, refreshing it frame by frame with requestAnimationFrame.
Next, let's write the code:
Code implementation
First we fetch the audio file from the server and convert it to an ArrayBuffer.
ArrayBuffer is an API provided by the JS language for storing binary data. It is similar to Blob and Buffer; the differences are as follows (a short sketch of how they relate comes after the list):
- ArrayBuffer is a general API for storing binary data provided by JS language itself
- Blob is an API provided by the browser for file processing
- Buffer is an API provided by Node.js for IO operations
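As a quick illustration of how the three relate (the Buffer line only works in Node.js; this is just a sketch, not part of the project code):

// In the browser: a Blob can hand its bytes back as an ArrayBuffer
const blob = new Blob(['hello']);
blob.arrayBuffer().then(arrayBuffer => {
    console.log(arrayBuffer.byteLength); // 5
});

// In Node.js: a Buffer can be created as a view over an ArrayBuffer
// const buf = Buffer.from(new ArrayBuffer(8));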
Here, of course, we use an ArrayBuffer to store the audio's binary data.
fetch('./music/Flower all the way.mp3')
    .then(function(response) {
        if (!response.ok) {
            throw new Error("HTTP error, status = " + response.status);
        }
        return response.arrayBuffer();
    })
    .then(function(arrayBuffer) {
    });
Then the AudioContext API handles decoding and the subsequent processing, split across the three nodes: source, analyser, and destination:
let audioCtx = new AudioContext();
let source, analyser;

function getData() {
    source = audioCtx.createBufferSource();
    analyser = audioCtx.createAnalyser();

    return fetch('./music/Flower all the way.mp3')
        .then(function(response) {
            if (!response.ok) {
                throw new Error("HTTP error, status = " + response.status);
            }
            return response.arrayBuffer();
        })
        .then(function(arrayBuffer) {
            audioCtx.decodeAudioData(arrayBuffer, function(decodedData) {
                source.buffer = decodedData;
                source.connect(analyser);
                analyser.connect(audioCtx.destination);
            });
        });
}
Even after the audio has been fetched and wired up with the AudioContext, it cannot be played right away: browsers restrict autoplay, so the user must interact with the page before audio may play.
To satisfy this restriction, we listen for the mousedown event and only start playback after the user clicks.
function triggerHandler() {
    getData().then(function() {
        source.start(0); // Play from the beginning
        create();        // Create the Three.js objects
        render();        // Render
    });
    document.removeEventListener('mousedown', triggerHandler);
}
document.addEventListener('mousedown', triggerHandler);
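A related detail worth knowing: an AudioContext created before any user gesture may start out in the 'suspended' state. If you run into that, a common fix (not part of the original code, just a hedged sketch using the audioCtx variable above) is to resume the context inside the click handler:

document.addEventListener('mousedown', function onFirstClick() {
    // Resume the context if the browser created it in the 'suspended' state
    if (audioCtx.state === 'suspended') {
        audioCtx.resume();
    }
    document.removeEventListener('mousedown', onFirstClick);
});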
You can then create various objects in the 3D scene:
Create the cubes:
Because the spectrum has 1024 values and we group them in chunks of 50, we only need to render 21 cubes:
const cubes = new THREE.Group();
const STEP = 50;
const CUBE_NUM = Math.ceil(1024 / STEP);

for (let i = 0; i < CUBE_NUM; i++) {
    const geometry = new THREE.BoxGeometry(10, 10, 10);
    const material = new THREE.MeshPhongMaterial({ color: 'yellowgreen' });
    const cube = new THREE.Mesh(geometry, material);

    cube.translateX((10 + 10) * i);
    cubes.add(cube);
}
cubes.translateX(-(10 + 10) * CUBE_NUM / 2);
scene.add(cubes);
Each cube is a Mesh whose geometry is a BoxGeometry with length, width, and height of 10, and whose material is a MeshPhongMaterial with the color yellowgreen.
Each cube is offset a bit along the x axis, and the whole group is then shifted back by half of its total width so that it ends up centered.
The spectrum can be visualized through these cubes.
Then come the petals, created with Sprites, since a Sprite is a plane that always faces the camera. Each one gets a random petal texture and a random position.
const FLOWER_NUM = 400;

/**
 * Petal group
 */
const petal = new THREE.Group();

var flowerTexture1 = new THREE.TextureLoader().load("img/flower1.png");
var flowerTexture2 = new THREE.TextureLoader().load("img/flower2.png");
var flowerTexture3 = new THREE.TextureLoader().load("img/flower3.png");
var flowerTexture4 = new THREE.TextureLoader().load("img/flower4.png");
var flowerTexture5 = new THREE.TextureLoader().load("img/flower5.png");
var imageList = [flowerTexture1, flowerTexture2, flowerTexture3, flowerTexture4, flowerTexture5];

for (let i = 0; i < FLOWER_NUM; i++) {
    var spriteMaterial = new THREE.SpriteMaterial({
        map: imageList[Math.floor(Math.random() * imageList.length)],
    });
    var sprite = new THREE.Sprite(spriteMaterial);
    petal.add(sprite);

    sprite.scale.set(40, 50, 1);
    sprite.position.set(2000 * (Math.random() - 0.5), 500 * Math.random(), 2000 * (Math.random() - 0.5));
}
scene.add(petal);
With the spectrum cubes and a pile of petals added to the scene, the objects are done.
Then set up the camera. We use a perspective camera, specifying the field of view, the aspect ratio of the viewport, and the near and far clipping distances.

const width = window.innerWidth;
const height = window.innerHeight;

const camera = new THREE.PerspectiveCamera(45, width / height, 0.1, 1000);
camera.position.set(0, 300, 400);
camera.lookAt(scene.position);
Then set up the light, using a point light source:
const pointLight = new THREE.PointLight(0xffffff);
pointLight.position.set(0, 300, 40);
scene.add(pointLight);
Then you can render with the renderer, using requestAnimationFrame to refresh frame by frame.
const renderer = new THREE.WebGLRenderer();

function render() {
    renderer.render(scene, camera);
    requestAnimationFrame(render);
}
render();
During rendering, the positions of the petals and the heights of the spectrum cubes need to be recalculated every frame.
The petals keep falling, and return to the top once they drop below a certain height:
petal.children.forEach(sprite => {
    sprite.position.y -= 5;
    sprite.position.x += 0.5;
    if (sprite.position.y < -height / 2) {
        sprite.position.y = height / 2;
    }
    if (sprite.position.x > 1000) {
        sprite.position.x = -1000;
    }
});
For the spectrum cubes, use the analyser to get the latest spectrum data, compute the average of each group, and set it as the cube's y scale.
// Get the latest spectrum data
const frequencyData = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(frequencyData);

// Compute the average of each group
const averageFrequencyData = [];
for (let i = 0; i < frequencyData.length; i += STEP) {
    let sum = 0;
    let count = 0;
    // Stay inside the array: the last group has fewer than STEP bins
    for (let j = i; j < i + STEP && j < frequencyData.length; j++) {
        sum += frequencyData[j];
        count++;
    }
    averageFrequencyData.push(sum / count);
}

// Set the scale.y of each cube
for (let i = 0; i < averageFrequencyData.length; i++) {
    cubes.children[i].scale.y = Math.floor(averageFrequencyData[i] * 0.4);
}
You can also rotate the scene around the X axis by a small angle each frame:
scene.rotateX(0.005);
Finally, add the orbit controls, which let you drag with the mouse to adjust the camera's position, and with it the distance and angle of what you see.
const controls = new THREE.OrbitControls(camera);
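Note that depending on the three.js version, OrbitControls may need the renderer's DOM element as a second argument, roughly like this:

// Newer three.js builds expect the element the controls should listen on
const controls = new THREE.OrbitControls(camera, renderer.domElement);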
The final effect looks like this: the petals drift down, and the spectrum cubes beat along with the music.

The complete code has been uploaded to GitHub:
https://github.com/QuarkGluonPlasma/threejs-exercize
A copy is also included here:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Music spectrum visualization</title>
    <style>
        body {
            margin: 0;
            overflow: hidden;
        }
    </style>
    <script src="./js/three.js"></script>
    <script src="./js/OrbitControls.js"></script>
</head>
<body>
<script>
    let audioCtx = new AudioContext();
    let source, analyser;

    function getData() {
        source = audioCtx.createBufferSource();
        analyser = audioCtx.createAnalyser();

        return fetch('./music/Flower all the way.mp3')
            .then(function(response) {
                if (!response.ok) {
                    throw new Error("HTTP error, status = " + response.status);
                }
                return response.arrayBuffer();
            })
            .then(function(arrayBuffer) {
                audioCtx.decodeAudioData(arrayBuffer, function(decodedData) {
                    source.buffer = decodedData;
                    source.connect(analyser);
                    analyser.connect(audioCtx.destination);
                });
            });
    }

    function triggerHandler() {
        getData().then(function() {
            source.start(0);
            create();
            render();
        });
        document.removeEventListener('mousedown', triggerHandler);
    }
    document.addEventListener('mousedown', triggerHandler);

    const STEP = 50;
    const CUBE_NUM = Math.ceil(1024 / STEP);
    const FLOWER_NUM = 400;

    const width = window.innerWidth;
    const height = window.innerHeight;

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(45, width / height, 0.1, 1000);
    const renderer = new THREE.WebGLRenderer();

    /**
     * Petal group
     */
    const petal = new THREE.Group();

    /**
     * Spectrum cubes
     */
    const cubes = new THREE.Group();

    function create() {
        const pointLight = new THREE.PointLight(0xffffff);
        pointLight.position.set(0, 300, 40);
        scene.add(pointLight);

        camera.position.set(0, 300, 400);
        camera.lookAt(scene.position);

        renderer.setSize(width, height);
        document.body.appendChild(renderer.domElement);
        renderer.render(scene, camera);

        for (let i = 0; i < CUBE_NUM; i++) {
            const geometry = new THREE.BoxGeometry(10, 10, 10);
            const material = new THREE.MeshPhongMaterial({ color: 'yellowgreen' });
            const cube = new THREE.Mesh(geometry, material);

            cube.translateX((10 + 10) * i);
            cube.translateY(1);
            cubes.add(cube);
        }
        cubes.translateX(-(10 + 10) * CUBE_NUM / 2);

        var flowerTexture1 = new THREE.TextureLoader().load("img/flower1.png");
        var flowerTexture2 = new THREE.TextureLoader().load("img/flower2.png");
        var flowerTexture3 = new THREE.TextureLoader().load("img/flower3.png");
        var flowerTexture4 = new THREE.TextureLoader().load("img/flower4.png");
        var flowerTexture5 = new THREE.TextureLoader().load("img/flower5.png");
        var imageList = [flowerTexture1, flowerTexture2, flowerTexture3, flowerTexture4, flowerTexture5];

        for (let i = 0; i < FLOWER_NUM; i++) {
            var spriteMaterial = new THREE.SpriteMaterial({
                map: imageList[Math.floor(Math.random() * imageList.length)],
            });
            var sprite = new THREE.Sprite(spriteMaterial);
            petal.add(sprite);

            sprite.scale.set(40, 50, 1);
            sprite.position.set(2000 * (Math.random() - 0.5), 500 * Math.random(), 2000 * (Math.random() - 0.5));
        }

        scene.add(cubes);
        scene.add(petal);
    }

    function render() {
        petal.children.forEach(sprite => {
            sprite.position.y -= 5;
            sprite.position.x += 0.5;
            if (sprite.position.y < -height / 2) {
                sprite.position.y = height / 2;
            }
            if (sprite.position.x > 1000) {
                sprite.position.x = -1000;
            }
        });

        const frequencyData = new Uint8Array(analyser.frequencyBinCount);
        analyser.getByteFrequencyData(frequencyData);

        const averageFrequencyData = [];
        for (let i = 0; i < frequencyData.length; i += STEP) {
            let sum = 0;
            let count = 0;
            // Stay inside the array: the last group has fewer than STEP bins
            for (let j = i; j < i + STEP && j < frequencyData.length; j++) {
                sum += frequencyData[j];
                count++;
            }
            averageFrequencyData.push(sum / count);
        }

        for (let i = 0; i < averageFrequencyData.length; i++) {
            cubes.children[i].scale.y = Math.floor(averageFrequencyData[i] * 0.4);
        }

        scene.rotateX(0.005);

        renderer.render(scene, camera);
        requestAnimationFrame(render);
    }

    const controls = new THREE.OrbitControls(camera);
</script>
</body>
</html>
Summary
In this article, we learned how to do audio spectrum visualization.
First, we fetched the audio data and stored it in an ArrayBuffer, JS's standard API for holding binary data. Similar APIs are Blob and Buffer: Blob is the browser's API for file binary data, and Buffer is Node.js's API for IO data.
Then we used the AudioContext API to obtain the spectrum data and play the audio. It is composed of a series of nodes: here the source holds the audio data, the analyser provides the spectrum data, and the destination plays the sound.
Then comes the rendering of the 3D scene. The spectrum cubes and the petal rain are drawn with Mesh and Sprite respectively. A Mesh is an object composed of a geometry and a material; here we used BoxGeometry and the reflective MeshPhongMaterial. A Sprite is a plane that always faces the camera, which is what displays the petals.
Then we added a point light, which together with the Phong material produces the reflective effect.
Using a perspective camera gives the 3D effect of near objects looking large and far objects looking small; an orthographic camera cannot do this, since it is a parallel projection and objects stay the same size regardless of distance.
Then, in each frame of rendering, we update the positions of the petals, read the spectrum data, and update the scale of the cubes.
In this article, we not only learned how to get audio spectrum data with AudioContext, but also learned 3D rendering with Three.js. Combining data with rendering is exactly what visualization is about: presenting data better through an appropriate display method.
Visualization is one application scenario of Three.js; games are another. We will explore those later.