Important
This version of the project is now Legacy. Active development has moved to the Radio-Stream-Player-PHP repository. This document serves as the technical guide for the final standalone version.
This project is a single-page web application that plays radio streams and provides real-time audio visualizations. It is built with vanilla HTML, CSS, and JavaScript, with no external frameworks or build steps, making it lightweight and easy to run.
Core Goals:
- Provide a simple, elegant user interface.
- Offer dynamic and engaging audio visualizations using the Web Audio API.
- Be self-contained and easy to deploy.
The project consists of a few key files:
- index.html: The main entry point and structure for the application. It contains the player UI, station list, and control buttons.
- popout.html: A minimal version of the UI for the pop-out player window. It communicates with the main window via postMessage.
- stations.js: A simple ES module that exports the array of default radio stations.
- state.js: A centralized Pub/Sub StateManager class that manages all application state and triggers UI updates.
- player.js: An ES module that handles all core player logic, including audio playback, UI controls (play, volume, station select), and interaction with the state manager.
- visualizer.js: An ES module responsible for all Web Audio API analysis, canvas/DOM drawing, and VU meter style logic.
- styles.css: Contains all styling for the application, including layout, theming (light/dark modes), and the appearance of all VU meter styles.
- script.js: The main entry point. It imports the other modules (player.js, visualizer.js) and initializes the application.
- settings.js: Handles the logic for the settings modal, including theme toggling, custom stations, background preferences, and favorites filtering.
- popout-script.js: The entry point for the pop-out window. It reuses player.js logic but adapts it for the secondary window context.
- CHANGELOG.md: The history of changes and versions (formerly PROGRESS.md).
- README.md: The user-facing documentation.
- DEVELOPER_GUIDE.md: (This file) The technical documentation for contributors.
In version 1.4.0+, the player moved from a simple global object to a robust, class-based StateManager found in state.js.
- Encapsulation: The state is held privately within the StateManager class.
- Controlled Mutations: Public methods like setPlaying(status) and setVolume(level) are the only ways to modify the state.
- Reactivity (Pub/Sub): Modules can call subscribe(callback) to listen for state changes. When state changes, all registered subscribers are notified, allowing the UI to react instantly and stay in sync regardless of whether a change originated from the main window or the settings modal.
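A minimal sketch of this pattern looks like the following. The method names match this guide; the real state.js holds more fields (station, VU style, theme, etc.), so treat this as an illustration rather than the actual class.

```javascript
// Minimal Pub/Sub state manager, mirroring the pattern in state.js.
class StateManager {
  #state = { playing: false, volume: 1 }; // private: no direct outside access
  #subscribers = [];

  // Register a callback that runs on every state change.
  subscribe(callback) {
    this.#subscribers.push(callback);
  }

  // Controlled mutations: the only ways to change state.
  setPlaying(status) {
    this.#state.playing = status;
    this.#notify();
  }

  setVolume(level) {
    this.#state.volume = level;
    this.#notify();
  }

  // Read access returns a copy, keeping the internal object private.
  get state() {
    return { ...this.#state };
  }

  #notify() {
    for (const cb of this.#subscribers) cb(this.state);
  }
}
```

Because every mutation funnels through `#notify()`, any module (player, settings modal, pop-out) that subscribed stays in sync automatically.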
The audio visualization is powered by the Web Audio API. The audio signal flows through a series of connected nodes, all of which are created and managed within visualizer.js.
- <audio> element: The source of the stream.
- MediaElementAudioSourceNode: Connects the <audio> element to the Web Audio API graph.
- ChannelSplitterNode: Splits the stereo source into two separate mono channels (Left and Right).
- AnalyserNode (x2): One for each channel. These nodes do not affect the audio but provide data for visualization (time-domain and frequency-domain).
- AudioDestinationNode: The output, i.e. the user's speakers. The source is connected directly to the destination so the analysis doesn't interfere with playback.
Diagram:
┌──────────────────────────┐
<audio> element -> │ MediaElementAudioSource ├─> speakers (AudioDestination)
└──────────────────────────┘
│
┌──────────────────────────┐
│ ChannelSplitter │
└──────────────────────────┘
│
┌────────────────────┴────────────────────┐
│ (Channel 0) (Channel 1) │
▼ ▼
┌───────────────────┐ ┌───────────────────┐
│ AnalyserNode (L) │ │ AnalyserNode (R) │
└───────────────────┘ └───────────────────┘
│ │
▼ ▼
(JS Analysis) (JS Analysis)
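The graph above can be sketched as a small helper. This is an illustration, not the literal code in visualizer.js: the function name `buildAnalysisGraph` is made up here, and the real file may configure analyser options (fftSize, smoothing) differently.

```javascript
// Wire up the analysis graph: source -> destination for playback,
// source -> splitter -> two analysers for visualization data.
function buildAnalysisGraph(ctx, source) {
  // Playback path: straight to the speakers, unaffected by analysis.
  source.connect(ctx.destination);

  // Analysis path: split stereo into two mono channels.
  const splitter = ctx.createChannelSplitter(2);
  source.connect(splitter);

  const analyserL = ctx.createAnalyser();
  const analyserR = ctx.createAnalyser();
  splitter.connect(analyserL, 0); // splitter output 0 = left channel
  splitter.connect(analyserR, 1); // splitter output 1 = right channel

  return { analyserL, analyserR };
}

// Browser usage (sketch):
// const ctx = new AudioContext();
// const source = ctx.createMediaElementSource(document.querySelector("audio"));
// const { analyserL, analyserR } = buildAnalysisGraph(ctx, source);
```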
The visualization is handled by a requestAnimationFrame loop inside the updateVUMeters function within visualizer.js.
- Loop: On each frame, it gets the latest audio data from the AnalyserNodes.
- Style Switching: A switch statement checks the current state.vuStyle and calls the appropriate rendering function (e.g., updateClassicVu, updateLedVu).
- DOM Manipulation: The rendering functions update the DOM directly by changing CSS properties (height, transform), updating SVG attributes, or drawing on a <canvas>.
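The style-switching step can be sketched as a lookup that the loop calls each frame. The function `pickRenderer` is invented here for testability; in visualizer.js the switch lives directly inside updateVUMeters and covers every style in VU_STYLES.

```javascript
// Resolve the active VU style to its rendering function.
function pickRenderer(vuStyle, renderers) {
  switch (vuStyle) {
    case "classic": return renderers.updateClassicVu;
    case "led":     return renderers.updateLedVu;
    // ...one case per entry in VU_STYLES...
    default:        return renderers.updateClassicVu; // fall back to a known style
  }
}

// Browser wiring (sketch of the requestAnimationFrame loop):
// function updateVUMeters() {
//   const levelL = readLevel(analyserL); // hypothetical helper
//   const levelR = readLevel(analyserR);
//   pickRenderer(state.vuStyle, renderers)(levelL, levelR);
//   requestAnimationFrame(updateVUMeters);
// }
```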
The pop-out feature uses window.open() to create a new, smaller browser window.
- State Transfer: The main window is paused, and the new pop-out window is initialized with the current station URL passed as a query parameter.
- Communication: When the pop-out window is closed, it uses window.opener.postMessage() to notify the main window. The main window listens for this message to restore its UI and resume playback if necessary.
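The handshake can be sketched as follows. The message shape `{ type: "popout-closed" }` is an assumption for illustration; check popout-script.js for the actual payload. The routing logic is pulled into a pure function so it is easy to test.

```javascript
// Main-window side: decide whether an incoming message is the pop-out
// telling us it closed, and if so run the restore callback.
function handlePopoutMessage(event, expectedOrigin, onClosed) {
  if (event.origin !== expectedOrigin) return false; // ignore foreign frames
  if (event.data && event.data.type === "popout-closed") {
    onClosed(); // restore UI / resume playback here
    return true;
  }
  return false;
}

// Browser wiring (sketch):
// player.js:
//   window.addEventListener("message", (e) =>
//     handlePopoutMessage(e, window.location.origin, resumePlayback));
// popout-script.js, on close:
//   window.opener.postMessage({ type: "popout-closed" }, window.location.origin);
```

Checking `event.origin` before trusting the payload is the standard postMessage hygiene step.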
This is the simplest contribution. The station list is now managed in its own file.
- Open stations.js.
- Add a new object to the stations array.
- The object must have a name (string) and a url (string) pointing to the audio stream.
```javascript
export const stations = [
  // ... existing stations
  { name: "My New Station", url: "https://your-stream-url/stream" }
];
```

Follow these steps to add a new visualization style (e.g., "neon").
- visualizer.js - Register the Style: Add your new style name to the VU_STYLES array at the top of the file: const VU_STYLES = ['classic', 'led', /* ... */ 'retro', 'neon'];
- visualizer.js - Create Setup Function: Create a function createNeonVu(container, channel) that builds the initial HTML/SVG/Canvas structure for one channel of your new meter. Add a call to it in the updateVuStyle function's switch statement.
- visualizer.js - Create Update Function: Create a function updateNeonVu(levelLeft, levelRight) that updates your meter's visuals based on the audio data. Add a call to it in the updateVUMeters function's switch statement.
- visualizer.js - Create Reset Function (Optional): If your meter needs to be reset to a zero state when paused, add logic to the resetVuMeters function.
- styles.css - Add Styling: Add the necessary CSS rules to style your new meter. Use a class selector based on the style name (e.g., .vu-meters.vu-neon).
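A skeleton for the hypothetical "neon" style might look like this. Note one deliberate deviation: the bar elements are passed as parameters here so the sketch stays self-contained, whereas in visualizer.js updateNeonVu(levelLeft, levelRight) would read them from module-level state created by the setup function.

```javascript
// Setup: build one channel's DOM structure inside the given container.
function createNeonVu(container, channel) {
  const bar = document.createElement("div");
  bar.className = `neon-bar neon-${channel}`; // styled in styles.css
  container.appendChild(bar);
  return bar;
}

// Pure mapping from an analyser level (0..1) to a CSS height percentage.
function neonHeightPercent(level) {
  return Math.min(100, Math.max(0, level * 100)); // clamp defensively
}

// Update: called once per animation frame with the latest levels.
function updateNeonVu(barL, barR, levelLeft, levelRight) {
  barL.style.height = neonHeightPercent(levelLeft) + "%";
  barR.style.height = neonHeightPercent(levelRight) + "%";
}
```

Keeping the level-to-height math in a separate pure function makes the meter easy to unit-test without a browser.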
Many online radio stations (especially older Shoutcast/Icecast servers) still broadcast over HTTP. Modern web browsers enforce a security policy called Mixed Content, which blocks insecure HTTP resources (like audio streams) from loading on a secure HTTPS page.
To solve this, the application routes insecure streams through a Cloudflare Worker proxy hosted at api.djay.ca.
How it works:
- Frontend Routing: When player.js attempts to play a stream, it checks the URL. If the URL starts with http://, it intercepts the request and re-routes it through the secure proxy: https://api.djay.ca/?url=http://....
- Worker Proxying: The Cloudflare Worker receives the request, strips any Icy-MetaData headers (which cause audio playback artifacts in simple <audio> tags), fetches the insecure stream server-side, and pipes the pure audio response back to the browser over HTTPS with permissive CORS headers.
- Metadata Extraction: The Worker also hosts a separate endpoint at https://api.djay.ca/metadata?url=.... The frontend polls this endpoint every 12 seconds. The Worker fetches the stream, explicitly asks for Icy-MetaData, reads only the stream headers to extract the StreamTitle, and returns it as a clean JSON object for the UI to display in the "Now Playing" marquee.
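The frontend routing step boils down to a small URL rewrite, sketched below. Whether player.js percent-encodes the target URL (as shown here) or passes it through verbatim is an assumption; verify against the actual code.

```javascript
const PROXY = "https://api.djay.ca/";

// Route plain-HTTP streams through the proxy; HTTPS streams play directly.
function resolveStreamUrl(url) {
  if (url.startsWith("http://")) {
    return PROXY + "?url=" + encodeURIComponent(url);
  }
  return url;
}

// Example:
// audioEl.src = resolveStreamUrl(station.url);
```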
Worker Security:
The Worker script checks the Origin and Referer headers. It will return a 403 Forbidden response to any request that does not originate from localhost or a *.djay.ca domain, preventing abuse of the proxy bandwidth.
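The allow-list check amounts to parsing the Origin (or Referer) header and matching its hostname, roughly as below. This is a sketch of the policy described above, not the deployed Worker; whether the apex domain djay.ca itself is allowed (in addition to *.djay.ca subdomains) is an assumption.

```javascript
// Return true if the request's Origin/Referer hostname is on the allow-list.
function isAllowedOrigin(headerValue) {
  if (!headerValue) return false;
  let host;
  try {
    host = new URL(headerValue).hostname;
  } catch {
    return false; // malformed header -> reject
  }
  return (
    host === "localhost" ||
    host === "127.0.0.1" ||
    host === "djay.ca" ||        // assumption: apex allowed alongside subdomains
    host.endsWith(".djay.ca")
  );
}

// In the Worker's fetch handler (sketch):
// const origin = request.headers.get("Origin") || request.headers.get("Referer");
// if (!isAllowedOrigin(origin)) return new Response("Forbidden", { status: 403 });
```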
Cloudflare's Free Tier allows for 100,000 requests per day. It's important to understand how the player utilizes these requests to estimate your maximum concurrent user capacity.
How Requests Are Triggered: When a user listens to a station, requests are made in two ways:
- Audio Stream (1 Request): The initial connection to the /proxy endpoint consumes exactly 1 request. Because it is a continuous streaming connection, it stays open and does not generate additional requests, regardless of how long the user listens.
- Metadata Polling (Multiple Requests): To display live track information, the frontend polls the /metadata endpoint every 12 seconds.
Usage Math per User:
- 1 User listening for 1 Minute = 1 (Audio) + 5 (Metadata) = 6 Requests
- 1 User listening for 1 Hour = 60 mins / 12 secs = 300 Metadata requests. Total = 301 Requests / Hour.
Capacity Estimation: With a limit of 100,000 requests per day:
- Maximum total listening hours per day: 100,000 / 301 ≈ 332 hours
- Scenario A (Dedicated Listeners): If your average user listens for 3 hours a day, you can support roughly ~110 daily active users on the completely free tier.
- Scenario B (Casual Listeners): If your average user listens for 1 hour a day, you can support roughly ~330 daily active users.
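The arithmetic above generalizes to two small helpers, handy when experimenting with different polling intervals:

```javascript
// Requests per listening hour: 1 stream connection + one metadata poll
// per interval.
function requestsPerHour(pollIntervalSeconds) {
  return 1 + Math.floor(3600 / pollIntervalSeconds);
}

// Total listening hours a daily request budget can sustain.
function maxDailyListeningHours(dailyLimit, pollIntervalSeconds) {
  return dailyLimit / requestsPerHour(pollIntervalSeconds);
}

// requestsPerHour(12)                 -> 301
// requestsPerHour(60)                 -> 61
// maxDailyListeningHours(100000, 12)  -> ≈ 332
```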
How to Optimize (Reduce Requests): If you are approaching the 100k daily limit, you can safely decrease the metadata polling frequency in the frontend code.
- Open player.js and popout-script.js.
- Locate the setInterval call inside updateNowPlaying() (around line 365 in player.js).
- Change the 12000 (12 seconds) value to something higher, like 30000 (30 seconds) or 60000 (1 minute). Note: A 60-second interval drops the usage from 301 requests/hour down to just 61 requests/hour, effectively quintupling your user capacity.
Note: If your application grows beyond these limits, you can easily upgrade to Cloudflare Workers Paid ($5/month for 10 million requests).
When preparing a new release (incrementing the version number):
- Update CHANGELOG.md: Move the content from [Unreleased] to a new section with the version number and date (e.g., [1.2.0] - 2025-12-25).
- Create Release Notes: Create a new file in docs/ named RELEASE_vX.X.X.md (e.g., docs/RELEASE_v1.2.0.md).
  - This file should contain a user-friendly summary of the release, highlighting key features and changes.
  - It serves as the source for GitHub Release notes or announcements.
- Update Version: Ensure any version numbers in the code or README.md (if applicable) are updated.
This guide should be kept up-to-date with any significant architectural changes.