Using StreamSaver.js to Handle Large File Downloads in Web Applications
Overview
This article introduces StreamSaver.js, a library that enables efficient and seamless large file downloads directly from web browsers.
Traditional downloads via <a> tags can run into problems with large files: the entire file may be buffered in memory, and for types the browser can display natively (such as .txt or .mp4) the browser may render the content instead of triggering a download. StreamSaver.js addresses both concerns by streaming data to disk in chunks, reducing memory usage and improving performance.
Environment Setup
Before diving into StreamSaver.js, prepare one or more downloadable files. You can use remote resources or serve local files via a development server.
For example, when using Vite with Vue, place a test.txt file inside the public directory. Once the dev server is running, the examples below assume the file is reachable at http://localhost:port/public/test.txt; note that Vite itself serves public/ assets from the root path, so adjust the URL to match your server's static-file configuration.
Installation
CDN
Include the StreamSaver.js script directly in your HTML (loaded here from a local copy; a CDN-hosted build also works):
<script src="../StreamSaver.js"></script>
npm
Install via npm:
npm install streamsaver
Then import it where needed:
import streamSaver from "streamsaver";
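Since StreamSaver.js builds on the Web Streams API, it can be useful to feature-detect support before offering streamed downloads. A minimal sketch (the helper name supportsStreamSaver is ours, not part of the library):

```javascript
// Checks for the Web Streams pieces StreamSaver.js relies on.
// pipeTo is optional in practice (a manual reader/writer pump can
// substitute for it), but its presence enables the simplest code path.
function supportsStreamSaver() {
  return typeof WritableStream !== 'undefined' &&
         typeof ReadableStream !== 'undefined' &&
         typeof ReadableStream.prototype.pipeTo === 'function';
}
```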
Basic Usage
To download a .txt file using StreamSaver.js:
- Create a writable stream with streamSaver.createWriteStream('filename.ext').
- Fetch the file URL with fetch() and pipe its response body into the created stream.
- Close the stream once all data has been written.
Example implementation:
<button id="download">Download</button>
<script src="../StreamSaver.js"></script>
<script>
download.onclick = () => {
  // Create a writable stream that saves to disk as test.txt
  const fileStream = streamSaver.createWriteStream('test.txt');
  fetch('http://localhost:9988/public/test.txt')
    .then(res => {
      const readableStream = res.body;
      // Preferred path: let the browser pipe the response directly to disk
      if (window.WritableStream && readableStream.pipeTo) {
        return readableStream.pipeTo(fileStream)
          .then(() => console.log('Download complete'));
      }
      // Fallback: pump chunks manually from the reader to the writer
      const writer = fileStream.getWriter();
      const reader = readableStream.getReader();
      const pump = () => reader.read()
        .then(result => result.done
          ? writer.close()
          : writer.write(result.value).then(pump)
        );
      return pump();
    });
};
</script>
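The manual reader/writer fallback in the example can also be factored into a standalone helper. The sketch below (pumpTo is our name, not a StreamSaver.js API) uses only standard Web Streams calls and applies back-pressure by awaiting each write:

```javascript
// Copy every chunk from a ReadableStream into a WritableStream,
// then close the destination. Works wherever Web Streams exist,
// including environments lacking ReadableStream.prototype.pipeTo.
async function pumpTo(readable, writable) {
  const reader = readable.getReader();
  const writer = writable.getWriter();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    await writer.write(value); // wait for the chunk to flush before reading more
  }
  await writer.close();
}
```

With such a helper, the fallback branch of the download handler reduces to a single pumpTo(res.body, fileStream) call.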
If cross-origin errors occur, point StreamSaver.js at a self-hosted copy of mitm.html (shipped in the library's repository):
streamSaver.mitm = 'https://your-server.com/mitm.html';
Downloading Multiple Files as ZIP
To package multiple files into a ZIP archive for download:
- Create a ZIP stream using zip-stream.js.
- Enqueue each file's stream into the ZIP.
- Close the ZIP stream upon completion.
Example:
<button id="download">Download ZIP</button>
<script src="../StreamSaver.js"></script>
<script src="zip-stream.js"></script>
<script>
const urls = [
{ fileName: 'test.txt', url: 'http://localhost:9988/public/test.txt' },
{ fileName: 'test.csv', url: 'http://localhost:9988/public/test.csv' }
];
download.onclick = () => {
  const fileStream = streamSaver.createWriteStream('test.zip');
  const readableZipStream = new ZIP({
    // pull() is invoked when the ZIP stream is ready for entries
    async pull(ctrl) {
      for (const { fileName, url } of urls) {
        const res = await fetch(url);
        // zip-stream expects `stream` to be a function returning a ReadableStream
        ctrl.enqueue({ name: fileName, stream: () => res.body });
      }
      // All entries enqueued; finalize the archive
      ctrl.close();
    }
  });
  if (window.WritableStream && readableZipStream.pipeTo) {
    readableZipStream.pipeTo(fileStream).then(() => console.log('ZIP downloaded'));
  }
};
</script>
Merging Files Before Download
To merge two CSV files into one before downloading:
- Define an array of URLs for the source files.
- Iterate through them, fetching each file and appending a newline after each one's contents.
- Close the output stream once all files have been processed.
Implementation:
<button onclick="mergeAndDownload()">Merge & Download</button>
<script src="../StreamSaver.js"></script>
<script>
// Reusable text encoder for writing separator characters
const encode = TextEncoder.prototype.encode.bind(new TextEncoder());
const fileUrls = [
'http://localhost:9988/public/test1.csv',
'http://localhost:9988/public/test2.csv'
];
let writer = null;
let iterator = null;
function mergeAndDownload() {
  // One output stream receives the merged result
  const fileStream = streamSaver.createWriteStream('merged.csv');
  writer = fileStream.getWriter();
  // Walk the source URLs one at a time so files are appended in order
  iterator = fileUrls[Symbol.iterator]();
  processNext();
}
async function processNext() {
  const next = iterator.next();
  if (next.done) {
    // Every source file has been appended; finalize the download
    writer.close();
    return;
  }
  const response = await fetch(next.value);
  const reader = response.body.getReader();
  // Pump chunks from the current file into the shared writer
  const pump = () => reader.read().then(result => {
    if (result.done) {
      // Separate files with a newline, then move on to the next URL
      writer.write(encode('\n')).then(processNext);
    } else {
      writer.write(result.value).then(pump);
    }
  });
  pump();
}
</script>
This approach ensures sequential handling of file contents while maintaining low memory overhead.
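In runtimes where ReadableStream is async-iterable (Node 18+ and recent Chromium/Firefox; support in other browsers varies), the iterator-plus-pump bookkeeping can be condensed with for await...of. A sketch under that assumption — mergeInto and fetchFn are illustrative names, not part of StreamSaver.js:

```javascript
// Fetch each URL in order, stream its body into the destination,
// and append a newline after each file before closing the output.
async function mergeInto(urls, destination, fetchFn = fetch) {
  const writer = destination.getWriter();
  const encoder = new TextEncoder();
  for (const url of urls) {
    const response = await fetchFn(url);
    // Requires ReadableStream async-iteration support in the runtime
    for await (const chunk of response.body) {
      await writer.write(chunk);
    }
    await writer.write(encoder.encode('\n'));
  }
  await writer.close();
}
```

In supporting browsers, mergeInto(fileUrls, streamSaver.createWriteStream('merged.csv')) would then stand in for the mergeAndDownload/processNext pair above.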