Fading Coder

Using StreamSaver.js to Handle Large File Downloads in Web Applications

Overview

This article introduces StreamSaver.js, a library that enables efficient and seamless large file downloads directly from web browsers.

Traditional downloads via <a> tags can cause problems with large files: the whole file may be buffered in memory before saving, and certain file types (such as .txt or .mp4) may be rendered by the browser instead of triggering a download. StreamSaver.js addresses these concerns by streaming data to disk in chunks, thereby reducing memory usage and improving performance.

Environment Setup

Before diving into StreamSaver.js, prepare one or more downloadable files. You can use remote resources or serve local files via a development server.

For example, when using Vite with Vue, place a test.txt file inside the public directory. Once the project runs, access it at http://localhost:port/public/test.txt.

Installation

CDN

Include the StreamSaver.js script directly in your HTML:

<script src="../StreamSaver.js"></script>

npm

Install via npm:

npm install streamsaver

Then import it where needed:

import streamSaver from "streamsaver";

Basic Usage

To download a .txt file using StreamSaver.js:

  1. Create a writable stream using streamSaver.createWriteStream('filename.ext').
  2. Fetch the file URL using fetch() and pipe its body into the created stream.
  3. Handle completion by closing the stream once all data has been written.

Example implementation:

<button id="download">Download</button>
<script src="../StreamSaver.js"></script>
<script>
download.onclick = () => {
  const fileStream = streamSaver.createWriteStream('test.txt');
  
  fetch('http://localhost:9988/public/test.txt')
    .then(res => {
      const readableStream = res.body;
      if (window.WritableStream && readableStream.pipeTo) {
        return readableStream.pipeTo(fileStream)
          .then(() => console.log('Download complete'));
      }
      
      // Fallback for browsers without pipeTo: pump chunks manually
      // from the response reader into the file writer.
      const writer = fileStream.getWriter();
      const reader = res.body.getReader();
      const pump = () => reader.read()
        .then(result => result.done
          ? writer.close()
          : writer.write(result.value).then(pump)
        );
      pump();
    });
};
</script>
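The fallback pump loop can be exercised outside the browser, since Node 18+ ships the same WHATWG ReadableStream/WritableStream classes the example relies on. A minimal sketch with in-memory stand-ins (the names source, sink, and chunks are illustrative, not part of StreamSaver.js):

```javascript
// Stand-in for `res.body`: a readable stream yielding two chunks.
const source = new ReadableStream({
  start(ctrl) {
    ctrl.enqueue('hello ');
    ctrl.enqueue('world');
    ctrl.close();
  }
});

// Stand-in for the file stream: a writable stream collecting chunks.
const chunks = [];
const sink = new WritableStream({
  write(chunk) { chunks.push(chunk); }
});

const writer = sink.getWriter();
const reader = source.getReader();

// Same pump loop as the fallback: read a chunk, write it, repeat until done.
const pump = () => reader.read().then(result =>
  result.done ? writer.close() : writer.write(result.value).then(pump)
);

const finished = pump();
finished.then(() => console.log(chunks.join(''))); // "hello world"
```

The same loop works unchanged against a real response body and a StreamSaver write stream, since both sides implement the identical reader/writer interfaces.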

If cross-origin errors occur, host the library's mitm.html on your own server and point streamSaver.mitm at it:

streamSaver.mitm = 'https://your-server.com/mitm.html';

Downloading Multiple Files as ZIP

To package multiple files into a ZIP archive for download:

  1. Create a ZIP stream using zip-stream.js.
  2. Enqueue each file's stream into the ZIP.
  3. Close the ZIP stream upon completion.

Example:

<button id="download">Download ZIP</button>
<script src="../StreamSaver.js"></script>
<script src="zip-stream.js"></script>
<script>
const urls = [
  { fileName: 'test.txt', url: 'http://localhost:9988/public/test.txt' },
  { fileName: 'test.csv', url: 'http://localhost:9988/public/test.csv' }
];

download.onclick = () => {
  const fileStream = streamSaver.createWriteStream('test.zip');
  
  const readableZipStream = new ZIP({
    async pull(ctrl) {
      for (let i = 0; i < urls.length; i++) {
        const res = await fetch(urls[i].url);
        const stream = () => res.body;
        const name = urls[i].fileName;
        ctrl.enqueue({ name, stream });
      }
      ctrl.close();
    }
  });
  
  // Pipe the ZIP stream into the file. Browsers without WritableStream
  // support would need the manual reader/writer pump shown earlier.
  if (window.WritableStream && readableZipStream.pipeTo) {
    readableZipStream.pipeTo(fileStream).then(() => console.log('ZIP downloaded'));
  }
};
</script>
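zip-stream.js consumes a pull-based source of { name, stream } entries, but that source itself is a plain WHATWG ReadableStream. The pattern can be sketched without the library (the entry names and bodies below are made up, and a plain ReadableStream stands in for ZIP so the sketch runs in Node 18+):

```javascript
// Illustrative entries; in the real example each `stream` is `() => res.body`.
const entries = [
  { name: 'test.txt', body: 'first file' },
  { name: 'test.csv', body: 'second file' }
];

// Pull-based source: enqueue every entry on the first pull, then close,
// mirroring the `pull(ctrl)` callback passed to ZIP above.
const zipLikeStream = new ReadableStream({
  pull(ctrl) {
    for (const entry of entries) {
      ctrl.enqueue({ name: entry.name, stream: () => entry.body });
    }
    ctrl.close();
  }
});

// Drain the stream and record the entry names in arrival order.
const names = [];
const reader = zipLikeStream.getReader();
const drain = () => reader.read().then(result => {
  if (result.done) return names;
  names.push(result.value.name);
  return drain();
});

const collected = drain();
collected.then(result => console.log(result)); // ['test.txt', 'test.csv']
```

Because the source is pull-based, entries are only produced as the consumer asks for them, which is what keeps memory flat even when zipping many large files.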

Merging Files Before Download

To merge two CSV files into one before downloading:

  1. Define an array of URLs for the source files.
  2. Iterate through them, fetching each file and appending a newline after each.
  3. Close the output stream once all have been processed.

Implementation:

<button onclick="mergeAndDownload()">Merge & Download</button>
<script src="../StreamSaver.js"></script>
<script>
// Encode strings to bytes before writing them to the stream.
const encode = (text) => new TextEncoder().encode(text);
const fileUrls = [
  'http://localhost:9988/public/test1.csv',
  'http://localhost:9988/public/test2.csv'
];

let writer = null;
let iterator = null;

function mergeAndDownload() {
  const fileStream = streamSaver.createWriteStream('merged.csv');
  writer = fileStream.getWriter();
  iterator = fileUrls[Symbol.iterator]();
  processNext();
}

async function processNext() {
  const next = iterator.next();
  if (next.done) {
    writer.close();
    return;
  }
  
  const response = await fetch(next.value);
  const reader = response.body.getReader();
  
  const pump = () => reader.read().then(res => {
    if (res.done) {
      writer.write(encode('\n')).then(processNext);
    } else {
      writer.write(res.value).then(pump);
    }
  });
  
  pump();
}
</script>

This approach ensures sequential handling of file contents while maintaining low memory overhead.
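The sequential merge can also be verified outside the browser with in-memory sources, since Node 18+ provides the same stream classes. A minimal sketch (the CSV contents and the names sources, sink, and written are illustrative):

```javascript
const encode = (text) => new TextEncoder().encode(text);
const decode = (bytes) => new TextDecoder().decode(bytes);

// Stand-ins for the two fetched response bodies.
const sources = ['a,b', 'c,d'].map(text =>
  new ReadableStream({
    start(ctrl) { ctrl.enqueue(encode(text)); ctrl.close(); }
  })
);

// Collect everything written to the "file".
const written = [];
const sink = new WritableStream({
  write(chunk) { written.push(chunk); }
});
const writer = sink.getWriter();

// Process sources one at a time, appending '\n' after each,
// mirroring processNext()/pump() above.
async function merge() {
  for (const source of sources) {
    const reader = source.getReader();
    for (;;) {
      const { done, value } = await reader.read();
      if (done) break;
      await writer.write(value);
    }
    await writer.write(encode('\n'));
  }
  await writer.close();
  return written.map(decode).join('');
}

const merged = merge();
merged.then(text => console.log(JSON.stringify(text))); // prints "a,b\nc,d\n"
```

Only one chunk is in flight at any moment, which is the property that keeps memory overhead low regardless of how large the source files are.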
