ASAR is a simple extensive archive format. It concatenates all files together without compression (like tar) while having random access support.
- Support random access
- Use JSON to store file information
- Very easy to write a parser
This module requires Node 22.12.0 or later.
```bash
npm install --engine-strict @electron/asar
```

The package ships an `asar` command line utility:

```bash
$ asar --help

  Usage: asar [options] [command]

  Commands:

    pack|p <dir> <output>
       create asar archive

    list|l <archive>
       list files of asar archive

    extract-file|ef <archive> <filename>
       extract one file from archive

    extract|e <archive> <dest>
       extract archive

  Options:

    -h, --help     output usage information
    -V, --version  output the version number
```
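For example, a typical session might look like this (the directory and file names are placeholders; the commands are the ones listed in the help output above):

```bash
# Pack a directory into an archive, inspect it, then extract from it.
asar pack app app.asar
asar list app.asar
asar extract-file app.asar index.js
asar extract app.asar ./unpacked
```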
To exclude multiple resources from being packed, pass a glob pattern to `--unpack-dir`. Given:

```
    app
(a) ├── x1
(b) ├── x2
(c) ├── y3
(d) │   ├── x1
(e) │   └── z1
(f) │       └── x2
(g) └── z4
(h)     └── w1
```

Exclude: a, b
asar pack app app.asar --unpack-dir "{x1,x2}"Exclude: a, b, d, f
asar pack app app.asar --unpack-dir "**/{x1,x2}"Exclude: a, b, d, f, h
asar pack app app.asar --unpack-dir "{**/x1,**/x2,z4/w1}"For full API usage, see the API documentation.
```js
import { createPackage } from '@electron/asar';

const src = 'some/path/';
const dest = 'name.asar';

await createPackage(src, dest);
console.log('done.');
```

Please note that there is currently no error handling provided!
You can pass in a `transform` option: a function that either returns nothing or a `stream.Transform`. The latter will be applied to files that end up in the `.asar` archive in order to transform them (e.g. compress).
```js
import { createPackageWithOptions } from '@electron/asar';

const src = 'some/path/';
const dest = 'name.asar';

function transform (filename) {
  return new CustomTransformStream();
}

await createPackageWithOptions(src, dest, { transform: transform });
console.log('done.');
```
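For instance, a transform that compresses only certain files might look like the following sketch. The `.txt` filter and the choice of gzip are illustrative assumptions, not part of the library; whether the consuming application can read the compressed contents back is up to you.

```js
import zlib from 'node:zlib';

// Hypothetical transform: gzip-compress .txt files and leave everything else
// untouched (returning nothing means "do not transform this file").
function transform (filename) {
  if (filename.endsWith('.txt')) {
    return zlib.createGzip();
  }
}
```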
Asar uses Pickle to safely serialize binary values to a file.

The format of asar is very flat:
```
| UInt32: header_size | String: header | Bytes: file1 | ... | Bytes: file42 |
```

The `header_size` and `header` are serialized with the Pickle class, and `header_size`'s Pickle object is 8 bytes.
The header is a JSON string, and the header_size is the size of header's Pickle object.
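As an illustration, a minimal sketch of reading the header by hand might look like this. It does not use the library's API and assumes the little-endian Pickle framing described above (a 4-byte payload length followed by the payload); `name.asar` is a placeholder.

```js
import fs from 'node:fs';

// Minimal sketch: read the header of an asar archive by hand.
const fd = fs.openSync('name.asar', 'r');

// First Pickle object (8 bytes): UInt32 payload length + UInt32 header_size.
const sizePickle = Buffer.alloc(8);
fs.readSync(fd, sizePickle, 0, 8, 0);
const headerSize = sizePickle.readUInt32LE(4);

// Second Pickle object (header_size bytes): UInt32 payload length,
// UInt32 string length, then the JSON header string itself.
const headerPickle = Buffer.alloc(headerSize);
fs.readSync(fd, headerPickle, 0, headerSize, 8);
const stringLength = headerPickle.readUInt32LE(4);
const header = JSON.parse(headerPickle.toString('utf8', 8, 8 + stringLength));

console.log(Object.keys(header.files));
```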
The structure of the header is something like this:
{"files":{"tmp":{"files":{} }, "usr" :{"files":{"bin":{"files":{"ls":{"offset": "0", "size": 100, "executable": true, "integrity":{"algorithm": "SHA256", "hash": "...", "blockSize": 1024, "blocks": ["...", "..."] } }, "cd":{"offset": "100", "size": 100, "executable": true, "integrity":{"algorithm": "SHA256", "hash": "...", "blockSize": 1024, "blocks": ["...", "..."] } } } } } }, "etc":{"files":{"hosts":{"offset": "200", "size": 32, "integrity":{"algorithm": "SHA256", "hash": "...", "blockSize": 1024, "blocks": ["...", "..."] } } } } } }offset and size records the information to read the file from archive, the offset starts from 0 so you have to manually add the size of header_size and header to the offset to get the real offset of the file.
`offset` is a UINT64 number represented as a string, because there is no way to precisely represent UINT64 in a JavaScript Number. `size` is a JavaScript Number that is no larger than `Number.MAX_SAFE_INTEGER`, which has a value of 9007199254740991 and is about 8PB in size. We didn't store `size` as UINT64 because file sizes in Node.js are represented as Number and it is not safe to convert Number to UINT64.
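Continuing the header-reading sketch above (and reusing its `fd`, `header`, and `headerSize` variables), locating one file's bytes might look like this; the `etc/hosts` entry is simply the one from the example header:

```js
// The entry's offset is relative to the end of the header, so the absolute
// position in the archive is 8 (the header_size Pickle) + headerSize + offset.
// Number() mirrors the string-to-number conversion discussed above; for very
// large archives a BigInt-based read would be safer.
const entry = header.files['etc'].files['hosts'];
const contents = Buffer.alloc(entry.size);
fs.readSync(fd, contents, 0, entry.size, 8 + headerSize + Number(entry.offset));
fs.closeSync(fd);
console.log(contents.toString('utf8'));
```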
`integrity` is an object consisting of a few keys:

- A hashing `algorithm`, currently only `SHA256` is supported.
- A hex encoded `hash` value representing the hash of the entire file.
- An array of hex encoded hashes for the `blocks` of the file (i.e. for a `blockSize` of 4KB, this array contains the hash of every block if you split the file into N 4KB blocks).
- An integer value `blockSize` representing the size in bytes of each block in the `blocks` hashes above (see the sketch after this list).
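For illustration, a sketch of how such values could be computed with Node's crypto module, assuming the 1024-byte block size from the example header above (the real tooling may use a different block size):

```js
import crypto from 'node:crypto';
import fs from 'node:fs';

// Compute integrity fields for one file: a whole-file SHA256 hash plus one
// SHA256 hash per blockSize-sized chunk, all hex encoded.
function computeIntegrity (filePath, blockSize = 1024) {
  const data = fs.readFileSync(filePath);
  const blocks = [];
  for (let start = 0; start < data.length; start += blockSize) {
    blocks.push(
      crypto.createHash('sha256').update(data.subarray(start, start + blockSize)).digest('hex')
    );
  }
  return {
    algorithm: 'SHA256',
    hash: crypto.createHash('sha256').update(data).digest('hex'),
    blockSize,
    blocks
  };
}
```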