cross-runtime benchmarking lib and cli
MIT License

Sometimes when benchmarking code, we need to prepare the environment first. Consider comparing the performance of two different database indexes: the baseline is a query without an index, and two bench statements each run against their own unique index. Pseudo:

group('select 10 users', () => {
  baseline('no index', async () => {
    await client.query('select * from users order by created_at asc, id asc limit 10');
  });

  bench('with created_at index', async () => {
    await client.query('select * from users order by created_at asc, id asc limit 10');
  }, {
    before: async () => dropIndexes().then(() => createFirstIndex()),
    after: async () => dropIndexes(),
  });

  bench('with created_at and id index', async () => {
    await client.query('select * from users order by created_at asc, id asc limit 10');
  }, {
    before: async () => dropIndexes().then(() => createAnotherIndex()),
    after: async () => dropIndexes(),
  });
});

I think it would be useful to have these hooks at both the group level and the bench level. A group-level hook could, for example, connect to and disconnect from the database, or create a single index shared by several query benches, while another group tests another index.
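A hypothetical shape for such an API (pseudo, like the sketch above; none of these options exist in mitata today, and the hook names are only a suggestion):

```
group('select 10 users', {
  before: async () => client.connect(),  // runs once before the group
  after: async () => client.end(),       // runs once after the group
}, () => {
  bench('with created_at index', async () => { /* query */ }, {
    before: async () => createFirstIndex(),  // runs before this bench
    after: async () => dropIndexes(),        // runs after this bench
  });
});
```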

Currently, benches can run forever. Having a timeout option would be nice.
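Until such an option exists, a timeout can be approximated in user land. This is only a sketch, and `withTimeout` is my name, not a mitata option: it wraps a bench function so a single run rejects once it exceeds `ms` milliseconds.

```javascript
// Wrap an (async) bench function with a rejection deadline of `ms` ms.
const withTimeout = (fn, ms) => () => {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  // Race the real work against the deadline; always clear the timer.
  return Promise.race([Promise.resolve(fn()), timeout])
    .finally(() => clearTimeout(timer));
};
```

Usage would then look like `bench('slow query', withTimeout(async () => { /* ... */ }, 5000));`.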

Thanks for a fantastic tool!

Is there a way to write the output to a file and convert it to a markdown table with the performance measurements and execution dates? That would be helpful for app developers who are keen to track performance over time and conveniently share the results across the team.
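As a stopgap, something like this can be done by hand. This is only a sketch: the row shape (`{ name, avg, date }`) is my assumption, not mitata's output format, so you would map whatever measurements you collect into it yourself.

```javascript
// Turn collected measurements into a shareable markdown table.
const toMarkdownTable = (rows) => [
  '| benchmark | avg (ns) | date |',
  '| --- | --- | --- |',
  ...rows.map(r => `| ${r.name} | ${r.avg.toFixed(2)} | ${r.date} |`),
].join('\n');
```

The result can then be written to a file (e.g. with `fs.writeFileSync`) and pasted into a PR or wiki page.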

Please add a parameter to the run command for setting a fixed unit such as us or ms, so all results are reported in that unit.
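The conversion itself is trivial, which is part of why a built-in option would be nice. A minimal user-land sketch, assuming measurements in nanoseconds:

```javascript
// Scale factors from nanoseconds to the requested display unit.
const scale = { ns: 1, us: 1e3, ms: 1e6, s: 1e9 };
const inUnit = (ns, unit) => ns / scale[unit];
```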

It would be useful if it could be set to automatically save the benchmark output (e.g. to .json); two such saved runs could then be compared automatically to produce something like this:

I made a simple script:

import { summary } from 'mitata/reporter/table.mjs';
import { join } from 'node:path';

const __dirname = new URL('.', import.meta.url).pathname;

const outputBunUtilities = JSON.parse(await Bun.file(join(__dirname, 'outputs', 'bun-utilities.json')).text()).map(b => {
    return { ...b, name: `[bun-utilities] ${b.name}`, id: b.name };
});
const outputNodeChildProcess = JSON.parse(await Bun.file(join(__dirname, 'outputs', 'node-fs.json')).text()).map(b => {
    return { ...b, name: `[node-fs] ${b.name}`, id: b.name };
});

const benchmarks = [].concat(outputBunUtilities, outputNodeChildProcess);
const summaries = [
    'copydir empty',
    'copydir files',
    'rmdir empty',
    'rmdir files'
];

for (const summaryName of summaries) {
    const filtered = benchmarks.filter(b => b.id === summaryName);
    summary(filtered);
}

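The comparison step itself could stay simple. A hedged sketch of that idea: given two runs as maps of name to average time (a shape I am assuming here, not mitata's JSON format), report how each benchmark moved relative to the baseline run.

```javascript
// Compare a baseline run against a newer run, benchmark by benchmark.
const compareRuns = (base, next) =>
  Object.keys(base)
    .filter(name => name in next)
    .map(name => ({
      name,
      // ratio > 1 means the new run is slower than the baseline
      ratio: +(next[name] / base[name]).toFixed(2),
    }));
```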

I have a complex case to get working, and I think I understand what's going on. The test is below.

The issue is akasha.filecache.documents. This object is initialized asynchronously, and the code here is how I've successfully used this object in many other files. The cacheSetup and fileCachesReady functions are where this object is set up, and calling isReady ensures that it is set up. Then calling console.log(documents) prints out the object.

Hence, the object is set up before the bench blocks are executed, unless run somehow executes the bench blocks right away.

In the plugin.findBlogDoc function, akasha.filecache.documents is found to be undefined. But, as just said, the code which executes before the bench blocks has ensured that this object is set up.

Is run executing in-line with the other code? That is, the condition of akasha.filecache.documents being undefined could happen if run were to magically execute right away, before fileCachesReady exits. But that should not be happening because Node.js handles top-level async/await correctly, I believe.
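A plain-JS probe of that reasoning (no mitata involved; the tiny `bench`/`run` here are stand-ins I wrote to mimic a register-then-run contract): if bench() merely registers its function and run() invokes it later, then anything awaited before run() has finished by the time the bench body executes.

```javascript
// Stand-in register-then-run harness, mimicking the assumed contract.
const registered = [];
const bench = (name, fn) => { registered.push({ name, fn }); };
const run = async () => {
  const results = {};
  for (const { name, fn } of registered) results[name] = await fn();
  return results;
};

let ready = false;
bench('probe', () => ready); // body reads `ready` only when run() calls it
ready = true;                // "setup" completes before run() is invoked
```

Under this contract, `await run()` reports `{ probe: true }`; if a bench body instead observed the pre-setup value, something executed it before setup finished.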

import { bench, run } from "mitata";
import { createRequire } from 'module';
import * as akasha from 'akasharender';
const require = createRequire(import.meta.url);
const config = require('../config.js');
// const akasha = config.akasha;

await akasha.cacheSetup(config);
await akasha.fileCachesReady(config);

const documents = (await akasha.filecache).documents;
await documents.isReady();

const plugin = config.plugin("@akashacms/plugins-blog-podcast");

const info1 = documents.find('blog/2017/11/');

bench('find-blog-vpath', () => {
    documents.find('blog/2017/11/');
});

bench('find-blog-docs', async () => {
    await plugin.findBlogDocs(config, plugin.blogcfg('news'), 'news');
});

// await filecache.close();
await akasha.closeCaches();

try {
    await run({
        percentiles: false
    });
} catch (err) { console.error(err); }
const colors = opts.colors ??= true;
SyntaxError: Unexpected token '??='
    at Loader.moduleStrategy (internal/modules/esm/translators.js:145:18)
Error: Process completed with exit code 1.
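My reading of the error above: `??=` (logical nullish assignment) is an ES2021 feature, so a Node.js release that predates it fails at parse time with exactly this SyntaxError. A pre-ES2021 equivalent of the failing line:

```javascript
// What `opts.colors ??= true` does, spelled out for older runtimes:
// assign the default only when the value is null or undefined.
const opts = {};
opts.colors = (opts.colors === undefined || opts.colors === null)
  ? true
  : opts.colors;
const colors = opts.colors;
```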

I want to create a set of performance comparisons of template engines running on Node or Bun, and decided to use Mitata as the benchmark framework because that's what the Bun project uses.

I have found several template engines -- EJS, Nunjucks, MarkdownIT so far -- that cause Bun to segfault when running the benchmark using Mitata. Full details are in the Bun issue queue: oven-sh/bun#811

Reporting here because you may have an idea.

Simplest example is:

import { bench, run } from "mitata";

import * as ejs from 'ejs';

let people = ['geddy', 'neil', 'alex'];

bench('literal', () => { return `${people.join(', ')}`; });

bench('ejs-join', () => {
    ejs.render('<%= people.join(", "); %>', { people: people });
});

try {
    await run();
} catch (err) { console.error(err); }

If I run this code standalone (without Mitata) it executes perfectly on Bun. If I comment out the ejs-join stanza, it still segfaults, and if I comment out the import for ejs then Bun does not segfault.

Hey! First of all, I wanted to thank you for this package; the output looks very nice, it's super intuitive, and it runs fast!

This is how I use it:
[screenshot of mitata output, 2022-06-03]

I thought the group name could appear in the output, and maybe a line of ----- between groups would add some clarity. What do you think?

Hi, thanks for this great crate!

I noticed there is a small typo in the summary output. It will print, for example, '2x times faster', but '2x' already reads as '2 times', so '2x times' reads as '2 times times'. The right output would be either '2x faster' or '2 times faster'.
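A tiny sketch of the suggested wording (the formatter name is mine, purely for illustration): format the ratio as "Nx faster", never "Nx times faster".

```javascript
// Format a speedup ratio without the redundant "times".
const fasterLabel = (ratio) => `${ratio.toFixed(2)}x faster`;
```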