To install mirdata:

pip install mirdata


Once installed, mirdata can be imported into your Python code:

import mirdata

Initializing a dataset

Print a list of all available dataset loaders by calling:

import mirdata
print(mirdata.list_datasets())

To use a loader (for example, 'orchset'), you need to initialize it by calling:

import mirdata
orchset = mirdata.initialize('orchset')

Now orchset is a Dataset object containing common methods, described below.

Downloading a dataset

All dataset loaders in mirdata have a download() function that allows the user to download the canonical version of the dataset (when available). When initializing a dataset, by default, mirdata will download/read data to/from a default location (“~/mir_datasets”). This can be customized by specifying data_home in mirdata.initialize.

Downloading a dataset into the default folder:

In this first example, data_home is not specified, so orchset will be downloaded to and read from the mir_datasets folder created in the user's home directory:

import mirdata
orchset = mirdata.initialize('orchset')
orchset.download()  # Dataset is downloaded to ~/mir_datasets/orchset
Downloading a dataset into a specified folder:

Now data_home is specified, so orchset will be read from and written to this custom location:

orchset = mirdata.initialize('orchset', data_home='/Users/leslieknope/Desktop/orchset123')
orchset.download()  # Dataset is downloaded to the folder "orchset123" on Leslie Knope's desktop

Partially downloading a dataset

The download() function allows partial downloads of a dataset. In other words, where applicable, the user can select which elements of the dataset they want to download. Each dataset has a REMOTES dictionary where all the available elements are listed.

For example, cante100 has several elements listed in its REMOTES dictionary. We can choose which of them to download by passing the list of REMOTES keys we are interested in to the download() function via the partial_download argument.
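
As a schematic illustration (this is not the real cante100 definition; mirdata's actual REMOTES entries are RemoteFileMetadata objects carrying real filenames, URLs, and checksums), a REMOTES dictionary has roughly the following shape:

```python
# Schematic sketch of a loader's REMOTES dictionary. In mirdata each value
# is a RemoteFileMetadata object (filename, url, checksum, ...); plain dicts
# with placeholder values are used here for illustration only.
REMOTES = {
    "spectrogram": {"filename": "spectrogram.zip", "url": "...", "checksum": "..."},
    "melody": {"filename": "melody.zip", "url": "...", "checksum": "..."},
    "metadata": {"filename": "metadata.xml", "url": "...", "checksum": "..."},
    "audio": {"filename": "audio.zip", "url": "...", "checksum": "..."},
}

# The keys of REMOTES are exactly the names that partial_download accepts.
available_elements = list(REMOTES)
```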

A partial download example for cante100 dataset could be:

cante100.download(partial_download=['spectrogram', 'melody', 'metadata'])

Validating a dataset

Using the validate() method, we can check that the files in the local version match the available canonical version and that they were downloaded correctly (i.e. none of them are missing or corrupted).
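
Conceptually, validation compares each local file against a stored checksum. The following is a minimal sketch of that idea using MD5 digests (an illustration of the concept, not mirdata's actual implementation):

```python
import hashlib

def md5_checksum(path):
    """Return the MD5 hex digest of a file, read in chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
    return md5.hexdigest()

def validate_files(expected):
    """Compare local files against their canonical checksums.

    `expected` maps file paths to canonical MD5 digests; returns the
    paths whose files are missing or whose checksums do not match.
    """
    invalid = []
    for path, digest in expected.items():
        try:
            if md5_checksum(path) != digest:
                invalid.append(path)
        except FileNotFoundError:
            invalid.append(path)
    return invalid
```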

For big datasets: in future mirdata versions, random (sampled) validation will be included. This improvement will reduce validation time for very large datasets.

Accessing annotations

We can choose a random track from a dataset with the choice_track() method.

We can also access specific tracks by id. The available track ids are listed in the track_ids attribute. In the next example we take the first track id and then retrieve the melody annotation.

orchset_ids = orchset.track_ids  # the list of orchset's track ids
orchset_data = orchset.load_tracks()  # Load all tracks in the dataset
example_track = orchset_data[orchset_ids[0]]  # Get the first track

# Accessing the track's melody annotation
example_melody = example_track.melody

Alternatively, we don’t need to load the whole dataset to get a single track.

orchset_ids = orchset.track_ids  # the list of orchset's track ids
example_track = orchset.track(orchset_ids[0])  # load this particular track
example_melody = example_track.melody  # Get the melody of the first track

Accessing data on non-local filesystems

mirdata uses the smart_open library, which supports non-local filesystems such as GCS and AWS S3. If your data lives, for example, on Google Cloud Storage, simply point data_home at the remote location (e.g. a gs:// URL) when initializing the dataset.

Annotation classes

mirdata defines annotation-specific data classes. These data classes are meant to standardize the format for all loaders, and are compatible with jams and mir_eval.

The list and descriptions of available annotation classes can be found in Annotations.


These classes may be extended if a particular loader requires it.

Iterating over datasets and annotations

In general, most datasets are a collection of tracks, and in most cases each track has an audio file along with annotations.

The load_tracks() method loads all tracks as a dictionary with track ids as keys and track objects as values; each track's audio and annotations are lazy-loaded on access.

orchset = mirdata.initialize('orchset')
for key, track in orchset.load_tracks().items():
    print(key, track.audio_path)

Alternatively, we can loop over the track_ids list to directly access each track in the dataset.

orchset = mirdata.initialize('orchset')
for track_id in orchset.track_ids:
    print(track_id, orchset.track(track_id).audio_path)

Basic example: including mirdata in your pipeline

If we wanted to use orchset to evaluate the performance of a melody extraction algorithm (in our case, very_bad_melody_extractor), and then split the scores based on the metadata, we could do the following:
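
A minimal, self-contained sketch of such a pipeline is shown below. It uses a stand-in very_bad_melody_extractor (random frequency guesses), a toy frame-accuracy metric, and hypothetical reference melodies and metadata in place of orchset's real annotations; with mirdata you would iterate load_tracks() and evaluate with mir_eval instead:

```python
import random

# Stand-in melody extraction algorithm: it guesses random frequencies,
# so it should score poorly (hypothetical, for illustration only).
def very_bad_melody_extractor(n_frames):
    return [random.uniform(50.0, 2000.0) for _ in range(n_frames)]

# Hypothetical reference melodies (in Hz) and metadata, standing in
# for orchset's real annotations and track metadata.
tracks = {
    "track1": {"reference": [440.0] * 100, "composer": "Beethoven"},
    "track2": {"reference": [220.0] * 100, "composer": "Brahms"},
}

# Toy metric: fraction of frames within ~half a semitone of the reference.
def frame_accuracy(reference, estimate):
    correct = sum(
        1 for ref, est in zip(reference, estimate)
        if abs(est - ref) / ref < 0.03
    )
    return correct / len(reference)

# Evaluate every track, then split the scores based on the metadata.
scores_by_composer = {}
for track_id, track in tracks.items():
    estimate = very_bad_melody_extractor(len(track["reference"]))
    score = frame_accuracy(track["reference"], estimate)
    scores_by_composer.setdefault(track["composer"], []).append(score)

for composer, scores in scores_by_composer.items():
    print(composer, sum(scores) / len(scores))
```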

Unsurprisingly, the resulting scores show that very_bad_melody_extractor performs very badly!

Using mirdata with tensorflow

The following is a simple example of a generator that can be used to create a tensorflow Dataset.
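
A sketch of such a generator is shown below. A stub track dictionary stands in for a real mirdata dataset here (with mirdata you would iterate load_tracks() and yield each track's audio and annotation); wrapping it with tf.data.Dataset.from_generator is indicated in the comment:

```python
# Generator yielding (audio, annotation) pairs, which tensorflow's
# tf.data.Dataset.from_generator can consume. Stub data stands in
# for a real mirdata dataset (hypothetical values, illustration only).
def stub_tracks():
    return {
        "track1": {"audio": [0.0, 0.1, 0.2], "f0": [440.0, 440.0, 440.0]},
        "track2": {"audio": [0.3, 0.2, 0.1], "f0": [220.0, 220.0, 220.0]},
    }

def track_generator(tracks):
    """Yield one (audio, annotation) pair per track."""
    for track in tracks.values():
        yield track["audio"], track["f0"]

# With tensorflow installed, the generator could be wrapped like:
#   tf.data.Dataset.from_generator(
#       lambda: track_generator(stub_tracks()),
#       output_signature=(
#           tf.TensorSpec(shape=(None,), dtype=tf.float32),
#           tf.TensorSpec(shape=(None,), dtype=tf.float32)))
pairs = list(track_generator(stub_tracks()))
```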

In future mirdata versions, built-in generators for TensorFlow and PyTorch will be included.