Research article | Volume 3, issue 2 | 11 Sep 2020
https://doi.org/10.5194/gc-3-263-2020

Earth system music: music generated from the United Kingdom Earth System Model (UKESM1)

Lee de Mora, Alistair A. Sellar, Andrew Yool, Julien Palmieri, Robin S. Smith, Till Kuhlbrodt, Robert J. Parker, Jeremy Walton, Jeremy C. Blackford, and Colin G. Jones
Abstract

Scientific data are almost always represented graphically in figures or in videos. With the ever-growing interest from the general public in understanding climate sciences, it is becoming increasingly important that scientists present this information in ways that are both accessible and engaging to non-experts.

In this pilot study, we use time series data from the first United Kingdom Earth System Model (UKESM1) to create six procedurally generated musical pieces. Each of these pieces presents a unique aspect of the ocean component of the UKESM1, either in terms of a scientific principle or a practical aspect of modelling. In addition, each piece is arranged using a different musical progression, style and tempo.

These pieces were created in the Musical Instrument Digital Interface (MIDI) format and then performed by a digital piano synthesiser. An associated video showing the time development of the data in time with the music was also created. The music and video were published on the lead author's YouTube channel. A brief description of the methodology was also posted alongside the video. We also discuss the limitations of this pilot study and describe several approaches to extend and expand upon this work.

1 Introduction

The use of non-speech audio to convey information is known as sonification. One of the earliest and perhaps the best-known applications of sonification in science is the Geiger counter, a device which produces a distinctive clicking sound when it interacts with ionising radiation (Rutherford and Royds, 1908). Beyond the Geiger counter, sonification is also widely used in monitoring instrumentation. Sonification is appropriate when the information being displayed changes in time, includes warnings, or calls for immediate action. Sonification instrumentation is used in environments where the operator is unable to use a visual display, for instance if the visual system is busy with another task, overtaxed or impaired by factors such as smoke, light or line of sight (Walker and Nees, 2011). Sonification also allows several metrics to be displayed simultaneously using variations in pitch, timbre, volume and period (Pollack and Ficks, 1954; Flowers, 2005). For these reasons, sonification is widely used in medicine for monitoring crucial metrics of patient health (Craven and McIndoe, 1999; Morris and Mohacsi, 2005; Sanderson et al., 2009).

Outside of sonification for monitoring purposes, the sonification of data can also be used to produce music. There have been several examples of sonification of climate system data. “Climate symphony” by Disobedient Films (Borromeo et al., 2016) is a musical composition performed by strings and piano using observational data from sea ice indices, surface temperature and carbon dioxide concentration. Daniel Crawford's “Planetary bands, warming world” (Crawford, 2013) is a string quartet which uses observational data from Northern Hemisphere temperatures. In this piece, each of the four stringed parts represents a different latitude band of the Northern Hemisphere temperature over the time range 1880–2012. Similarly, the Climate Music Project (https://climatemusic.org/, last access: 17 August 2020) makes original music inspired by climate science. They have produced three pieces which cover a wide range of climatological and demographic data, using both observational and simulated data. However, pieces such as those by Borromeo et al. (2016) and Crawford (2013) often use similar observational temperature and carbon dioxide data sets. Both of these data sets only have monthly data, and approximately one century of data or less are available. In addition, both temperature and carbon dioxide have risen since the start of the observational record. This means that these musical pieces tend to have similar structures and sounds. The pieces are slow, quiet and low pitched at the start of the data set before slowly increasing and building up to a high-pitched conclusion at the present day. It should be noted that all the pieces listed here are also accompanied by a video which explains the methodology behind the creation of the music, shows the performance by the artists or shows the data development while the music is played.

An alternative strategy was deployed in the Sounding Coastal Change project (Revill, 2018). In that work, sound works, music recordings, photography and film produced through the project were geotagged and shared on a sound map. This created a record of the changing social and environmental soundscape of North Norfolk. They used these sounds to create music and explore the ways in which the coast was changing and how people's lives were changing with it.

In addition to its practical applications, sonification is a unique field in which scientific and artistic purposes may coexist (Tsuchiya et al., 2015). This is especially true when, in addition to being converted into sound, the data are also converted into music. This branch of sonification is called musification. Note that the philosophical distinction between sound and music is beyond the scope of this work. Through the choice of musical scales and chords, tempo, timbre and volume dynamics, the composer can attempt to add emotive meaning to the piece. As such, unlike sonification, musification should be treated as a potentially biased interpretation of the underlying data. It cannot be both musical and a truly objective representation of the data. Furthermore, even though the composer may have made musical and artistic decisions to link the behaviour of the data with a specific emotional response, it may not necessarily be interpreted in the same way by the listener.

With the ever-growing interest from the general public in understanding climate science, it is becoming increasingly important that we present our model results and methods in ways that are accessible and engaging to non-experts. In this work, six musical pieces were procedurally generated using output from a climate model, specifically the first version of the United Kingdom Earth System Model (UKESM1; Sellar et al., 2019). By using simulated data instead of observational data, we can generate music from time periods outside the recent past, such as the pre-industrial period before 1850 and multiple projections of possible future climates. Similarly, model data allow access to regions and measurements far beyond what can be found in the observational record. The UKESM1 is a current-generation computational simulation of the Earth's climate and has been deployed to understand the historical behaviour of the climate system and make projections of the climate in the future. The UKESM1 is described in more detail in Sect. 2. The methodology used to produce the pieces, and a brief summary of each piece, are given in Sect. 3. The aims of the project are outlined below in Sect. 4.

Each of the six musical pieces was produced alongside a video showing the time series data developing concurrently with the music. These videos were published on the YouTube video hosting service. This work was an early pilot study and has revealed several limitations which we outline in Sect. 5. We also include some possible extensions, improvements and new directions for future versions of the work.

2 UKESM1

The UKESM1 is a computational simulation of the Earth system produced through a collaboration between the United Kingdom's Met Office Hadley Centre and the Natural Environment Research Council (NERC; Sellar et al., 2019). The UKESM1 represents a major advancement in Earth system modelling, including a new atmospheric circulation model with a well-resolved stratosphere; terrestrial biogeochemistry with coupled carbon and nitrogen cycles and enhanced land management; troposphere–stratosphere chemistry that allows the simulation of radiative forcing from ozone, methane and nitrous oxide; a fully featured aerosol model; and an ocean biogeochemistry model with two-way coupling to the carbon cycle and atmospheric aerosols. The complexity of coupling between the ocean, land and atmosphere physical climate and biogeochemical cycles in UKESM1 is unprecedented for an Earth system model.

In this work, we have exclusively used data from the ocean component of the UKESM1. The UKESM1's ocean is subdivided into three component models: the Nucleus for European Modelling of the Ocean (NEMO), which simulates the ocean circulation and thermodynamics (Storkey et al., 2018); the Model of Ecosystem Dynamics, nutrient Utilisation, Sequestration and Acidification (MEDUSA), which is the sub-model of the marine biogeochemistry (Yool et al., 2013); and the Los Alamos Sea Ice Model (CICE), which simulates the growth, melt and movement of sea ice (Ridley et al., 2018).

Figure 1. The computational process used to convert UKESM1 data into a musical piece and an associated video. The boxes with a dark border represent files and data sets, and the arrows and chevrons represent processes. The blue areas are UKESM1 data and the preprocessing stages, the green areas show the data and processing stages needed to convert model data into music in the MIDI format, and the orange area shows the post-processing stages which convert images and MIDI into sheet music and videos.

The UKESM1 is being used in the UK's contribution to the sixth international Coupled Model Intercomparison Project (CMIP6) (Eyring et al., 2016). The UKESM1 simulations that were submitted to the CMIP6 were used to generate the musical pieces. These simulations include the pre-industrial control (PI control), several historical simulations and many projections of future climate scenarios. The CMIP6 experiments that were used in these works are listed in Table 1.

This is not the first time that the UKESM1 has been used to inspire creative projects. In 2017, the UKESM1 participated in a science and poetry project in which a scientist and a writer were paired together to produce poetry. Ben Smith was paired with Lee de Mora and produced several poems inspired by the United Kingdom Earth System Model (UKESM; Smith, 2018).

3 Methods

In this section, we describe the method used to produce the music and the videos. Figure 1 illustrates this process. The initial data are UKESM1 model output files, downloaded directly from the United Kingdom's Met Office data storage system (MASS). These native-format UKESM1 data will not be available outside the UKESM collaboration, but selected model variables have been transformed into a standard format and made available on the Earth System Grid Federation (ESGF) via, for example, https://esgf-index1.ceda.ac.uk/search/cmip6-ceda/, last access: 17 August 2020.

The time series data are calculated from the UKESM1 data by the BGC-val model evaluation suite (de Mora et al., 2018). BGC-val is a software toolkit that was deployed to evaluate the development and performance of the ocean component of the UKESM1. In all six pieces, we use annual average data as the time series data. The data sets that were used in this work are listed in Table 1.

Each time series data set is used to create an individual Musical Instrument Digital Interface (MIDI) track composed of a series of MIDI notes. The MIDI protocol is a standardised digital way to convey musical performance information; it can be thought of as the instructions that tell a music synthesiser how to perform a piece of music (The MIDI Manufacturers Association, 1996). Each of the six pieces shown here is saved as a single MIDI file which contains one or more MIDI tracks played simultaneously.

Figure 2. The musical range of each of the data sets used in the “Earth System Allegro”. The four histograms on the left-hand side show the distributions of the data used in the piece, and the right-hand side shows a standard piano keyboard with the musical range available in each data set. In this piece, the Drake Passage current, shown in red, is free to vary within a two-octave range of the C major scale. The other three data sets have their own ranges but are limited to the notes in the chord progression, namely C major, G major, A minor and F major. The dark coloured keys are the notes in the C major chord, and the lighter coloured keys show the other notes which are available for the other chords in the progression. Note that neither the C major scale nor the C major chord includes any of the ebony keys on a piano, but these notes could be used if they were within the available range and appeared in the chord progression used.

Each MIDI note is assigned four parameters. The first two parameters are timing (when the note occurs in the song) and duration (the length of time that the note is held). The timing is the number of beats between this note and the beginning of the song. The duration is a positive rational number representing the number of beats for which the note is held. A duration of one is equivalent to a crotchet (quarter note), a duration of two is a minim (half note) and a duration of one-half is a quaver (eighth note).

The third MIDI note parameter is the pitch which, in MIDI, must be an integer between 1 and 127, where 1 is a very low pitch and 127 is a very high pitch. These integer values represent the chromatic scale, and middle C is set to a value of 60. The pitch of a MIDI note must be an integer as there is no capacity for MIDI notes to sit between values on the chromatic scale; musically speaking, MIDI has no notes between the keys of a keyboard. The total range of available pitches covers 10.5 octaves; however, we found that pitches below 30 or above 110 started to become unpleasant when performed by TiMidity; other MIDI pianos may have more success. Also note that MIDI's 127-note system extends beyond the standard piano keyboard, which only covers the range 21–108 of the MIDI pitch system. MIDI uses the 12-tone equal temperament tuning system; while this is not the only tuning system, it is the most widely used in Western music.

The fourth MIDI note parameter is the velocity; this indicates the speed with which the key would be struck on a piano and is the relative loudness of the note. In practical terms, velocity is an integer ranging between 1 and 127, where 1 is very quiet and 127 is very loud. The overall tempo of the piece is assigned as a global parameter of the MIDI file in units of the number of beats per minute.

Each model's time series data set is converted into a series of consecutive MIDI notes, which together form a track. For instance, the sea surface temperature (SST) time series could be converted into a series of MIDI notes in the upper range of the keyboard to form a track. For each track, the time series data are converted into musical notes so that the lowest value in the data set is represented by the lowest note pitch available, and the highest value in the data set is represented by the highest note pitch available. The notes in between are assigned proportionally by their data value between the lowest and highest pitched notes. The lowest and highest notes available for each track are predefined in the piece's settings, and they are considered an artistic decision. Each track is given its own customised pitch range so that the tracks may be at a lower pitch, higher pitch or have overlapping pitch ranges relative to other tracks in the piece. The ranges of notes available for the piece “Earth System Allegro” are shown in Fig. 2. In this figure, the four histograms on the left-hand side show the distributions of the data used in the piece, and the right-hand side includes four standard piano keyboards showing the musical range available in each data set. For instance, the Drake Passage current ranges between 135 and 175 Tg s−1 in these simulations, and we selected a range between MIDI pitches 72 and 96. This means that the lowest Drake Passage current values (135 Tg s−1) would be represented in MIDI with a pitch of 72, and the highest Drake Passage current values (175 Tg s−1) would be assigned a MIDI pitch of 96, which is two octaves higher.
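
As an illustration of this proportional mapping, the scaling step could be sketched as follows (a minimal example in Python; the function and variable names are ours and not taken from the published code):

```python
def data_to_pitch(value, data_min, data_max, pitch_min, pitch_max):
    """Linearly map a data value onto a track's MIDI pitch range.

    The result is a rational (float) pitch; binning onto a chord or
    scale happens in a later step.
    """
    fraction = (value - data_min) / (data_max - data_min)
    return pitch_min + fraction * (pitch_max - pitch_min)


# Example: the Drake Passage current range (135-175 Tg/s) mapped onto
# MIDI pitches 72-96, as in the "Earth System Allegro".
print(data_to_pitch(155.0, 135.0, 175.0, 72, 96))  # -> 84.0
```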

These note pitches are then binned into a scale or a chord. The choice of chord or scale depends on the artistic decisions made by the composer. For instance, the C major chord is composed of the notes C, E and G, which are the zeroth, fourth and seventh notes, respectively, in the 12-note chromatic scale, starting from C at zero. Figure 3 shows a representation of these notes on a standard piano keyboard. The C major in the zeroth octave is composed of the following set of MIDI pitch integers:

(1) Cmajor_0 = {0, 4, 7}.

Figure 3. A depiction of a standard piano keyboard showing the names of the notes and their corresponding MIDI pitch numbers. The C major chord is highlighted in green, and the zeroth octave is shown in a darker green than the subsequent octaves.

In the 12-tone equal temperament tuning system, the 12 named notes are repeated, and each distance of 12 notes represents an octave. As shown in Fig. 3, a chord may also include notes from subsequent octaves. In this figure, the C major chord is highlighted in green, and the zeroth octave is shown in a darker green than the subsequent octaves. As such, the C major chord can be formed from any of the following sets of MIDI pitches:

(2) Cmajor_{0,1,2,…} = {0, 4, 7, 12, 16, 19, 24, 28, 31, …, 127}.

It then follows that the notes of the C major chord are the values between 0 and 127 for which the following condition is true:

p ∈ Cmajor_{0,1,2,…}.

This can be written more simply as follows:

p % 12 ∈ Cmajor_0,

where p represents the pitch value, namely an integer between the minimum and maximum pitches provided in the settings, and the percent sign (%) represents the remainder operator.
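
For illustration, this remainder test is straightforward to express in code (a minimal sketch; the names are ours rather than the published implementation):

```python
CMAJOR_0 = {0, 4, 7}  # zeroth-octave pitch classes of the C major chord


def in_c_major(pitch):
    """Return True if a MIDI pitch (0-127) belongs to the C major chord."""
    return pitch % 12 in CMAJOR_0


print([p for p in range(60, 73) if in_c_major(p)])  # -> [60, 64, 67, 72]
```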

The zeroth octave values for other chords and scales with the same root note can be calculated from their chromatic relation with the root note. For instance:

Cminor_0 = {0, 3, 7}
Cmajor7_0 = {0, 4, 7, 11}
Cminor7_0 = {0, 3, 7, 10}

Note that the derivation of these chords and their nomenclature is beyond the scope of this work. For more information on music theory, please consult an introductory guide to music theory such as Schroeder (2002) or Clendinning and Marvin (2016).

The zeroth octave values for other keys can be included by appending the root note of the scale (C: 0, C#/Db: 1, D: 2, D#/Eb: 3 and so on) to the relationships in the key of C above. For instance:

Cmajor_0 = {0, 4, 7}
C#major_0 = {0, 4, 7} + 1 = {1, 5, 8}
Dmajor_0 = {0, 4, 7} + 2 = {2, 6, 9}
D#major_0 = {0, 4, 7} + 3 = {3, 7, 10}

Using these methods, we can combinatorially create a list of all the MIDI pitches in the zeroth octave for all 12 keys for most standard musical chords. From this list, we can convert model data into nearly any choice of chord or scale.
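
A minimal sketch of this combinatorial construction (our own illustration, not the published code) transposes the zeroth-octave pitch classes to any root and expands them across the full MIDI range:

```python
# Zeroth-octave pitch classes for a few common chord qualities, rooted at C (0).
CHORD_SHAPES = {
    "major":  {0, 4, 7},
    "minor":  {0, 3, 7},
    "major7": {0, 4, 7, 11},
    "minor7": {0, 3, 7, 10},
}

# Chromatic offsets of the 12 root notes.
ROOTS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
         "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}


def chord_pitches(root, quality, low=0, high=127):
    """List every MIDI pitch in [low, high] that belongs to the given chord."""
    classes = {(ROOTS[root] + interval) % 12 for interval in CHORD_SHAPES[quality]}
    return [p for p in range(low, high + 1) if p % 12 in classes]


print(chord_pitches("D", "major", 60, 72))  # -> [62, 66, 69]
```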

The conversion from model data to musical pitch is performed using the following method. First, the data are translated into the pitch scale but kept as a rational number between the minimum and maximum pitch range assigned by the composer for this data set. As an example, in the piece “Earth System Allegro” the Drake Passage current was assigned a pitch range between 72 and 96, as shown in Fig. 2. Once the set of possible integer pitches for a given chord or scale has been produced using the methods described above, the in-scale MIDI pitch with the smallest distance to this rational number pitch is used. As mentioned earlier, the pitch of the MIDI notes must be an integer as there is no capacity for MIDI notes to sit between values on the chromatic scale. The choice of scale is provided in the piece's settings and is an artistic choice made by the composer. Furthermore, instead of using a single chord or scale for a piece, it is also possible to use a repeating pattern of chords or a chord progression. The choice of chords, and the order of chords, is different for each piece. In addition, the number of beats between chord changes, and the number of notes per beat, is also assigned in the settings. Furthermore, each track in a given piece may use a different chord progression.
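
The snapping step itself can be sketched in a few lines (again an illustration with our own names, not the published code):

```python
def snap_to_scale(raw_pitch, allowed_pitches):
    """Return the in-scale MIDI pitch closest to a rational pitch value."""
    return min(allowed_pitches, key=lambda p: abs(p - raw_pitch))


# Example: a raw pitch of 84.4 snapped onto the C major chord between MIDI 72 and 96.
allowed = [72, 76, 79, 84, 88, 91, 96]
print(snap_to_scale(84.4, allowed))  # -> 84
```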

The velocity of notes is determined using a similar method to pitch; the time series data are converted into velocities so that the lowest value in the data set is the quietest value available, and the highest value of the data set is the loudest value available. The notes in between are assigned proportionally by their data value between the quietest and loudest notes. Each track may have its own customised velocity range, such that any given track may be louder or quieter than the other tracks in a piece. The choice of data set used to determine velocity is provided in the settings. We rarely used the same data set for both pitch and velocity. This is because it results in the high-pitch notes being louder and the low-pitch notes being quieter.

After binning the notes into the appropriate scales, all notes are initially the same duration. If the same pitched note is played successively, then the first note's duration is extended and the repeated notes are removed.
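
A sketch of this note-merging step, assuming each note is stored as a [time, pitch, velocity, duration] list in beats (our own illustration):

```python
def merge_repeats(notes):
    """Merge successive notes of equal pitch into one longer note."""
    merged = []
    for note in notes:
        if merged and note[1] == merged[-1][1]:
            merged[-1][3] += note[3]  # extend the previous note's duration
        else:
            merged.append(list(note))
    return merged


print(merge_repeats([[0, 60, 90, 1], [1, 60, 88, 1], [2, 64, 90, 1]]))
# -> [[0, 60, 90, 2], [2, 64, 90, 1]]
```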

A smoothing function may also be applied to the data before the data set is converted into musical notes. Smoothing means that it is more likely that the same pitched note will be played successively, so a track with a larger smoothing window will have fewer notes than a track with a smaller window. From a musical perspective, smoothing slows down the piece by replacing fast short notes with longer slower notes. Smoothing can also be used to slow down the backing parts to highlight a faster moving melody. Nearly all the pieces described here used a smoothing window.
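
One simple choice of smoothing function is a running mean over a fixed window of years; a minimal sketch (with our own assumed handling of the series edges) is as follows:

```python
def smooth(values, window):
    """Running mean over a window of years (shorter windows at the series edges)."""
    half = window // 2
    smoothed = []
    for i in range(len(values)):
        chunk = values[max(0, i - half): i + half + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed


annual_sst = [13.2, 13.4, 13.1, 13.6, 13.8, 13.5]
print(smooth(annual_sst, 3))  # gentler year-to-year changes, hence fewer distinct notes
```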

After applying this method to multiple tracks, they are saved together in a single MIDI file using the Python MIDITime library (Corey, 2016). Having created the MIDI file, the piece is performed by the TiMidity++ digital piano (Izumo and Toivonen, 2004), which converts the MIDI format into a digital audio performance in the MP3 format. In principle, it should be possible to use alternative MIDI instruments, but for this limited study we exclusively used the TiMidity++ digital piano. Where possible, the MIDI files were converted into sheet music portable document format (PDF) files using the MuseScore software (MuseScore BVBA, 2019). However, it is not possible to produce sheet music for all six pieces as some have too many MIDI tracks to be converted to sheet music by this software.
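
A minimal sketch of this assembly step, assuming the MIDITime interface documented by Corey (2016); the notes, tempo and file names are placeholders, and the rendering commands in the comments reflect standard TiMidity++ and FFmpeg usage rather than the exact commands used here:

```python
from miditime.miditime import MIDITime

# One track of [time_in_beats, pitch, velocity, duration_in_beats] notes.
notes = [[0, 72, 100, 1], [1, 76, 100, 1], [2, 79, 100, 2]]

midi = MIDITime(120, "earth_system_piece.mid")  # 120 beats per minute
midi.add_track(notes)
midi.save_midi()

# The MIDI file can then be rendered to audio and compressed, for example:
#   timidity earth_system_piece.mid -Ow -o earth_system_piece.wav
#   ffmpeg -i earth_system_piece.wav earth_system_piece.mp3
```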

Each piece has a diverse range of settings and artistic choices made by the composer, including the choice of data sets used to determine pitch and velocity for each track, the pitch and velocity ranges for each track, the piece's tempo and the number of notes per beat, the musical key and chord progression for each track, and the width of the smoothing window. The choice of instrument is also another artistic choice, although in this work only one instrument was used, namely the TiMidity++ piano synthesiser. As a whole, these decisions allow the composer to attempt to define the emotional context of the final piece. For instance, a fast-paced piece in a major progression may sound happy and cheerful to an audience who are used to associating fast-paced songs in major keys with happy and cheerful environments. It should be mentioned that there are no strict rules governing the emotional context of chords, tempo or instrument, and the emotional contexts of harmonies, timbres and tempos differ between cultures. Nevertheless, through exploiting the standard behaviours of Western musical traditions, the composer can attempt to imbue the piece with emotional musical cues that fit the theme of the piece or the behaviour of the underlying climate data.
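
To make these choices concrete, a hypothetical per-piece settings dictionary might look like the following; every key and value here is illustrative rather than taken from the published configuration:

```python
# A hypothetical settings dictionary for a piece (field names are illustrative only).
allegro_settings = {
    "tempo_bpm": 140,
    "beats_per_chord": 4,
    "notes_per_beat": 1,
    "chord_progression": ["Cmajor", "Gmajor", "Aminor", "Fmajor"],
    "tracks": {
        "drake_passage_current": {
            "pitch_range": (72, 96),
            "velocity_range": (70, 110),
            "smoothing_window": 1,
        },
        "total_air_sea_flux_of_co2": {
            "pitch_range": (48, 72),
            "velocity_range": (60, 100),
            "smoothing_window": 5,
        },
    },
}
```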

To create a video, we produced an image for each time step in each piece. These figures show the data once they have been converted and binned into musical notes using units of the original data. A still image from each video is shown in Fig. 4. The FFmpeg video editing software (FFmpeg Developers, 2017) was used to convert the set of images into a video and to add the MP3 as the soundtrack.
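
This video assembly step could be scripted roughly as follows (the frame naming, frame rate and codec options are our assumptions, not the published settings):

```python
import subprocess

# Assemble the per-time-step frames into a video and attach the MP3 soundtrack.
subprocess.run([
    "ffmpeg",
    "-framerate", "25",
    "-i", "frames/frame_%04d.png",   # one image per time step
    "-i", "earth_system_piece.mp3",  # soundtrack rendered from the MIDI file
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "-shortest",
    "earth_system_piece.mp4",
], check=True)
```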

The finished videos were uploaded onto the lead author's YouTube channel (de Mora, 2019).

Figure 4. The final frame of each of the six videos. The frames of the videos are shown in the order that they were published. The videos (1), (3), (5) and (6) use a consistent x axis for the duration of the video, but videos (2) and (4) have rolling x axes that change over the course of the video. This means that panels (2) and (4) show only a small part of the time range. Panel (5) includes two vertical lines showing the jumps in the spin-up piece. Panel (6) shows a single vertical line for the crossover between the historical and future scenarios.

4 Works

Six pieces were composed, generated and published using the methods described here. These pieces and their web addresses are listed below. Note that each of these videos was last accessed on 17 August 2020, before this paper was published.

  1. “Earth System Allegro”; https://www.youtube.com/watch?v=RxBhLNPH8ls

  2. “Pre-industrial Vivace”; https://www.youtube.com/watch?v=Hnkvkx4BMk4

  3. “Ocean Acidification in E minor”; https://www.youtube.com/watch?v=FPeSAA38MjI

  4. “Sea Surface Temperature Aria”; https://www.youtube.com/watch?v=SYEncjETkZA

  5. “Giant Steps Spin Up”; https://www.youtube.com/watch?v=fSK6ayp4i4w

  6. “Seven Levels of Climate Change”; https://www.youtube.com/watch?v=2YE9uHBE5OI

The main goals of the work were to generate music using climate model data and to use music to illustrate some standard practices in Earth system modelling that might not be widely known outside our community. Beyond these broader goals, each piece had its own unique aim. The “Earth System Allegro” demonstrates the principles of sonification using UKESM1 data. The “Pre-industrial Vivace” introduces the concept of a PI control simulation and highlights how an emotional connection can be made between the model output and the sonification of the data. The goal of the “Sea Surface Temperature Aria” is to demonstrate the range of behaviours of the future climate projections. “Ocean Acidification in E minor” aims to show the impact of rising atmospheric CO2 on ocean acidification and also to illustrate how historical runs are branched from the PI control. The “Giant Steps Spin Up” shows the process of spinning up the marine component of the UKESM1, and finally, the “Seven Levels of Climate Change” uses the musical principles of jazz harmonisation to distinguish the full set of UKESM1's future scenario simulations.

These six pieces are summarised in Fig. 4 and Table 1. Figure 4 shows the final frame of each of the pieces, and Table 1 shows the summary of each of the videos, including the publication date and duration, and lists the experiments and data sets used to generate the piece.

Table 1. The video publication details, including the publication date, the duration, the Coupled Model Intercomparison Project (CMIP) experiment names and the data sets used. Note: DIC – dissolved inorganic carbon; PI control – pre-industrial control; SSP – shared socioeconomic pathway; SST – sea surface temperature. Each of these videos was last accessed on 17 August 2020, before this paper was published.


4.1 “Earth System Allegro”

The “Earth System Allegro” is a relatively fast-paced piece in C major, showing some important metrics of the Southern Ocean in the recent past and projected into the future with the shared socioeconomic pathway (SSP) scenario, SSP1 1.9. The SSP1 1.9 projection is the future scenario in which the anthropogenic impact on the climate is the smallest. The C major scale is composed of only natural notes (no sharp or flat notes), making it one of the first scales that people encounter when learning music. In addition, major chords and scales like C major typically sound happy. Christian Schubart's “Ideen zu einer Aesthetik der Tonkunst” (1806) describes C major as “Completely pure. Its character is: innocence, simplicity, naivety, children's talk” (Schubart and DuBois, 1983). By choosing C major, an upbeat tempo and data from the best possible climate scenario (SSP1 1.9), we aimed to start the project with a piece with a sense of optimism about the future climate and to introduce the principles of musification of the UKESM1 time series data.

The Drake Passage current, shown in panel (1) of Fig. 4, is a measure of the strongest current in the ocean, namely the Antarctic Circumpolar Current. This is the current that flows eastwards around Antarctica. The second data set, shown here in orange, is the global total air to sea flux of CO2. This field shows the global total atmospheric carbon dioxide that is absorbed into the ocean each year. Even under SSP1 1.9, UKESM1 predicts that this value would rise from around zero during the pre-industrial period to a maximum of approximately 2 Pg of carbon per year around the year 2030, followed by a return to zero at the end of the century. The third field is the sea ice extent of the Southern Hemisphere, shown in blue. This is the total area of the ocean in the Southern Hemisphere which has more than 15 % ice coverage per grid cell of our model. The fourth field is the Southern Ocean mean surface temperature, shown in green, which rises slightly from approximately 5 °C in the pre-industrial period up to a maximum of 6 °C. The ranges of each data set are illustrated in Fig. 2.

In this piece, the Drake Passage current is set to the C major scale, but the other three parts modulate between the C major, G major, A minor and F major chords. These are the first, fifth, sixth and fourth chords in the key of C major. This progression is strikingly popular and may be heard in songs such as “Let it be” by the Beatles, “No woman no cry” by Bob Marley and the Wailers, “With or without you” by U2, “I'm yours” by Jason Mraz and “Africa” by Toto, among many others. By choosing such a common progression, we were aiming to introduce the concept of musification of data using familiar-sounding music and to avoid alienating the audience.

4.2 “Pre-industrial Vivace”

The “Pre-industrial Vivace” is a fast-paced piece in C major, showing various metrics of the behaviour of the global ocean in the PI control run. The PI control run is a long-term simulation of the Earth's climate without the impact of the industrial revolution or any of the subsequent human impact on climate. At the time that the piece was created, there were approximately 1400 simulated years. We use the control run as the starting point for historical simulations but also to compare the difference between human-influenced simulations and simulations of the ocean without any anthropogenic impact.

The final frame of the “Pre-industrial Vivace” video is shown in panel (2) of Fig. 4. The top pane of this video shows the global marine primary production in purple. The primary production is a measure of how much marine phytoplankton is growing. Similarly, the second pane shows the global marine surface chlorophyll concentration in green; this line rises and falls alongside the primary production in most cases. The third and fourth panes show the global mean sea surface temperature and sea surface salinity (SSS) in red and orange. The fifth pane shows the global total ice extent. These five fields are an overview of the behaviour of the pristine natural ocean of our Earth system model. There is no significant drift, and there is no long-term trend in any of these fields. However, there is significant natural variability operating at decadal and millennial scales.

As with the “Earth System Allegro”, “Pre-industrial Vivace” uses the familiar C major scale but adds a slight variation to the chord progression. The first half of the progression is C major, G major, A minor and F major, but it follows with a common variant of this progression, namely C major, D minor, E minor and F major. Through using the lively vivace tempo and a familiar chord progression in a major key, this piece aims to use musification to link the PI control simulation with a sense of happiness and ease. The lively, fast and jovial tone of the piece should match the pre-industrial environment, which is free running and uninhibited by anthropogenic pollution.

4.3 “Sea Surface Temperature Aria”

The “Sea Surface Temperature Aria” demonstrates the change in sea surface temperature in the PI control run, the historical scenario and under three future climate projection scenarios, as shown in panel (3) of Fig. 4. The first scenario is the “business as usual” scenario (SSP5 8.5; shown in red) in which human carbon emissions continue without mitigation. The second scenario is an “overshoot” scenario, namely an SSP5 3.4-overshoot, in which emissions continue to grow but then drop rapidly in the middle of the 21st century, as shown in orange. The third scenario is SSP1 1.9, labelled as the “Paris Agreement” scenario and shown in green, in which carbon emissions drop rapidly from the present day. The goal of this piece is to demonstrate the range of differences between some of the SSP scenarios on sea surface temperature.

The PI control run and much of the historical scenario data are relatively constant. However, they start to diverge in the 1950s. In the future scenarios, the three projections all behave similarly until the 2030s; then the SSP1 1.9 scenario branches off and maintains a relatively constant global mean sea surface temperature. The SSP5 3.4 scenario's SST continues to grow until the year 2050, while the SSP5 8.5 scenario's SST grows until the end of the simulation.

Musically, this piece is consistently in the scale of A minor harmonic, with no modulating chord progression. The minor harmonic scale is a somewhat artificial scale in that it augments the seventh note of the natural minor scale. The augmented seventh means that there is a minor third between the sixth and seventh notes, making it sound uneasy and sad (at least to the author's ears). An aria is a self-contained piece for one voice, normally within a larger work. In this case, the name “aria” is used to highlight that only one data set, namely the sea surface temperature, participates in the piece. This piece starts relatively low and slow, then grows higher and louder as the future scenarios are added to the piece. The unchanging minor harmonic scale, slow tempo and pitch range were chosen to elicit a sense of dread and discord as the piece progresses to the catastrophic SSP5 8.5 scenario at the end of the 21st century.

4.4 “Ocean Acidification in E minor”

“Ocean Acidification in E minor” demonstrates the standard modelling practice of branching historical simulations from the PI control run and the impact of rising anthropogenic carbon on the ocean carbon cycle. The final frame of this video is shown in panel (4) of Fig. 4. The top pane shows the global mean dissolved inorganic carbon (DIC) concentration in the surface of the ocean, and the lower pane shows the global mean sea surface pH. In both panes, the PI control run data are shown as a black line, and the coloured lines represent the 15 historical simulations.

This piece uses a repeating “12 bar blues” structure in E minor and a relatively fast tempo. This chord progression is an exceptionally common progression, especially in blues, jazz and early rock and roll. It is composed of four bars of E minor, two bars of A minor, two bars of E minor, then one bar each of B minor, A minor, E minor and B minor. The 12 bar blues can be heard in songs such as “Johnny B. Goode” by Chuck Berry, “Hound dog” by Elvis Presley, “I got you (I feel good)” by James Brown, “Sweet home Chicago” by Robert Johnson or “Rock and roll” by Led Zeppelin. In the context of Earth system music, the 12 bar pattern, with its opening set of four bars, then two sets of two bars, and ending with four sets of one bar between chord changes, drives the song forward before starting again slowly. This behaviour is thematically similar to the behaviour of the ocean acidification in the UKESM1 historical simulations, in which the bulk of the acidification occurs at the end of each historical period.
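
For illustration, the repeating 12 bar structure described above can be written out as a per-bar chord list (our own representation, not the published settings):

```python
# The repeating 12 bar blues in E minor, one chord per bar.
twelve_bar_e_minor = (
    ["Eminor"] * 4 +
    ["Aminor"] * 2 +
    ["Eminor"] * 2 +
    ["Bminor", "Aminor", "Eminor", "Bminor"]
)
assert len(twelve_bar_e_minor) == 12
```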

This video highlights that the marine carbon system has been heavily impacted over the historical period. In the PI control runs, both the pH and the DIC are very stable. However, in all historical simulations with rising atmospheric CO2, the DIC concentration rises and the pH falls. The process of ocean acidification is relatively simple and well understood (Caldeira and Wickett, 2003; Orr et al., 2005). The atmospheric CO2 is absorbed from the air into the ocean surface, which releases hydrogen ions into the ocean, making the ocean more acidic. The concentration of DIC in the sea surface is closely linked with the concentration of atmospheric CO2, and it rises over the historical period. This behaviour was observed in every single UKESM1 historical simulation. This video also illustrates an important part of the methodology used to produce models of the climate that may not be widely known outside our community. When we produce models of the Earth system, we use a range of points of the PI control as the initial conditions for the historical simulations. All the historical simulations have slightly different starting points and evolve from these different initial conditions, which gives us more confidence that the results of our projections are due to changes since the pre-industrial period instead of simply being a consequence of the initial conditions. In this figure, the historical simulations are shown where they branch from the PI control run instead of using the “real” time as the x axis.

4.5 “Giant Steps Spin Up”

This piece combines the spin up of the United Kingdom Earth System Model with the chord progression of John Coltrane's “Giant steps” (Coltrane, 1960). The spin up is the process of running the model from a set of initial conditions to a near-steady-state climate. When a model reaches a steady state, this means that there is no significant trend or drift in the mean behaviour of several key metrics. For instance, as part of the Coupled Climate Carbon Cycle Model Intercomparison Project (C4MIP) protocol, Jones et al. (2016) suggest a drift criterion of less than 10 Pg of carbon per century in the absolute value of the flux of CO2 from the atmosphere to the ocean. In practical terms, the ocean model is considered to be spun up when the long-term average of the air–sea flux of carbon is consistently between −0.1 and 0.1 Pg of carbon per year.

The spin up is a crucial part of model development. Without spinning up, the historical ocean model would still be equilibrating with the atmosphere. It would be much more difficult to separate the trends in the historical and future scenarios from the underlying trend of a model still trying to equilibrate. Note that while a steady-state model does not have any significant long-term trends or drifts, it can still have short-term variability. This short-term variability can be seen in the pre-industrial simulation in the “Pre-industrial Vivace” piece. It can take thousands of simulated years for the ocean to reach a steady state. In our case, the spin up ran for approximately 5000 simulated years before the spun-up drift criterion was met (Yool et al., 2020).

The UKESM1 spin up was composed of several phases in succession. The first stage was a fully coupled run using an early version of UKESM1. Then, an ocean-only run was started using a 30 year repeating atmospheric forcing data set. The beginning of this part of the run is considered to be the beginning of the spin up, and the time axis is set to zero at the start of this run; this is because the early version of UKESM1 did not include a carbon system in the ocean. After about 1900 years of simulating the ocean with the repeating atmospheric forcing data set, we found that some changes were needed to the physical model. At this point, we initialised a new simulation from the final year of the previous stage and changed the atmospheric forcing. This second ocean-only simulation ran until the year 4900. At that point, we finished the spin up with a few hundred years of the fully coupled UKESM1 with ocean, land, sea ice and atmosphere models. Due to the slow and repetitive nature of the ocean-only spin up, several centuries of data were omitted. These are marked as grey vertical lines in the video and panel (5) of Fig. 4.

The piece is composed of several important metrics of the spin up in the ocean, such as the Atlantic meridional overturning current (purple), the Arctic Ocean total ice extent (blue), the global air–sea flux of CO2 (red), the volume-weighted mean temperature of the Arctic Ocean (orange), the surface mean DIC in the Arctic Ocean (pink) and the surface mean chlorophyll concentration in the Arctic Ocean (green).

The music is based on the chord progression from John Coltrane's jazz standard “Giant steps”, although the musical progression was slowed to one chord change per four beats instead of a change at every beat. This change occurred by accident, but we found that the full-speed version sounded very chaotic, so the slowed version was published instead. This piece was chosen because it has a certain notoriety due to the difficulty for musicians of improvising over the rapid chord changes. In addition, “Giant steps” was the first new composition to feature Coltrane changes. Coltrane changes are a complex cyclical harmonic progression which forms a musical framework for jazz improvisation. We hoped that the complexity of the Earth system model would be reflected in the complexity of the harmonic structure of the piece. The cyclical relationship of the Coltrane changes also reflects the 30 year repeating atmospheric forcing data set used to spin up the ocean model.

4.6 “Seven Levels of Climate Change”

This piece is based on a YouTube video by Adam Neely, called “The 7 levels of jazz harmony” (Neely2019). In that video, Neely demonstrates seven increasingly complex levels of jazz harmony by re-harmonising a line of the chorus of Lizzo's song “Juice”. We have repeated Neely's re-harmonisation of “Juice” here, such that each successive level's note choice is informed by Earth system simulations, with increasing levels of emissions and stronger anthropogenic climate change.

At the time of writing, UKESM1 had produced simulations of seven future scenarios. The seven scenarios of climate change and their associated jazz harmony are as follows:

  • Level 0: PI control – original harmony

  • Level 1: SSP1 1.9 – four note chords

  • Level 2: SSP1 2.6 – tritone substitution

  • Level 3: SSP4 3.4 – tertiary harmony extension

  • Level 4: SSP5 3.4 (overshoot) – pedal point

  • Level 5: SSP2 4.5 – non-functional harmony

  • Level 6: SSP3 7.0 – liberated dissonance

  • Level 7: SSP5 8.5 – fully chromatic

Note that we were not able to reproduce Neely's seventh level, namely intonalism or xenharmony. In this level, the intonation of the notes is changed depending on the underlying melody. Unfortunately, the MIDITime Python interface for MIDI has not yet reached such a level of sophistication. Instead, we simply allow all possible values of the 12-note chromatic scale.

The data sets used in this piece are a set of global-scale metrics that show the bulk properties of the model under the future climate change scenarios. They include the global mean SST (red), the global mean surface pH (purple), the Drake Passage current (yellow), the global mean surface chlorophyll concentration (green), the global total air to sea flux of CO2 (gold) and the global total ice extent (blue). As the piece progresses through the seven levels, the anthropogenic climate change in the model becomes more extreme, matching the increasingly esoteric harmonies of the music.

5 Limitations and potential extensions

We have successfully demonstrated that it is possible to generate music using data from the UK's Earth System Model. We have also shown that we can illustrate some standard practices in Earth system modelling using music. Within the framework of this pilot study, we must also raise some limitations and suggest some possible extensions for future versions of this work.

A significant omission from this study is the measurement of the impact, the reach or the engagement of these works. We did not test whether the audience was composed of laypeople or experts. We did not investigate whether the audience learnt anything about Earth system modelling through this series of videos. We did not monitor the audience reactions or interpretations of the music. Future extensions of this project should include a survey of the audience to investigate their backgrounds and demographics, what they learnt about Earth system models, and their overall impressions of the pieces. This could take the form of an online survey associated with each video or a discussion with the audience at a live performance.

In addition, in this work, we make no effort to monitor or describe the reach of the YouTube videos or to track comments, subscriptions or the sources of the views. While some tools are available for monitoring the viewership of videos within YouTube's content creator toolkit, YouTube Studio (Google, 2019), a preliminary investigation found that it was not possible to use these tools alone to create a sufficiently detailed analysis of the impact, reach or dissemination of these music creation methods. YouTube Studio currently includes some demographic details, including gender, country of origin, viewership age and traffic source, but it is not sufficient for an audience survey. This toolkit was built to help content creators monitor and build their audience and to monetise videos using advertisements. It is not fit for the purpose of scientific engagement monitoring. For instance, it was not possible to use YouTube Studio to determine the expertise of the audience, their thoughts on climate change, whether they read the video description section or whether they understood the description. Some of these features could be added to YouTube by Google, but many of them would require the audience survey described above.

Our videos only include the music and a visualisation of the data; they do not include any description of how the music was generated or the Earth system modelling methods used to create the underlying data. The explanations of the science and musification methodologies are given in a description below the video, and viewers must expand this box by clicking the “show more” button. Using the tools provided in YouTube Studio, it is not currently possible to determine whether the viewers have expanded, read or understood the description section. When we have shown these videos to audiences at scientific meetings and conferences, it has always been associated with a brief explanation of the methods. In future, this explanatory preface to the work could be included in the video itself, or as a separate video, in addition to the text below the video in the description section. This would likely increase the audience's understanding of our music-generation process.

If additional pieces were made, there are several potential ways that the methodology used to create them could be improved relative to the methods used to create the initial set of videos. In future versions of this work, it should be possible to use the ESMValTool (Righi et al., 2019) to produce the time series data instead of BGC-val. This would make the production of the time series more easily repeatable but would also make it easier for pieces to be composed using data available in the CMIP5 and CMIP6 coupled model intercomparison projects. This broadens the scope of the data by allowing other models, other model domains, including the atmosphere and the land surface, and even observational data sets. For instance, we could make a multi-model intercomparison piece or a piece based on the atmospheric, terrestrial and ocean components of the same model. In addition, using ESMValTool would also make it more straightforward to distribute the source code that was used to make these pieces.

In his reflections on auditory graphics, Flowers (2005) lists several “things that work” and “approaches that do not work”. From the list of things that work, we included four of the five methods: pitch coding of numeric data, the exploitation of the temporal resolution of human audition, manipulating loudness changes and using time as time. We were not able to include the selection of distinct timbres to minimise stream confusion. From the list of approaches that do not work, we successfully avoided several of the pitfalls, notably pitch mapping to continuous variables and using loudness changes to represent an important continuous variable. However, we did include one of the approaches that Flowers did not recommend: we simultaneously plotted several variables with similar pitches and timbres. It is worth noting that maximising the clarity of the sonification is the goal of Flowers (2005), whereas our focus was to produce and disseminate some relatively listenable pieces of music using UKESM1 data.

The two suggestions by Flowers (2005) that we failed to address were both related to using the same digital piano timbre for all of the data. Due to the technical limitations of using TiMidity++, we were not able to vary the instruments used, and thus there was very little variability in terms of the timbres. These pieces were all performed by the same instrument, a solo piano, which limits the musical diversity of the set of pieces. In addition, each data set in a given piece was performed by the same instrument, making it difficult to distinguish the different data sets being performed simultaneously. Further extensions of this work could use a fully featured digital audio workstation to access a range of digital instruments beyond the digital piano, such as a string quartet, a horn and woodwind section, a full digital orchestra, electric guitar and bass, percussive instruments, or electronic synthesised instruments. This would comply with the suggestions listed in Flowers (2005), allowing the individual data sets to stand out musically from each other in an individual piece, but would also lead to a much more diverse set of musical pieces.

From a musical perspective, there are many ways to improve the performances of the pieces for future versions of this work. As raised in the comments from social media, a human pianist would be able to add a warmth to the performance that is beyond the abilities of MIDI interpreters. A recording of a human performance could also add the hidden artefacts of a live recording, such as room noise, stereo effects and natural reverb. On the other hand, due to the nature of the process used to generate these pieces, it may not be possible for a single human to perform several of them because of their speed, complexity, number of simultaneous notes or range. Alternatively, it may be possible to “humanise” the MIDI by making subtle changes to the timing and velocities of the MIDI notes. This is a recording technique that can take a synthesised, perfectly timed beat and make it sound like it is being played by a human. It does this by moving the individual notes slightly before or after the beat and adding subtle variations in the velocity (Walden, 2017). Also, TiMidity++ uses the same piano sample for each pitch. This means that when two tracks of a piece play the same pitch at the same time, exactly the same sample is played twice simultaneously. These two identical sample sound waves are added constructively, and the note jumps out much louder than it would if a human played the part. A fully featured digital piano or a human performance would remove these loud jumps and would also be able to add more nuance and warmth to the performance. Finally, the published pieces had no mastering or post-production. Even a basic mastering session by a professional sound engineer would likely improve the overall quality of the sound of these pieces.

In terms of the selection of chord progressions, tempo and rhythms, it may be possible to target specific audiences using music based on popular artists or genres. For instance, the reach of a piece might be increased by responding to viral videos or by basing a work on a popular song.

In these works, we have focused on reproducing Western music, both traditional and modern, in order to connect each piece with the associated emotional musical cues. Alternatively, there is a significant diversity in traditional and modern styles of music from other regions around the world; a much wider range of rhythms, timbres, styles and emotional cues could be exploited in future extensions of this work.

With regards to the visual aspect of these videos, it should be straightforward to improve the quality of the graphics used. The current videos only show a simple scalar field as it develops over time. They could be improved by adding animated global maps of the model, interviews or live performances to the video. It may also be a positive addition to preface the videos with a brief explanation of the project and the methods deployed. On the technical side, there may also be some visual glitches and artefacts which arise due to YouTube's compression or streaming algorithms. A different streaming service or alternative video making software might help remove these glitches.

YouTube videos are typically shown in the suggestions queue with a thumbnail image and the video title. The thumbnail is the graphic placeholder that represents the video while it is not playing, whether on YouTube as a suggested video or in Facebook or Twitter feeds. The thumbnail is how viewers first encounter the video, and it is a crucial part of attracting an audience. There are many guides to creating better thumbnails (Kjellberg and PewDiePie, 2017; Video Influencers, 2016; Myers, 2019). Future works should attempt to optimise the video thumbnail to attract a wider audience.

While we did not investigate the reach or dissemination of these pieces in this work, if the goal of future projects were to increase the online audience size, then it might be possible to reach a wider audience using a press release, a public screening of the videos, a scheduled publication date or a collaboration with other musicians or YouTube content creators. It may also be possible to host a live concert, make a live recording or broadcast a YouTube live stream. It is not fully understood how a video goes viral, but it has been shown that view counts can rise exponentially when a single person or organisation with a large audience shares a video (West, 2011; Jiang et al., 2014). Improvements to the music, the video, the description and the thumbnail make it more likely for such an influencer with a large audience to like, share or retweet a piece, which could result in a significant increase in the audience size and view count. The videos in this work were posted online in an ad hoc fashion as soon as they were finished. To maximise the number of views, experts have recommended publishing consistent, scheduled weekly videos, late in the week and in the afternoon (Cox, 2017; Think Media, 2017). Finally, it should be possible to increase the reach of this work through paid advertising on YouTube and other social media platforms. This would place the videos higher in the suggested video rankings and on the discovery queues.

6 Conclusions

In this work, we took data from the first United Kingdom Earth System Model and converted them into six musical pieces and videos. These pieces covered the core principles of climate modelling or ocean modelling, namely PI control runs, the spin-up process, multiple future scenarios, the Drake Passage current, the air–sea flux of CO2 and the Atlantic meridional overturning circulation. While limited to a single instrument, namely the synthesised piano, they included a range of musical styles, including classical, jazz, blues and contemporary styles.

While the wider public are likely to be familiar with climate change, they are less likely to be familiar with our community's methods. In fact, many standard tools in the arsenal of climate modellers may not be widely appreciated outside our small community, even elsewhere in the scientific community. These six musical pieces open the door to a new, exciting and fun approach to engaging with fellow scientists and the wider public.

We have also discussed several ways of improving future iterations of this pilot study. Future works could be performed to a live audience or in collaboration with other musicians, and the viewership would likely be increased by improved video graphics, better thumbnails, live performances, greater video diversity and a more frequent upload rate. The scientific content of the videos could be expanded by accessing new data sets, other components of UKESM1, other CMIP models or observational data sets. The quality of the music could be improved by including additional instruments and musical genres and by making live recordings instead of MIDI performances. The knowledge transfer aspect of the project could be improved by adding explanations of the science to the videos and by surveying the audience to identify the impact of these works.
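
As one example of how new data sets could be brought into the same pipeline, the sketch below shows how an annual-mean time series might be extracted from a CMIP6 NetCDF file using the xarray Python library. This is not the tool chain used in this work (our time series were produced with BGC-val), the file name is hypothetical, and a proper analysis would also apply area weighting using the model's cell-area variable.

    # Illustrative sketch: derive an annual-mean scalar time series from a CMIP6
    # NetCDF file, ready to be sonified. Unweighted spatial mean for brevity.
    import xarray as xr

    ds = xr.open_dataset("tos_Omon_UKESM1-0-LL_historical_r1i1p1f2_gn_185001-194912.nc")
    tos = ds["tos"]                                      # sea surface temperature field
    spatial_dims = [d for d in tos.dims if d != "time"]  # average over all non-time dimensions
    annual_series = tos.mean(dim=spatial_dims).groupby("time.year").mean()
    print(annual_series.values)                          # one value per simulated year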

Finally, the authors would like to encourage other scientists to think about how their work may be sonified. You may have beautiful and unique music hidden within your data; the methods described in this work would allow it to be made manifest.
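
To illustrate the core idea for readers who wish to experiment with their own data, the following minimal sketch rescales a scalar time series onto a fixed musical scale and writes the result as a MIDI file. It uses the freely available mido Python library rather than the MIDITime library used in this work, and the data values are invented purely for illustration.

    # Minimal sonification sketch: map a scalar time series onto a musical scale
    # and write it out as a MIDI file. Uses mido (not the MIDITime library used
    # in this work); the data values below are invented.
    import mido

    data = [14.1, 14.3, 14.2, 14.6, 15.0, 15.4]  # e.g. annual means of some ocean metric
    scale = [60, 62, 64, 65, 67, 69, 71, 72]     # C major scale as MIDI note numbers

    lo, hi = min(data), max(data)
    notes = [scale[round((v - lo) / (hi - lo) * (len(scale) - 1))] for v in data]

    mid = mido.MidiFile()                        # default resolution: 480 ticks per beat
    track = mido.MidiTrack()
    mid.tracks.append(track)
    for note in notes:
        track.append(mido.Message('note_on', note=note, velocity=80, time=0))
        track.append(mido.Message('note_off', note=note, velocity=80, time=480))  # one beat per data point
    mid.save('timeseries_sketch.mid')

Changing the scale, tempo or velocity mapping alters the character of the resulting piece without altering the underlying data.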

Data availability

The sheet music for four of the pieces and the MIDI files for all six pieces are available alongside this publication in the Supplement. Note that it was not possible to produce sheet music for “Ocean Acidification in E minor” or “Seven Levels of Climate Change”, as these pieces contain too many MIDI tracks. The UKESM1 model data used in this work are available via the World Climate Research Programme (WCRP) CMIP6 data interface at https://esgf-node.llnl.gov/projects/cmip6/ (last access: 17 August 2020; WCRP, 2020).

Video supplement

These videos are published online on the YouTube channel https://www.youtube.com/c/LeedeMora (last access: 17 August 2020; de Mora2019).

The videos described here are distributed under the standard YouTube Licence. The chord progressions from John Coltrane's “Giant steps”, Lizzo's “Juice” and Adam Neely's re-harmonisation of “Juice” were reproduced under fair use without explicit permission from the copyright owners.

Supplement

The supplement related to this article is available online at: https://doi.org/10.5194/gc-3-263-2020-supplement.

Author contributions

LdM used BGC-val to produce the model time series data, sonified the BGC-val data, published the videos and prepared the text. AAS, RSS and JW provided feedback and held early discussions on the music. AY, JP and TK helped develop the core time series data sets in UKESM1. RJP shared the finished videos and provided audience feedback. JCB and CGJ led the PML modelling group and UKESM1 projects, respectively, and both provided crucial feedback and support.

Competing interests

The authors declare that they have no conflict of interest.

Like most YouTube content creators, Lee de Mora has a financial relationship with YouTube. However, at the time of writing, the channel to which these videos were posted did not meet YouTube's monetisation requirements (i.e. 1000 subscribers and 4000 h watched).

Acknowledgements

Lee de Mora, Andrew Yool, Julien Palmieri, Robin S. Smith, Till Kuhlbrodt, Robert J. Parker, Jeremy C. Blackford and Colin G. Jones were supported by the Natural Environment Research Council (NERC) National Capability Science Multi-Centre (NCSMC) funding for the UK Earth System Modelling project. Alistair A. Sellar and Jeremy Walton were supported by the Met Office Hadley Centre Climate Programme funded by BEIS and Defra. The following funding is also acknowledged for the following contributors: Colin G. Jones, Till Kuhlbrodt and Robin S. Smith (grant no. NE/N017978/1); Robert J. Parker (grant no. NE/N018079/1); and Andrew Yool, Julien Palmieri, Lee de Mora and Jeremy C. Blackford (grant no. NE/N018036/1). Colin G. Jones, Till Kuhlbrodt, Andrew Yool and Julien Palmieri additionally acknowledge the EU Horizon 2020 CRESCENDO Project (grant no. 641816).

We acknowledge use of the MONSooN2 system, a collaborative facility supplied under the joint Weather and Climate Research Programme, which is a strategic partnership between the Met Office and the Natural Environment Research Council.

The simulation data used in this study are archived at the Met Office and are available for research purposes through the JASMIN platform (http://www.jasmin.ac.uk, last access: 17 August 2020) maintained by the Centre for Environmental Data Analysis (CEDA).

The authors would like to thank our handling editor at Geoscience Communication, Sam Illingworth, and the referees. Their contributions were valuable and resulted in a significantly improved paper.

Finally, the authors would also like to thank anyone who took the time to watch a video, leave a comment, use the like button, subscribe to the channel or share these videos.

Financial support

This research has been supported by the NERC Environmental Bioinformatics Centre (grant nos. NE/N018036/1 and NE/N018079/1) and the EU Horizon 2020 (grant no. 641816).

Review statement

This paper was edited by Sam Illingworth and reviewed by Solmaz Mohadjer and one anonymous referee.

References

Borromeo, L., Round, K., and Perera, J.: Climate Symphony, available at: https://www.disobedientfilms.com/climate-symphony (last access: 17 August 2020), 2016. a, b

Caldeira, K. and Wickett, M. E.: Anthropogenic carbon and ocean pH, Nature, 425, 365, https://doi.org/10.1038/425365a, 2003. a

Clendinning, J. P. and Marvin, E. W.: The Musician's Guide To Theory And Analysis, W. W. Norton & Company, 3rd Edn., 2016. a

Coltrane, J.: Giant Steps (album), Atlantic Records, Published February 1960. a

Corey, M.: MIDITime python library for MIDI, available at: https://github.com/cirlabs/miditime (last access: 17 August 2020), 2016. a

Cox, S.: How Often Should You Upload Videos to YouTube to Get More Views, available at: https://www.filmora.io/community-blog/how-often-should-you-upload-to-youtube–consistent-posting-187.html (last access: 17 August 2020), 2017. a

Craven, R. M. and Mcindoe, A. K.: Continuous auditory monitoring – how much information do we register?, Brit. J. Anaesth., 83, 747–749, 1999. a

Crawford, D.: Planetary Bands, Warming World string quartet, Video published by Ensia magazine, available at: https://vimeo.com/127083533 (last access: 17 August 2020), 2013. a, b

de Mora, L.: Lee de Mora's YouTube channel homepage, available at: https://www.youtube.com/c/LeedeMora (last access: 17 August 2020), 2019. a, b

de Mora, L., Yool, A., Palmieri, J., Sellar, A., Kuhlbrodt, T., Popova, E., Jones, C., and Allen, J. I.: BGC-val: a model- and grid-independent Python toolkit to evaluate marine biogeochemical models, Geosci. Model Dev., 11, 4215–4240, https://doi.org/10.5194/gmd-11-4215-2018, 2018. a

Eyring, V., Bony, S., Meehl, G. A., Senior, C. A., Stevens, B., Stouffer, R. J., and Taylor, K. E.: Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization, Geosci. Model Dev., 9, 1937–1958, https://doi.org/10.5194/gmd-9-1937-2016, 2016. a

FFmpeg Developers: FFmpeg, a complete, cross-platform solution to record, convert and stream audio and video, available at: https://ffmpeg.org/ (last access: 17 August 2020), 2017. a

Flowers, J. H.: Thirteen years of reflection on auditory graphing: promises, pitfalls and potential new directions, Proceedings of ICAD 05-Eleventh Meeting of the International Conference on Auditory Display, Limerick, Ireland, 6–9 July, 406–409, 2005. a, b, c, d, e

Google: Manage your channel with Creator Studio – Youtube studio support website, available at: https://support.google.com/youtube/answer/9440613 (last access: 17 August 2020), 2019. a

Izumo, M. and Toivonen, T.: TiMidity++ open source MIDI to WAVE converter and player, available at: http://timidity.sourceforge.net/ (last access: 17 August 2020), 2004. a

Jiang, L., Miao, Y., Yang, Y., Lan, Z., and Hauptmann, A. G.: Viral Video Style: A Closer Look at Viral Videos on YouTube, in: Proceedings of International Conference on Multimedia Retrieval, 193–200, https://doi.org/10.1145/2578726.2578754, 2014. a

Jones, C. D., Arora, V., Friedlingstein, P., Bopp, L., Brovkin, V., Dunne, J., Graven, H., Hoffman, F., Ilyina, T., John, J. G., Jung, M., Kawamiya, M., Koven, C., Pongratz, J., Raddatz, T., Randerson, J. T., and Zaehle, S.: C4MIP – The Coupled Climate–Carbon Cycle Model Intercomparison Project: experimental protocol for CMIP6, Geosci. Model Dev., 9, 2853–2880, https://doi.org/10.5194/gmd-9-2853-2016, 2016. a

Kjellberg, F. A. U. and PewDiePie: How to make really good thumbnails on YouTube, available at: https://www.youtube.com/watch?v=Nz3Ngt0AMDA (last access: 17 August 2020), 2017. a

Morris, R. W. and Mohacsi, P. J.: How Well Can Anaesthetists Discriminate Pulse Oximeter Tones?, Anaesth Intensive Care, 33, 497–500, 2005. a

MuseScore BVBA: MuseScore Music Score Editor, available at: https://musescore.com/ (last access: 17 August 2020), 2019. a

Myers, L.: This is How to Create the Best YouTube Thumbnails, available at: https://louisem.com/198803/how-to-youtube-thumbnails (last access: 17 August 2020), 2019. a

Neely, A.: The 7 Levels of Jazz Harmony, available at: https://www.youtube.com/watch?v=lz3WR-F_pnM (last access: 17 August 2020), 2019. a

Orr, J. C., Fabry, V. J., Aumont, O., Bopp, L., Doney, S. C., Feely, R. A., Gnanadesikan, A., Gruber, N., Ishida, A., Joos, F., Key, R. M., Lindsay, K., Maier-Reimer, E., Matear, R., Monfray, P., Mouchet, A., Najjar, R. G., Slater, R. D., Totterdell, I. J., Weirig, M.-F., Yamanaka, Y., and Yool, A.: Anthropogenic ocean acidification over the twenty-first century and its impact on calcifying organisms, Nature, 437, 681–686, https://doi.org/10.1038/nature04095, 2005. a

Pollack, I. and Ficks, L.: The Information of Elementary Multidimensional Auditory Displays, J. Acoust. Soc. Am., 26, p. 136, https://doi.org/10.1121/1.1917759, 1954. a

Revill, G.: Landscape, Music and Sonic Environments, in: The Routledge Companion to Landscape Studies, edited by: Howard, P., Thompson, I., Waterton, E., and Atha, M., chap. 21, Routledge, London, 2nd Edn., p. 650, 2018. a

Ridley, J. K., Blockley, E. W., Keen, A. B., Rae, J. G. L., West, A. E., and Schroeder, D.: The sea ice model component of HadGEM3-GC3.1, Geosci. Model Dev., 11, 713–723, https://doi.org/10.5194/gmd-11-713-2018, 2018. a

Righi, M., Andela, B., Eyring, V., Lauer, A., Predoi, V., Schlund, M., Vegas-Regidor, J., Bock, L., Brötz, B., Mora, L. D., Diblen, F., Dreyer, L., Drost, N., Earnshaw, P., Hassler, B., Koldunov, N., Little, B., Loosveldt, S., and Zimmermann, K.: Earth System Model Evaluation Tool (ESMValTool) v2.0 – technical overview, Geosci. Model Dev., 13, 1179–1199, https://doi.org/10.5194/gmd-13-1179-2020, 2020. a

Rutherford, E. and Royds, T.: Spectrum of the Radium Emanation, Phil. Mag. S., 16, 313–319, https://doi.org/10.1080/14786440808636511, 1908. a

Sanderson, P. M., Liu, D., and Jenkins, S. A.: Auditory displays in anesthesiology, Curr. Opin. Anaesthesio., 22, 788–795, https://doi.org/10.1097/ACO.0b013e3283326a2f, 2009. a

Schroeder, C.: Hal Leonard Pocket Music Theory: A Comprehensive and Convenient Source for All Musicians, Hal Leonard, 2002. a

Schubart, C. F. D. and DuBois, T. A.: Ideen zu einer Ästhetik der Tonkunst: an annotated translation, Ph.D. dissertation, University of Southern California, available at: https://www.musikipedia.dk/dokumenter/boeger/engelsk-tonkunst.pdf (last access: 17 August 2020), 1983. 

Sellar, A. A., Jones, C. G., Mulcahy, J. P., Tang, Y., Yool, A., Wiltshire, A., O'Connor, F. M., Stringer, M., Hill, R., Palmieri, J., Woodward, S., Mora, L., Kuhlbrodt, T., Rumbold, S. T., Kelley, D. I., Ellis, R., Johnson, C. E., Walton, J., Abraham, N. L., Andrews, M. B., Andrews, T., Archibald, A. T., Berthou, S., Burke, E., Blockley, E., Carslaw, K., Dalvi, M., Edwards, J., Folberth, G. A., Gedney, N., Griffiths, P. T., Harper, A. B., Hendry, M. A., Hewitt, A. J., Johnson, B., Jones, A., Jones, C. D., Keeble, J., Liddicoat, S., Morgenstern, O., Parker, R. J., Predoi, V., Robertson, E., Siahaan, A., Smith, R. S., Swaminathan, R., Woodhouse, M. T., Zeng, G., and Zerroukat, M.: UKESM1: Description and Evaluation of the U.K. Earth System Model, J. Adv. Model. Earth Syst., 11, 4513–4558, https://doi.org/10.1029/2019MS001739, 2019. a, b

Smith, B.: Poems for the Earth System Model, Magma poetry, Autumn, 72, 16–19, 2018. a

Storkey, D., Blaker, A. T., Mathiot, P., Megann, A., Aksenov, Y., Blockley, E. W., Calvert, D., Graham, T., Hewitt, H. T., Hyder, P., Kuhlbrodt, T., Rae, J. G., and Sinha, B.: UK Global Ocean GO6 and GO7: A traceable hierarchy of model resolutions, Geosci. Model Dev., 11, 3187–3213, https://doi.org/10.5194/gmd-11-3187-2018, 2018. a

The MIDI Manufacturers Association: The Complete MIDI 1.0 Detailed Specification, The MIDI Manufacturers Association, Los Angeles, CA, 3rd Edn., 1996. a

Think Media: How Often Should You Post on YouTube? – 3 YouTube Upload Schedule Tips, available at: https://www.youtube.com/watch?v=A3kwRAB_-lQ (last access: 17 August 2020), 2017. a

Tsuchiya, T., Freeman, J., and Lerner, L. W.: Data-to-Music API: Real-time data-agnostic sonification with musical structure models, The 21st International Conference on Auditory Display, 244–251, 2015. a

Video Influencers: How to Make a YouTube Custom Thumbnail Tutorial – Quick and Easy, available at: https://www.youtube.com/watch?v=8YbZuaBP9B8 (last access: 17 August 2020), 2016. a

Walden, J.: Cubase: Humanise Your Programmed Drums, available at: https://www.soundonsound.com/techniques/cubase-humanise-your-programmed-drums (last access: 17 August 2020), 2017. a

Walker, B. N. and Nees, M. A.: The Theory of Sonification, in: The Sonification Handbook, edited by: Hermann, T., Hunt, A., and Neuhoff, J. G., chap. 2, Logos Publishing House, Berlin, Germany, 9–39, 2011. a

World Climate Research Programme (WCRP): Coupled Model Intercomparison Project (Phase 6) Data Search interface, available at: https://esgf-node.llnl.gov/projects/cmip6/, last access: 17 August 2020. 

West, T.: Going Viral: Factors That Lead Videos to Become Internet Phenomena, The Elon Journal of Undergraduate Research in Communications, 2, 76–84, 2011. a

Yool, A., Popova, E. E., and Anderson, T. R.: MEDUSA-2.0: An intermediate complexity biogeochemical model of the marine carbon cycle for climate change and ocean acidification studies, Geosci. Model Dev., 6, 1767–1811, https://doi.org/10.5194/gmd-6-1767-2013, 2013. a

Yool, A., Palmiéri, J., Jones, C. G., Sellar, A. A., de Mora, L., Kuhlbrodt, T., Popova, E. E., Mulcahy, J. P., Wiltshire, A., Rumbold, S. T., Stringer, M., Hill, R. S. R., Tang, Y., Walton, J., Blaker, A., Nurser, A. J. G., Coward, A. C., Hirschi, J., Woodward, S., Kelley, D. I., Ellis, R., and Rumbold-Jones, S.: Spin-up of UK Earth System Model 1 (UKESM1) for CMIP6, J. Adv. Model. Earth Syst., 12, e2019MS001933, https://doi.org/10.1029/2019MS001933, 2020. a

1 See https://www.youtube.com/c/LeedeMora (last access: 17 August 2020).

Short summary
We use time series data from the first United Kingdom Earth System Model (UKESM1) to create six procedurally generated musical pieces for piano. Each of the six pieces helps to explain either a scientific principle or a practical aspect of Earth system modelling. We describe the methods that were used to create these pieces, discuss the limitations of this pilot study and list several approaches to extend and expand upon this work.