{"id":1379,"date":"2022-01-07T12:00:06","date_gmt":"2022-01-07T10:00:06","guid":{"rendered":"https:\/\/nivel.teak.fi\/carpa7\/?p=1379"},"modified":"2024-04-19T11:53:26","modified_gmt":"2024-04-19T08:53:26","slug":"machinic-automation-in-the-process-of-text-and-music-composition","status":"publish","type":"post","link":"https:\/\/nivel.teak.fi\/carpa7\/machinic-automation-in-the-process-of-text-and-music-composition\/","title":{"rendered":"Machinic automation in the process of text and music composition:"},"content":{"rendered":"\n<ul class=\"sidenote wp-block-list\">\n<li>Updated April 2024<\/li>\n<\/ul>\n\n\n\n<div class=\"wp-block-group abstrakti\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<p>The idea for the piece \u201cVersificator \u2013 Render 3\u201d originates as a metaphor for the <em>versificator<\/em> created by George Orwell in the novel \u201c1984\u201d, whose main purpose was to act as an automatic generator of literature and music. I am an avid consumer of science fiction literature and films, and while thinking of automated music generation as an instrument of alienation in dystopian literature of the mid-20<sup>th<\/sup> century \u2013 in particular in Orwell and Huxley \u2013 I came up with a thought: what if there were something artistically valuable that could come out of these machines? What if creative machinic automation, instead of being used as an instrument of alienation, could be used as a compositional tool that contributes to creating interesting musical works? The result of implementing a partially automated compositional workflow and exploring it creatively is the piece \u201cVersificator \u2013 Render 3\u201d, for vocal ensemble. In the novel \u201c1984\u201d, the versificator plays a role as a tool of social control. 
In this work, the metaphor of the <em>versificator<\/em> is reframed as a tool employed to artistically investigate music and text composition through the interplay between an automated but still dynamic cycle of generative exploration and the creative subjectivity of a composer.&nbsp;<\/p>\n\n\n\n<p><strong>Keywords:<\/strong> <em>Versificator, Orwell, Computer-aided composition, Constraint Algorithms<\/em><\/p>\n<\/div><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Background<\/h2>\n\n\n\n<p>The inspiration for this composition lies in the <em>still relevant<\/em> literary works of the early-to-mid-20th-century writers Aldous Huxley and George Orwell. As I see it, their explorations of technological dystopias are far from relics of the past: they continue to resonate strongly today, as evidenced by the wealth of subsequent literature and film that echoes their themes of totalitarianism, technology, alienation, and rebellion. In this sense, I feel that the writings of Orwell and Huxley should persistently resonate within the collective consciousness, and ultimately, reflection on the idea for the piece and its workflow should also welcome discussion of these overarching socio-cultural issues.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"773\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig1.jpg\" alt=\"\" class=\"wp-image-1390\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig1.jpg 1000w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig1-300x232.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig1-768x594.jpg 768w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Automated music<\/h2>\n\n\n\n<p>George Orwell\u2019s \u201c1984\u201d (Orwell 2023) and Aldous 
Huxley\u2019s \u201cBrave New World\u201d (Huxley 2013) provide distinct yet related dystopian visions of a futuristic \u2013 for the time the books were written \u2013 Western world. Orwell\u2019s narrative is set in a future where a totalitarian regime maintains power through strict control and mass surveillance, while Huxley\u2019s world presents a technologically advanced society with caste divisions based on genetic manipulation and psychological conditioning. Both authors erase historical and cultural legacies in their respective societies; however, both give artificially generated music an important role in their narratives. Orwell\u2019s \u201c1984\u201d features the <em>versificator<\/em>, an obscure device used by the Ministry of Truth to produce cultural content, media, and entertainment, including automated song and lyric generation, without human intervention. Similarly, in \u201cBrave New World\u201d, music, generated by the <em>synthetic music machines<\/em>, serves as entertainment and promotes societal conformism, preventing people from challenging the prevailing order. Both authors attribute a certain power to these mechanisms as tools for social control. The music from both Orwell\u2019s and Huxley\u2019s generative devices, the <em>versificator<\/em> and the <em>synthetic music machines<\/em>, is portrayed as low-quality, with simplistic melodies and clich\u00e9 lyrics. Unfortunately, we don\u2019t have access to the sonic imagination of Huxley and Orwell to get a better idea of the actual audio realization of these songs<sup>[1]<\/sup>, but it is clear that both authors equate mechanically produced music with a precarious outcome that mainly serves the purpose of social alienation.<\/p>\n\n\n\n<ul class=\"sidenote wp-block-list\">\n<li><sup>[1]<\/sup> It is possible to hear a <em>proletarian<\/em> singing one of the versificator\u2019s songs in the movie \u201c1984\u201d (1984) by M. 
Radford.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Artistic inquiries&nbsp;<\/h2>\n\n\n\n<p>Overall, my <em>versificator<\/em> is a modular system that simultaneously generates and sonifies machine-generated text using rule-based AI methods. This process automates a large part of the generation of musical information in the form of pitches, durations, dynamics, and vocal articulations. In addition, the system provides the possibility of automating the formal structure of the piece and the temporal and textural disposition of the musical material, by carrying out highly complex stochastic calculations. The system thus facilitates diverse processes of musical generation, transformation, and concatenation simply by changing input parameters. However, this dependence on automated processes raises multiple inquiries, especially about the musical aspects that <em>escape<\/em> the computational formalization of the system. Can we dismiss these elements during the music creation process? On the other hand, are the parameters that the system can operate on compositionally rich enough to produce music with depth and significance? Is there still space for subjectivity within the system\u2019s computational framework? And what about the potential for <em>defiance<\/em> of the system? If it exists, in what ways might it be investigated and utilized?<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Implementation<\/h2>\n\n\n\n<p>Initially, I thought of a system that could generate some text and, from this text, automatically derive some music. I decided to try to create some non-sensical poetry and, from it, generate a musical layer (pitches and durations) based on its phonological content. This in a way resembles the functioning of a <em>Text-To-Speech<\/em> (TTS) system. A TTS system generally receives some text input and translates it into the sound of a synthetic voice. 
This translation relies on the artificial recreation of acoustic information from the text by mapping letters to sounds with particular spectral characteristics that make us recognize them as speech sounds<sup>[2]<\/sup>. For example, to synthesize a vowel, the TTS system should contain information on its spectral structure \u2013 the fundamental frequency, formants, and duration \u2013 and use this information to synthesize it. In the case of my system, instead of synthetic speech, the result is a musical unit that maps the spectral structure of a phoneme into a symbolic musical representation.<\/p>\n\n\n\n<ul class=\"sidenote wp-block-list\">\n<li><sup>[2]<\/sup> Probably the most widely used TTS synthesis method from the 80s onwards was a type of synthesizer developed especially after the works of Dennis Klatt: the Klatt synthesizer (Klatt 1980).<\/li>\n<\/ul>\n\n\n\n<p>The core musical material is the phonological content of an imaginary language that draws on a merging of phonemes from Latin and English (see <em>performance notes<\/em>, pages 1 and 2 of the score). This phonological world materializes in three forms: <em>nonsensical<\/em> words, purely vocalic sounds, and purely consonantal sounds. Each of these forms comes out of a different generative module. The system relies mainly on these three text generator modules plus a formal operator module. Within each module exists a complex interaction between multiple constraint rules of different natures. The final complexity is derived from the chaining of simple rules. 
The outcome of each module is a musical unit \u2013 let\u2019s call it a musical phrase \u2013 consisting of both the generated text, as the uttered text for a vocal part, and its sonification as symbolic musical information in the form of pitches, durations, and vocal articulations.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"497\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig2.jpg\" alt=\"\" class=\"wp-image-1391\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig2.jpg 1000w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig2-300x149.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig2-768x382.jpg 768w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><figcaption class=\"wp-element-caption\">Visual interface of the versificator, showing the main window and individual modules.<\/figcaption><\/figure>\n\n\n\n<p>The <em>versificator<\/em> consists of a large Max<sup>[3]<\/sup> patch containing several subpatches. The main patch contains the global view and functionality, and each subpatcher is a generative module. The core functionalities are built mainly upon two Max external libraries: Bach<sup>[4]<\/sup> and MOZ\u2019Lib<sup>[5]<\/sup>. Among many useful tools for computer-assisted composition, the Bach library provides a well-developed music notation interface. The library MOZ\u2019Lib contains an implementation of <em>PWConstraints<\/em>, a LISP-based constraints-solving engine developed by Mikael Laurson (Laurson 1996) for the software PWGL and ported to Max by \u00d6rjan Sandred and Julien Vincenot.<\/p>\n\n\n\n<ul class=\"sidenote wp-block-list\">\n<li><sup>[3]<\/sup> Max 8 is a musical programming language in the form of a patching environment. 
It is available at <a href=\"https:\/\/cycling74.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/cycling74.com<\/a>.<\/li>\n\n\n\n<li><sup>[4]<\/sup> <a href=\"https:\/\/bachproject.net\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/bachproject.net<\/a>.<\/li>\n\n\n\n<li><sup>[5]<\/sup> <a href=\"https:\/\/github.com\/JulienVincenot\/MOZLib\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/github.com\/JulienVincenot\/MOZLib<\/a>.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Compositional method<\/h2>\n\n\n\n<p>In the <em>versificator<\/em>, the process of composition occurs almost simultaneously with the process of generation, at least for the part of the process that occurs <em>inside the system<\/em>. This means that the generation of musical material carries a compositional logic in itself. Mainly, the parameters that govern the generation and organization of the musical material, both at the micro and the macro level, have been formalized as <em>constraint rules<\/em> enforced by a <em>constraint algorithm<\/em>.<\/p>\n\n\n\n<p>The way that constraint algorithms work is rather simple. One defines a domain, or a <em>search space<\/em>, consisting of a set of musical elements, one defines some <em>rules<\/em> for a particular musical organization of these elements, and the algorithm sorts the musical elements by enforcing this rule or set of rules. The algorithm iteratively evaluates every possible combination of elements until it finds one or more solutions, or until it finds none. The rules are usually expressed as logical statements, and each candidate solution is evaluated as <em>true<\/em> or <em>false<\/em>. Candidates evaluated as true are accepted and returned to the user; those evaluated as false are rejected. 
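As a minimal sketch of this generate-and-test principle (illustrative only, and not the actual constraint engine used for the piece; the pitch domain and the two rule predicates below are invented for the example):

```python
from itertools import product

# Illustrative generate-and-test constraint search: a domain of
# candidate elements, rules expressed as True/False predicates,
# and an exhaustive search that keeps only satisfying candidates.

def solve(domain, length, rules):
    """Return every sequence of `length` elements from `domain`
    for which all `rules` evaluate to True."""
    solutions = []
    for candidate in product(domain, repeat=length):
        if all(rule(candidate) for rule in rules):
            solutions.append(candidate)
    return solutions

# Hypothetical example: 3-note sequences from a small pitch domain
# (MIDI note numbers), constrained to ascend strictly and start on C.
pitches = [60, 62, 64, 65, 67]
rules = [
    lambda c: all(a < b for a, b in zip(c, c[1:])),  # strictly ascending
    lambda c: c[0] == 60,                            # must start on C (60)
]
print(solve(pitches, 3, rules))
```

The exhaustive enumeration shown here is the conceptual core; practical engines add backtracking and pruning so that failing partial candidates are discarded early instead of being fully expanded.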
In the field of contemporary music, particularly in computer-assisted composition workflows, constraint programming and its implementation for music composition as <em>constraint algorithms<\/em> have a long history, starting with the Illiac Suite<sup>[6]<\/sup> and continuing in the work of other relevant contemporary composers, such as Magnus Lindberg and \u00d6rjan Sandred<sup>[7]<\/sup>.<\/p>\n\n\n\n<ul class=\"sidenote wp-block-list\">\n<li><sup>[6]<\/sup> The Illiac Suite (1957), a piece for string quartet by Lejaren Hiller, is conventionally agreed to be the first musical work composed using computational algorithms, on the ILLIAC computer at the University of Illinois.<\/li>\n\n\n\n<li><sup>[7]<\/sup> \u00d6rjan Sandred has written a significant amount of literature on the development and implementation of constraint algorithms as tools in computer-assisted composition workflows (Sandred 2009, 2010, 2017).<\/li>\n<\/ul>\n\n\n\n<p>Within the <em>versificator<\/em>, I have implemented mainly two types of rules. The first type operates at the level of each individual generative module, constraining the generation of some text and its sonification in the form of musical symbolic information. The second type operates at the global formal level, involving the temporal distribution and textural organization of the output of each module.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">First module: Non-sensical canon<sup>[8]<\/sup><\/h2>\n\n\n\n<p>The first module generates individual or sequences of sung <em>nonsense<\/em> words. A <em>nonsense<\/em> word or a <em>pseudoword<\/em> is a unit of speech or text that appears to be a real word in a given language, as its construction follows the phonotactic rules of the language in question, although it has no meaning or doesn\u2019t exist in the lexicon. 
The pseudowords the module generates are the result of rule-based combinations of prefixes, roots, and suffixes of English words. Some rules that govern the generation of pseudowords have to do with the use of <em>rhyme patterns<\/em><sup>[9]<\/sup>, alliteration, the number of vowels or consonants, or the proportion between them. For example, depending on how many words one generates \u2013 up to a maximum of 4 \u2013 the rhyme pattern changes (e.g. 2 words: aa; 3 words: aba; 4 words: abab).<\/p>\n\n\n\n<ul class=\"sidenote wp-block-list\">\n<li><sup>[8]<\/sup> The non-sensical canon text generator is inspired by a program named \u201cWords without sense\u201d created by the artist Mario Guzm\u00e1n (<a href=\"https:\/\/www.mario-guzman.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">mario-guzman.com<\/a>), which outputs random combinations of prefixes, suffixes, and roots in Spanish or English. My version builds upon Guzm\u00e1n\u2019s work by adding the possibility of generating text using constraint rules.<\/li>\n\n\n\n<li><sup>[9]<\/sup> A rhyme pattern applies to the ending of consecutive words. 
This is different from the notion of <em>rhyme scheme<\/em>, which applies to the final word of a stanza containing a determined number of syllables.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"136\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig3-1024x136.jpg\" alt=\"\" class=\"wp-image-1392\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig3-1024x136.jpg 1024w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig3-300x40.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig3-768x102.jpg 768w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig3.jpg 1200w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Examples of pseudowords following a rhyme pattern that are generated by random combinations of prefixes, roots, and suffixes.<\/figcaption><\/figure>\n\n\n\n<p>The symbolic sonification of the word occurs in stages. First, the word is automatically hyphenated, and a musical pitch is given to each vowel in a syllable \u2013 as in almost any sung text. These notes come from a database containing measurements of the formant frequencies and durations of English vocalic sounds (Hillenbrand et al. 1995). A different formant pitch is assigned to each voice in the ensemble. The fundamental frequency always appears in the lowest voice, and the higher formants appear successively in the higher voices. The number of voices that should sing a word determines how many notes from the formant structure should be sonified. 
The words are thus sung syllabically.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"90\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig4.jpg\" alt=\"\" class=\"wp-image-1393\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig4.jpg 1000w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig4-300x27.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig4-768x69.jpg 768w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><figcaption class=\"wp-element-caption\">Example of the outcome of the pseudowords generator module: Two words are sung by a female voice. The notes come from the sonification of the 4th formant from Hillenbrand for each vowel.<\/figcaption><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full small\"><img loading=\"lazy\" decoding=\"async\" width=\"591\" height=\"203\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig5.jpg\" alt=\"\" class=\"wp-image-1394\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig5.jpg 591w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig5-300x103.jpg 300w\" sizes=\"auto, (max-width: 591px) 100vw, 591px\" \/><figcaption class=\"wp-element-caption\">An example of a sung part by the baritone where the notes come from the sonification of the fundamental frequency of the vowels used in the word (m. 180 of the score).<\/figcaption><\/figure>\n\n\n\n<p>Below is an illustration of how a <em>heuristic<\/em> rule<sup>[10]<\/sup> that controls the number of letters in a word works. The search engine receives three variables: a prefix, a root, and a suffix for a word. 
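A heuristic rule of this kind can be sketched as follows (a simplified Python stand-in for the actual Max\/LISP implementation; the word-fragment lists and function names are invented for illustration). Unlike a strict true\/false rule, a heuristic rule scores each candidate, and the search prefers the best-scoring one:

```python
from itertools import product

# Sketch of a heuristic (scored) rule: every prefix+root+suffix
# combination is scored by its distance from a target letter count,
# and the search keeps the best-scoring candidate.
# The word fragments below are invented for the example.

PREFIXES = ["su", "contra", "peri"]
ROOTS = ["phrin", "volv", "ductil"]
SUFFIXES = ["chy", "ation", "esque"]

def letter_count_score(word, target):
    """Heuristic score: 0 is ideal, more negative is worse."""
    return -abs(len(word) - target)

def best_pseudoword(target):
    """Assemble the combination whose letter count is closest to `target`."""
    candidates = ("".join(p) for p in product(PREFIXES, ROOTS, SUFFIXES))
    return max(candidates, key=lambda w: letter_count_score(w, target))

print(best_pseudoword(20))  # the longest available combination (17 letters)
```
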
The rule tells the engine to count the letters of different combinations of variables and find the combination that comes closest to 20.<\/p>\n\n\n\n<ul class=\"sidenote wp-block-list\">\n<li><sup>[10]<\/sup> The concepts of <em>heuristic<\/em> and <em>deterministic<\/em> rules are explained in \u201cmethods\u201d \u2013 constraint algorithms.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"502\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig6.jpg\" alt=\"\" class=\"wp-image-1395\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig6.jpg 1000w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig6-300x151.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig6-768x386.jpg 768w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><figcaption class=\"wp-element-caption\">Process of generation of longer or shorter words relying on a heuristic constraint rule.<\/figcaption><\/figure>\n\n\n\n<p>Another example of a compositionally driven generation that can be achieved by implementing heuristic rules is the construction of a set of words that have a variable proportion between vowels and consonants. Below is a sequence of words with a decreasing proportion of consonants (mm. 
39\u201390 of the score):<\/p>\n\n\n\n<figure class=\"wp-block-image size-full small\"><img loading=\"lazy\" decoding=\"async\" width=\"923\" height=\"479\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig7.jpg\" alt=\"\" class=\"wp-image-1396\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig7.jpg 923w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig7-300x156.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig7-768x399.jpg 768w\" sizes=\"auto, (max-width: 923px) 100vw, 923px\" \/><figcaption class=\"wp-element-caption\">Example of a generated sequence of words with a decreasing proportion of consonants.<\/figcaption><\/figure>\n\n\n\n<p>Another functionality of the <em>non-sensical canon<\/em> module allows me to scatter each word along the vocal texture. As a result, it is possible to hear the word in one of the voices, and splinters of it in the rest of the texture (mm. 45\u201353 of the score).<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"681\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig8.jpg\" alt=\"\" class=\"wp-image-1397\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig8.jpg 1000w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig8-300x204.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig8-768x523.jpg 768w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><figcaption class=\"wp-element-caption\">Example of the scattered word \u201csuphrinchy\u201d (mm. 
45\u201353 of the score).<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">The \u201cvowel-choral\u201d module<\/h2>\n\n\n\n<p>The second module, the \u201cvowel-choral\u201d module, generates a sequence of vocalic sounds by sequentially ordering a set of predefined vocalic IPA symbols. As the source for the sonification of these vocalic sounds, I again use the database of formants and durations from Hillenbrand\u2019s study. However, I have added another parameter for constraining the generation of sequences: the <em>contrastiveness<\/em> between successive durations, as measured using the nPVI index (Nolan and Asu 2009; Grabe et al. 2000). In short, the module generates sequences in the form of a chorale in which each successive vocalic sound is more or less contrastive in terms of duration, and the constraint engine facilitates this by allowing the generation of sequences with a desired nPVI ranging from 5 (less contrastive) to 40 (more contrastive). Some other complementary rules may be activated, such as whether any symbol may be repeated, and the length of the sequence, which can range from 2 to 12.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full small\"><img loading=\"lazy\" decoding=\"async\" width=\"580\" height=\"399\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig9.jpg\" alt=\"\" class=\"wp-image-1398\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig9.jpg 580w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig9-300x206.jpg 300w\" sizes=\"auto, (max-width: 580px) 100vw, 580px\" \/><figcaption class=\"wp-element-caption\">Schematic representation of the nPVI index.<\/figcaption><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"240\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig10-1024x240.jpg\" 
alt=\"\" class=\"wp-image-1399\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig10-1024x240.jpg 1024w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig10-300x70.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig10-768x180.jpg 768w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig10.jpg 1200w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Example of the raw outcome of the \u201cvowel choral\u201d module.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">The Consonant Cloud module<\/h2>\n\n\n\n<p>The third module, the \u201cconsonant cloud\u201d, generates sequences of consonants. The rules that constrain the generation of these sequences are based on phonetic features. According to the IPA chart<sup>[11]<\/sup>, consonants can be classified according to <strong>(i) <em>phonation<\/em><\/strong> \u2013 as <em>voiced<\/em> or <em>unvoiced<\/em> \u2013; <strong>(ii) <em>place of articulation<\/em><\/strong> \u2013 as <em>bilabial, alveolar, velar, labiodental, dental, postalveolar, and palatoalveolar<\/em> \u2013; and <strong>(iii) <em>manner of articulation<\/em><\/strong> \u2013 as <em>plosives, nasals, fricatives, and affricates<\/em>. These constraint rules also depend on the number of parallel sequences that should be generated, each of them mapped to a voice of the ensemble. 
For example, it is possible to constrain a sequence to have four parallel lines containing only unvoiced sounds:<\/p>\n\n\n\n<ul class=\"sidenote wp-block-list\">\n<li><sup>[11]<\/sup>  <a href=\"https:\/\/www.internationalphoneticalphabet.org\" target=\"_blank\" rel=\"noreferrer noopener\">www.internationalphoneticalphabet.org<\/a><\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"138\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig11-1024x138.jpg\" alt=\"\" class=\"wp-image-1401\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig11-1024x138.jpg 1024w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig11-300x40.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig11-768x104.jpg 768w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig11-1536x207.jpg 1536w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig11-2048x276.jpg 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Generated sequence containing only unvoiced consonants.<\/figcaption><\/figure>\n\n\n\n<p>Below is an example of a more complex rule, where independent lines are constrained to share some phonetic quality:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"139\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig12-1024x139.jpg\" alt=\"\" class=\"wp-image-1402\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig12-1024x139.jpg 1024w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig12-300x41.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig12-768x104.jpg 768w, 
https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig12-1536x208.jpg 1536w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig12-2048x278.jpg 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Constrained generation of consonant sequences.<\/figcaption><\/figure>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Voices 1 and 2: equal <em>phonation.<\/em>&nbsp;<\/li>\n\n\n\n<li>Voices 3 and 4: equal <em>place of articulation.<\/em><\/li>\n\n\n\n<li>Voices 4 and 5: equal <em>manner of articulation.<\/em><\/li>\n<\/ul>\n\n\n\n<p>Unvoiced consonants are notated as unpitched sounds. Voiced consonants are notated using pitches coming from the formant structure of the neutral vowel \u201cschwa\u201d (\/\u0259\/).<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"97\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig13-1024x97.jpg\" alt=\"\" class=\"wp-image-1403\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig13-1024x97.jpg 1024w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig13-300x28.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig13-768x73.jpg 768w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig13.jpg 1376w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Example of the outcome of the \u201cconsonant cloud\u201d generator module.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Dynamics submodule<\/h2>\n\n\n\n<p>Within each module, there is a dynamics generator submodule that allows the creation of a dynamic layer for the generated phrase. 
This layer is conceived probabilistically: the submodule allows the creation of a probability distribution over all possible dynamics within a range (<strong><em>ppp<\/em><\/strong> to <strong><em>fff<\/em><\/strong>). Once this distribution is established, a dynamic layer can be created for each phrase or section (there is also a \u201cgeneral\u201d dynamics submodule in the main user interface screen).<\/p>\n\n\n\n<figure class=\"wp-block-image size-full small\"><img loading=\"lazy\" decoding=\"async\" width=\"807\" height=\"377\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig14.jpg\" alt=\"\" class=\"wp-image-1404\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig14.jpg 807w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig14-300x140.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig14-768x359.jpg 768w\" sizes=\"auto, (max-width: 807px) 100vw, 807px\" \/><figcaption class=\"wp-element-caption\">View of the submodule that generates the dynamic layer.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Generation\/composition flow<\/h2>\n\n\n\n<p>As can be seen, my <em>versificator<\/em> does not work as a fully automated music score generator but rather proposes an iterative creation process in which each generative module produces an output and, if this is unsatisfactory, the composer can modify some input parameters and expect different results. 
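The probabilistic dynamics layer described above can be sketched as follows (a hypothetical simplification of the Max submodule; the weight values are invented for illustration). The composer defines a weight for each dynamic marking in the ppp\u2013fff range, and one marking is drawn per note of a generated phrase:

```python
import random

# Sketch of a probabilistic dynamics layer: a weighted distribution
# over the dynamics range ppp..fff, sampled once per note.
# The weights below are invented for the example.

DYNAMICS = ["ppp", "pp", "p", "mp", "mf", "f", "ff", "fff"]

def make_dynamics_layer(weights, n_notes, seed=None):
    """Draw one dynamic marking per note from the weighted distribution
    (one non-negative weight per entry in DYNAMICS)."""
    rng = random.Random(seed)
    return rng.choices(DYNAMICS, weights=weights, k=n_notes)

# A distribution biased toward the quiet end; fff is excluded (weight 0).
quiet_bias = [4, 3, 2, 1, 1, 0.5, 0.25, 0]
print(make_dynamics_layer(quiet_bias, n_notes=8, seed=1))
```

A zero weight removes a marking from the palette entirely, so shaping the distribution is enough to give a phrase or section a consistent dynamic character.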
The workflow for the system can be roughly diagrammed in the following way:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"819\" height=\"800\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig15.jpg\" alt=\"\" class=\"wp-image-1405\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig15.jpg 819w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig15-300x293.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig15-768x750.jpg 768w\" sizes=\"auto, (max-width: 819px) 100vw, 819px\" \/><figcaption class=\"wp-element-caption\">Diagram showing the workflow for the composition of the versificator.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Formal organization&nbsp;<\/h2>\n\n\n\n<p>An independent module facilitates the formal and textural organization of musical material. The process of formal determination is done in advance of the generation of musical material with the modules, and its outcome is a list containing the type of material (words, vowels, or consonants), the time-scaling factor, and the textural distribution of the musical units that the modules should generate. This list assumes the role of a <em>formal blueprint<\/em> and serves as a guide for generating the phrases in the modules, joining them, and concatenating them. The determining factors for formal constraints are based on the possible duration and scaling factor of each musical unit, the number of voices in which it can appear, and the type of material and its possible combinations in groups of two or three, depending on how many voices of the ensemble they appear in. Within the module, these calculations are done using stochastic methods based on probability distributions and linear progressions. 
However, I will leave a detailed explanation of those processes out of this text.<\/p>\n\n\n\n<p>Some examples of the rules used to determine the formal organization are:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Presentations of materials can appear in duets or trios (e.g., cons-vows; cons-words; vows-cons-words; etc.).<\/li>\n\n\n\n<li>Duets or trios cannot be composed of the same type of material.&nbsp;<\/li>\n\n\n\n<li>The sum of voices of each duet or trio of materials cannot exceed the number of voices determined for the vocal ensemble (by default there are five voices).<\/li>\n\n\n\n<li>Organize the pairs\/trios based on the greatest possible contrast between the temporal scaling factors.<\/li>\n\n\n\n<li>Organize the pairs\/trios as close as possible to the mean of the scaling factors. Example: (1 + 10) \/ 2 = 5.5.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"906\" height=\"752\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig16.jpg\" alt=\"\" class=\"wp-image-1406\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig16.jpg 906w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig16-300x249.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig16-768x637.jpg 768w\" sizes=\"auto, (max-width: 906px) 100vw, 906px\" \/><figcaption class=\"wp-element-caption\">Examples of some of my formal rules expressed in LISP.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">\u201cOutside\u201d of the system<\/h2>\n\n\n\n<p>Some compositional decisions are made outside the system, in particular those related to <em>tempo<\/em>, the use of <em>mouth shapes<\/em>, <em>whispered\/spoken\/sprechgesang<\/em>, and <em>character indication<\/em>. Making a compositional decision \u201coutside\u201d the system means two things. 
First, the specific musical parameter on which it operates has been left outside of any computational formalization, mainly because of the complexity that its computational formalization and compositional operation would entail in this workflow. Second, most of this work can easily and efficiently be done \u201cby hand\u201d afterward: for example, a ritenuto at the end of a phrase or of a section. At first glance, this seems like a <em>marginal<\/em> compositional space, but the reader will see below how these decisions actually became a fundamental part of the compositional work. Below, I discuss some of the <em>non-computational<\/em> decisions I made \u201coutside the system\u201d.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Tempo<\/h3>\n\n\n\n<p>When deciding the tempo of a section, I am mainly concerned with finding a pace that allows a listener to follow the levels of musical information delivered by the piece at that moment. This varies widely depending on the type of material presented and the density of the texture, as indicated by the formal blueprint. In addition, the choice of a faster or slower tempo is sometimes related to the general character of the section, which is indicated with text indications (more on this later). 
The addition of <em>rallentandos<\/em> and <em>accelerandos<\/em> mainly obeys organic phrasing concerns and formal needs; for example, a rallentando is usually desired when a section ends.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Dynamics<\/h3>\n\n\n\n<p>Clearly, a probabilistic methodology for generating a dynamic layer is agnostic of any type of phrase structure and, even more problematically, is independent of how these sound materials should be orchestrated in the texture (e.g., a <strong><em>ppp<\/em><\/strong> dynamic for a group of unvoiced plosives placed in the same phrase as <strong><em>mf<\/em><\/strong> vowel choral material will likely cause \u201corchestration\u201d problems that need to be addressed later \u201cby hand\u201d). The dynamic layer generator, as originally conceived, is most of the time flawed from this <em>orchestrational<\/em> point of view. Although the result was not easy to imagine a priori, once the material was heard live, decisions had to be made, and the feedback of the performers was important for this. Ultimately, the solution to the problem of dynamics had to happen \u201coutside\u201d of the system. In addition, the chosen set of dynamic possibilities now seems too restrictive, as some important markings are missing, for example <strong><em>sforzatos<\/em><\/strong>. However, the approach works relatively better when dynamic distributions are assigned to the phrases generated by each module according to their material composition, instead of choosing an overall dynamic layer for each compound phrase.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Intonation\/tuning<\/h3>\n\n\n\n<p>The system allows me to choose from different microtonal grids (this is a feature of the roll\/score objects of the Bach library rather than one implemented by me for the versificator). Initially, I chose quarter tones as the <em>default<\/em> tone division. 
However, I maintained the flexibility to change the grid to semitones in some sections to facilitate singing lines that were very complex due to microtonal leaps. I kept the original microtonal division in the sections with glissandos, as these are easier to sing in tune and the harmonic effects are more interesting.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The (illusion of) defiance<\/h3>\n\n\n\n<p>Some other compositional strategies were conceived as ways of breaking the logic of the system. Below I discuss three cases. The first involves computational weighted randomness in the form of melodic perturbations; the others involve no computational processes at all and come instead as somewhat arbitrary decisions based on my subjectivity and on imaginary sonic representations of certain moments of the piece.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Case 1 \u2013 Statistical perturbations<\/h2>\n\n\n\n<p>After the first render of the piece, I realized that the sonification of vowels and the resulting chords were too static and repetitive: the phonological components of the voiced sounds \u2013 which ultimately provide the material for the harmonic content \u2013 were limited, and eventually the chords started to repeat too often. Thus, in order to give the resulting chords some degree of variation, I decided to add some deviation to how the system chooses the pitches for each harmonic field. Instead of mapping each formant to a single fixed pitch, the system picks weighted random pitches from a Gaussian probability distribution whose highest point is the original pitch. The width of the distribution (the size of the range within which pitches can be chosen) increases towards the higher voices. 
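This perturbation step can be sketched as follows. This is an illustrative Python reconstruction, not the actual bach/Max implementation; the conversion to MIDI cents, the width parameters, and the snapping to a quarter-tone grid (50-cent steps) are my own assumptions for the example:

```python
import math
import random

def hz_to_midicents(f_hz):
    """Convert a frequency in Hz to MIDI cents (A4 = 440 Hz = 6900)."""
    return 6900 + 1200 * math.log2(f_hz / 440.0)

def perturb_formant_pitches(formants_hz, base_sigma, sigma_step, rng, grid=50):
    """For each formant (ordered from lower to higher voice), draw a
    pitch from a Gaussian centred on the original pitch; the width of
    the distribution grows towards the higher voices. Each result is
    quantized to a quarter-tone grid (50-cent steps)."""
    pitches = []
    for voice, f in enumerate(formants_hz):
        centre = hz_to_midicents(f)
        sigma = base_sigma + sigma_step * voice  # wider for higher voices
        pitch = rng.gauss(centre, sigma)
        pitches.append(round(pitch / grid) * grid)  # snap to the grid
    return pitches
```

With approximate formant frequencies for a vowel (e.g. very roughly 700, 1200, and 2600 Hz for an open vowel), each call yields a slightly different quarter-tone chord, which is the intended variation.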
As a result, the chords usually vary microtonally or tonally at each appearance of a vowel.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full small\"><img loading=\"lazy\" decoding=\"async\" width=\"596\" height=\"417\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig17-1.jpg\" alt=\"\" class=\"wp-image-1447\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig17-1.jpg 596w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig17-1-300x210.jpg 300w\" sizes=\"auto, (max-width: 596px) 100vw, 596px\" \/><figcaption class=\"wp-element-caption\">Gaussian distributions of pitches for each formant.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Case 2 \u2013 Mouth shapes<\/h2>\n\n\n\n<p>There is a prominent appearance of <em>mouth shape<\/em> symbols, especially in the first section (mm. 1\u201334). As can be observed, this section makes use of consonant sounds only. The idea of the mouth shapes came from listening to a recording of a rehearsal of this section and finding the overall sonority quite static and unchanging. I thought that adding a layer of timbral transformation by changing the shape of the mouth while pronouncing the phonemes would add some richness and spontaneity to the texture, which, in my judgment, it lacked. 
The addition of mouth shapes was done fully \u201coutside of the system\u201d.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"747\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig18.jpg\" alt=\"\" class=\"wp-image-1408\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig18.jpg 1000w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig18-300x224.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig18-768x574.jpg 768w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><figcaption class=\"wp-element-caption\">Example of how mouth shapes are used for varying the timbre of consonants (mm. 23\u201327).<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Case 3 \u2013 Character indications<\/h2>\n\n\n\n<p>The sung character indications came to me through a vague (imaginary) semantic connection between the text of each section and an idea of what that text might mean in this imaginary language, and consequently of how it should be uttered. For some reason, the word \u201ctranschynklisys\u201d (m. 39) sounded to me somewhat <em>mysterious<\/em> or <em>metaphysical<\/em>; the word \u201csuphrinchy\u201d (m. 46) more like a childish game, so it is asked <em>to be sung as a toy<\/em>; the word \u201cdifponieance\u201d (m. 69) more solemn; and the words \u201chomovirish abominish\u201d (m. 135) more religious, so I wanted them to be sung in a Palestrina-like style. As another example, the first vowel choral starts in m. 91; for some reason, this section gave me psychedelic vibes, like an introspective trip under the effects of some psychoactive substance, in which one can hear words slow down and stretch. 
Therefore, the indication in the score states that this section should sound <em>psychedelic<\/em> and <em>retro<\/em> (60s-ish).<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"290\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig19-1024x290.jpg\" alt=\"\" class=\"wp-image-1409\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig19-1024x290.jpg 1024w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig19-300x85.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig19-768x218.jpg 768w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig19.jpg 1200w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Examples of diverse sung character indications in the score.<\/figcaption><\/figure>\n\n\n\n<p>In summary, the character indications are rather unspecific; they serve as comments on my personal idea of how the overall sonority of the section should be. Sometimes, character indications involve a sense of theatricality (e.g. \u201c<em>as from radio news<\/em>\u201d, in m. 130). In others, the singing technique changes dramatically, such as in mm. 147 and 150. 
In every case, the result of their interpretation is largely unforeseeable and open to what each performer understands from them.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Final reflections<\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"365\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig20-1024x365.jpg\" alt=\"\" class=\"wp-image-1410\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig20-1024x365.jpg 1024w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig20-300x107.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig20-768x274.jpg 768w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig20.jpg 1100w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Example of the outcome of a musical phrase from the <em>versificator<\/em>.<\/figcaption><\/figure>\n\n\n\n<p>After listening to the recording of the premiere of the piece several times, I found some nice-sounding moments: for example, the end of the section on page 13 of the score with the glottal trill, added outside the system (bars 84\u201394); the intervention of the mezzo-soprano in bar 130 (\u201cas from radio news\u201d); or the very end of the piece. The material is in general very homogeneous: the text based on phonetic rules gives it uniformity of sonority, and the rhyming pseudowords give it a poetic-imaginary quality with a certain charm. However, I must say that the performers\u2019 interpretation of the <em>character indications<\/em> and of the elements composed \u201coutside of the system\u201d is what gives the piece <em>something<\/em> that it otherwise would not have. 
As can be observed, the overall compositional trajectory of the piece goes from almost no character indications to a point where the <em>irrationality<\/em> of the indications floods the whole texture, around the climactic point near m. 150.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"715\" src=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig21.jpg\" alt=\"\" class=\"wp-image-1411\" srcset=\"https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig21.jpg 1000w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig21-300x215.jpg 300w, https:\/\/nivel.teak.fi\/carpa7\/wp-content\/uploads\/2024\/04\/vassalo2-fig21-768x549.jpg 768w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><figcaption class=\"wp-element-caption\">Same phrase after a refinement process.<\/figcaption><\/figure>\n\n\n\n<p>After this experience, I came to the conclusion that the work with automation and computational formalizations \u2013 at least as implemented in the <em>versificator<\/em> \u2013 is more like a <em>starting point<\/em> for a deeper compositional elaboration, one ultimately oriented towards adding new layers and extra-musical ideas that spilled into the piece, in particular layers involving theatricality or more visual-gestural performing aspects. Even after a significant process of notation refinement and engraving, the piece as it originally emerges from the <em>versificator<\/em> seems far from finished. For it to be a complete piece, it is still necessary to generate and refine new layers of compositional development outside of the formalizations that the <em>versificator<\/em> operates on. 
In this sense, the process of composition \u201coutside of the system\u201d became the key to making a valuable piece: a kind of <em>marginal<\/em> \u2013 and why not <em>liminal<\/em> \u2013 compositional space at the borders of the system became the <em>heart<\/em> of the work. Without it, the piece would plainly not be worth performing in concert.<\/p>\n\n\n\n<p>What has become clear to me is that the piece\u2019s artistic individuality lies far beyond computational formalizations. This realization prompts several questions: Is the purpose of automated composition merely to demonstrate a concept, or does it hold real artistic value? How many times must the <em>versificator<\/em> repeat its process to produce the finest result? Could it thus be the case that computational formalizations work merely as musical \u201ctipping points\u201d for further exploration? Is this lack of <em>humanity<\/em> in the outcome of the <em>versificator<\/em> what Orwell imagined as its alienating nature? 
I leave it to the composition itself to manifest these inquiries and, in its own way, address some of them.<\/p>\n\n\n\n<p>Link to the video of the premiere of the piece \u201cVersificator \u2013 Render 3\u201d by the Bergen-based vocal ensemble \u201cTabula Rasa\u201d: <a href=\"https:\/\/www.youtube.com\/watch?v=MGxBEbUMMt4&amp;ab_channel=JuanVassallo\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.youtube.com\/watch?v=MGxBEbUMMt4&amp;ab_channel=JuanVassallo<\/a><\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<div class=\"jetpack-video-wrapper\"><iframe loading=\"lazy\" title=\"Versificator - Render 3\" width=\"640\" height=\"360\" src=\"https:\/\/www.youtube.com\/embed\/MGxBEbUMMt4?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/div>\n<\/div><figcaption class=\"wp-element-caption\">\u201cVersificator \u2013 Render 3\u201d by the Bergen-based vocal ensemble \u201cTabula Rasa\u201d.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">References<\/h2>\n\n\n\n<p>Grabe, Esther, Francis Nolan, and Ee Ling Low. 2000. \u201cQuantitative Characterizations of Speech Rhythm: Syllable-Timing in Singapore English.\u201d <em>Language and Speech<\/em> 43 (4): 377\u2013401.<\/p>\n\n\n\n<p>Hillenbrand, James, Laura A. Getty, Michael J. Clark, and Kimberlee Wheeler. 1995. \u201cAcoustic characteristics of American English vowels.\u201d <em>Journal of the Acoustical Society of America<\/em> 97 (5): 3099\u20133111. <a href=\"https:\/\/doi.org\/10.1121\/1.411872\" target=\"_blank\" rel=\"noreferrer noopener\">doi.org\/10.1121\/1.411872<\/a>.<\/p>\n\n\n\n<p>Huxley, Aldous. 2013. <em>Brave New World<\/em>. 
London: Everyman\u2019s Library.<\/p>\n\n\n\n<p>Klatt, Dennis H. 1980. \u201cSoftware for a cascade\/parallel formant synthesizer.\u201d <em>The Journal of the Acoustical Society of America<\/em> 67 (3): 971\u2013995. <a href=\"https:\/\/doi.org\/10.1121\/1.383940\" target=\"_blank\" rel=\"noreferrer noopener\">doi.org\/10.1121\/1.383940<\/a>.<\/p>\n\n\n\n<p>Laurson, Mikael. 1996. <em>PatchWork: A Visual Programming Language and Some Musical Applications<\/em>. Sibelius Academy.<\/p>\n\n\n\n<p>Nolan, Francis, and Eva Liina Asu. 2009. \u201cThe pairwise variability index and coexisting rhythms in language.\u201d <em>Phonetica<\/em> 66 (1\u20132): 64\u201377. <a href=\"https:\/\/doi.org\/10.1159\/000208931\" target=\"_blank\" rel=\"noreferrer noopener\">doi.org\/10.1159\/000208931<\/a>.<\/p>\n\n\n\n<p>Orwell, George. 2023. <em>1984<\/em>. Biblios.<\/p>\n\n\n\n<p>Sandred, \u00d6rjan. 2017. <em>The musical fundamentals of computer assisted composition<\/em>. Winnipeg MB: Audiospective Media.<\/p>\n\n\n\n<p>Sandred, \u00d6rjan. 2010. \u201cPWMC, a Constraint-Solving System for Generating Music Scores.\u201d <em>Computer Music Journal<\/em> 34 (2): 8\u201324.<\/p>\n\n\n\n<p>Sandred, \u00d6rjan. 2009. \u201cApproaches to Using Rules as a Composition Method.\u201d <em>Contemporary Music Review<\/em> 28 (2): 149\u2013165. 
<a href=\"https:\/\/doi.org\/10.1080\/07494460903322430\" target=\"_blank\" rel=\"noreferrer noopener\">doi.org\/10.1080\/07494460903322430<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The idea for the piece \u201cVersificator \u2013 Render 3\u201d comes originally as a metaphor for [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[4],"tags":[],"class_list":["post-1379","post","type-post","status-publish","format-standard","hentry","category-conference-presentation"],"acf":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/nivel.teak.fi\/carpa7\/wp-json\/wp\/v2\/posts\/1379","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nivel.teak.fi\/carpa7\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nivel.teak.fi\/carpa7\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nivel.teak.fi\/carpa7\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/nivel.teak.fi\/carpa7\/wp-json\/wp\/v2\/comments?post=1379"}],"version-history":[{"count":37,"href":"https:\/\/nivel.teak.fi\/carpa7\/wp-json\/wp\/v2\/posts\/1379\/revisions"}],"predecessor-version":[{"id":1455,"href":"https:\/\/nivel.teak.fi\/carpa7\/wp-json\/wp\/v2\/posts\/1379\/revisions\/1455"}],"wp:attachment":[{"href":"https:\/\/nivel.teak.fi\/carpa7\/wp-json\/wp\/v2\/media?parent=1379"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nivel.teak.fi\/carpa7\/wp-json\/wp\/v2\/categories?post=1379"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nivel.teak.fi\/carpa7\/wp-json\/wp\/v2\/tags?post=1379"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}