 
 
Article

AntsOMG: A Framework Aiming to Automate Creativity and Intelligent Behavior with a Showcase on Cantus Firmus Composition and Style Development

by Chun-Yien Chang 1,2,* and Ying-Ping Chen 3,*
1 Open Culture Network for Humanities and Fine Arts, Taipei 10009, Taiwan
2 College of Humanities and Social Sciences, National Chiao Tung University, Hsinchu 30010, Taiwan
3 Department of Computer Science, National Chiao Tung University, Hsinchu 30010, Taiwan
* Authors to whom correspondence should be addressed.
Electronics 2020, 9(8), 1212; https://doi.org/10.3390/electronics9081212
Submission received: 5 June 2020 / Revised: 16 July 2020 / Accepted: 24 July 2020 / Published: 28 July 2020
(This article belongs to the Special Issue Evolutionary Machine Learning for Nature-Inspired Problem Solving)

Abstract:
Creative behavior is one of the most fascinating areas of intelligence, and the development of a distinctive style is its most characteristic feature. Important creators, such as Picasso and Beethoven, have styles that even non-professional art lovers can easily recognize. Hence, aiming at cantus firmus composition and style development, and inspired by the behavior of natural ants and the mechanism of ant colony optimization (ACO), this paper first proposes a meta-framework, called ants on multiple graphs (AntsOMG), for roughly modeling creation activities, and then presents an implementation derived from AntsOMG for composing cantus firmi, one of the essential genres in music. Although the mechanism of ACO is adopted for simulating ant behavior, AntsOMG is not designed as an optimization framework. Implementations can be built upon AntsOMG to automate creative behavior and realize autonomous development on different subjects in various disciplines. In particular, an implementation for composing cantus firmi is shown in this paper as a demonstration. Ants walk on multiple graphs to form trails shaped by the interaction among the graph topology, the cost on edges, and the concentration of pheromone. The resultant graphs with their pheromone distribution can be interpreted as a representation of a cantus firmus style developed autonomously. The obtained results indicate that the proposal has an intriguing effect: significantly different styles may be autonomously developed from an identical initial configuration in separate runs, and cantus firmi of a certain style can be created in batch simply by using the corresponding outcome. The contribution of this paper is twofold. First, the presented implementation is immediately applicable to the creation of cantus firmi and, with slight modifications, possibly other music genres. Second, AntsOMG, as a meta-framework, may be employed for other kinds of autonomous development with appropriate implementations.

1. Introduction

Among the crucial goals in pursuing automation of creativity and intelligent behavior is music composition, an idea that dates back centuries [1], long before the development of modern computers composed of electronic components, such as vacuum tubes and transistors. The historical trajectory of applying computer methodologies to music composition or generation can be traced via reviews and surveys in various related domains, such as evolutionary computation [2,3,4], computational intelligence and creativity [5,6,7], deep learning [8], and artificial intelligence [9,10,11,12]. Most existing studies focus on the generation of musical sequences of notes, with little or no involvement of composers or music artists. These studies, which originate from the engineering or computational aspects, usually belong to one of the following categories: (1) generating notes according to some rules or certain mathematical constructs; (2) using example data to train models to mimic a given music style or to compose as some target composer; and (3) considering composition as an optimization task in which the objective function determines many important properties and characteristics of the creation. Although interesting results suitable for practical use may no doubt be obtained [13,14] in this fashion, many important and even fundamental issues regarding music itself and music creation are rarely addressed [15]. This study seeks to create new possibilities for the related fields by proposing a framework aiming to automate creativity and intelligent behavior and by demonstrating its capability for autonomous music style development and cantus firmus composition through cultivating "composers" of different styles.
To the best of our knowledge, this paper presents the first attempt to achieve autonomous music style development as a showcase for automation of creativity and intelligent behavior. In particular, collaborating with a classical music composer, a meta-framework, called ants on multiple graphs (AntsOMG), mainly for roughly modeling creation activities, the mindset of a creator, and the accumulation of experience, is proposed, followed by an implementation of AntsOMG for composing cantus firmi, one of the essential genres in music, in order to demonstrate the effectiveness of the proposal. Inspired by the self-organizing behavior of natural ants [16], the mechanism designed in ant colony optimization (ACO) [17,18,19,20] for simulating ants is adopted. The self-organizing property is considered the conduit through which a "composer" moves from a common initial condition to establishing and possessing distinctive styles. In the literature closest to the present study, ACO has been adapted to tasks of music generation [21,22,23], in which music generation is considered an optimization task. Unlike those studies, which employ optimization techniques for training models or treat music generation as an optimization task, AntsOMG has no objective function, which would otherwise be a deciding factor in the generated music. The experimental results show that significantly different styles can be autonomously developed from an identical initial configuration, and cantus firmi of a certain style can be created in batch by using the cultivated composer. The contribution of this paper is twofold. First, the presented implementation can be immediately used to generate cantus firmi and, with slight modifications, possibly other music genres. Second, AntsOMG, as a meta-framework, may be employed for other kinds of autonomous development with appropriate implementations.
The remainder of this paper is organized as follows. Section 2 gives some background of this study, focusing on the creative ideas and mindset of a music composer while composing music pieces, as well as on cantus firmus, chosen as the target music genre in this study. Because cantus firmus composition and style development is employed as a showcase for the proposed AntsOMG, Section 3 provides a literature review on the generation of music via methods in artificial intelligence and evolutionary computation. Section 4 describes the proposed meta-framework, AntsOMG, designed for autonomous development, and the implementation specific to autonomously developing styles of cantus firmi. Section 5 presents the developed Style Models, or considered "composers", via the operation of the implementation. Section 6 analyzes the cantus firmi created by the obtained Style Models from a professional, musical point of view, and discusses the distinctiveness of the music styles. Finally, this paper is concluded in Section 7.

2. Background

2.1. The Complexity of Creative Ideas

When composers are at work, their creative ideas can be extremely complicated and sophisticated: many musical ideas develop simultaneously from mixed materials of various facets at different levels. These ideas not only influence each other but also overlap in layers of interpretation. Very few musical materials represent only a single meaning in a composer's mind; most of them go through the process of being ruminated on countless times. Moreover, even after the finalization or publication of a piece, when the composer personally performs it, the piece may still seem like yet another new work.
What is equally complicated is that each composer has a different background of upbringing, musical training, and audiovisual experience, which usually forms the foundation of that composer's personal style. However, this element has hardly been discussed when simulating the act of composing music with computation, or it is simply filled in with some general theories. Hence, a compositional or personal style based on a certain time period or music genre becomes a subject that is even more rarely discussed.
Nowadays, we often utilize machine computation, commonly known as artificial intelligence, to simulate human activities, such as playing chess, recognizing objects or patterns, and even creating something to a certain degree. Among these methodologies, natural and evolutionary computation is one of the most used tools. In such work, training is mostly done with composition results, or a computation model is designed to directly produce notes. Yet, the mindset behind the creative process is often regarded as a black box, which is difficult to discuss, let alone simulate.
Although compositional creativity can also be described as a continuous process of "searching" for "optimal" "solutions", these three words might be defined differently by people in the field of computation. The authors have been searching for an operational method that can roughly depict the complexity of human artists and can also produce higher-level elements during the computational process: compositional style or personality. Drawing on personal experience in creation, we found inspiration in the collective intelligence of ant colonies, propose the framework of ants on multiple graphs (AntsOMG) as the computational model to simulate the multi-dimensional, multi-faceted aspects of the compositional process, and apply the framework to cantus firmus, a genre that originated in the Middle Ages and blossomed in the seventeenth-century counterpoint textbooks.

2.2. Cantus Firmus in the Seventeenth Century

Cantus firmus is a fixed melody that played a very important role in the transition from monophony to polyphony in western music. Starting with a fixed melody from hymns, composers began to experiment with the possibilities of adding voices on top of it, a practice that gradually shaped one of the most fundamental theories in traditional western music: counterpoint. Although by the time of the High Renaissance the mechanism of cantus firmus had gradually been replaced by the more equal imitative counterpoint among voices, it had become an important pedagogic method in the most significant counterpoint textbook of the seventeenth century: Gradus ad Parnassum by J. J. Fux [24]. This method stresses the use of a clearly ruled, simply formed cantus firmus as a learning basis for species counterpoint. An example is shown in Figure 1.
Because of the crucial role and position of cantus firmus in the development of western music history, it may represent a performance practice as well as a basis to evolve into more complicated music genres. Its simplicity of form, just like the model organisms, Drosophila melanogaster and Arabidopsis thaliana, in biology, makes it a model organism in music as the basic form to explore more complex phenomena.
From the viewpoint of music, one of the common problems encountered with machine-generated music is the lack of consideration of how the music is performed, especially for melodies written for voices. By contrast, all of the rules about cantus firmus are designed to allow humans to sing naturally and to ensure auditory accessibility. Human voices and instruments have limitations as to what is singable or playable. Not all notes can be sung easily, which makes the design of the melodic flow very important for cantus firmus. Even when played by other instruments, such as pipe organs, a cantus firmus still aims to stay close to the concept of human singing. Please note that rules in music are not the equivalent of theorems or equations in mathematics or engineering, and following the rules does not guarantee solutions or desired outcomes. A standard answer in the arts oftentimes cannot be determined by numbers, or does not exist at all. Music rules are usually summarized to help learners find a good direction and to preserve the space needed to accommodate styles and variety.

3. Related Work on Music Generation and Automated Composition

While the primary goal of this study is to investigate creativity and intelligent behavior, cantus firmus composition and style development is selected as a showcase of the proposed framework, forming a secondary goal. Hence, in this section, studies in the domain of evolutionary computation related to music generation and automated composition are surveyed in order to provide context on the current research progress on the relevant topics.
Evolutionary algorithms have been utilized to generate music for decades because of their flexibility and versatility. They are population-based, stochastic optimization frameworks that are relatively easy to interface with a given task. Starting from their early days, genetic algorithms, one of the major branches of evolutionary algorithms, have been applied to computer-assisted music composition [25,26,27,28,29,30] as attempts to compose music algorithmically in a general sense. Subsequently, more complicated music constructs were utilized and considered, including music theory [31,32], chord progression [33], and measures and phrases [34,35]. Recently, a concert music piece, named Pygmalion [36], derived from a work [37] based on genetic algorithms creating a dialogue between creation and algorithmic operators, was formally performed at National Recital Hall, Taiwan.
Moreover, genetic algorithms have also been used to address certain specific issues in music, such as harmonization [38,39], melodic extension [40,41], rhythm generation [42], musical pattern generation [43,44], and even the twelve-tone row [45]. The hybridization of genetic algorithms with other methodologies or techniques has been examined and investigated, including information theory [46], Kohonen grammar [47], and the N-gram model [48,49,50].
In addition to genetic algorithms, genetic programming has been employed to compose 16th-century counterpoint [51], construct an interactive music system [52], and evolve the generation of music [53]. Moreover, ant colony optimization has been adapted to tasks of music generation [23] and creating melodies with Baroque harmony [21,22]. Falling into the same category of the computational paradigm as ant colony optimization, methods of swarm intelligence have also been used for music creation [13,54], music genre classification [55], and tasks of other various nature [56,57,58,59].
There are studies aiming to imitate specific music genres or styles. Jazz [60,61,62,63], a modern music genre, and Bach [64,65,66], the classic Baroque style, attract most of the researchers' attention. There even exists a study attempting to bridge the two ends [67]. In addition to jazz and Bach, researchers have also investigated music of other styles, including rock [68], Chinese folk music [69], and the fusion of Flamenco and Argentine Tango [70].
Evaluation of music in a general sense involves subjective assessment and qualitative judgment, and evaluating music styles is even harder, since it occurs on a higher, more abstract level of recognition. Because, unless there are specific goals or targets, evaluation is extremely difficult to do appropriately with quantitative approaches, many studies proposed incorporating human components, usually for evaluating and guiding the created music pieces, and developed interactive systems [71,72,73,74,75,76,77,78,79] in order to strike a balance between art and engineering.
In a broader sense of relevance, the arrangement of music for guitar was investigated [80]; it is a rare study in this field that does consider how, and specifically on what instrument, the generated music is performed. In [81], artificial intelligence was considered an artistic phenomenon, and the creative ability of artificial intelligence was discussed from a philosophical perspective. Accompanying creation by means of artificial intelligence, open questions regarding copyright law are raised and discussed in [82]. Finally, an approach for structurally describing music pieces based on principles of linguistics was proposed in [83].
In summary, most related studies available in the AI literature address the task of music generation from an engineering perspective, directly manipulating notes or fitting certain chosen mathematical models in order to improve system performance or even to optimize an objective function. These approaches bear little resemblance to the process by which a composer writes notes and creates music. In fact, the authors are not against such an engineering approach, since the mechanisms that let modern aircraft fly were not obtained simply by mimicking birds, after all. However, this study does aim to investigate creation and intelligent behavior from the side of imitating a creator's mindset, hoping to grasp some essence of the mechanism of creativity, such that future studies might be intrigued by and benefit from the proposed framework.
Furthermore, performance measures in the engineering sense, or objective functions in terms of optimization, fundamentally dictate the results obtained by systems or frameworks incorporating them. That is why, if a specific style, such as Bach, is the target, it is relatively easy to establish a system using existing AI techniques, and the built system can be evaluated with precisely defined criteria. In this study, we attempt to construct something capable of creating, using autonomous cantus firmus style development as the target task. If well-defined, adequate criteria for evaluating creation or creativity existed, we could simply adopt them into a common optimization framework. However, unfortunately, such criteria are not yet available and will probably never be. Hence, the goal of this study and the way it is conducted contribute to its uniqueness: we propose a framework exhibiting certain properties of creative ability for the research community to extend, discuss, or criticize.
Finally, to conclude this section, please allow us to use the two dimensions proposed in [84] to distinguish the present study from existing systems: (1) nature of the inputs/outputs of the system (Section 3.4 in [84]): our proposal apparently does not belong to any of the three classes, Symbolic, Audio, and Hybrid, because "generative models," the term used by [84], are the outputs, instead of notes or audio. (2) generality of the system (Section 3.6 in [84]): as indicated in [84], the two ends of this spectrum are: (a) ".... tend to reflect... the aesthetic of the designer... rather than creating completely novel outputs..." and (b) "... corpus-based style-imitation system... from the corpus provided..." It is also apparent that our work cannot be placed within this spectrum, because no human preference or music corpus is used.

4. Methods

4.1. The Design of the Basic Computation Framework

We propose ants on multiple graphs, AntsOMG, a meta-framework inspired by both the collective behavior of natural ants and the algorithmic mechanism of ant colony optimization, in order to describe the multi-layered ideas of creative behavior and to achieve the effect of autonomously developed styles (Figure 2). The distinguishing feature of this framework lies mainly in two aspects: (1) it allows arranging multiple different graphs upon which the artificial ants can walk simultaneously in spaces of different dimensions, resulting in an effect similar to the creator's mindset of creation; (2) instead of considering creative activities and creation processes as optimization tasks, no objective function is defined for evaluating the immediate results produced by ants.
There are several reasons to take the collective intelligence of ant colonies as the blueprint. First, we can arrange knowledge background equivalent to a creator's into different graphs, with this structure encoded in the formation of edges and vertices. Because the artificial ants can walk simultaneously in multiple dimensions and every dimension can interact with the others, such a framework design may result in highly intricate behavior that is difficult to visualize and therefore exhibits intriguing properties, including personalities.
Second, another characteristic of ant colonies, and the main reason for adopting this model, is that an ant's walk is affected by both the cost given on the graphs and the pheromone left by other ants. The former is part of the creative background knowledge, and the latter is the key factor enabling the system to autonomously develop its own style. The situation parallels a creator who has been working for years: a set of idioms will inevitably form and further develop into a distinct style.
Finally, ant colony optimization has a useful trait, namely explicit exploration, represented by a specific exploration-probability parameter. With a certain probability, the choice of the next move is not affected by the aforementioned interaction of cost and pheromone. For optimization, this mechanism lets the search process escape local optima and find better solutions. For artistic creation, it is more like the creator's unexpected inspiration, which becomes an important factor in creating and developing a unique style; this is one of the main goals and would be the ideal result of this research.
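The interplay described in this subsection, multiple graphs, edge costs, pheromone deposits, and an exploration probability, can be sketched in a few lines of Python. This is an illustrative toy, not the authors' implementation; all class and parameter names here are our own assumptions.

```python
import random

class AntsOMG:
    """Toy sketch of the AntsOMG idea: ants walk on several graphs at once,
    guided by edge cost and accumulated pheromone, with a chance of pure
    exploration; pheromone is deposited on traversed edges."""

    def __init__(self, graphs, costs, deposit=1.0, evaporation=0.1, explore=0.2):
        self.graphs = graphs                      # name -> {vertex: [successors]}
        self.costs = costs                        # name -> {(r, s): cost}
        self.pheromone = {g: {} for g in graphs}  # name -> {(r, s): level}
        self.deposit, self.evaporation, self.explore = deposit, evaporation, explore

    def step(self, graph_name, r):
        """One move of one ant on one graph, depositing pheromone as it goes."""
        successors = self.graphs[graph_name][r]
        ph = self.pheromone[graph_name]
        if random.random() < self.explore:        # unexpected "inspiration"
            s = random.choice(successors)
        else:                                     # follow cost and pheromone
            w = [ph.get((r, u), 0.0) + self.costs[graph_name][(r, u)]
                 for u in successors]
            s = random.choices(successors, weights=w)[0]
        ph[(r, s)] = ph.get((r, s), 0.0) + self.deposit
        return s

    def evaporate(self):
        """Decay all pheromone levels once, e.g. between ants."""
        for ph in self.pheromone.values():
            for edge in ph:
                ph[edge] *= 1.0 - self.evaporation
```

In a full run, many ants would walk on all graphs in turn, and the pheromone left behind would gradually bias later walks, which is the mechanism the text identifies as style formation.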

4.2. The Implementation of Cantus Firmus Composer

Inheriting the framework of AntsOMG, in order to tackle the task of developing styles, which we consider an important characteristic of intelligent behavior, we selected the creation of cantus firmi as a showcase. According to the cantus firmus specification [85] used in the task, a minimal set of graphs, Graph x and Graph y, was defined. Although AntsOMG allows ants to walk on multiple graphs at the same time, in this implementation ants walk on Graph x and Graph y alternately. Graph x describes the decision-making process for writing a note (Figure 3). Each turn starts at the "Start" vertex and decides whether to continue the previous trend ("directional") or to balance it ("complemental"). These two directions roughly describe a composer's intention when writing a cantus firmus, and different composing styles may develop between computation instances.
The walk then branches: if one makes the "directional" choice, one must select one of two paths, "conjunct" or "disjunct". If one makes the "complemental" choice, the next step is to choose between balancing a short-term trend or a long-term trend. The difference between these two choices is that the former follows only the direction of the last note and reverses it, while the latter makes the choice according to the overall range of the part written so far: if the last note is high in the register, the melody will go downwards, and vice versa.
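Read as a data structure, this decision process is a small directed graph. The sketch below encodes it in Python; the vertex names are paraphrased from the text and are illustrative only, not the identifiers used in the authors' code.

```python
# Illustrative encoding of Graph x; vertex names paraphrased from the text.
GRAPH_X = {
    "Start": ["Directional", "Complemental"],
    "Directional": ["Conjunct", "Disjunct"],                  # continue the trend
    "Complemental": ["BalanceShortTerm", "BalanceLongTerm"],  # balance the trend
    # Leaf vertices determine the strategy for the next moves on Graph y.
    "Conjunct": [], "Disjunct": [],
    "BalanceShortTerm": [], "BalanceLongTerm": [],
}

def one_turn(choose):
    """Walk from Start to a leaf using a choice function; return the strategy."""
    v = "Start"
    while GRAPH_X[v]:
        v = choose(GRAPH_X[v])
    return v
```

In the actual framework, `choose` would be the pheromone/cost-weighted movement rule rather than a fixed function.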
According to the result of the moves on Graph x, a path is chosen to determine the strategy for the moves on Graph y. The structure of Graph y is much more complicated than that of Graph x. It consists of a series of pitches, through which the ants may walk out a complete cantus firmus. In order to capture the seventeenth-century cantus firmus style in this graph, the authors place degrees of the diatonic gamut at the vertices, instead of pitch classes (Figure 4). The whole graph (Figure 5) consists of twelve pitches: D3, E3, F3, G3, A3, B3, C4, D4, E4, F4, G4, and A4, and, following the induction of the counterpoint scholar K. Jeppesen, it connects only the intervals that existed in the melodic style of that period [85] (p. 85):
Ascending and Descending
  • Major and minor second.
  • Major and minor third.
  • Perfect fourth.
  • Perfect fifth.
  • Perfect octave.
Ascending only
  • Minor sixth.
It is worth noting that the role of "cost" in these two graphs, especially in Graph y, is basically to characterize the style of the seventeenth century. For example, in most situations there is a tendency toward stepwise motion and small leaps of a third, and directional appears more frequently than complemental. These presets can certainly be subtly altered by the pheromone left by the artificial ants and thus create different styles under the same historical background and genre. The moving mechanisms for simulating ants, including the choice of the next move and the deposit of pheromone, are adapted from ant colony optimization in this implementation for generating cantus firmi. The source code developed for this study is available at https://github.com/nclab/cf.composer.
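The interval list above translates directly into an adjacency structure. The sketch below builds such a graph over the twelve gamut pitches; encoding pitches as MIDI numbers is our assumption, not something stated in the paper. Because the vertices are diatonic, filtering by semitone distance automatically distinguishes major from minor intervals and excludes the tritone.

```python
# Pitches of the diatonic gamut used for Graph y (name -> MIDI number; assumed encoding).
PITCHES = {"D3": 50, "E3": 52, "F3": 53, "G3": 55, "A3": 57, "B3": 59,
           "C4": 60, "D4": 62, "E4": 64, "F4": 65, "G4": 67, "A4": 69}

# Allowed melodic intervals in semitones, per Jeppesen's list quoted above.
BOTH_WAYS = {1, 2, 3, 4, 5, 7, 12}   # m2/M2, m3/M3, P4, P5, P8
ASCENDING_ONLY = {8}                 # minor sixth, ascending only

def build_graph_y():
    """Return a directed adjacency list connecting only the allowed intervals."""
    edges = {name: [] for name in PITCHES}
    for a, pa in PITCHES.items():
        for b, pb in PITCHES.items():
            step = pb - pa
            if abs(step) in BOTH_WAYS or (step > 0 and step in ASCENDING_ONLY):
                edges[a].append(b)
    return edges
```

For instance, E3 to C4 (an ascending minor sixth) is connected, but the reverse direction is not, and F3 to B3 (a tritone) is excluded in both directions.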

5. Results

5.1. Experiment Design

Based on the trait of ants depositing pheromone en route, the authors presume that, after a large number of ants have been "developing" the graphs, some unintended characteristics may be amplified as pheromone levels, leading to the formation of certain "autonomous music style models". These models can be kept as the basis for producing cantus firmi in batch. In both stages, the identical AntsOMG framework is employed, with slightly different parameter settings.
The authors divide the experiment into two stages. In the first stage, in each run, 100,000 ants "crawl" on the set of graphs, and each graph set is independently developed. In the second stage, fewer "artificial ants" are used, with no pheromone deposit or evaporation, to compose cantus firmi, which are then observed and analyzed to recognize possible autonomously developed styles.

5.2. The First Stage: Developing the “Style Models”

The parameters used at this stage are as follows:
  • Ecclesiastical Mode: Dorian
  • Cantus firmus length: seven notes minimum;
  • Ant (Music Thread) number: 100,000
  • Ant movement rule:
    p(r, s) = \begin{cases} \dfrac{[\tau(r,s)]^{\alpha} + [\eta(r,s)]^{\beta}}{\sum_{u \in J(r)} \left( [\tau(r,u)]^{\alpha} + [\eta(r,u)]^{\beta} \right)}, & \text{if } q \le q_0; \\ \dfrac{1}{|J(r)|}, & \text{otherwise.} \end{cases}
    p(r, s) represents the probability for an ant to go from r to s. J(r) is the set of vertices reachable from r. τ(r, s) and η(r, s) represent the pheromone and the cost on the edge from r to s, respectively. α and β weight the relative importance of τ(r, s) and η(r, s). q0 is the exploit chance, and q is a random number uniformly distributed in [0, 1]. The ant movement rule used in this study is an adaptation of the hybridization of Ant System (Equation (4) of [17]) and Ant Colony System (Equation (3) of [18]). Different ant movement/state transition rules may be adopted for different purposes or requirements of the task at hand.
  • Pheromone evaporation rate: 10%
  • Pheromone deposit amount: 1.0/per ant-edge
Graph x:
  • Factors of pheromone/cost: α = 2.0 , β = 1.0
  • Exploit chance: q 0 = 90%
Graph y:
  • Factors of pheromone/cost: α = 1.0 , β = 1.0
  • Exploit chance: q 0 = 80%
  • Dominant attraction factor = 10.0
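As a concrete reading of the movement rule above, the following sketch implements p(r, s): with probability q0 the ant exploits the combined attraction of pheromone and cost, sampling proportionally to τ^α + η^β; otherwise it explores uniformly among the successors J(r). Data structures and names are ours, not the authors'.

```python
import random

def choose_next(r, successors, tau, eta, alpha, beta, q0):
    """One application of the movement rule: exploit with probability q0
    (sample proportionally to tau^alpha + eta^beta), else explore uniformly."""
    if random.random() <= q0:
        weights = [tau[(r, u)] ** alpha + eta[(r, u)] ** beta
                   for u in successors]
        return random.choices(successors, weights=weights)[0]
    return random.choice(successors)
```

With the first-stage Graph x settings (α = 2.0, β = 1.0, q0 = 0.9), an edge with high pheromone would be chosen far more often than its siblings, while the remaining 10% of moves stay purely exploratory.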
In order to analyze and discuss the obtained results, we selected five developed styles, as Style Models, from several experiment trials: model no.4 from 14-56-06-148799200; models no.4, no.8, and no.9 from 15-16-18-586298200; and model no.3 from 17-03-43-745114. Table 1 shows a part of the graph data of these five Style Models. The values stand for the compound results of cost and pheromone. The percentage is the calculated probability of choosing the path, disregarding the exploration rate. The paths of the highest probabilities are listed: two paths for Graph x and five paths for Graph y. From the consistency of the highest-probability paths on Graph x, most of the Style Models maintain the basic stepwise style of cantus firmus, but the variance is still significantly large. The second-highest probability begins to diverge; Models 2 and 3 share the same order (Directional-Conjunct and Directional-Disjunct), but the two probabilities of Style Model 3 are much closer. Hence, each model appears to be very different. The situation of Graph y is also very interesting. Among the top five paths, most are stepwise. The conjunct paths to and from the final (D) and dominant (A) of the Dorian mode (D3 E3, D4 C4, E3 D3, A4 G4, B3 A3, G3 A3, etc.) unsurprisingly rank among the best, which properly forms the modal structure of cantus firmus. Under the strong influence of stepwise motion from Graph x, a few combinations of thirds still show up in the top-five list (E4 C4, G3 E3), indicating that the melodic style of each group may be very different from the others.

5.3. The Second Stage: Generating Cantus Firmi

In this stage, the authors use the aforementioned five selected Style Models to create actual cantus firmi. The parameters of the AntsOMG framework are similar, but there are some differences in how the graphs are used:
  • The structure of the graphs and the cost values, which represent the known background, are not changed.
  • The pheromone values developed from the first stage are loaded in each graph.
  • The Style Model is read-only when being used and the pheromone does not evaporate with time.
  • In order to keep the original appearances of the model during the process of composing, the authors use artificial ants that react to the preloaded pheromone but do not deposit pheromone.
Using the Style Models developed in the first stage, only 2% as many ants are needed to create a large body of cantus firmi in particular styles.
Finally, the authors used a filtering/evaluation function with three indicators to select the created cantus firmi for presentation. Please note that this evaluation function is definitely not an objective for optimization and cannot be used as one. It merely plays the role of a filter for discarding cantus firmi that are not considered generally pleasing. The following three indicators are simply some fundamental melodic properties; other indicators can surely be adopted for the filtering purpose if desired or needed:
  • Range: 100 if the range is between a fourth and an octave; the score is reduced when it exceeds or falls short of this range.
  • Length: 100 if between 7 and 15 notes; the score is reduced when exceeding or insufficient.
  • Dominant count: 100 if the melody reaches the dominant three times; the score is reduced when exceeding or insufficient.
The score function for each indicator is defined as:
\mathrm{score}_{\mathrm{indicator}}(x) = \begin{cases} 100, & x \in [l, u], \\ \dfrac{100}{1 + \min(|x - l|,\, |x - u|)}, & \text{otherwise.} \end{cases}
Here, $x$ is the indicator value of the corresponding cantus firmus, and $l$ and $u$ are the lower and upper bounds of the indicator as given above. The evaluation function is then defined as:
$$
eval(cf) = score_{Range}(Range(cf)) + score_{Length}(Length(cf)) + score_{DC}(DC(cf)),
$$
where c f is a cantus firmus and D C stands for “Dominant count”.
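The filter can be sketched in a few lines. The $[l, u]$ bounds for Length (7 to 15 notes) and Dominant count (exactly 3) are taken from the indicators above; encoding the fourth-to-octave Range as 5 to 12 semitones, as well as the function names themselves, are our illustrative assumptions:

```python
def score_indicator(x, l, u):
    """Score of one indicator: 100 inside [l, u], decaying with the
    distance to the nearer bound outside it."""
    if l <= x <= u:
        return 100.0
    return 100.0 / (1.0 + min(abs(x - l), abs(x - u)))

def eval_cf(range_semitones, length, dominant_count):
    """Sum of the three indicator scores (maximum 300). The semitone
    encoding of Range is an assumption, not the paper's definition."""
    return (score_indicator(range_semitones, 5, 12)
            + score_indicator(length, 7, 15)
            + score_indicator(dominant_count, 3, 3))
```

A melody one note too long, for instance, loses half of its Length score, since the distance to the nearer bound is 1 and $100/(1+1) = 50$.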
Five independent experiments were conducted with each Style Model, and from them 100 melodies were selected according to their scores for analysis and discussion in the next section. Again, the evaluation function here is not an objective function as in an optimization task. It is part of the implementation for cantus firmi, not of AntsOMG; it is used only to filter out unwanted generated pieces, and no optimization process is conducted upon it.
As mentioned in Section 2.2, all of the rules of cantus firmus are intended to allow humans to sing naturally. Hence, in the following section, we verify that the cantus firmi produced in our showcase are indeed singable by having them performed, as a way of evaluation in the realm of music. Moreover, links to the recordings are provided so that readers can listen and judge the auditory amiability of these cantus firmi; in particular, listeners may easily sing along, which is one of the important characteristics of cantus firmus and can be qualitatively evaluated.

6. Discussion

In this section, the authors observe the cantus firmi created by using each Style Model from the perspective of music and discuss the creative style exhibited.
The authors selected seven cantus firmi from each model, performed them in a Medieval hurdy-gurdy-like manner, and recorded each of them in order to demonstrate the unique character of each Style Model. Melodies created from each Style Model are discussed in the following analysis, with their characteristics explained. As illustration, the first two cantus firmi of each model are marked with trend lines and brackets on the music score.
The so-called “style” here is not a decisive distinction and cannot serve as a key for partitioning or classification; a cantus firmus exhibiting some characteristic of one Style Model may also exhibit that of another. Rather, the tendencies produced by different Style Models allow us to deliver melodies featuring the characteristics described below, which corresponds to our idea of creative behavior.
The discussion of the five Style Models is given in what follows. The first part of each description, in italics, is how a musician would describe the style from the perspective of music; it is followed by a more direct, operational description and the corresponding music score.
Style Model 1: Ascending and descending scales in five-note groups are the most frequently seen in this style, which creates an interesting flavor, as if playing on a slide.
Stepwise motion is greatly preferred in the cantus firmus style. This Style Model is characterized by successive stepwise motions in the same direction, appearing at least four times to form a five-note group, as shown in Figure 6 (see the brackets indicating the five-note groups). Listen to the selected examples, including the two shown in Figure 6, at https://e.cctcc.art/ec20m1. One can hear that the melodic flow is indeed smooth, without abrupt leaps or jerky gaps. More examples of this Style Model are available at https://e.cctcc.art/ec20m1g.
Style Model 2: More daring, with larger disjunct motion (beyond a fourth), some beautiful combinations are created. A downward progression of a fourth at the beginning is especially interesting.
In this model, the cantus firmi feature at least one leap of an octave and two leaps of intervals greater than a third, as shown in Figure 7 (the brackets mark the leaps). The interval here refers to the distance between two notes and is expressed using the usual musical terms, such as thirds, fourths, etc. Listen to the selected examples, including the two shown in Figure 7, at https://e.cctcc.art/ec20m2. Although there are some big leaps that create tension in the musical expression, they are crafted into a smooth arch, which remains accessible to sing. More examples of this Style Model are available at https://e.cctcc.art/ec20m2g.
Style Model 3: A mixture of stepwise motion and thirds, sometimes with an exotic flavor of the pentatonic scale.
In this Style Model, the melodic intervals of the cantus firmi mainly comprise seconds and thirds, and combinations of a second followed by a third (or vice versa) account for at least 50%, as shown in Figure 8 (the brackets mark the successive seconds and thirds). Listen to the selected examples, including the two shown in Figure 8, at https://e.cctcc.art/ec20m3. The melody is no longer purely stepwise; there are combinations of different intervals, which bring out the exotic flavor of the pentatonic scales used in Asian music. More examples of this Style Model are available at https://e.cctcc.art/ec20m3g.
Style Model 4: Often going upward from the very beginning, with stepwise motion mixed with thirds, continuing to the distance of a seventh, or even a bit further at times.
The cantus firmi with successive upward stepwise and third motions, marked by brackets, are shown in Figure 9. Listen to the selected examples, including the two shown in Figure 9, at https://e.cctcc.art/ec20m4. The beginnings of the examples have a distinct tendency to go upward, reaching up to the distance of a seventh, one of the widest intervals within a scale. More examples of this Style Model are available at https://e.cctcc.art/ec20m4g.
Style Model 5: The notes swinging up and down to find a balance at the beginning of the melody, forming a lingering figure.
In this Style Model, the cantus firmi open with a four-note figure composed of intervals smaller than a fifth, at least one of which is greater than a second, with consecutive notes moving in opposite directions, as shown in Figure 10 (notice the opening figures marked by the brackets). Listen to the selected examples, including the two shown in Figure 10, at https://e.cctcc.art/ec20m5. More examples of this Style Model are available at https://e.cctcc.art/ec20m5g.
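Operational descriptions like the ones above are mechanically checkable. As a purely illustrative sketch, not part of the paper's implementation, the trait of Style Model 1 (at least four successive stepwise motions in one direction, forming a five-note group) can be tested on a pitch sequence encoded as MIDI-like note numbers:

```python
def has_five_note_run(pitches):
    """Return True if the pitch sequence contains at least four successive
    stepwise motions (1 or 2 semitones) in the same direction, i.e., a
    five-note scale run. The MIDI-number encoding is an assumption."""
    run = 1        # length of the current run of same-direction steps
    prev_dir = 0   # +1 ascending, -1 descending, 0 no current run
    for a, b in zip(pitches, pitches[1:]):
        step = b - a
        if 1 <= abs(step) <= 2:
            d = 1 if step > 0 else -1
            run = run + 1 if d == prev_dir else 2
            prev_dir = d
        else:
            run, prev_dir = 1, 0   # a leap breaks the run
        if run >= 5:
            return True
    return False
```

For instance, the ascending run C-D-E-F-G satisfies the criterion, while a melody of leaps and isolated steps does not.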
From the results presented above, it is clearly observed that the AntsOMG implementation designed for autonomous music style development exhibits tendencies to produce different music styles on the basis of the given background knowledge, from the analytical viewpoints of both the computational data and the music itself.
Finally, in addition to the seven selected, performed, and recorded cantus firmi of each Style Model shown in this paper, more cantus firmi, with their corresponding music scores, can be listened to via https://e.cctcc.art/ec20 or https://github.com/nclab/cf.composer.

7. Conclusions

In this paper, a meta-framework called ants on multiple graphs (AntsOMG) was proposed for autonomous development and creativity automation in general situations. An implementation derived from AntsOMG, specifically for composing cantus firmi, was presented as a showcase addressing the automation of creativity and intelligent behavior as well as the autonomous development of music compositional styles. After conducting experiments on developing cantus firmus styles autonomously, five Style Models were observed, selected, and used to generate a large number of cantus firmi, on which professional musical analysis was then conducted. It was found that each of the developed Style Models was indeed able to create cantus firmi of a distinctive style. Hence, the presented implementation can be used to create cantus firmi in batch and possibly also other music genres with minor modifications. Moreover, the proposal, AntsOMG, may be adapted to handle autonomous development in other domains or disciplines with corresponding, appropriate implementations.
The study presented in this paper creates new possibilities for research on applying computational methodologies to music, provides new directions for potential future work, and may help humans gain insight into what a creative mind might be like and how it might operate, pushing our understanding of intelligence slightly further. In the short term, future work along this line includes adopting the developed music styles to generate music pieces for practical use on the music side and, on the methodology side, augmenting the framework's components, e.g., making the graphs dynamically mutable, adaptable, and evolvable so as to reflect the developing process and remember the accumulated experience. In the medium term, AntsOMG may be employed to develop more complex music genres based on multiple layers of musical thinking and meta-thinking, expressed as graphs and the actions of ants on them; AntsOMG can also be adapted to other fields and domains, such as painting and artistic design. In the long run, our ultimate goal is to extend AntsOMG with other features, properties, and characteristics observable in human creative activities and creation processes, in order to construct an operational model capable of depicting creativity and the ability to create.

Author Contributions

Conceptualization, C.-Y.C. and Y.-P.C.; Funding acquisition, Y.-P.C.; Investigation, C.-Y.C. and Y.-P.C.; Methodology, C.-Y.C.; Software, C.-Y.C.; Writing—original draft, C.-Y.C.; Writing—review and editing, Y.-P.C. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported in part by the Ministry of Science and Technology of Taiwan under Grant MOST 109-2221-E-009-127.

Acknowledgments

The authors would like to thank renowned Taiwanese cellist Sheng-wen Liu for her performance and Chih-fang Chiang for his assistance with the experiment data and baritone singing in the performance recordings.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. A famous cantus firmus by J. J. Fux.
Figure 2. Overview of AntsOMG. n: the number of graphs.
Figure 3. Initial state of Graph x, with the cost on each edge.
Figure 4. The gamut of Graph y.
Figure 5. Initial state of Graph y, with the cost on each edge.
Figure 6. Cantus firmi from Style Model 14-56-06-148799200_4.
Figure 7. Cantus firmi from Style Model 15-16-18-586298200_8.
Figure 8. Cantus firmi from Style Model 15-16-18-586298200_9.
Figure 9. Cantus firmi from Style Model 15-16-18-586298200_4.
Figure 10. Cantus firmi from Style Model 17-03-43-745114_3.
Table 1. Top Edges of the Graphs. Each cell gives the edge, its compound cost/pheromone values, and (for Graph x) the resulting probability. Dr, Directional; Cj, Conjunct; Cp, Complemental; Lt, LongTerm; Dj, Disjunct; St, ShortTerm.

| Graph x | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 |
| --- | --- | --- | --- | --- | --- |
| 1st | Dr→Cj, 49,083.959 / 33,518.673 (42.59%) | Dr→Cj, 55,168.654 / 29,107.249 (33.78%) | Dr→Cj, 64,532.318 / 39,186.275 (41.58%) | Dr→Cj, 66,121.265 / 45,017.96 (46.18%) | Cp→Lt, 35,009.786 / 20,315.491 (40.05%) |
| 2nd | Dr→Cj, 49,083.959 / 33,518.673 (42.59%) | Dr→Cj, 55,168.654 / 29,107.249 (33.78%) | Dr→Cj, 64,532.318 / 39,186.275 (41.58%) | Dr→Cj, 66,121.265 / 45,017.96 (46.18%) | Cp→Lt, 35,009.786 / 20,315.491 (40.05%) |

| Graph y | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 |
| --- | --- | --- | --- | --- | --- |
| 1st | Dr, 5409.668 | Dr, 5449.636 | Dr, 7275.272 | Dr, 6675.998 | Cp, 4591.74 |
| 2nd | Dr, 4753.187 | Dr, 5089.088 | Dr, 6432.616 | Dr, 6668.417 | Cp, 4294.632 |
| 3rd | Dr, 4071.119 | Dr, 4967.537 | Dr, 5725.657 | Dr, 5073.571 | Cp, 4114.437 |
| 4th | Dr, 4061.555 | Dr, 4276.991 | Dr, 5632.016 | Dr, 4394.844 | Cp, 4018.67 |
| 5th | Dr, 3974.099 | Dr, 3572.79 | Dr, 5019.647 | Dr, 4331.918 | Cp, 3597.508 |

Chang, C.-Y.; Chen, Y.-P. AntsOMG: A Framework Aiming to Automate Creativity and Intelligent Behavior with a Showcase on Cantus Firmus Composition and Style Development. Electronics 2020, 9, 1212. https://doi.org/10.3390/electronics9081212