CollageNet: Fusing arbitrary melody and accompaniment into a coherent song
| Main Authors: | Abudukelimu Wuerkaixi, Christodoulos Benetatos, Zhiyao Duan, Changshui Zhang |
| --- | --- |
| Format: | Conference Proceedings |
| Published: | ISMIR, 2021 |
| Online Access: | https://zenodo.org/record/5624619 |
Table of Contents:
- When writing pop or hip-hop music, musicians sometimes sample from other songs and fuse the samples into their own music. We propose a new task in the symbolic music domain that mirrors this sampling practice, along with a neural network model named CollageNet to fulfill it. Specifically, given a piece of melody and an unrelated accompaniment of the same length, we fuse them into harmonic two-track music after making the necessary changes to the inputs. In addition, users take part in the fusion process by controlling the amount of change along several disentangled musical aspects: the rhythm and pitch of the melody, and the chord and texture of the accompaniment. We conduct objective and subjective experiments to demonstrate the validity of our model. Experimental results confirm that our model achieves a significantly higher level of harmony than rule-based and data-driven baseline methods. Furthermore, the musicality of each track does not deteriorate after the transformation applied by CollageNet, in which respect it is also superior to the two baselines.
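
To make the controllable-fusion idea in the abstract concrete, here is a minimal Python sketch of how user controls over disentangled aspects (rhythm, pitch, chord, texture) might bound the changes applied to each track before decoding. This is an illustrative assumption, not CollageNet's actual API: all class, function, and parameter names (`MelodyLatents`, `apply_user_controls`, `controls`, etc.) are hypothetical.

```python
# Hypothetical sketch of user-controlled fusion over disentangled latent codes.
# Names and interfaces are illustrative assumptions, not the authors' implementation.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class MelodyLatents:
    rhythm: List[float]   # disentangled rhythm code of the melody
    pitch: List[float]    # disentangled pitch code of the melody


@dataclass
class AccompanimentLatents:
    chord: List[float]    # disentangled chord code of the accompaniment
    texture: List[float]  # disentangled texture code of the accompaniment


def blend(original: List[float], proposed: List[float], amount: float) -> List[float]:
    """Move the original code toward the model-proposed code.

    `amount` in [0, 1] is the user's control over how much this aspect may change.
    """
    return [(1.0 - amount) * o + amount * p for o, p in zip(original, proposed)]


def apply_user_controls(
    melody: MelodyLatents,
    accomp: AccompanimentLatents,
    proposed_melody: MelodyLatents,
    proposed_accomp: AccompanimentLatents,
    controls: Dict[str, float],
) -> Tuple[MelodyLatents, AccompanimentLatents]:
    """Limit proposed changes along each disentangled aspect before the two
    adjusted tracks would be decoded and rendered together."""
    new_melody = MelodyLatents(
        rhythm=blend(melody.rhythm, proposed_melody.rhythm, controls["rhythm"]),
        pitch=blend(melody.pitch, proposed_melody.pitch, controls["pitch"]),
    )
    new_accomp = AccompanimentLatents(
        chord=blend(accomp.chord, proposed_accomp.chord, controls["chord"]),
        texture=blend(accomp.texture, proposed_accomp.texture, controls["texture"]),
    )
    return new_melody, new_accomp


if __name__ == "__main__":
    # Toy example: fully accept chord changes, keep the melody's pitch intact.
    melody = MelodyLatents(rhythm=[0.2, 0.8], pitch=[0.5, 0.1])
    accomp = AccompanimentLatents(chord=[0.9, 0.3], texture=[0.4, 0.6])
    proposed_m = MelodyLatents(rhythm=[0.3, 0.7], pitch=[0.6, 0.2])
    proposed_a = AccompanimentLatents(chord=[0.1, 0.5], texture=[0.4, 0.6])
    controls = {"rhythm": 0.5, "pitch": 0.0, "chord": 1.0, "texture": 0.2}
    print(apply_user_controls(melody, accomp, proposed_m, proposed_a, controls))
```

The linear blend stands in for whatever mechanism the model actually uses; the point is only that each of the four disentangled aspects gets its own user-set budget for change, matching the controls described in the abstract.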