Title: A large-scale high-definition music performance strategy based on the combination of reality and Metaverse
Authors: Minglong Wang; Shimanqi Kong; Daohua Pan
Addresses: Shanghai Institute of Visual Arts, Shanghai, China; Shanghai Institute of Visual Arts, Shanghai, China; Heilongjiang Minzu College, Harbin, Heilongjiang, China
Abstract: In this paper, we explore the intersection of the Metaverse, music generation, deep learning and performance strategy. Deep learning techniques have shown promise in generating music and can be applied to create personalised soundscapes for users in the Metaverse. However, creating music with deep learning is a complex process that requires careful consideration of performance strategy: factors such as data quality, model selection and training methodology significantly influence the quality of the generated music. We propose a method for large-scale, high-definition music generation and dance performance that combines the Metaverse with deep learning techniques. First, we use a Transformer model to generate polyphonic music. Then, we use a variational autoencoder (VAE) to encode dance movements. Finally, we use a joint attention mechanism to map the generated music to dance performances. Experimental results and comparative analysis demonstrate the effectiveness of the proposed method.
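The abstract describes a three-stage pipeline: a Transformer generates polyphonic music tokens, a VAE encodes dance movements into a latent motion space, and a joint attention mechanism maps music features onto dance. The following is a minimal PyTorch sketch of how such a pipeline could be wired together; all class names, dimensions, and architectural choices here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the Transformer -> joint attention -> dance-VAE pipeline.
# Module names, sizes, and wiring are assumptions for demonstration only.
import torch
import torch.nn as nn

class MusicTransformer(nn.Module):
    """Autoregressive Transformer producing polyphonic music token features."""
    def __init__(self, vocab_size=512, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)
        # Causal mask so each step attends only to earlier music events.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.encoder(x, mask=mask)
        return self.head(h), h  # next-token logits and hidden music features

class DanceVAE(nn.Module):
    """VAE that encodes dance pose sequences into a latent motion space."""
    def __init__(self, pose_dim=51, latent_dim=64, hidden=128):
        super().__init__()
        self.enc = nn.GRU(pose_dim, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.GRU(latent_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def encode(self, poses):
        _, h = self.enc(poses)
        h = h[-1]
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z, seq_len):
        # Broadcast the latent over time and decode joint coordinates per frame.
        z_seq = z.unsqueeze(1).expand(-1, seq_len, -1)
        h, _ = self.dec(z_seq)
        return self.out(h)

class MusicToDance(nn.Module):
    """Joint (cross-)attention mapping music features to a dance latent."""
    def __init__(self, d_model=256, latent_dim=64, n_heads=8):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.to_latent = nn.Linear(d_model, latent_dim)

    def forward(self, music_feats):
        q = self.query.expand(music_feats.size(0), -1, -1)
        fused, _ = self.attn(q, music_feats, music_feats)
        return self.to_latent(fused.squeeze(1))

# Usage: generate music features, map them to a dance latent, decode poses.
music, vae, mapper = MusicTransformer(), DanceVAE(), MusicToDance()
tokens = torch.randint(0, 512, (2, 32))   # batch of music token sequences
_, feats = music(tokens)                   # (2, 32, 256) music features
z = mapper(feats)                          # (2, 64) dance latent per clip
poses = vae.decode(z, seq_len=32)          # (2, 32, 51) pose coordinates per frame
print(poses.shape)
```

In this sketch the joint attention module pools the Transformer's music features into a single latent that conditions the VAE decoder; the actual method may instead attend frame by frame or share the attention weights between the two modalities.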
Keywords: reality and metaverse; deep learning; music generation; large-scale creation.
DOI: 10.1504/IJCAT.2025.150327
International Journal of Computer Applications in Technology, 2025 Vol.77 No.3/4, pp.207 - 214
Received: 30 Oct 2024
Accepted: 10 Jun 2025
Published online: 09 Dec 2025