NATIONAL UNIVERSITY OF MONGOLIA

Detailed Information


Research interests:
This information is displayed as registered by professors, lecturers, and staff in the NUM information database. We accept no responsibility for incomplete or incorrect information.
Author(s): Б.Гэрэлмаа
"Цахилгаан хэлхээ: Онолын үндэс" (Electric Circuits: Theoretical Foundations), 2024-5-24
Author(s): С.Отгонцэцэг, Б.Гэрэлмаа, Д.Сумъяаханд
"MorinKhuur: A morin khuur dataset for processing", INTERNATIONAL JOURNAL OF TECHNOLOGY AND INNOVATION - KHURELTOGOOT, vol. 18, pp. 60-64, 2022-12-1

http://khureltogoot.mysa.mn


Author(s): Б.Гэрэлмаа, Ч.Лодойравсал, K.Gou
"Data Generation from Robotic Performer for Chord Recognition", IEEJ Transactions on Electronics, Information and Systems, vol. 141, no. 2, pp. 205-213, 2021-1-1

https://www.jstage.jst.go.jp/article/ieejeiss/141/2/141_205/_article/-char/en


Author(s): Ч.Лодойравсал, Б.Гэрэлмаа, G.Koutaki
"Guitar Chord Sensing and Recognition Using Multi-Task Learning and Physical Data Augmentation with Robotics", Sensors, vol. 20, no. 21, pp. 1-12, 2020-10-26

https://www.mdpi.com/1424-8220/20/21/6077

Abstract

In recent years, many researchers have shown increasing interest in music information retrieval (MIR) applications, with automatic chord recognition being one of the popular tasks. Many studies have demonstrated considerable improvement using deep learning based models in automatic chord recognition problems. However, most of the existing models have focused on simple chord recognition, which classifies the root note with the major, minor, and seventh chords. Furthermore, in learning-based recognition, it is critical to collect high-quality and large amounts of training data to achieve the desired performance. In this paper, we present a multi-task learning (MTL) model for a guitar chord recognition task, where the model is trained using a relatively large-vocabulary guitar chord dataset. To solve data scarcity issues, a physical data augmentation method that directly records the chord dataset from a robotic performer is employed. Deep learning based MTL is proposed to improve the performance of automatic chord recognition with the proposed physical data augmentation dataset. The proposed MTL model is compared with four baseline models and its corresponding single-task learning model using two types of datasets, including a human dataset and a human dataset combined with the augmented dataset. The proposed methods outperform the baseline models, and the results show that most scores of the proposed multi-task learning model are better than those of the corresponding single-task learning model. The experimental results demonstrate that physical data augmentation is an effective method for increasing the dataset size for guitar chord recognition tasks.
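The multi-task idea sketched in the abstract can be illustrated with a minimal example: a shared trunk whose representation feeds two task heads, with the training loss summed across tasks. This is only an illustrative sketch, not the paper's actual architecture; the layer sizes, the choice of heads (root note vs. chord quality), and all parameter values below are assumptions.

```python
import numpy as np

# Hypothetical sketch of multi-task chord recognition: one shared
# feature extractor, two task-specific heads. Sizes are invented.
rng = np.random.default_rng(0)

N_FEAT, N_HIDDEN = 128, 32       # assumed audio-feature and hidden sizes
N_ROOTS, N_QUALITIES = 12, 4     # 12 root notes; 4 example chord qualities

W_shared = rng.normal(scale=0.1, size=(N_FEAT, N_HIDDEN))
W_root = rng.normal(scale=0.1, size=(N_HIDDEN, N_ROOTS))
W_qual = rng.normal(scale=0.1, size=(N_HIDDEN, N_QUALITIES))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    """The shared representation feeds both task heads."""
    h = np.tanh(x @ W_shared)
    return softmax(h @ W_root), softmax(h @ W_qual)

def mtl_loss(p_root, p_qual, y_root, y_qual):
    """Multi-task loss: sum of the per-task cross-entropies."""
    ce_root = -np.log(p_root[np.arange(len(y_root)), y_root]).mean()
    ce_qual = -np.log(p_qual[np.arange(len(y_qual)), y_qual]).mean()
    return ce_root + ce_qual

x = rng.normal(size=(8, N_FEAT))         # a batch of 8 feature frames
p_root, p_qual = forward(x)
loss = mtl_loss(p_root, p_qual,
                rng.integers(0, N_ROOTS, 8),
                rng.integers(0, N_QUALITIES, 8))
```

Sharing the trunk lets the two related labels (root and quality) regularize each other, which is the usual motivation for multi-task learning in this setting.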

Author(s): Ч.Лодойравсал, Б.Гэрэлмаа, G.Koutaki
"2019 IEEE 8th Global Conference on Consumer Electronics (GCCE)", IEEE Global Conference on Consumer Electronics (GCCE), Japan, 2019-10-17, vol. 2019, pp. 859-860

Abstract

In this paper, we propose a new framework for generating a large dataset using synthetic data generation by robotics. In learning-based recognition, for example, using convolutional neural networks (CNNs), it is critical for performance to collect high-quality and large amounts of training data. Previously, to increase the training dataset, data augmentation techniques based on digital signal processing were applied to the original sound data. However, data augmentation based on digital signal processing is a limited method, because it depends on prior knowledge of the data and cannot be applied to all domains. In contrast, we propose a new dataset collection technique using a robot that automatically plays instruments, by which it becomes possible to add high-quality data to the training samples. Experimental results for guitar chord recognition show that the proposed method using CNNs and a guitar robot can outperform CNN systems with traditional data augmentation.
