AKILLI SİSTEMLER VE UYGULAMALARI DERGİSİ
JOURNAL OF INTELLIGENT SYSTEMS WITH APPLICATIONS
J. Intell. Syst. Appl.
E-ISSN: 2667-6893
This work is licensed under a Creative Commons Attribution 4.0 International License.

Recognition of Turkish Command to Play Chess Game Using CNN

Satranç Oyunu için CNN Kullanılarak Türkçe Komutları Tanıma

How to cite: Kutlu Y, Karaca G. Recognition of Turkish command to play chess game using CNN. Akıllı Sistemler ve Uygulamaları Dergisi (Journal of Intelligent Systems with Applications) 2022; 5(1): 71-73. DOI: 10.54856/jiswa.202205211

Full Text: PDF, in English.


Title: Recognition of Turkish Command to Play Chess Game Using CNN

Abstract: A platform has been created that allows chess to be played with Turkish voice commands. The aim of this study is to enable individuals whose movement abilities are limited, whether congenitally or as a result of disease or accident, to play chess as a social activity without the help of another person, and to be rehabilitated at the same time. The platform consists of three parts: a chess module, a human-computer interaction module, and an artificial intelligence module. Twenty-nine words were selected to control movement within the game. Voice recordings from 151 people, 86 men and 65 women, were used. Features were extracted from 43790 voice recordings using the mel frequency cepstral coefficient (MFCC) and gammatone cepstral coefficient (GTCC) methods. The resulting features were classified with a conventional CNN model: the MFCC and GTCC features were each used as inputs to the CNN, and the two feature sets were also combined to train the model. Depending on the method used, accuracies between 83% and 85.9% were obtained, and the results obtained with the MFCC method were found to be the most successful.
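The front end of the pipeline described above turns each voice recording into a matrix of cepstral coefficients before the CNN sees it. As a rough illustration of the MFCC step only, here is a minimal NumPy-only sketch; the frame size, hop, filterbank size, and number of coefficients are illustrative assumptions, not the settings used in the paper, and the synthetic tone merely stands in for a recorded command.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, mid, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, mid):
            fb[i - 1, k] = (k - lo) / max(mid - lo, 1)
        for k in range(mid, hi):
            fb[i - 1, k] = (hi - k) / max(hi - mid, 1)
    return fb

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_filters=26, n_coeffs=13):
    # Frame the signal, apply a Hamming window, take the power spectrum.
    frames = np.array([signal[s:s + n_fft] * np.hamming(n_fft)
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Mel filterbank energies -> log -> DCT-II gives the cepstral coefficients.
    log_e = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_filters))
    return log_e @ dct.T  # shape: (n_frames, n_coeffs)

# One second of a synthetic 440 Hz tone stands in for a recorded command word.
sr = 16000
t = np.arange(sr) / sr
features = mfcc(np.sin(2 * np.pi * 440.0 * t), sr=sr)
print(features.shape)  # one row of 13 coefficients per 512-sample frame
```

A GTCC front end follows the same framing and DCT structure but replaces the triangular mel filterbank with a gammatone filterbank; the resulting 2-D coefficient matrices are what a CNN classifier would consume as input.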

Keywords: Chess; MFCC; GTCC; deep learning; human-computer interaction


Başlık: Satranç Oyunu için CNN Kullanılarak Türkçe Komutları Tanıma

Özet: Türkçe ses komutları ile satranç oynanmasını sağlayan bir platform oluşturulmuştur. Bu çalışmanın amacı, doğuştan ya da belirli bir hastalık veya kaza sonucu hareket yetenekleri kısıtlanmış bireylerin, başka bir kişinin yardımı olmadan sosyal bir etkinlik olarak satranç oynamalarını ve bir yandan da rehabilite olmalarını sağlamaktır. Platform; satranç modülü, insan-bilgisayar etkileşimi modülü ve yapay zeka modülü olmak üzere üç bölümden oluşmaktadır. Platform üzerinde oyun içinde hareketin sağlanması için 29 sözcük belirlenmiştir. 86 erkek, 65 kadın olmak üzere 151 kişiden alınan ses kayıtları kullanılmıştır. 43790 ses kaydı üzerinde mel frekansı kepstral katsayıları (MFCC) ve gammaton kepstral katsayıları (GTCC) yöntemleri kullanılarak öznitelik çıkarımı yapılmıştır. Elde edilen öznitelikler geleneksel CNN modeli ile sınıflandırılmıştır. CNN modelinde girdi olarak MFCC ve GTCC yöntemleriyle elde edilen veriler kullanılmıştır. Ayrıca iki yöntemle elde edilen veriler birleştirilerek modelde eğitime alınmıştır. Oluşturulan modelde, kullanılan yöntemlere bağlı olarak %83 ile %85,9 arasında sonuçlar elde edilmiştir. MFCC yöntemi kullanılarak elde edilen sonuçların daha başarılı olduğu belirlenmiştir.

Anahtar kelimeler: Satranç; MFCC; GTCC; derin öğrenme; insan-bilgisayar etkileşimi

