AKILLI SİSTEMLER VE UYGULAMALARI DERGİSİ
JOURNAL OF INTELLIGENT SYSTEMS WITH APPLICATIONS
J. Intell. Syst. Appl.
E-ISSN: 2667-6893
This work is licensed under a Creative Commons Attribution 4.0 International License.

A Convolutional Neural Network Model for Road Flow Direction Detection

Yol Akış Yönünün Tespiti için Bir Konvolüsyonel Sinir Ağı Modeli

How to cite: Tümen V, Yıldırım Ö, Ergen B. A convolutional neural network model for road flow direction detection. Akıllı Sistemler ve Uygulamaları Dergisi (Journal of Intelligent Systems with Applications) 2019; 2(2): 94-99.

Full Text: PDF, in Turkish.


Title: A Convolutional Neural Network Model for Road Flow Direction Detection

Abstract: Determining in real time the characteristics of the road on which a vehicle is moving is an important research area in fields where artificial intelligence plays a critical role, such as driverless vehicles. The purpose of this study is to present a deep learning method that enables a moving vehicle to detect the flow direction of the road. In the study, Convolutional Neural Networks (KSA, from the Turkish Konvolüsyonel Sinir Ağları) were used as the deep learning model for road flow direction (YAY, from Yol Akış Yönü) detection. The YAY-KSA model developed for flow direction detection was applied to 587 real road images from the CMU VASC image database. To benchmark the proposed model, the Cifar model, a widely used KSA model, was applied to the same data. According to the classification results obtained, the designed YAY-KSA model correctly determined the flow direction at a rate of 80.1%.

Keywords: Deep learning; image processing; road direction detection; road flow detection; classification
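The abstract does not specify the layer configuration of the YAY-KSA model, but it is built on the standard convolution and pooling operations of a CNN. As a hypothetical illustration only (toy image, toy kernel, none of it taken from the paper), the two core operations can be sketched in plain NumPy:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Elementwise product of the window with the kernel, then sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling with a size x size window."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size  # crop so dimensions divide evenly
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 8x8 "road image" and a 3x3 vertical-edge kernel (hypothetical values)
img = np.arange(64, dtype=float).reshape(8, 8)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)

fmap = np.maximum(conv2d(img, kernel), 0)  # convolution + ReLU
pooled = max_pool(fmap)                    # 2x2 max pooling
print(fmap.shape, pooled.shape)            # (6, 6) (3, 3)
```

In a full model such as the one the paper describes, several such convolution/pooling stages would feed a fully connected classifier that outputs the flow-direction class; the actual filter counts, sizes, and class labels are given in the full text, not here.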


Başlık: Yol Akış Yönünün Tespiti için Bir Konvolüsyonel Sinir Ağı Modeli

Özet: Sürücüsüz araçlar gibi yapay zekanın etkin olarak kullanıldığı kritik alanlarda araçların hareket halinde olduğu yola ait özelliklerin gerçek zamanlı olarak tespit edilmesi önemli bir çalışma alanıdır. Bu makale çalışmasının amacı, hareket halindeki bir aracın yolun akış yönünü tespit etmesini sağlayacak bir derin öğrenme yöntemi sunmaktır. Çalışmada, Yol Akış Yönü (YAY) tespiti için derin öğrenme modellerinden Konvolüsyonel Sinir Ağları (KSA) kullanılmıştır. Akış yönünün tespiti için geliştirilen YAY-KSA modeli CMU VASC görüntü veri tabanında bulunan 587 adet gerçek yol resimleri üzerinde uygulanmıştır. Hazırlanan modelin başarımlarını kıyaslamak için aynı veriler üzerinde, yaygın KSA modeli olan Cifar modeli uygulanmıştır. Elde edilen sınıflandırma sonuçlarına göre, tasarlanan YAY-KSA modelinin %80.1 düzeyinde akış yönünü doğru olarak tespit ettiği görülmüştür.

Anahtar kelimeler: Derin öğrenme; görüntü işleme; yol yönü tespiti; yol akışı tespiti; sınıflandırma


Bibliography:
  • Krizhevsky A. Convolutional deep belief networks on CIFAR-10. Unpublished Manuscript, 2012.
  • Dale R, Stedmon A. To delegate or not to delegate: A review of control frameworks for autonomous cars. Applied Ergonomics 2016; 53(B): 383-388.
  • Brown B. The social life of autonomous cars. Computer 2017; 50(2): 92-96.
  • Victor N, Tudoran C. Road following for autonomous vehicle navigation using a concurrent neural classifier. In 2008 World Automation Congress, September 28-October 2, 2008, Waikoloa, HI, USA, pp. 1-6.
  • Neagoe V, Valcu M, Sabac B. A neural approach for detection of road direction in autonomous navigation. In International Conference on Computational Intelligence, 1999, pp. 324-333.
  • Ozguner U, Stiller C, Redmill K. Systems for safety and autonomous behavior in cars: The DARPA grand challenge experience. Proceedings of the IEEE 2007; 95(2): 397-412.
  • Pomerleau DA. ALVINN: An autonomous land vehicle in a neural network. In Advances in Neural Information Processing Systems 1 (NIPS 1988), 1989, pp. 305-313.
  • Schmiterlow M. Autonomous path following using convolutional networks. MSc Thesis, The Institute of Technology, Linkoping University, May 2012.
  • Neagoe VE, Tudoran CT. A neural machine vision model for road detection in autonomous navigation. University Politehnica of Bucharest Scientific Bulletin, Series C: Electrical Engineering 2011; 73(2): 167-178.
  • Tumen V, Ergen B. İçerik tabanlı görüntü erişiminde derin öğrenme yöntemlerinin kullanımı [Use of deep learning methods in content-based image retrieval]. In International Conference on Artificial Intelligence and Data Processing Symposium (IDAP'16), September 17-18, 2016, Malatya, Turkey, pp. 286-290.
  • Goodfellow IJ, Erhan D, Carrier PL, Courville A, Mirza M, Hamner B, Cukierski W, Tang Y, Thaler D, Lee DH, Zhou Y, Ramaiah C, Feng F, Li R, Wang X, Athanasakis D, Shawe-Taylor J, Milakov M, Park J, Ionescu R, Popescu M, Grozea C, Bergstra J, Xie J, Romaszko L, Xu B, Chuang Z, Bengio Y. Challenges in representation learning: A report on three machine learning contests. Neural Networks 2015; 64: 59-63.
  • Gehring J, Miao Y, Metze F, Waibel A. Extracting deep bottleneck features using stacked auto-encoders. In IEEE International Conference on Acoustics, Speech and Signal Processing, May 26-31, 2013, Vancouver, BC, Canada, pp. 3377-3381.
  • LeCun Y. LeNet-5: Convolutional Neural Networks. 2013, Retrieved from http://yann.lecun.com/exdb/lenet/
  • Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.
  • Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A. Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 7-12, 2015, Boston, MA, USA, pp. 1-9.
  • Simonyan K, Zisserman A. Very deep convolutional networks for large-scale visual recognition. Retrieved from http://www.robots.ox.ac.uk/~vgg/research/very_deep/
  • He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 27-30, 2016, Las Vegas, NV, USA, pp. 770-778.
  • UFLDL Tutorial. Pooling. Retrieved from http://ufldl.stanford.edu/tutorial/supervised/Pooling/