Application of Data Mining Algorithms in Determination of Voting Tendencies in Turkey

Ali Bayır (Moni Information Solutions Inc., Turkey), Sevinç Gülseçen (Independent Researcher, Turkey) and Gökhan Türkmen (Independent Researcher, Turkey)
DOI: 10.4018/978-1-7998-3045-0.ch008

Abstract

Political elections are influenced by a number of factors, such as political tendencies and voters' perceptions and preferences. The result of a political election can also depend on specific attributes of the candidates: age, gender, occupation, education, etc. Although it is very difficult to capture all the factors that could influence the outcome of an election, many of the attributes mentioned above can be included in a data set, and by using current data mining techniques, undiscovered patterns can be revealed. Despite the unpredictability of the human behaviors and choices involved, data mining techniques can still help in predicting election outcomes. In this study, the results of a survey conducted by the KONDA Research and Consultancy Company before the 2011 elections in Turkey were used as raw data. This study may help in understanding how data mining methods and techniques can be used in political science research. It may also reveal whether voting tendencies could be a factor in the outcome of an election.

Introduction

Human beings researched continuously until they discovered that the world is round, understood the buoyancy of water, and found a way to set foot on the moon. Through countless measurements, experiments, and observations, they reached this knowledge and made the discoveries recorded in history. At every step of this process, they drew on earlier assumptions, open questions, and concrete values, and searched for answers to the problems that came to mind. Each answer yielded new values, and processing those values revealed new meaning. Pieced together, these meaningful values gave birth to the concept of data.

Nowadays, data emerges as a response to questions such as “What? Why? How?”. The number of people asking these questions, and the frequency with which they ask them, produce an enormous volume of data that can no longer be controlled in quantity or size. This excess of data, which is multidimensional and of many different types, has made storage difficult. To meet this fundamental requirement, Database Management Systems (DBMS) were developed, enabling large quantities of data to be stored and used easily in a computer environment. In an increasingly data-driven world, the surplus of data that plays an important role in knowledge discovery has also created the need to filter it, and various new methods and algorithms for data analysis have been developed. In this way, significant progress has been made in gaining access to usable knowledge and internalized information. Data mining, the process of collecting, processing, cleaning, analysing, transforming, and modeling data in order to extract useful knowledge, combines the pattern-recognition ability of humans with the processing power of computers, and has opened a new, genuine depth of access to data.
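The steps named above (collecting, cleaning, transforming, modeling) can be sketched on a few hypothetical survey-style records. All field names, values, and the toy 1-nearest-neighbour model below are illustrative assumptions, not data or methods taken from the KONDA survey or this chapter:

```python
# Minimal sketch of a data-mining pipeline: collect -> clean -> transform -> model.
# Records, attributes, and labels are invented for illustration only.

# "Collected" raw records; None marks a missing answer.
raw = [
    {"age": 25, "education": "high_school", "vote": "A"},
    {"age": 40, "education": "university",  "vote": "B"},
    {"age": 33, "education": None,          "vote": "A"},  # incomplete record
    {"age": 52, "education": "university",  "vote": "B"},
    {"age": 29, "education": "high_school", "vote": "A"},
]

# Cleaning: drop records with any missing attribute.
clean = [r for r in raw if all(v is not None for v in r.values())]

# Transformation: encode the categorical attribute as an integer.
levels = {"high_school": 0, "university": 1}
X = [(r["age"], levels[r["education"]]) for r in clean]
y = [r["vote"] for r in clean]

# Modeling: a 1-nearest-neighbour classifier, the simplest possible example.
def predict(query):
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    nearest = min(range(len(X)), key=lambda i: dist(X[i], query))
    return y[nearest]

print(predict((27, 0)))  # classify a 27-year-old high-school graduate
```

In practice each stage would be far richer (imputation instead of dropping records, proper feature scaling, a real learning algorithm), but the pipeline shape stays the same.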

Throughout history, the development of Information and Communication Technology (ICT) has produced large amounts of data in many areas. Important advances in data and information storage, as well as in database technologies, gave rise to approaches for storing and manipulating data for further processing and for extracting valuable knowledge (Bharati, 2010).

Colossal amounts of data have been collected systematically in many areas, such as finance, marketing, energy, medicine, business and business management, credit and risk analysis, aviation, maritime and transportation systems, map information systems, social network analysis, meteorology, and political analysis. By processing this data, meaningful information and knowledge have been obtained from these large data stacks. The ability to keep unprocessed data on disks that are shrinking in size while growing in capacity is an important source of information as well.

Calculations of the per-capita information in the world indicate that machines’ application-specific capacity has roughly doubled every 14 months, while the number of computers on which data is produced has generally doubled every 18 months. As Hilbert (2011) stated, between 1986 and 2007 the world’s storage capacity roughly doubled every 40 months. For example, a single jet flight can produce 40 TB (1 TB = 10^12 bytes) of data every 60 minutes; considering that there are more than 25,000 airline flights every day, the volume of data added to the digital universe each day reaches petabytes (1 PB = 1024 TB) (Dijcks, 2013). It is predicted that the digital world will grow 50-fold from the beginning of 2010 to the end of 2020, and the data accumulated by the end of 2020 is expected to exceed 40,000 exabytes (1 EB = 1024 PB) (Gantz and Reinsel, 2012).
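A quick back-of-the-envelope check of the flight figures above, under the assumption (not stated in the source) that each flight generates roughly one hour of data, i.e. 40 TB per flight:

```python
# Rough arithmetic check of the cited flight data volumes.
# Assumption: ~40 TB generated per flight (one hour of sensor data).

TB_PER_FLIGHT = 40
FLIGHTS_PER_DAY = 25_000
TB_PER_PB = 1024          # 1 PB = 1024 TB

daily_tb = TB_PER_FLIGHT * FLIGHTS_PER_DAY
daily_pb = daily_tb / TB_PER_PB

print(f"{daily_tb} TB/day, about {daily_pb:.0f} PB/day")
```

Under this assumption the daily volume is on the order of hundreds of petabytes, consistent with the petabyte-scale figure quoted from Dijcks (2013).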
