Modified Support Vector Machine Algorithm to Reduce Misclassification and Optimizing Time Complexity

Aditya Ashvin Doshi, Prabu Sevugan, P. Swarnalatha
DOI: 10.4018/978-1-5225-3643-7.ch003

Abstract

A number of methodologies are available in the fields of data mining, machine learning, and pattern recognition for solving classification problems. In the past few years, the retrieval and extraction of information from large amounts of data has grown rapidly. Classification is a stepwise process of predicting responses using existing data. Some of the existing prediction algorithms are the support vector machine and k-nearest neighbor, but each algorithm has drawbacks that depend on the type of data. To reduce misclassification, a new support vector machine methodology is introduced: instead of placing the hyperplane exactly in the middle, its position is adjusted according to the number of data points of each class near the hyperplane. To optimize the computation time of the classification algorithm, a multi-core architecture is used to compute more than one independent module simultaneously. Together, these changes reduce misclassification and speed up the computation of a data point's class.
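To make the hyperplane adjustment concrete, here is a minimal Python sketch: a standard linear SVM is trained, the points of each class falling within a fixed window around the hyperplane are counted, and the intercept is then shifted away from the denser class. The window width `eps`, the proportional shift rule, and the function name `density_shifted_svm` are illustrative assumptions rather than the chapter's exact formulation, and the sketch omits the multi-core parallelization the abstract mentions.

```python
# Sketch of a density-adjusted SVM: fit a linear SVM, count each class's
# points near the hyperplane, and shift the intercept away from the denser
# class. The window `eps` and the shift rule are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

def density_shifted_svm(X, y, eps=1.0):
    """Train on X (n_samples, n_features) and labels y in {-1, +1}."""
    clf = SVC(kernel="linear", C=1.0).fit(X, y)
    w, b = clf.coef_[0], clf.intercept_[0]
    w_norm = np.linalg.norm(w)

    # Signed distance of every training point to the hyperplane w.x + b = 0.
    dist = (X @ w + b) / w_norm
    near = np.abs(dist) <= eps
    n_pos = int(np.sum(near & (y == 1)))
    n_neg = int(np.sum(near & (y == -1)))

    # Move the boundary away from the class with more points near it,
    # in proportion to the imbalance (a hypothetical adjustment rule).
    total = n_pos + n_neg
    shift = eps * (n_pos - n_neg) / total if total else 0.0
    b_adjusted = b + shift * w_norm

    def predict(X_new):
        return np.where(X_new @ w + b_adjusted >= 0, 1, -1)
    return predict
```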
Chapter Preview

Introduction

These days, numerous organizations use “big data” and “machine learning” technologies for data analysis. These terms describe data that is so large and complex that it becomes distinctly unwieldy to work with existing statistical algorithms, which restrict the size and type of data. Existing data mining algorithms can usually be divided into subtypes such as association rule mining, classification, and clustering (A Survey on Feature Selection Techniques and Classification Algorithms for Efficient Text Classification). Classification techniques associate unstructured data with well-structured data. Numerous classification techniques have been introduced in the field of big data, and every algorithm has its own pros and cons depending on the type of data to be classified. The performance of these techniques is generally measured in terms of cost, where cost means the required computation time and the amount of misclassification.

What is Machine Learning?

Consider “machine learning” this way. As a human being, and as a user of technology, you complete certain tasks that require you to decide or classify something. For example, when you read your inbox in the morning, you decide to mark that “Win a Free Cruise if You Click Here” email as spam. How might a computer know to do the same thing? Machine learning comprises algorithms that teach computers to perform tasks that people do every day.

The first attempts at artificial intelligence involved instructing a computer by writing a rule. If we wanted to teach a computer to make recommendations based on the weather, we might write a rule that stated: IF the weather is cloudy AND the chance of precipitation is greater than 50%, THEN suggest taking an umbrella. The problem with this approach, used in traditional expert systems, is that we do not know how much confidence to place in the rule.
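Writing the umbrella rule out as code makes the weakness easy to see: the 50% threshold and the rule itself are fixed by hand, and nothing in the program says how much confidence the recommendation deserves. This is a toy illustration, not code from the chapter.

```python
# The hand-written weather rule from the text. The 50% threshold is fixed
# by the author, and the rule carries no measure of its own confidence.
def suggest_umbrella(weather: str, chance_of_rain: float) -> bool:
    # IF the weather is cloudy AND the chance of precipitation is
    # greater than 50%, THEN suggest taking an umbrella.
    return weather == "cloudy" and chance_of_rain > 0.5

print(suggest_umbrella("cloudy", 0.7))  # True: the rule fires
print(suggest_umbrella("sunny", 0.9))   # False: the rule is blind to this case
```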

Hence, machine learning developed to imitate the pattern matching that human brains perform. Today, machine learning algorithms teach computers to recognize the features of an object. In these models, for instance, a computer is shown an apple and told that it is an apple. The computer then uses that information to characterize the various attributes of an apple, building on new data each time. At first, a computer might classify an apple as round and build a model stating that if something is round, it is an apple. Later, when an orange is introduced, the computer learns that if something is round AND red, it is an apple. Then a tomato is introduced, and so on. The computer must continually adjust its model in light of new data and assign a predictive value to each model, indicating the level of certainty that an object is one thing rather than another. For instance, yellow is more predictive of a banana than red is of an apple.
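The apple/orange/tomato process can be mimicked in a few lines of Python: each labelled example updates per-feature counts, and the counts give a rough predictive value per feature, echoing the banana-versus-apple comparison at the end of the paragraph. The data and the scoring rule here are illustrative assumptions, not the chapter's.

```python
# Toy incremental learner: count how often each feature appears with each
# label, and use the relative frequency as a crude "predictive value".
from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))   # counts[feature][label]

def learn(features, label):
    for f in features:
        counts[f][label] += 1

def predictive_value(feature, label):
    seen = counts[feature]
    total = sum(seen.values())
    return seen[label] / total if total else 0.0

learn({"round", "red"}, "apple")
learn({"round", "orange"}, "orange")   # forces the model past "round => apple"
learn({"round", "red"}, "tomato")      # now "red" alone no longer settles it
learn({"long", "yellow"}, "banana")

print(predictive_value("yellow", "banana"))  # 1.0: yellow strongly predicts banana
print(predictive_value("red", "apple"))      # 0.5: red is shared with tomato
```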

So Why is Everyone Talking about Machine Learning?

The basic algorithms for teaching a machine to complete tasks and classify like a human date back many decades. The difference between now and when the models were first devised is that the more data is fed into the algorithms, the more accurate they become. The past few decades have seen an enormous growth in the availability of information and data, allowing for far more accurate predictions than were ever possible in the long history of machine learning.

New techniques in the field of machine learning, which generally involve combining pieces that already existed, have enabled a remarkable research effort in “Deep Neural Networks (DNN)”. This has not been the result of a single major breakthrough, but rather of substantially faster computers and thousands of researchers contributing incremental improvements. This has enabled scientists to expand what is possible in machine learning, to the point that machines now beat people at difficult but narrowly defined tasks, such as recognizing faces or playing the game of Go.
