Multi-Agents Machine Learning (MML) System for Plagiarism Detection


Hadj Ahmed Bouarara
Copyright: © 2016 | Pages: 17
DOI: 10.4018/IJATS.2016010101

Abstract

Day after day, cases of plagiarism increase and become a crucial problem in the modern world, driven by the quantity of textual information available on the web. Data mining has become the foundation of many domains; one of its core tasks is text categorization, which can be used to address the problem of automatic plagiarism detection. This article presents a new approach for combating plagiarism named MML (Multi-agents Machine Learning system), composed of three modules: data preparation and digitalization, using character n-grams or bag of words as text representation methods; TF*IDF weighting to calculate the importance of each term in the corpus and transform each document into a vector; and a learning and voting phase using three supervised learning algorithms (decision tree C4.5, naïve Bayes and support vector machine).
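As a rough illustration of this three-module pipeline, the sketch below uses scikit-learn on a toy labelled corpus. The corpus, parameters and choice of character 3-grams are assumptions for illustration only, and scikit-learn's CART decision tree stands in for C4.5; this is not the authors' implementation.

```python
# Minimal sketch of the MML-style pipeline described in the abstract (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier      # CART, standing in for C4.5
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier

# Toy labelled corpus: 1 = plagiarized, 0 = original (placeholder data)
docs = ["the quick brown fox jumps over the lazy dog",
        "a completely original essay on machine learning",
        "the quick brown fox jumps over a lazy dog",
        "another genuine piece of student writing"]
labels = [1, 0, 1, 0]

# Modules 1-2: character n-gram representation weighted by TF*IDF
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 3))
X = vectorizer.fit_transform(docs)

# Module 3: three supervised learners combined by majority (hard) voting
voter = VotingClassifier(estimators=[("tree", DecisionTreeClassifier()),
                                     ("nb", MultinomialNB()),
                                     ("svm", SVC())],
                         voting="hard")
voter.fit(X, labels)
print(voter.predict(vectorizer.transform(["the quick brown fox leaps over the lazy dog"])))
```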

1. Introduction And Problematic

With the increasing number of documents available on the web, where roughly 80% of existing information is in textual form, and with the development of means of communication, finding the original author or owner of a piece of information has become a crucial subject. Following the evolution of society and technology, we clearly observe cases of plagiarism in the work of scholars and researchers. The causes of this problem are numerous and intertwined: many websites make ready-made articles and documents freely available, and such sites are ideal for plagiarists. All of this has raised the need for an effective plagiarism detection tool.

To give a global view of our work: many people do not know exactly what the word plagiarism means. It is defined as the wrongful misuse and theft of the thoughts, ideas, words or expressions of someone else's original work, whether in the same language or in a different language (Basile, 2009). Depending on the behaviour of the plagiarist, we can distinguish several types of plagiarism: verbatim plagiarism, when the plagiarist copies words or sentences from a book, magazine or web page as they are, without quotation marks and/or without citing the source, or buys a work online; paraphrase, when the words or the syntax of the copied sentences are changed; and finally the cases that are most difficult to detect, plagiarism with translation and plagiarism of ideas, when the original idea of the author is summarized and expressed partially or completely in the plagiarist's own words (Stein, 2007).

In former years, the classical way to detect plagiarism was to examine each document manually, a process that is generally slow. This is why two techniques for automatic plagiarism detection have emerged. External plagiarism detection relies on external information, comparing the suspicious document with reference documents to decide whether it should be considered plagiarized or not (Stein, 2007; Basile, 2009). Internal plagiarism detection is based on stylometry: each document has a specific style that is compared against a base of styles, and plagiarism is detected depending on how the document is written and whether the style changes between paragraphs. This technique ignores external information and is very difficult to achieve, because an author can have different styles (Meyer, Sven, & Benno, 2008). Despite the existing research on plagiarism detection, most approaches do not take the semantic side into consideration; for example, a plagiarist who wants to write a document in French may take a paragraph from an external source and translate it in order to mask the plagiarism and remain undetectable.
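For concreteness, here is a minimal sketch of the external-detection idea, assuming TF*IDF vectors and cosine similarity; the documents and the 0.8 decision threshold are hypothetical and are not taken from the article.

```python
# Illustrative external plagiarism check: compare a suspicious document
# against reference documents with cosine similarity over TF*IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

references = ["the original source paragraph about data mining",
              "another reference text on text categorization"]
suspicious = "the original source paragraph about data mining, lightly edited"

vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 3))
ref_vectors = vectorizer.fit_transform(references)
sus_vector = vectorizer.transform([suspicious])

similarities = cosine_similarity(sus_vector, ref_vectors)[0]
THRESHOLD = 0.8  # hypothetical decision threshold
print("plagiarized" if similarities.max() >= THRESHOLD else "original")
```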

Most classical automatic plagiarism detection systems suffer from problems of execution time and quality of results because they rely on a single agent that does all the work: it reads all the documents and then treats, analyses and tracks the plagiarized ones. For example, if we are confronted with a large number of documents and require a single expert to examine each one and sort it as plagiarized or not, the process can be very slow, and the expert may struggle to analyse the documents well when their number is very large. For this reason, we thought of calling in other experts to work together, each of them analysing a subset of the documents, with the final answer produced after a meeting between them. At this level our problem is positioned: to transfer this idea to the machine in order to facilitate the task and save time. Our article lies at the intersection of different domains, as shown in Figure 1.

Figure 1. Positioning of our problem
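As a rough sketch of this multi-expert idea (not the authors' implementation), the code below splits a corpus among several worker "agents" that run in parallel and then merges their verdicts; the per-document check is a hypothetical placeholder.

```python
# Illustrative multi-agent distribution: each agent examines a subset of the
# documents in parallel, and the partial verdicts are merged at the end.
from concurrent.futures import ProcessPoolExecutor

def is_plagiarized(doc: str) -> bool:
    # Hypothetical placeholder for one agent's analysis of a single document
    return "copied" in doc

def agent(doc_subset):
    # One expert agent classifies its own share of the corpus
    return [(doc, is_plagiarized(doc)) for doc in doc_subset]

def split(docs, n_agents):
    # Round-robin partition of the corpus among the agents
    return [docs[i::n_agents] for i in range(n_agents)]

if __name__ == "__main__":
    documents = ["an original report", "a copied paragraph", "another honest essay"]
    with ProcessPoolExecutor(max_workers=3) as pool:
        partial_results = pool.map(agent, split(documents, 3))
    verdicts = dict(pair for chunk in partial_results for pair in chunk)
    print(verdicts)
```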
