Learning Full-Sentence Co-Related Verb Argument Preferences from Web Corpora

Hiram Calvo, Kentaro Inui, Yuji Matsumoto
ISBN13: 9781609608811 | ISBN10: 160960881X | EISBN13: 9781609608828
DOI: 10.4018/978-1-60960-881-1.ch007
Cite Chapter

MLA

Calvo, Hiram, et al. "Learning Full-Sentence Co-Related Verb Argument Preferences from Web Corpora." Quantitative Semantics and Soft Computing Methods for the Web: Perspectives and Applications, edited by Ramon F. Brena and Adolfo Guzman-Arenas, IGI Global, 2012, pp. 137-162. https://doi.org/10.4018/978-1-60960-881-1.ch007

APA

Calvo, H., Inui, K., & Matsumoto, Y. (2012). Learning full-sentence co-related verb argument preferences from web corpora. In R. F. Brena & A. Guzman-Arenas (Eds.), Quantitative semantics and soft computing methods for the web: Perspectives and applications (pp. 137-162). IGI Global. https://doi.org/10.4018/978-1-60960-881-1.ch007

Chicago

Calvo, Hiram, Kentaro Inui, and Yuji Matsumoto. "Learning Full-Sentence Co-Related Verb Argument Preferences from Web Corpora." In Quantitative Semantics and Soft Computing Methods for the Web: Perspectives and Applications, edited by Ramon F. Brena and Adolfo Guzman-Arenas, 137-162. Hershey, PA: IGI Global, 2012. https://doi.org/10.4018/978-1-60960-881-1.ch007


Abstract

Learning verb argument preferences has typically been approached as a pairwise problem between a verb and a single argument, or at most as a ternary relationship among subject, verb, and object. The simultaneous correlation of all arguments in a sentence, however, has not been explored thoroughly for measuring sentence plausibility, because of the increased number of potential combinations and the resulting data sparseness. In this work the authors review common methods for learning argument preferences, beginning with the simplest case of binary co-relations, then comparing with ternary co-relations, and finally considering all arguments at once. For the latter, they use an ensemble of discriminative and generative machine learning models, combining co-occurrence features and semantic features in different arrangements. They seek to answer questions about the optimal number of topics for the PLSI and LDA models, as well as the minimum number of co-occurrences that should be required to improve performance. They also explore the implications of different ways of projecting co-relations, i.e., into a word space or directly into a co-occurrence feature space. The authors evaluate their approach on a pseudo-disambiguation task, learning from large corpora extracted from the Internet.
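
To make the pseudo-disambiguation evaluation mentioned in the abstract concrete, here is a minimal self-contained sketch in Python. It assumes a toy set of (verb, argument) pairs and a simple PMI-style co-occurrence score; the example triples, the scoring function, and the confounder sampling are illustrative assumptions for exposition, not the authors' actual ensemble model or data.

import math
import random
from collections import Counter

# Toy (verb, argument) observations standing in for triples mined from a web corpus.
observed = [
    ("drink", "coffee"), ("drink", "water"), ("drive", "car"),
    ("drive", "truck"), ("read", "book"), ("read", "newspaper"),
]

verb_counts = Counter(v for v, _ in observed)
arg_counts = Counter(a for _, a in observed)
pair_counts = Counter(observed)
total = len(observed)

def pmi(verb, arg):
    """Pointwise mutual information of a (verb, argument) pair; -inf if unseen."""
    joint = pair_counts[(verb, arg)]
    if joint == 0:
        return float("-inf")
    p_pair = joint / total
    p_indep = (verb_counts[verb] / total) * (arg_counts[arg] / total)
    return math.log(p_pair / p_indep)

# Pseudo-disambiguation: pair each attested argument with a random confounder
# drawn from the argument vocabulary; the model should score the original higher.
random.seed(0)
vocab = list(arg_counts)
correct = 0
for verb, arg in observed:
    confounder = random.choice([w for w in vocab if w != arg])
    if pmi(verb, arg) > pmi(verb, confounder):
        correct += 1

print(f"pseudo-disambiguation accuracy: {correct / len(observed):.2f}")

In the chapter's setting, the PMI score would be replaced by the plausibility estimate from the discriminative/generative ensemble, and the evaluation would range over full argument tuples rather than single (verb, argument) pairs.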
