
Positive Unlabelled Learning for Document Classification

Author(s): Xiao-Li Li (Institute for Infocomm Research, A*STAR, Singapore)
Copyright: 2009
Pages: 6
Source title: Encyclopedia of Data Warehousing and Mining, Second Edition
Source Author(s)/Editor(s): John Wang (Montclair State University, USA)
DOI: 10.4018/978-1-60566-010-3.ch238


Abstract

In traditional supervised learning, a large number of labeled positive and negative examples is typically required to learn an accurate classifier. In practice, however, obtaining class labels for large sets of training examples is costly, and negative examples are often lacking altogether. These practical considerations motivate a new family of classification algorithms that can learn from a set of labeled positive examples P together with a set of unlabelled examples U (which contains both hidden positive and hidden negative examples). That is, we want to build a classifier using only P and U, in the absence of labeled negative examples, to classify the data in U as well as future test data. This problem is called the Positive Unlabelled (PU) learning problem.

For instance, a computer scientist may want to build an up-to-date repository of machine learning (ML) papers. Ideally, one can start with an initial set of ML papers (e.g., a personal collection) and use it to find other ML papers in related online journals or conference series, e.g., the Artificial Intelligence journal, AAAI (National Conference on Artificial Intelligence), IJCAI (International Joint Conference on Artificial Intelligence), SIGIR (ACM Conference on Research and Development in Information Retrieval), and KDD (ACM International Conference on Knowledge Discovery and Data Mining). With the enormous volume of text documents on the Web, in Internet news feeds, and in digital libraries, finding the documents related to one's interests can be a real challenge.

In the application above, the class of documents one is interested in is called the positive class (i.e., the ML papers in the online sources). The set of known positive documents is denoted P (namely, the initial personal collection of ML papers). The unlabelled set U (e.g., the papers from the AAAI proceedings) contains two groups of documents. One group consists of the hidden positive documents in U (e.g., the ML papers in the AAAI proceedings). The other group, comprising the rest of the documents in U, consists of negative documents (e.g., the non-ML papers in the AAAI proceedings), since they do not belong to the positive class. Given the positive set P, PU learning aims either to identify the hidden positive documents in U or to classify future test documents into the positive and negative classes. Note that collecting unlabelled documents is normally easy and inexpensive, especially from online sources.
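The P-and-U setup described above can be made concrete with a minimal sketch of a common two-step PU strategy (a generic heuristic for illustration, not the specific method of this chapter): first, pick the documents in U least similar to the positive documents as "reliable negatives"; then classify the remaining unlabelled documents against the two resulting groups. All names (`pu_classify`, `neg_fraction`, the toy documents) are hypothetical; a bag-of-words nearest-centroid rule stands in for a real text classifier.

```python
# Two-step PU learning sketch (hypothetical, illustrative only):
# Step 1 - extract reliable negatives from U; Step 2 - train/classify.
from collections import Counter
import math

def vectorize(doc):
    # Trivial bag-of-words representation of one document.
    return Counter(doc.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def centroid(vectors):
    c = Counter()
    for v in vectors:
        c.update(v)
    return c

def pu_classify(P, U, neg_fraction=0.3):
    """Label each document in U as 'positive' or 'negative',
    given only positive examples P and the unlabelled set U."""
    Pv = [vectorize(d) for d in P]
    Uv = [vectorize(d) for d in U]
    pos_c = centroid(Pv)
    # Step 1: the fraction of U least similar to the positive
    # centroid is taken as the reliable-negative set.
    ranked = sorted(range(len(Uv)), key=lambda i: cosine(Uv[i], pos_c))
    n_neg = max(1, int(neg_fraction * len(Uv)))
    rn_idx = set(ranked[:n_neg])
    neg_c = centroid([Uv[i] for i in rn_idx])
    # Step 2: classify every unlabelled document by the closer centroid
    # (ties go to negative, the safer default in PU settings).
    return ["negative" if i in rn_idx
            else ("positive" if cosine(v, pos_c) > cosine(v, neg_c)
                  else "negative")
            for i, v in enumerate(Uv)]

# Toy run: P is a small "personal collection"; U mixes a hidden
# positive with two negatives.
P = ["machine learning classifier training",
     "supervised learning with labeled examples"]
U = ["learning a text classifier from examples",   # hidden positive
     "stock market quarterly earnings report",     # negative
     "recipe for chocolate cake baking"]           # negative
print(pu_classify(P, U))
```

Real PU learners replace the similarity heuristic with iterative methods (e.g., EM with a naive Bayes classifier, or "spy" documents planted in U to calibrate the reliable-negative threshold), but the two-step shape is the same.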
