The Reuters-21578 benchmark corpus, ApteMod version

This is a publicly available version of the well-known Reuters-21578
"ApteMod" corpus for text categorization. It has been used in
publications such as:

 * Yiming Yang and X. Liu. "A re-examination of text categorization
   methods". 1999. Proceedings of 22nd Annual International SIGIR.
   http://citeseer.nj.nec.com/yang99reexamination.html

 * Thorsten Joachims. "Text categorization with support vector machines:
   learning with many relevant features". 1998. Proceedings of ECML-98,
   10th European Conference on Machine Learning.
   http://citeseer.nj.nec.com/joachims98text.html

ApteMod is a collection of 10,788 documents from the Reuters financial
newswire service, partitioned into a training set with 7769 documents
and a test set with 3019 documents. The total size of the corpus is
about 43 MB. It is also available for download from

  http://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html

which includes a more extensive history of the data revisions.

The distribution of categories in the ApteMod corpus is highly skewed,
with 36.7% of the documents in the most common category and only
0.0185% (2 documents) in each of the five least common categories. In
fact, the original data source is even more skewed: in creating the
corpus, any categories that did not contain at least one document in
the training set and one document in the test set were removed by the
corpus's original creator.

In the ApteMod corpus, each document belongs to one or more categories.
There are 90 categories in the corpus. The average number of categories
per document is 1.235, and the average number of documents per category
is about 148, or 1.37% of the corpus. (See the short example at the end
of this file for one way to reproduce these figures.)

 -Ken Williams
  ken@mathforum.org

Copyright & Notification
(extracted from the README at the UCI address above)

The copyright for the text of newswire articles and Reuters annotations
in the Reuters-21578 collection resides with Reuters Ltd. Reuters Ltd.
and Carnegie Group, Inc. have agreed to allow the free distribution of
this data *for research purposes only*.

If you publish results based on this data set, please acknowledge its
use, refer to the data set by the name "Reuters-21578, Distribution
1.0", and inform your readers of the current location of the data set
(see "Availability & Questions").
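
Reproducing the corpus statistics

The short Python sketch below recomputes the figures quoted earlier
(document counts, train/test split, number of categories, and average
categories per document). It is only an illustration and assumes the
ApteMod data as packaged by NLTK's "reuters" corpus reader, a separate
distribution in which training and test documents carry "training/" and
"test/" file-id prefixes; with a different packaging of the corpus the
loading code will differ.

    # Sketch: recount the ApteMod statistics, assuming NLTK is installed
    # and its 'reuters' corpus data has been downloaded.
    from nltk.corpus import reuters

    docs = reuters.fileids()                           # all document ids
    train = [d for d in docs if d.startswith('training/')]
    test = [d for d in docs if d.startswith('test/')]
    print(len(docs), len(train), len(test))            # 10788 7769 3019

    cats = reuters.categories()
    print(len(cats))                                   # 90 categories

    # Documents are multi-label: average categories per document.
    avg_cats = sum(len(reuters.categories(d)) for d in docs) / len(docs)
    print(round(avg_cats, 3))                          # approx. 1.235

    # Average documents per category (about 148, or 1.37% of the corpus).
    avg_docs = sum(len(reuters.fileids(c)) for c in cats) / len(cats)
    print(round(avg_docs, 1))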