Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/119597
Full metadata record
DC Field: Value
dc.contributor.author: Pang, G.
dc.contributor.author: Cao, L.
dc.contributor.author: Chen, L.
dc.contributor.author: Liu, H.
dc.date.issued: 2018
dc.identifier.citation: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018, pp. 2041-2050
dc.identifier.isbn: 9781450355520
dc.identifier.uri: http://hdl.handle.net/2440/119597
dc.description.abstract: Learning expressive low-dimensional representations of ultrahigh-dimensional data, e.g., data with thousands/millions of features, has been a major way to enable learning methods to address the curse of dimensionality. However, existing unsupervised representation learning methods mainly focus on preserving the data regularity information and learning the representations independently of subsequent outlier detection methods, which can result in suboptimal and unstable performance of detecting irregularities (i.e., outliers). This paper introduces a ranking model-based framework, called RAMODO, to address this issue. RAMODO unifies representation learning and outlier detection to learn low-dimensional representations that are tailored for a state-of-the-art outlier detection approach - the random distance-based approach. This customized learning yields more optimal and stable representations for the targeted outlier detectors. Additionally, RAMODO can leverage little labeled data as prior knowledge to learn more expressive and application-relevant representations. We instantiate RAMODO to an efficient method called REPEN to demonstrate the performance of RAMODO. Extensive empirical results on eight real-world ultrahigh dimensional data sets show that REPEN (i) enables a random distance-based detector to obtain significantly better AUC performance and two orders of magnitude speedup; (ii) performs substantially better and more stably than four state-of-the-art representation learning methods; and (iii) leverages less than 1% labeled data to achieve up to 32% AUC improvement.
dc.description.statementofresponsibility: Guansong Pang, Longbing Cao, Ling Chen and Huan Liu
dc.language.iso: en
dc.publisher: Association for Computing Machinery
dc.rights: © 2018 Association for Computing Machinery.
dc.source.uri: http://dx.doi.org/10.1145/3219819.3220042
dc.subject: Outlier detection; representation learning; ultrahigh-dimensional data; dimension reduction
dc.title: Learning representations of ultrahigh-dimensional data for random distance-based outlier detection
dc.type: Conference paper
dc.contributor.conference: International Conference on Knowledge Discovery and Data Mining (KDD) (19 Aug 2018 - 23 Aug 2018 : London, UK)
dc.identifier.doi: 10.1145/3219819.3220042
dc.publisher.place: New York
dc.relation.grant: http://purl.org/au-research/grants/arc/DP180100966
pubs.publication-status: Published
dc.identifier.orcid: Pang, G. [0000-0002-9877-2716]
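The abstract refers to a "random distance-based" outlier detection approach: a point is scored by its distance to points in small random subsamples, averaged over an ensemble. The sketch below is a minimal, generic illustration of that idea, not the paper's REPEN method; the function name and all parameter values are assumptions chosen for the example.

```python
import numpy as np

def random_distance_scores(X, n_ensembles=50, subsample_size=8, seed=0):
    """Score each row of X by its average distance to the nearest
    point in small random subsamples (higher score = more outlying).
    A hedged sketch of a random distance-based detector; parameters
    are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    scores = np.zeros(n)
    for _ in range(n_ensembles):
        # draw a small random subsample of the data
        idx = rng.choice(n, size=min(subsample_size, n), replace=False)
        # distances from every point to each subsample member
        d = np.linalg.norm(X[:, None, :] - X[idx][None, :, :], axis=2)
        # accumulate each point's nearest-neighbour distance
        scores += d.min(axis=1)
    return scores / n_ensembles

# Usage: a point far from a Gaussian cluster gets the highest score.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)),   # 100 inliers
               np.full((1, 5), 10.0)])           # 1 obvious outlier
s = random_distance_scores(X)
```

Because the scores are ensemble averages over tiny subsamples, each round costs only O(n * subsample_size) distance computations, which is what makes this family of detectors fast on large data.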
Appears in Collections:Aurora harvest 4
Australian Institute for Machine Learning publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.