Authors: Aron Henriksson
Addresses: Department of Computer and Systems Sciences, Stockholm University, 164 07 Kista, Sweden
Abstract: The scarcity of large labelled datasets of clinical text that can be exploited within the paradigm of supervised machine learning creates barriers to the secondary use of data from electronic health records. It is therefore important to develop the capability to leverage the large amounts of unlabelled data that tend to be readily available. One such technique uses distributional semantics to create word representations in a wholly unsupervised manner, and uses existing training data to learn prototypical representations of predefined semantic categories. Features describing whether a given word belongs to a certain category are then provided to the learning algorithm. It has been shown that combining multiple distributional semantic models, each employing a different word order strategy, can enhance predictive performance. Here, a second hyperparameter, the size of the context window, is also varied, and an experimental investigation shows that this leads to further performance gains.
Keywords: distributional semantics; semantic space ensembles; random indexing; named entity recognition; electronic health records; EHRs; de-identification; clinical texts; machine learning; supervised learning; unsupervised learning; unlabelled data; word representations; context window.
International Journal of Data Mining and Bioinformatics, 2015 Vol.13 No.4, pp.395 - 411
Received: 12 May 2015
Accepted: 15 May 2015
Published online: 28 Oct 2015
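To make the approach summarised in the abstract concrete, the following is a minimal sketch of random indexing with a configurable context window. It is not the paper's implementation; the corpus, dimensionality, and all function names are illustrative. Each word gets a sparse ternary index vector, and its context vector accumulates the index vectors of neighbours within the window; varying the window size produces the distinct semantic spaces that the abstract proposes to combine.

```python
import random
from collections import defaultdict

def random_index_vector(dim=100, nonzero=4, seed=None):
    """Sparse ternary index vector: a few randomly placed +1/-1 entries."""
    rng = random.Random(seed)
    vec = [0] * dim
    for pos in rng.sample(range(dim), nonzero):
        vec[pos] = rng.choice((-1, 1))
    return vec

def build_semantic_space(tokens, window=2, dim=100):
    """Accumulate each word's context vector by summing the index
    vectors of all words within `window` positions on either side."""
    # Seed each index vector with the word itself so runs are repeatable.
    index = {w: random_index_vector(dim, seed=w) for w in set(tokens)}
    context = defaultdict(lambda: [0] * dim)
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                for k, v in enumerate(index[tokens[j]]):
                    context[word][k] += v
    return dict(context)

# Each window size yields a distinct semantic space; such spaces can
# then be combined into an ensemble for the downstream classifier.
tokens = "the patient was given aspirin for chest pain".split()
spaces = {win: build_semantic_space(tokens, window=win) for win in (1, 2, 4)}
```

In a real setting the corpus would be large unlabelled clinical text, and the resulting word vectors (or category-membership features derived from them) would feed a supervised named entity recogniser.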