Authors: Michel Gagnon; Amal Zouaq; Francisco Aranha; Faezeh Ensan; Ludovic Jean-Louis
Addresses: Department of Computer Engineering and Software Engineering, Polytechnique Montréal, C.P. 6079, succ. Centre-ville, H3C 3A7, Montréal, QC, Canada | Department of Computer Engineering and Software Engineering, Polytechnique Montréal, C.P. 6079, succ. Centre-ville, H3C 3A7, Montréal, QC, Canada; School of Electrical Engineering and Computer Science, University of Ottawa, 800 King Edward Ave., K1N 6N5, Ottawa, Ontario, Canada | Fundação Getulio Vargas, Escola de Administração de Empresas de São Paulo, Rua Itapeva, 474 - 9º andar, CEP 01332-000 - Bela Vista, São Paulo, SP, Brazil | Ferdowsi University of Mashhad, Mashhad, Iran; University of New Brunswick, Fredericton, Canada | Netmail, 180 Peel Street, Suite 333, H3C 2G7, Montreal, QC, Canada
Abstract: Semantic annotation, the process of identifying key phrases in texts and linking them to concepts in a knowledge base, is an important foundation for semantic information retrieval and for the uptake of the semantic web. Despite the emergence of many semantic annotation systems, very few comparative studies of their performance have been published. In this paper, we evaluate the performance of existing systems on three tasks: full semantic annotation, named entity recognition, and keyword detection. More specifically, the spotting capability (recognising relevant surface forms in text) is evaluated for all three tasks, whereas disambiguation (correctly associating an entity from Wikipedia or DBpedia with the spotted surface forms) is evaluated only for the first two. We use logistic regression to identify statistically significant performance differences. Although some annotators specifically target one of these tasks (named entity recognition, semantic annotation, or keyword detection), our results show that they do not necessarily achieve the best performance on that task. In fact, the systems identified as full semantic annotators outperform all other systems on all data sets. We also show that there is still much room for improvement in identifying the most relevant entities described in a text.
Keywords: semantic annotation; linked data cloud; performance evaluation.
International Journal of Metadata, Semantics and Ontologies, 2019 Vol.13 No.4, pp.317 - 329
Available online: 30 Sep 2019