Calls for papers

 

International Journal of Data Mining and Bioinformatics

OA Special Issue on "Explainable AI for Text Mining: Interoperability and Transparency"


Guest Editors:
Dr. Claudia Duran, Universidad Tecnológica Metropolitana, Chile
Dr. Jacson Rodrigues Correia-Silva, Federal University of Espirito Santo, Brazil
Dr. Arup Kumar Sahoo, SOA University, India


Artificial intelligence (AI) is disrupting the text-mining world by greatly reducing the time and resources needed to extract relevant information. As the growth of data has outpaced human capacity to process it, AI can be used to analyse, process and interpret content quickly. Its benefits include increased accuracy, a better user experience and higher productivity. The application of AI in text mining has drastically changed the way we process and analyse data, revolutionising our ability to continuously improve the user experience, enhance the quality of products and services, and predict future trends.

The significance of AI in text mining lies in its ability to extract information from large amounts of unstructured data. Text mining is the extraction of useful information from large volumes of unstructured text; it involves identifying patterns, clusters and trends across large document collections. Using AI, we can gain valuable insights about customers or users by analysing the content they post on social media, and use those insights to improve their product experience. AI can also be applied to text extraction and sentiment analysis, helping us understand how people feel about the world around them. As AI and machine learning (ML) become more widespread in text mining, the need for explainable AI systems has become increasingly important. Explainable AI systems provide transparency and a clear understanding of their decision-making processes, making them more trustworthy, reliable and usable.

There is a pressing need both for more effective text-mining systems and for better ways to explain how these systems work and why they produce the results they do. Explainable AI is one of the most important milestones in the field of artificial intelligence and has the potential to fundamentally change how humans interact with technology. This special issue explores the challenges of interoperability and transparency in AI and the methods that can be used to achieve explainability. We invite papers that contribute to an open discussion of research directions and dissemination issues related to explainable AI, and we seek contributions of a theoretical or empirical nature that explore these issues from an industry perspective as well as part of broader research goals. The special issue aims to bring together research and development in this area, exploring the current state of the field and its future directions.

Subject Coverage
Suitable topics include, but are not limited to, the following:

  • Models for explainability in the context of interpretability, transparency and confidence
  • Theoretical and empirical research on explainable AI systems
  • Techniques and frameworks for explaining machine learning models, including interpretability methods for deep neural networks
  • Methods for evaluating the effectiveness of explainable AI systems
  • Technical aspects of explainable AI, including explanation techniques and models that generate explanations of their outputs
  • Applications of explainable AI in real-world settings such as healthcare, cybersecurity, transportation and more
  • The design and development of explainable AI systems for text mining, focusing on interoperability
  • The evaluation of the transparency and interpretability of AI systems for text mining and their impact on user trust and adoption
  • The use of natural language processing (NLP) techniques for providing explanations in text mining
  • Applying explainable AI for text mining in different domains such as healthcare, finance and law
  • Future directions for research in explainable AI for text mining, with a focus on promoting interoperability and standards

Notes for Prospective Authors

Submitted papers should not have been previously published nor be currently under consideration for publication elsewhere. (N.B. Conference papers may only be submitted if the paper has been completely rewritten and if appropriate written permissions have been obtained from any copyright holders of the original paper.)

All papers are refereed through a peer review process.

All papers must be submitted online. To submit a paper, please read our Submitting articles page.

This is an Open Access Special Issue. There is an article processing charge of €2,000 per paper to publish in this Special Issue. You can find more information on Open Access here.


Important Dates

Manuscripts due by: 30 June 2026