Making machine learning useable by revealing internal states update - a transparent approach
Online publication date: Tue, 08-Nov-2016
by Jianlong Zhou; M. Asif Khawaja; Zhidong Li; Jinjun Sun; Yang Wang; Fang Chen
International Journal of Computational Science and Engineering (IJCSE), Vol. 13, No. 4, 2016
Abstract: Machine learning (ML) techniques are often difficult to apply effectively in practice because of their complexity, so making ML usable has recently emerged as an active research field. Moreover, an ML algorithm typically remains a 'black box', which makes it difficult for users to understand complicated ML models. As a result, users are uncertain about the usefulness of ML results, and this reduces the effectiveness of ML methods. This paper focuses on making a 'black-box' ML process transparent by explicitly presenting real-time updates of its internal states to users. A user study was performed to investigate the impact of revealing these internal state updates on how easily users understand the data analysis process, how meaningful they find the real-time status updates, and how convincing they find the ML results. The study showed that revealing the internal states of an ML process can make the data analysis process easier to understand, real-time status updates more meaningful, and ML results more convincing.
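The paper does not specify an implementation, but the core idea of exposing a training process's internal states in real time can be sketched with a simple callback hook. The following is a minimal illustrative example, not the authors' system: a gradient-descent loop for linear regression that emits each iteration's internal state (epoch, loss, current parameters) to a user-supplied function instead of running as an opaque black box. All names here (`train_linear`, `on_status`) are hypothetical.

```python
# Illustrative sketch (assumption, not the paper's implementation):
# a training loop that reveals its internal state updates to the user
# via a callback, rather than behaving as a black box.
from typing import Callable, Dict, List, Optional, Tuple


def train_linear(
    data: List[Tuple[float, float]],
    lr: float = 0.02,
    epochs: int = 200,
    on_status: Optional[Callable[[Dict], None]] = None,
) -> Tuple[float, float]:
    """Fit y = w*x + b by gradient descent, emitting status updates."""
    w, b = 0.0, 0.0
    n = len(data)
    for epoch in range(epochs):
        grad_w = grad_b = loss = 0.0
        for x, y in data:
            err = (w * x + b) - y
            loss += err * err / n
            grad_w += 2 * err * x / n
            grad_b += 2 * err / n
        w -= lr * grad_w
        b -= lr * grad_b
        if on_status is not None:
            # Expose the internal state of this iteration to the user.
            on_status({"epoch": epoch, "loss": loss, "w": w, "b": b})
    return w, b


# Usage: collect the stream of internal state updates while fitting
# points sampled from y = 2x + 1.
history: List[Dict] = []
w, b = train_linear(
    [(float(x), 2.0 * x + 1.0) for x in range(5)],
    on_status=history.append,
)
```

A real system could route these status updates to a progress display or visualisation; the point is only that each iteration's state is observable as it happens.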