On managing security in smart e-health applications
by Fiammetta Marulli; Emanuele Bellini; Stefano Marrone
International Journal of Computational Science and Engineering (IJCSE), Vol. 24, No. 6, 2021

Abstract: Distributed machine learning can provide a flexible yet robust shared environment for building trusted AI applications, largely because centralised remote learning mechanisms lack privacy guarantees. Nevertheless, distributed approaches are themselves vulnerable to several attack models (chiefly data poisoning), in which a malicious member of the learning party injects corrupted data. As such applications grow in criticality, learning models must address security and privacy alongside scalability. The aim of this paper is to strengthen these applications by providing additional security features for distributed and federated learning mechanisms: more specifically, the paper examines the use of blockchain, homomorphic cryptography and meta-modelling techniques to ensure privacy protection as well as other non-functional properties.
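To illustrate the data-poisoning scenario described in the abstract, the following is a minimal, hypothetical sketch (not taken from the paper) of how a single malicious participant can skew an unweighted federated averaging step; all names and values are illustrative assumptions.

```python
# Illustrative sketch only: a single poisoned client update skewing
# plain federated averaging of model weights. Hypothetical example,
# not the aggregation scheme used in the paper.
import numpy as np

def federated_average(client_updates):
    """Aggregate client weight vectors by unweighted averaging (FedAvg-style)."""
    return np.mean(client_updates, axis=0)

# Honest clients send updates close to the true direction.
honest = [np.array([1.0, 1.0]) + np.random.normal(0, 0.05, 2) for _ in range(9)]

# A malicious participant injects a large adversarial update (poisoning).
poisoned = np.array([-50.0, -50.0])

print("Clean aggregate:   ", federated_average(honest))
print("Poisoned aggregate:", federated_average(honest + [poisoned]))
```

Because the unweighted mean has no robustness to outliers, even one such update can dominate the aggregate, which is why the paper's combination of protection mechanisms for the learning pipeline matters.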

Online publication date: Tue, 04-Jan-2022
