The rapid convergence of Artificial Intelligence (AI) technologies has the potential to reshape most organisations, whether in the private or public sector (Daugherty et al., 2019). In particular, there is emerging anecdotal evidence of AI being used to influence the patient journey and medical practice, and of its potential to change the landscape of healthcare. Although AI holds the promise of liberating patient data and enabling more personalised, evidence-based medicine (He et al., 2019), it also raises ethical issues that can lead to patient distrust.
The collection and utilisation of personal data by AI and analytical algorithms give rise to serious issues, exposing individuals to privacy invasion, fraud, lack of transparency, algorithm misuse, information leakage, and identity theft (Sivarajah et al., 2017; Wearn et al., 2019). In fact, 63% of surveyed UK adults are uncomfortable with allowing AI to replace doctors and nurses for some tasks, such as suggesting treatments, and 49% are unwilling to share their personal health data to develop algorithms that might improve quality of care (Fenech et al., 2018). AI could also trigger risks for care delivery by devaluing physicians’ skills, failing to meet transparency standards, underestimating the biases of algorithms, and neglecting the fairness of clinical deployment (Vayena et al., 2018). Such ethical dilemmas and concerns, if not well addressed when implementing AI for digital health and medical analytics, can not only harm patients but also damage healthcare organisations’ reputations (Wang et al., 2018). To tackle these ethical challenges, data protection regulations have come into force in many countries, such as the Data Protection Act 2018, the UK's implementation of the European Union's General Data Protection Regulation (GDPR). These regulations have the potential to improve individuals’ confidence in sharing personal information with healthcare organisations. This trend has led to a scholarly and practical focus on the responsible use of AI.
Responsible AI refers to the integration of ethical and responsible use of AI into the strategic implementation and organisational planning process (Wang et al., 2020). It aims at designing and implementing ethical, transparent, and accountable AI solutions that help organisations maintain trust and minimise privacy invasion. Responsible AI places humans (e.g. patients) at the centre and meets stakeholder expectations as well as applicable regulations and laws. Its ultimate goal is to balance satisfying patient needs through the responsible use of AI with attaining long-term economic value for healthcare organisations. Despite its importance for organisational prosperity and the significant attention devoted to it, responsible AI use in healthcare is still in its infancy.
Current research on the use of AI mainly focuses on the technological understanding of its implementation and on exploring the economic value of AI applications. Research on the practices, mechanisms, infrastructure and ecosystems supporting responsible AI use in healthcare is lacking. Thus, there is an urgent need to develop research on AI for healthcare from a social responsibility perspective, turning ethical considerations from a barrier into an opportunity to improve patient trust and engagement. Understanding the role of responsible AI use in creating value in healthcare not only contributes to an emerging information systems (IS) research field, but also provides practical recommendations for healthcare practitioners. As such, this special issue welcomes submissions from multiple disciplines including information systems, digital health, medical analytics, marketing, operations, computer science, psychology, and sociology. All approaches (empirical, analytical, or conceptual) that create new insights for a deeper understanding of responsible AI are welcome.
Indicative topics include, but are not limited to, the following:
Key Dates
Submission Guidelines:
Papers submitted to the special issue will be subject to the journal's review process and submission guidelines.

Submission of extended abstracts: due 31 March 2020
Submission of an extended abstract is not compulsory for the full paper submission, but it is highly encouraged. Extended abstracts must not exceed 1,500 words or 6 double-spaced pages. The page limit includes everything: the cover page (containing the title, abstract and keywords), all text, figures, tables, references, and appendices. Extended abstracts will be reviewed only by the guest editors. Authors can submit extended abstracts to the guest editors during the submission window, which will be open from 1 March to 31 March 2020. All authors submitting an extended abstract will be invited to attend an invitation-only workshop to be held at the International Conference on Digital Health and Medical Analytics (DHA 2020) in Beijing, China (1-3 July 2020). Constructive feedback will be provided on each presentation in the conference workshop. The workshop on responsible AI will aim to develop the papers further, using presentations and panels to invite dialogue on the best way to develop and review all submitted papers, so as to make this the strongest special issue possible.
Full paper submission: due 30 September 2020
Full papers are due 30 September 2020 via the Information Systems Frontiers submission system. All papers must be submitted under the Special Issue: “Responsible AI for Digital Health and Medical Analytics” category.
Guest Editors:
Dr Uthayasankar Sivarajah, School of Management, University of Bradford, UK (Lead Guest Editor)
Dr Yichuan Wang, Sheffield University Management School, University of Sheffield, UK
Dr Hossein Olya, Sheffield University Management School, University of Sheffield, UK
Mr Sherin Mathew, Analytics and AI Lead, Microsoft GBS UK at IBM & Founder of AI Tech North, UK
References
Daugherty, P. R., Wilson, H. J., & Chowdhury, R. (2019). Using artificial intelligence to promote diversity. MIT Sloan Management Review, 60(2), 1.
Fenech, M., Strukelj, N., & Buston, O. (2018). Ethical, social and political challenges of artificial intelligence in health. Available at: https://wellcome.ac.uk/sites/default/files/ai-in-health-ethical-social-political-challenges.pdf.
He, J., Baxter, S. L., Xu, J., Xu, J., Zhou, X., & Zhang, K. (2019). The practical implementation of artificial intelligence technologies in medicine. Nature Medicine, 25(1), 30-36.
Sivarajah, U., Kamal, M. M., Irani, Z., & Weerakkody, V. (2017). Critical analysis of Big Data challenges and analytical methods. Journal of Business Research, 70, 263-286.
Vayena, E., Blasimme, A., & Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLoS Medicine, 15(11), e1002689.
Wang, Y., Kung, L., & Byrd, T. A. (2018). Big data analytics: Understanding its capabilities and potential benefits for healthcare organizations. Technological Forecasting and Social Change, 126, 3-13.