Can the development of AI in medicine be reconciled with privacy?

This question arises whenever unauthorised use of patient health data comes to light. Given the great potential that artificial intelligence (AI) holds for medicine, the issue needs to be addressed in a systematic and transparent way.

In recent days, a great deal of attention has been paid to Google’s secret Project Nightingale, revealed by the Wall Street Journal: since 2018 the web giant has been working with Ascension, a large American non-profit health system, collecting sensitive clinical data, including patient names and birth dates, to design a new generation of AI-based clinical systems.

In the past there have been other, similar incidents in which IT companies, in collaboration with healthcare institutions and hospitals, collected and processed sensitive data, more or less anonymised, without the consent of patients.

Of course, large volumes of data are needed to develop, train and refine AI systems for medicine. The data that these systems must ingest, use and store is sensitive patient information, whose processing is regulated in Europe by the GDPR and in the USA by HIPAA.

In theory, AI systems work with anonymised data, i.e. data stripped of any element that could identify a person. However, the more information these systems collect and correlate, the greater the risk that a person can be re-identified from their demographic, social and clinical profile.
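The re-identification risk can be made concrete with the idea of k-anonymity: a record is safe only if at least k people share its combination of "quasi-identifiers" (postcode, birth year, sex, and so on). The sketch below, using entirely invented records, shows how adding attributes shrinks the crowd a patient can hide in, until someone becomes unique.

```python
# Toy sketch of k-anonymity with invented, nameless records.
# k is the size of the smallest group of records sharing the same
# quasi-identifier values; k = 1 means someone is re-identifiable.
from collections import Counter

records = [
    {"zip": "20100", "birth_year": 1970, "sex": "F", "diagnosis": "asthma"},
    {"zip": "20100", "birth_year": 1970, "sex": "M", "diagnosis": "diabetes"},
    {"zip": "20100", "birth_year": 1985, "sex": "F", "diagnosis": "asthma"},
    {"zip": "20100", "birth_year": 1985, "sex": "M", "diagnosis": "flu"},
]

def smallest_group(records, keys):
    """Size of the smallest cluster of records that share the same
    values for the given quasi-identifier keys."""
    groups = Counter(tuple(r[k] for k in keys) for r in records)
    return min(groups.values())

print(smallest_group(records, ["zip"]))                         # 4: everyone blends in
print(smallest_group(records, ["zip", "birth_year"]))           # 2: the crowd halves
print(smallest_group(records, ["zip", "birth_year", "sex"]))    # 1: each record unique
```

No name or ID was ever stored, yet three innocuous attributes together single out every patient, and the diagnosis column then leaks with them.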

The amount and variety of information is also necessary to reduce or eliminate algorithmic bias, i.e. the systematic error that those who train AI systems can introduce for various reasons (prejudice, data shortage, incorrect initial assumptions, and so on).
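A minimal illustration of the "data shortage" cause, with entirely invented biomarker values: a naive model fitted on pooled training data inherits the statistics of the well-represented group, and then systematically misjudges the under-represented one.

```python
# Invented numbers: group A is well represented in the training set,
# group B is not. A naive "learned" cut-off (the pooled mean) ends up
# describing group A only.
group_a = [5.0, 5.2, 4.9, 5.1, 5.0, 4.8, 5.3, 5.1]  # many samples
group_b = [6.8, 7.0, 7.2]                           # data shortage

training = group_a + group_b
threshold = sum(training) / len(training)  # naive cut-off, ~5.58

def flag_abnormal(value, threshold):
    """Flag a biomarker reading as abnormal if it exceeds the cut-off."""
    return value > threshold

# A perfectly typical group-B value is flagged as abnormal,
# while typical group-A values pass: the model is biased against B.
print(flag_abnormal(7.0, threshold))  # True  (false alarm for group B)
print(flag_abnormal(5.0, threshold))  # False (correct for group A)
```

The fix is not a cleverer formula but exactly what the paragraph above says: enough data, from enough different people, for the model to represent everyone.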

To be effective and precise, AI therefore needs large amounts of data at a deep level of detail. How, then, can this need be reconciled with the legitimate right to privacy?

The current privacy regulations were issued before the great and rapid development of AI, and they focus mainly on the processing of data for the treatment of individual patients. Concepts such as “relevance” and “non-excess” (data minimisation) do not fit very well with AI, and in particular with branches of it such as data mining and deep learning.

It would therefore be necessary to deal with the matter soon, but by bringing doctors, researchers, IT specialists and lawyers (not only the latter) to the same table. The area should be specifically regulated, safeguarding the right to privacy without stifling medical and scientific research.

Easy to say but difficult to achieve. In some countries, such as Finland, universities and research institutions hold this data and regulate, in a clear and transparent way, its access and use by the companies that develop AI-based solutions.

Another aspect that could be regulated is the donation of one’s own data for scientific and research purposes. This is an area of great interest that could grow considerably in the short term.
