By Maya Schmidt
Welcome to the fifteenth bi-weekly Tech News Digest by the GISA Technology and Security Initiative. Our goal is to give you an easy-to-read overview of what has been happening in the world of technology and security, so we pick the top news stories from the last two weeks and present you with a summary. If you are interested in knowing more, follow the links below.
AI for Health Event
On December 9th, TechSec organised a panel discussion on AI for Health. Dr. Caroline Perrin, Executive Director of the Geneva Digital Health Hub, moderated the discussion between Alice Liu, Capacity Development Director and Partnerships Lead for I-DAIR, and Dr. Sohrab Ferdowsi, machine learning and image analysis expert at EarlySight SA. They provided the audience with an insightful analysis of the current stage of the research, examples of good applications of AI in the health sector, and reflections on future developments in this field.
Dr. Sohrab Ferdowsi provided some valuable insights into the technical aspects of machine learning. He pointed out that “Artificial Intelligence” is an umbrella term describing the ability of programs to mimic human behaviour, while machine learning describes an approach to programming that uses input/output mapping to identify patterns. Dr. Ferdowsi noted that machine learning is easiest to apply to problems with simple chains of reasoning and large data sets, which opens up an array of applications in the medical field, in areas such as diagnostics.
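The input/output mapping that Dr. Ferdowsi described can be sketched in a few lines of code. The example below is a minimal illustration, not taken from the panel: the data, the "tumour diameter" framing, and the threshold are all hypothetical, chosen only to show how a program infers a pattern from input/output pairs instead of being given explicit rules.

```python
import numpy as np

# Toy supervised learning: the program is shown input/output pairs
# and infers the mapping itself, rather than following hand-written rules.
# Hypothetical data -- inputs: lesion diameter in mm; outputs: 0 = benign, 1 = malignant.
X = np.array([[2.0], [3.0], [4.0], [9.0], [11.0], [12.0]])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

# Fit a simple linear model to the input/output pairs by least squares.
X1 = np.hstack([X, np.ones((len(X), 1))])  # add a bias (intercept) column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)

def predict(diameter_mm: float) -> int:
    """Classify a new input using the learned mapping."""
    score = w[0] * diameter_mm + w[1]
    return int(score >= 0.5)

print(predict(2.5))   # small lesion -> predicted benign (0)
print(predict(10.5))  # large lesion -> predicted malignant (1)
```

Real diagnostic systems use far richer inputs (images, lab values) and more expressive models, but the principle is the same: patterns are learned from labelled examples.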
Alice Liu provided information about several key projects taking place at I-DAIR, highlighting their strategic framework, which you can read about here. Liu also highlighted the work of Palindrome Data on using machine learning to identify patterns in HIV transmission, and Macro-Eyes’ work on the STRIATA project, which strives to optimise supply and demand.
Thinking critically, this raises some interesting questions:
How do we balance protecting those whose data these projects use against the potential benefits of the projects themselves, such as successfully tracking HIV transmission, particularly as literature on data colonialism emerges?
What can we do about the bias inherent in most datasets, which can be amplified when the data is used to train machine learning algorithms?
How do we address and mitigate the consequences of infrastructural disparities when they limit access to the “fruits” of AI labour?
Peeking Inside the Black Box
The more rapidly machine learning advances, the more difficult it becomes to grasp the technical aspects of the constantly evolving technology. One consequence of this, pointed out by Abeba Birhane and Deborah Raji, is that the decisions made by real people, which go into the construction of the machine learning process, are easily forgotten. People are required for the collection of data, the editing and selection of data subsets, the design of specific machine learning algorithms, the interpretation and analysis of results, and the setting of policy based on those results. We sacrifice our agency in the machine learning and AI process if we view these things as separate from ourselves.
Read more on WIRED.
GPT stands for “Generative Pre-trained Transformer”. ChatGPT, launched by OpenAI in November 2022, has surprised and impressed the general public with its realistic results and range of applications. Read more about this at The Atlantic.