A few years ago, if a problem affecting a large number of citizens of a given country needed to be studied, the normal solution was to entrust the task to a social scientist specialized in ethnography, one with good knowledge of the context to be analyzed, who could collect detailed information with the consent of the participants in the study. Nowadays, by contrast, the procedure is very different. The task can be carried out remotely, by collecting great volumes of data from widely diverse sources, ranging from official documents to social networks, quite probably without the people involved knowing anything about it.
Situations like this one fuel the argument between defenders and detractors of this way of proceeding. Recently, the debate has been taken up by the Association of Internet Researchers (AoIR); by the Pervade project, funded by a grant from the National Science Foundation; and by the journal Nature, among others. The discussions center on one question: Why are professionals in this field more interested in securing funding than in determining how they can contribute to society and improve our lives?
Underlying this controversy is an attitude that, according to one of the leading critical voices in the field, entrepreneur Kalev Leetaru, leads many who dedicate themselves to this work to think too self-interestedly. In his view, many data scientists behave in a way that reflects no moral concern or commitment: they simply consider which questions could be answered with the tools and material available, and draw up their projects from that starting point. What they are after, Leetaru says, is funding, publication prestige, and public attention; their contribution to the common good is relegated to the background.
Giants in the information sector have begun initiatives to combat this problem, which is fundamentally ethical but could end up having serious legal repercussions. One is Bloomberg, which sells financial software and financial news services and, like its rival Thomson Reuters, controls approximately a third of the market for financial information. The multinational is working with BrightHive and Data for Democracy to develop the ethical dimension of data science, artificial intelligence, and related fields. Until now, companies and universities have defended themselves by pointing to their competitors as an excuse for continuing to access information about third parties and use it as they please.
Their argument is essentially, “If everyone else is doing the same thing in their respective areas, why should we be the ones to change?” Consequently, the situation has become chronic, and no solution is reached. Not even some of the most important public agencies that fund this kind of research in the United States take the ethical dimension seriously. Even authors of academic or commercial studies that use educational or health data find ways to get around administrative restrictions such as those imposed by the Family Educational Rights and Privacy Act (FERPA), the Health Insurance Portability and Accountability Act (HIPAA), and other regulatory legislation.
The complaints of Kalev Leetaru, who has worked for Yahoo!, Google, Georgetown University, and the World Economic Forum Global Agenda Council on the Future of Government, apply to practitioners and managers across a range of related activities: big data, data mining, machine learning, artificial intelligence, the internet of things, and so on. At the head of these activities are professionals from disciplines such as information technology who, by experience and position, are more accustomed to answering questions than to asking them, Leetaru says.