First, the official facts: what does Facebook say about all of this?
According to the network's official note, these are the key facts:
- Dr. Aleksandr Kogan, a scholar at the University of Cambridge who worked with SCL / Cambridge Analytica, developed an app that operated on Facebook, thisisyourdigitallife;
- Kogan improperly passed the data collected by the app to third parties; Facebook then required certification that this information had been destroyed, which they provided;
- Recently, Facebook received reports that SCL / Cambridge Analytica, Christopher Wylie and Kogan had not in fact destroyed this information, which led to their suspension from operating on Facebook;
- By the way, for those who believe we are far removed from this mess in Brazil, it is important to note that Cambridge Analytica also has an office in our territory and operates in Brazil (Rua Estados Unidos, 367, Jardim Paulista, São Paulo – SP). Recently, the Brazilian political consultancy CA Ponte, which had a partnership with Cambridge Analytica for the 2018 elections, declared that it had suspended the agreement after the British company was accused by Facebook of improperly obtaining user data from the platform for campaign use.
Privacy? Understanding how thisisyourdigitallife worked
To grasp how apps like thisisyourdigitallife harvested data, consider its appeal: a personality test that paid the people willing to take it. There was, however, an important selection criterion: participants had to link the app to a Facebook account and be eligible to vote in the United States. The surreptitious goal was to build preference profiles of potential voters, information useful for targeting advertising (positive or negative).
The test results were recorded and collected and, with the user's consent, additional data was obtained from their Facebook account, as well as data available from their friends (remember that, in general, Facebook settings tend to be public by default).
The test then sought to find patterns in order to build predictions about other users of the network with similar profiles. The research network expanded through access to the public information of the test takers' friends, making its algorithm increasingly sharp and therefore valuable.
According to information about the app's initial operation, a beta version of the test reached a thousand users, which expanded into an information network of roughly 160,000 profiles: that is, access to an average of 160 profiles per test taker. Eventually, that number grew exponentially.
What is Cambridge Analytica and why its conduct is disturbing
According to former Cambridge Analytica employees and leaked documents, the number of Facebook users whose information was used improperly exceeds 50 million (a figure not confirmed by the platform), significantly higher than the 270,000 users who subscribed to the thisisyourdigitallife app. From such extremely detailed metrics, the company could provide a thorough analysis of each user's profile, enabling efficient targeting of ads, videos, posts and other campaign strategies, official or not.
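The arithmetic behind that jump is simple multiplication: each test taker exposed, on average, the public data of well over a hundred friends. A minimal sketch (the per-taker averages are assumptions back-derived from the figures reported above, and overlap between friend networks is ignored):

```python
def estimate_reach(test_takers, avg_profiles_per_taker):
    """Estimate how many profiles an app can reach through its test takers.

    Ignores overlap between friend networks, so this is an optimistic
    upper bound rather than a precise count.
    """
    return test_takers * avg_profiles_per_taker

# Beta phase: ~1,000 takers exposing ~160 profiles each
print(estimate_reach(1_000, 160))      # 160000

# Full run: 270,000 subscribers at ~185 profiles each approaches 50 million
print(estimate_reach(270_000, 185))    # 49950000
```

Because friend lists overlap heavily in practice, the real unique-profile count would be somewhat lower, which is one reason the 50 million figure remained unconfirmed.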
It is natural for Facebook to be concerned about the perceived trustworthiness of a network that is essentially built on personal information. Users need to feel secure sharing their data with the network, even when viewing options are restricted to friends or selected lists. Hence Facebook's decision to suspend those involved from its platforms.
After all, Cambridge Analytica's activities have not gone unnoticed by authorities in either the United Kingdom or the United States. In the United Kingdom, the use of personal data without users' consent is prohibited, including when consent is given for one purpose (academic) and the data is used for another (commercial). In the United States, the company was advised by its legal department not to employ foreign data scientists on its election campaign teams (those of Ted Cruz and Donald Trump). The situation is further complicated by recent investigations into Russian involvement in the last US election.
Application providers cannot shy away from this discussion: they must take into account both privacy and the mediating role they play in public debate
Mark Zuckerberg once stated that privacy was no longer the social norm. Basically, just as on his social network, the default option of the 21st century would be information sharing, community living and (online) interaction. In practice, this discourse has faced some resistance, especially in jurisdictions with greater restrictions on an individual's ability to waive his or her right to privacy.
However, just like other major social networks, Facebook has also endorsed good online privacy practices. With regard to the GDPR and the new levels of protection on the European continent, Facebook has moved ahead of the regulation's entry into force and already presents concrete measures in favor of users' freedom to choose what information to share and how. In addition, for its more than 2.2 billion users, privacy and transparency about what is done with their personal data can be important assets (and therefore concerns) for Facebook's business model.
But how does this relate to public debate on social networks? The danger of indiscriminately using users' personal data to profile them and target news, posts and online campaigns lies precisely in the opinion-shaping these tools make possible. Whoever has the knowledge and resources to execute these techniques can effectively dictate the course of elections, plebiscites and other public debates.
It is impossible to deny the intermediary role these platforms play in the current context of digital insertion and inclusion. At first, Zuckerberg and other application providers did not want to interfere in the issue of fake news targeting, saying it was not up to them to get involved with content published on their timelines. However, mounting pressure has pushed this provider and others, such as Twitter and Google, toward active measures: disabling automated profiles (posting bots), prioritizing posts from friends and family in feeds, among others.
Data science also underway in Brazil
Personally, I had the opportunity in 2016 to attend a meeting on the use of big data and data science for user profiling at Perestroika (amusingly, they call themselves the "worst school in the world") in Belo Horizonte. The meeting, called Beer & Data: empowerment through data, was promoted with the (underlying) intention of publicizing the services of a Porto Alegre company, Cappra Data Science. On that occasion, the company's representative and speaker of the day, Letícia Pozza, said that the company's head, Ricardo Cappra, had contributed to the data-science strategies used in the US election by the campaign of then-candidate Barack Obama.
Curiously, in this video Cappra analyzes the "clustering" of fake news in the last US elections, warns of its dangers of distorting reality and concludes by predicting a future division of the world into two types of humans: people who can control algorithms and those who are controlled by them. A notice to Portuguese speakers: the talk took place in Brasília, in Portuguese, but all the slides are in English. Go figure.
It is any advertiser's dream: from the analysis of countless profiles, to develop campaign-targeting strategies that go well beyond the gender, age group and social class of the "persona." To control. According to Letícia's presentation at Perestroika, it is possible to cross-reference this data in a geolocated way, taking into account likes and interests in specific pages and post topics, along with browsing history from clicks on Facebook, for example. More or less what Kogan did with Cambridge Analytica. It is the sophistication of consumer profiling: even the so-called "long tail" of a given product's or service's consumption chart can be better exploited.
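In essence, the cross-referencing described above boils down to grouping users by interest and location. A deliberately toy sketch of the idea (all names and data here are invented for illustration; real pipelines operate on millions of records and far richer signals):

```python
from collections import defaultdict

# Invented example profiles: in practice these would come from harvested
# likes, page follows and click histories, keyed by geolocation.
profiles = [
    {"id": 1, "city": "São Paulo", "likes": {"craft beer", "politics"}},
    {"id": 2, "city": "São Paulo", "likes": {"craft beer"}},
    {"id": 3, "city": "Belo Horizonte", "likes": {"politics"}},
]

def segment_by_interest_and_city(profiles, interest):
    """Group profile ids by city among users who liked a given interest."""
    segments = defaultdict(list)
    for p in profiles:
        if interest in p["likes"]:
            segments[p["city"]].append(p["id"])
    return dict(segments)

print(segment_by_interest_and_city(profiles, "craft beer"))
# {'São Paulo': [1, 2]}
```

Each resulting segment becomes a targetable audience, which is precisely why the absence of any privacy discussion around these techniques is so striking.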
The slick presentation, filled with memes, anglicisms, Beyoncé images and flawless design, did not fail to bother me: at no point did the question of the privacy of the users being profiled come up. The audience did not seem worried either. As is quite common at events of this type, only the awe remained.
Obviously, it does not help that in Brazil we do not yet have specific legislation regulating the subject, despite scattered mentions in the Brazilian Internet Bill of Rights and its Regulatory Decree. Privacy is not at the top of Brazilians' agenda of concerns, I suspect partly for cultural reasons. It is impossible to escape the cliché of the Brazilian who has nothing to hide and whose life "is an open book".
What does the Cambridge Analytica suspension teach us?
Moral of the story: the debate over privacy and the protection of personal data online has never been so important and necessary. We are in the midst of a turbulent political process in Brazil, marked by intense polarization, the growing viralization of fake news and several proposals for monitoring and even removing online content. A society so mediated by electronic devices and their applications, from the WhatsApp family group to the Facebook timeline, needs to care more about the origin, truthfulness and reliability of the information circulating there. More than that: why have you been exposed to it, and who wants it that way?
Do you want to know more about privacy and data protection? Check out this Bytes de Informação, made specifically for this issue!