Although this is not a book review, I believe Shoshana Zuboff’s new publication, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, is one of the most essential books for internet governance in 2019. You can find a great analysis by Ricardo Abramovay here, but my intention is to go slightly beyond the thesis of the book: if we are really living through a technological evolution of informational and financial capitalism – surveillance capitalism – what does this mean for the normative framework that surrounds digital rights? More precisely, are we about to see a reformulation of traditional intellectual property rights, such as trade secrets, copyright and patents?
Surveillance capitalism is more than a means of productive organization
With the Fordist and Toyotist industrial revolutions, human labor was clearly added as a factor in the value chain. In this logic of accumulation, wage labor was essential to generate consumer markets and, consequently, to drive a rise in market demand. This productive organization became more flexible with the flows of financial capital in the last decades of the twentieth century, albeit in a speculative way that, in turn, entailed diverse regulations: supervision of investment funds, taxation, minimum percentages anchored in non-speculative investments, and subjection to risk assessments by specialized agencies, among others.
Unlike in several earlier phases of the capitalist mode of production, human labor is not necessarily used here as an input in the production of wealth. Through organizational advances, financial capitalism developed and transformed itself to embrace ever more sophisticated logics of accumulation, embedded in the fabric of society through technology and increasingly determining the relationships it mediates.
Zuboff describes a major strategic shift in Google’s accumulation model as a turning point in 2001: up until then, its founders had even rejected paid ads as a way of financing the search tool. The dot-com bubble at the turn of the century, together with the rejection of legislative and institutional protections for US users’ privacy rights after 9/11, created a fertile field of online experimentation with new ways of generating value from the collection, analysis and use of personal data.
This is how Google started to generate value without its users performing the traditional functions of either workers or customers in its business model. Users ceased to be productive forces, as they once were in the factories, and they also lost the condition of customers of manufactured goods (within these business models; they still consume physical goods elsewhere). Now, according to the author, the digital user population provides raw material for data mining across platforms. The advertisers who hire the platforms are the real customers, rewarding these companies for creating increasingly targeted virtual advertising space.
The insertion of the user as raw material in the information value chain enables the generation of what Shoshana Zuboff names behavioral surplus: the additional data we provide when we are connected, beyond the commands that are effectively considered input (what we post on our pages and what we disclose as personal data). The more (and higher-quality) behavioral surplus users provide, the greater the accuracy of the commercial offering.
Instead of the traditional view that “we work for Google” in return for their free services, it would be far more appropriate to think of these platforms as marketplaces. In such virtual markets, platforms like Google or Facebook aim to keep users online as long as possible so as to expose them to the (increasingly) customized advertising of their real customers: the advertisers and the pages that pay for ranking.
The generation of user metrics is both efficient and exponentially more complete: heatmaps, discrimination and reaction analysis, speech and image recognition, geolocation and dwell time, among others. It would hardly be possible to collect so much personal (and telling) information in traditional offline advertising models. It is no wonder that the companies generating wealth from this sophistication of informational financial capitalism today top the rankings of market value, both worldwide and within the technology sector.
All previous phases of deep transformation of capitalist systems were also accompanied by legal reforms that consolidated such changes, or even protected economic agents at risk. The industrial revolution was accompanied by the regulation of patent systems and copyright. The Fordist model of production was accompanied by social and labor gains, intended also to fuel demand in the nascent market. Toyotist productive flexibilization required more efficient rules for the export and import of inputs to build global chains. The increase in the flow of speculative financial capital depended on highly regulated and secure banking systems, as well as more independent and predictable national central banks. What legislative changes might the rise of surveillance capitalism engender?
An unquestionable intellectual property system?
What do current intellectual property protection systems have to do with the development of surveillance capitalism? Among the main assets of these business models are immaterial goods, such as trademarks (as in Apple’s case), copyrights, trade secrets, patents and computer programs. Depending on how each country regulates intellectual property rights, the algorithms behind the features of these digital platforms may be protected as trade secrets, patents or computer programs.
Recent discussions on the European Digital Single Market demonstrate the strength of the mechanisms established for the protection of copyright, through protection and compensation guarantees for producers of copyrighted content, publishers and traditional distributors. Meanwhile, social networks and digital platforms will be held liable for the unauthorized sharing of copyrighted materials by their users, a legislative attempt to regulate intellectual property in the digital environment as well, and a real challenge to Lessig’s theory that “code is law”.
The General Data Protection Regulation (GDPR) already represented a major limitation on the operation of digital platforms, but it only concerns aspects related to the protection and privacy of users’ personal data. The Copyright Directive, however, denotes an attempt by political (legislative) institutions and more established markets (the entertainment industry) to counteract these more innovative business models, which are still testing the limits of their operation today.
In the case of trade secrets, one of the conditions for protection against competition is precisely the maintenance of secrecy as to the exact way these computer codes operate. One knows what the inputs and outputs are, but not how the system processes, analyzes and produces inferences from the information provided: a true black box. In other words, much of the success of the largest companies in terms of value generation today depends heavily on the secrecy of their ways of functioning. Other success factors may also play a role, such as the adoption of these platforms by a large user base (Facebook vs. Google+, or WhatsApp vs. Telegram, for example) and the first-mover advantage.
But what can be done when questioning the outputs of automated decisions? What if a user does not agree with the inferences made from his or her information, or with the customization of the content shown online? How can these issues be questioned and, more importantly, litigated when the automated operation of the platforms is protected by trade secrets? Authors like Jamie Bartlett (2018) also warn about the various impacts that technology has on political and representative systems, adding even more burdensome elements to the crisis of democracy in the contemporary world.
How to audit trade secrets?
The GDPR introduces a “right to explanation” in one of its recitals (71) as one of the ways to answer questions arising from automated decisions. On the one hand, the limits and binding force of a recital (as opposed to the main text of the regulation) are still to be tested in court, but this is an important step towards redefining the parameters under which surveillance capitalism functions. On the other hand, the European Union itself promulgated Directive 2016/943, which reinforces the parameters for maintaining trade secrets in the Member States and allows for higher degrees of protection. Inevitably, European judges will be confronted with the tension between the right to explanation and the need to protect the trade secrets that form the backbone of digital platforms. The effective application of a right to explanation in the EU will involve analyzing how algorithm-based business models work. Trade secrecy will be set against this user right and, depending on how it is applied, could become a barrier to its practical realization.
In Brazil, resistance to possible forms of auditing algorithmic systems is already organized, as in the case of the Legislative Agenda of Industry for 2019. The document takes a positive view of the Brazilian General Data Protection Law, but expresses concern regarding the competences of the National Data Protection Authority, which, it argues, should not have this power of audit, on the grounds of a possible “violation of trade secrets”.
It is important to remember that intellectual property rights are not absolute and should be applied systematically, taking into account aspects of competition, public law and human rights, as in the compulsory licensing of patents in cases of unfair competition or national emergency. Relativizing the trade secrecy of algorithms’ source code does not necessarily render the business model of these platforms unfeasible. The administrative and procedural forms of analyzing trade secrets vary by jurisdiction, but, in general, there are mechanisms to guarantee the confidentiality of what is analyzed by the courts: specialized expert evidence, confidentiality of the parties involved and of hearings, among others. The fear of the market (and of society) regarding the lack of commitment of some judges to the prerogatives of the parties, especially the secrecy of the stages of a proceeding, is understandable, but this does not mean that courts cannot analyze trade secrets while preserving the rights of the parties. The involvement of independent third parties in conducting these analyses should also be considered; it is controversial, but contractually more transparent in allocating responsibility to intermediaries in contexts such as Brazil’s.
Of course there is fear and strong resistance among tech companies to this and other types of scrutiny and regulation, just as there was during the approval of the Brazilian Bill of Internet Rights, or in the discussions on the European Digital Single Market. However, it is precisely this type of clash that reflections such as Shoshana Zuboff’s provoke. If we are indeed experiencing a technological sophistication of the informational means of value production, why not also rethink the means of regulating these new forms of exploitation?
If you have an interest in how the European courts regulate the internet, take this opportunity to read more about it here!