Error and innovation: the case for a 99% accurate artificial intelligence
8 July 2019
Much of the discussion about the expanding role of artificial intelligence in contemporary society concerns the reduction (or complete elimination) of its margins of error: accuracy in the automated evaluation of judicial decisions; risk reduction in the driving of autonomous vehicles; improvement of public security systems through facial recognition of suspects in public spaces; predictive analysis of financial markets for the better allocation of investment funds; among others. These are compelling examples that justify the development of impeccable artificial intelligences. But what if error also played an important role in human and technological development? Is it really beneficial to seek the absolute elimination of imperfections in automated data-processing systems? This text seeks to make the case for an artificial intelligence that retains margins of error, even if minimal.
It makes sense to aim for impeccable technologies
It is natural for technological development to aim at eliminating errors. A medical treatment seeks precision in fighting a given disease, along with the reduction of possible side effects. The team of engineers responsible for launching a satellite aims to account for every factor that may influence its trajectory, so that it reaches the ideal orbital position.
In the creation of automated data-processing systems, the logic is similar. The artificial intelligence behind language recognition in Google Translate, in its neural applications for example, seeks flexibility in identifying rare words and broad contextualization, increasing the chances of a correct translation.
One of the biggest regulatory challenges for autonomous vehicles is the reduction of system errors, to avoid accidents that endanger human lives, property and traffic order. Although these systems sometimes have lower error rates than humans, the insecurity the technology provokes and the need to allocate responsibility still demand performance standards better than the current ones before this intelligence can spread.
In other scenarios, mining large volumes of data from scientific journals, for example, can corroborate evidence of correlations that human eyes do not see at first, given the limits of human analytical capacity. Recently, a study published in Nature demonstrated the results of machine learning applied to 3.3 million abstracts of scientific papers on materials science, showing that the model could have anticipated the discovery of new thermoelectric materials and suggesting still-unknown combinations of materials. That is, new technologies not only contribute to a systematic structuring of knowledge, but also point to more promising development paths through efficiency in analyzing research results, be they positive or negative.
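The intuition behind that study can be sketched in a few lines of code. In the sketch below, all vectors and material names are invented for illustration (the real study learned word embeddings from millions of abstracts): materials whose vectors lie closest to the vector for "thermoelectric" are flagged as promising candidates, even before any paper links them explicitly.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors: how aligned their usage contexts are."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical, hand-made embeddings; real ones are learned from abstracts.
embeddings = {
    "thermoelectric": [0.9, 0.1, 0.3],
    "material_x":     [0.8, 0.2, 0.4],
    "material_y":     [0.1, 0.9, 0.0],
}

query = embeddings["thermoelectric"]
ranked = sorted(
    (name for name in embeddings if name != "thermoelectric"),
    key=lambda name: cosine(embeddings[name], query),
    reverse=True,
)
print(ranked)  # material_x ranks first: its context is more similar
```

The ranking is only a hypothesis generator, of course; laboratory work still decides which candidates are real discoveries.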
Taking this context into account, it makes sense that the development of new technologies aims, in general, at the complete elimination of imperfections. But in the context of artificial intelligence, what does “error” mean?
What do we mean by error?
The scientific fascination with correctness permeates various areas of knowledge, from philosophical dilemmas to the practical application of physical, chemical and mathematical principles. However, the very definition of error calls for criticism and reflection that go beyond a binary logic of right and wrong. After all, what we understand by this concept also encompasses aspects related to context, purpose and even ethics.
For example, the trolley problem (1967), by Philippa Foot, could be presented to an artificial intelligence, whose proposed solution would depend basically on the criteria provided as parameters for its decision-making. If we take utilitarian parameters as the criterion, the simplest answer would perhaps be to choose the fewest possible deaths. However, if more factors are added, such as gender, age and other subjective elements, the dividing line between right and wrong becomes increasingly blurred. You can test some of these moral limits (and your responses to these hypothetical dilemmas) in the Moral Machine tool, developed by the MIT Media Lab.
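This dependence on parameters can be made concrete with a small sketch. Everything below is invented for illustration (the tracks, the ages, the life-expectancy figure): the point is that the same decision function returns opposite answers depending solely on which cost criterion it is handed.

```python
def choose_track(tracks, criterion):
    """Pick the track that minimizes the given cost criterion."""
    return min(tracks, key=criterion)

# Each track lists the (hypothetical) people who would be harmed.
tracks = {
    "main": [{"age": 8}],
    "side": [{"age": 75}, {"age": 78}],
}

# Purely utilitarian criterion: fewest deaths.
def utilitarian(name):
    return len(tracks[name])

# An equally debatable criterion: total years of life lost,
# assuming a hypothetical life expectancy of 80 years.
def years_lost(name):
    return sum(max(0, 80 - p["age"]) for p in tracks[name])

print(choose_track(tracks, utilitarian))  # "main" (1 death vs. 2)
print(choose_track(tracks, years_lost))   # "side" (7 years lost vs. 72)
```

Neither answer is “correct” in an absolute sense; the system simply optimizes whatever criterion it was given, which is exactly why the line between right and wrong blurs as more factors are added.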
In addition, many results interpreted as errors indicate that there is plenty of room for learning from these failures. Successful public policies, for example, take past mistakes into account, especially when accompanied by effective and comprehensive mechanisms for measuring causal variables. In an automated analysis of the success or failure of certain public policies, it would be important for the processing to consider as many data points and variables as possible, including measures that have failed in the past. Focusing on the analysis of errors, in these cases, can say more about how to design better policies in the future than simply analyzing the successful measures of other jurisdictions (as is common in comparative studies and in the traditional practice of importing legislation).
What we call intuition, or spontaneity, can also operate in the scientific process, often giving rise to just such misconceptions. Intuition in decision-making leads to causal inferences that are not necessarily correct, but innovative. Several subjective aspects of human reasoning could hardly be embedded in the data processing of an artificial intelligence. For this reason, highly specialized artificial intelligences have better performance levels (i.e., fewer mistakes) than generalist ones that attempt to resemble human intelligence. Although more susceptible to error, human reasoning can also be more creative.
These circumstances demonstrate the possibility of nuancing the analysis of results, as well as extracting positive aspects from these seemingly negative scenarios. The emergence of journals specialized precisely in the publication of negative results, in several areas of knowledge, corroborates the importance of error for science.
Artificial Intelligence, error, and innovation
The history of innovation shows that error is fundamental to scientific progress. A classic example of error generating a new product is Teflon, the substance used in non-stick cookware, which resulted from an experiment by the chemist Roy Plunkett in 1938. The scientist intended to develop new chlorofluorocarbon (CFC) compounds for refrigeration. The result, however, was different: a substance with lubricating and non-stick properties.
The importance of error is not limited to this classic example. More and more companies recognize the need to “reward error”. Sanofi and Siemens are examples of this, as they have begun to promote initiatives to reduce the stigma of error within the company. The aim is not only to reward the evaluation of mistakes made by the company, but also to institutionalize these analyses, so as to map them transparently.
Error is not only a fundamental element of scientific progress; it is also essential to the artistic process. A good musician is forged through trial and error, over a rigid and disciplined trajectory. It is possible, however, to re-signify the relation between error and artistic creation. An example is Naná Vasconcelos, who pushed his instruments beyond their expected potential and transformed what would be treated as a wrong way of playing into innovation. This understanding of error can be extended to various other artistic expressions, such as painting and sculpture.
The innovation generated by artificial intelligence, in turn, is characterized by precision in managing huge amounts of data. The production of works of art by this technology is already a reality. In 2018, the first artwork made by artificial intelligence was sold at auction, for US$ 432,500: Portrait d’Edmond de Belamy, created by the collective Obvious by means of two algorithms fed with 15,000 classic portraits painted between the 14th and 20th centuries. The painter Roman Lipski uses a program, developed by Florian Dohmann, to transform his works into others: it recognizes objects through neural networks and creates variations of existing works. The relationship between painter and artificial intelligence goes further, as Roman Lipski acknowledges that the program’s works influence his new creations. This dialogue ends up being a kind of “creative looping”.
Artificial intelligence has also been used in many other situations. Recently, O Boticário launched a perfume whose fragrance was created with artificial intelligence. This type of technology has also been used for text creation, including the writing of dialogue for books and films. Another application is in the field of musical composition, with programs such as Amper, which has assisted several artists who do not necessarily have a musical background.
We once again return to the fact that error is not exogenous to the phenomenon of innovation. As we have seen, it is a pre-existing condition in some processes of technological and artistic development. Given this, how should we deal with error and unpredictability in processes mediated by artificial intelligence?
Reduction of asymmetries and justified bias
The debate on ethics in the development of artificial intelligence often includes parameters relating to inclusion, the promotion of diversity, fairness and equity in these technologies, as reflected in recent publications by the OECD and the European Union. But what if achieving these purposes requires automated systems to work in a way counterintuitive to how they were programmed? Whether because of the database used or because of the decision-making parameters themselves, the results of an artificial intelligence, however correct, may not be ideal or socially desirable.
As an example, we can cite the organic search results of image banks, already denounced by various civil society actors (Desabafo Social, the University of Washington, among others) as containing distortions in their databases and in the way their systems recognize representations. As much of the imagery available online already reveals problems of racial representativeness, for example, the search results in these banks also reflect these inequalities.
In addition, there are examples of predictive policing and of the automation of parole decisions in New York, which use databases from a penal system that is already biased in racial and economic terms, with serious consequences for the perpetuation of these inequalities. Although magistrates, as human beings, are also subject to diverse subjectivities in their decision-making, the automated systems developed run the risk of reproducing those biases and even amplifying them.
Therefore, in some cases, it is important that certain technologies are not 100% accurate, or that a human component is involved in the stages of measurement, guidance and ratification of their decisions. The purpose of automated data processing may be precisely to correct the asymmetries already present in databases (usually “offline” data) by means of a “positive” bias.
That is, it is possible to aim for the reduction of biases in processes intermediated by automated technologies, precisely to make them fairer, more diverse and more inclusive. In these cases, it may be socially ideal to incorporate non-organic aspects, theoretically imbuing some artificial intelligences with “errors”.
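One common way of introducing such a “positive” bias is to reweight the data so that an under-represented group counts as much as an over-represented one during training. The sketch below is a minimal illustration with invented toy data; real fairness interventions (such as the “reweighing” technique in the fairness literature) are more elaborate:

```python
from collections import Counter

def balancing_weights(groups):
    """Weight each record inversely to its group's frequency,
    so that every group contributes the same total weight."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Toy dataset: 8 records from group A, only 2 from group B.
groups = ["A"] * 8 + ["B"] * 2
weights = balancing_weights(groups)

# Each A record gets weight 0.625, each B record 2.5;
# both groups now sum to 5.0, half of the total weight of 10.
```

From the standpoint of the raw data, these weights are a deliberate distortion, an “error” introduced on purpose, yet it is exactly this distortion that keeps the system from simply reproducing the imbalance of its source.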
Conclusion
Error can be accepted as a constituent element of the problem-solving process of artificial intelligence. This finding allows us to re-signify the supposed infallibility of the predictions produced by these technologies. Accepting the possibility of error highlights the need to rethink legal, moral and philosophical aspects of the adoption of these solutions. It also allows us to move forward on two axes: deconstructing a dogmatic vision of the unquestionable precision of decisions governed by these technologies; and understanding error as essential to the scientific process in general, including the part mediated by artificial intelligence. This proposal of 99%-accurate artificial intelligences “scientificizes” the process of technological development, mitigating the false duality between error and accuracy in automated data processing.
If you want to know more about the role of artificial intelligence in geoeconomic and technological disputes, check out this other post on the IRIS blog.