Event Name Facial recognition technologies and the old problem of human categorization
Start Date 16th Oct 2017 3:30pm
End Date 16th Oct 2017 5:00pm
Duration 1 hour and 30 minutes
Description

Technologies of identification, from early efforts to contemporary biometrics, seek to categorize human difference in order to identify individual subjects. During the 19th and early 20th centuries, practitioners of physical anthropology and anthropometry developed complex and refined ways of classifying humans, mostly in terms of race. These methods were highly influential in other fields: early criminal identification systems, for example, relied on physical anthropology and, in particular, on anthropometry. These early efforts at human identification and classification employed different methodologies for standardizing, measuring, recording and archiving ‘relevant’ body fragments. In such systems, identification depends on ways of categorizing and differentiating body traits that are considered unique and unchangeable. In general, all of these methods of identification belong to a metrical (measuring) approach to the body and share the basic idea of interlinking the individualization and the classification of humans.

Recently, in response to the current terrorist threat and the migration crisis, the development of digital biometrics as a means of security and surveillance has grown steadily. These new technologies, developed in collaboration between computer science and cognitive science departments, are usually understood as unrelated to these older efforts. However, a closer analysis of how they work tells a different story.

Like early systems of identification, new biometric technologies depend on procedures of categorization. New biometric technologies, and in particular facial recognition systems (FRSs), are built from learning algorithms or neural networks. In brief, these neural networks recognize patterns they have previously been ‘taught’ using a specific database – in the case of FRSs, a database of portraits. This training of the algorithms (e.g., to recognize a certain face) is fundamental to their performance. During the training phase, the algorithms not only incorporate the design choices and the relevant terms and categories of the scientists who built them (especially male programmers); they also learn and adjust their performance on the basis of people’s attitudes and behavior, since they are trained on data produced by humans.
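To make the role of these designer-chosen categories concrete, the following is a minimal, purely illustrative sketch (not drawn from any specific FRS) of how a face recognition model is trained on a labelled portrait database; the feature sizes, label set and library choices are assumptions made for illustration only.

```python
# Illustrative sketch: training a face classifier on a labelled portrait database.
# The data, label set and network size here are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
portrait_features = rng.normal(size=(200, 128))   # 200 portraits reduced to 128-d feature vectors
identity_labels = rng.integers(0, 20, size=200)   # 20 identity categories defined by the designers

# The "training phase" described above: the network adjusts its weights so that
# the categories encoded in the labels become the only categories it can recognize.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
model.fit(portrait_features, identity_labels)

# Identification of a new face is then a lookup within those learned categories.
new_face = rng.normal(size=(1, 128))
print(model.predict(new_face))
```

Whatever categories structure the training database thus reappear, unexamined, in every identification the system later performs.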

This paper aims to analyze how certain categories used in the construction of databases and the training of algorithms affect human identification. It shows that the selection of particular systems for categorizing human difference can lead to discriminatory outcomes (e.g., racial profiling, unequal access to benefits, biased web behavior). To this end, it centers on the case of ‘FERET’, a landmark database of portrait images, and on the FRS trained on it, both designed during the early 1990s. ‘FERET’ became a standard for the development and adjustment of later algorithms: developers around the globe used it to demonstrate the efficacy of their own systems and to compare results. Through an analysis of the construction of the ‘FERET’ database, this paper explores (i) which systems of categorization are used in the development of new biometric technologies and (ii) whether applications of biometric technologies inherit older approaches’ tendency not only to categorize but also to discriminate. Lastly, (iii) it argues that facial biometrics can be seen as a prime example of big-data-era technologies that call for a reconsideration of the neutrality and ‘epistemic virtues’ (e.g., objectivity) usually associated with machines.
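As a purely hypothetical illustration of how such discriminatory outcomes can be made visible, the short sketch below (not based on FERET or any real evaluation) compares recognition error rates across two demographic categories recorded in an imagined portrait database; the group labels and error figures are invented for the example.

```python
# Illustrative audit sketch: error rates broken down by a database's own
# demographic categories. All groups and numbers here are invented.
import numpy as np

rng = np.random.default_rng(1)
groups = np.array(["A"] * 500 + ["B"] * 500)   # hypothetical category labels from the database
correct = np.concatenate([
    rng.random(500) < 0.98,                    # group A: ~2% error in this toy example
    rng.random(500) < 0.90,                    # group B: ~10% error in this toy example
])

for g in ("A", "B"):
    mask = groups == g
    print(f"group {g}: error rate = {1 - correct[mask].mean():.2%}")
```

Disparities of this kind are one way in which the categorization choices built into a training database surface as unequal treatment of the people being identified.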