Decisiones automatizadas y discriminación en el trabajo

PILAR RIVAS VALLEJO
Universitat de Barcelona, Barcelona, Spain
ROR: https://ror.org/021018s57

Journal:
Revista General de Derecho del Trabajo y de la Seguridad Social

ISSN: 1696-9626

Year of publication: 2023

Issue: 66

Type: Article

Abstract

Automated decisions, that is, recommendations or predictions produced by data-driven artificial intelligence systems, are increasingly used to manage employment relationships. Their use is not free of risks: the main one is the discriminatory impact of biases that can be incorporated into their design, their machine learning or their application, amplified by their opacity and the scant reasoning they provide, together with their apparent neutrality and infallibility. Their irruption into labour relations can cause discrimination against people, groups and situations that both artificial intelligence science and the law must mitigate and avoid. This paper analyses, from a legal perspective, the discriminatory impact of using algorithms or automated decision-making mechanisms for work management and personnel selection (so-called algorithmic discrimination), in order to detect the sources of that risk, to identify discrimination and its legal qualification in each case, and to analyse the corrective responses available in law and the liability for discrimination at work.
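The abstract's central notion of discriminatory (disparate) impact can be made concrete with a simple statistical check. The following minimal Python sketch is illustrative only and not taken from the article: the data are synthetic, the group labels and function names are hypothetical, and it assumes the "four-fifths rule" threshold (0.8) conventionally used in adverse-impact analysis to flag groups whose selection rate under an automated hiring screen falls well below that of the most favoured group.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the 'four-fifths rule'). Assumes at least one
    group has a non-zero selection rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (rate, rate / best < threshold) for g, rate in rates.items()}

# Hypothetical output of an automated CV screen: (group, selected)
screen = ([("A", True)] * 48 + [("A", False)] * 52
          + [("B", True)] * 30 + [("B", False)] * 70)

for group, (rate, flagged) in sorted(adverse_impact(screen).items()):
    print(f"group {group}: selection rate {rate:.2f}"
          + ("  <- possible adverse impact" if flagged else ""))
```

Here group B's rate (0.30) is only 62.5% of group A's (0.48), so it is flagged. Such a ratio is only a first statistical indicator; the paper's legal analysis concerns how disparities of this kind are qualified as discrimination and remedied in law.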

References

  • Allhutter, D.; Fischer, F.; Grill, G. and Mager, A., «Algorithmic Profiling of Job Seekers in Austria: How Austerity Politics Are Made Effective», Frontiers in Big Data, 2020, DOI: 10.3389/fdata.2020.00005.
  • Álvarez, H., «El impacto de la tecnología en las relaciones laborales: retos presentes y desafíos futuros», Justicia y Trabajo, no. 2, 2023, https://revistajusticiaytrabajo.colex.es/el-impacto-de-la-tecnologia-en-las-relaciones-laborales-retos-presentes-y-desafios-futuros/.
  • Angwin, J.; Larson, J.; Mattu, S. and Kirchner, L., «Machine Bias», in Ethics of Data and Analytics, 2016.
  • Ananny, M., «Seeing Like an Algorithmic Error: What are Algorithmic Mistakes, Why Do They Matter, How Might They Be Public Problems?», Yale Journal of Law & Technology, 2022, https://law.yale.edu/isp/publications/digital-public-sphere/healthy-digital-public-sphere/seeing-algorithmic-error-what-are-algorithmic-mistakes-why-do-they-matter-how-might-they-be-public.
  • Brkan, M., IA, aprendizaje automático, algoritmos y protección de datos en el marco del RGPD y más allá, Universitat Oberta de Catalunya, https://openaccess.uoc.edu/bitstream/10609/142586/2/Entornos%20digitales%20y%20nuevos%20retos%20para%20la%20protecci%C3%B3n%20de%20datos_M%C3%B3dulo%202_%20Inteligencia%20artificial%2C%20aprendizaje%20autom%C3%A1tico%2C%20algoritmos%20y%20protecci%C3%B3n%20de%20datos%20en%20el%20marco%20del%20RGPD%20y%20m%C3%A1s%20all%C3%A1.pdf.
  • Barocas, S. and Selbst, A.D., «Big data’s disparate impact», California Law Review, no. 104, 2016, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899.
  • Bersin, J., «The Skills of The Future Are Now Clear: And Despite What You Think, They’re Not Technical», author’s blog, 8/9/2019, https://joshbersin.com/2019/09/the-skills-of-the-future-are-now-clear-anddespite-what-you-think-theyre-not-technical/.
  • Bogen, M. and Rieke, A., Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias, Upturn, 2018.
  • Bucher, T., If... Then: Algorithmic Power and Politics, Oxford University Press, 2018, DOI: 10.1093/oso/9780190493028.001.0001.
  • Burrell, J., «How the machine ‘thinks’: understanding opacity in machine learning algorithms», Big Data & Society, vol. 3, no. 1, 2016, https://doi.org/10.1177/2053951715622512.
  • Centre for Data Ethics and Innovation, «Algorithmic Transparency Standard. Guidance for Public Sector Organisations», 2021.
  • Chouldechova, A., «Fair prediction with disparate impact: a study of bias in recidivism prediction instruments», 2016, pp. 1-17, https://arxiv.org/abs/1610.07524.
  • European Commission, Directrices éticas para una IA fiable, Publications Office, 2019, https://data.europa.eu/doi/10.2759/14078.
  • European Commission, Libro Blanco sobre la Inteligencia Artificial: un enfoque orientado a la excelencia y confianza, 2020.
  • European Economic and Social Committee, Dictamen sobre «Inteligencia Artificial: anticipar su impacto en el trabajo para garantizar una transición justa», Official Journal of the European Union C 440/01, 19/9/2018.
  • European Commission, Communication: Plan Coordinado sobre la Inteligencia Artificial, 2018.
  • Council of Europe, Study on the human rights dimensions of automated data processing techniques (in particular algorithms) and possible regulatory implications, Committee of experts on internet intermediaries, MSI-NET(2016)06 rev6, 2017, https://rm.coe.int/study-hr-dimension-of-automated-dataprocessing-incl-algorithms/168075b94a.
  • Crenshaw, K., «Demarginalizing the intersection of race and sex: a Black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics», The University of Chicago Legal Forum, 1989, p. 139, https://philpapers.org/archive/CREDTI.pdf.
  • Duggan, J., «Algorithmic management and app-work in the gig-economy: A research agenda for employment relations and HRM», Human Resource Management Journal, vol. 30, 2020, pp. 114-132, https://onlinelibrary.wiley.com/doi/abs/10.1111/1748-8583.12258.
  • Ebers, M., «Ethical and legal challenges», in Ebers, M. and Navas, S. (eds.), Algorithms and Law, Cambridge University Press, 2020.
  • Flores, A.W.; Bechtel, K. and Lowenkamp, C.T., «False Positives, False Negatives, and False Analyses: A Rejoinder to ‘Machine Bias’: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks», Federal Probation, vol. 80, no. 2, 2016.
  • FRA - European Union Agency for Fundamental Rights, Inequalities and multiple discrimination in access to and quality of healthcare, 2013, http://fra.europa.eu/en/publication/2013/inequalities-discriminationhealthcare.
  • Frank, M.; Roehrig, P. and Pring, B., ¿Qué haremos cuando las máquinas lo hagan todo?, 2018.
  • Fröhlich, W. and Spiecker gen. Döhmann, I., «Können Algorithmen diskriminieren?», VerfBlog, 26/12/2018, https://verfassungsblog.de/koennen-algorithmen-diskriminieren/, DOI: 10.17176/20190211-224048-0.
  • Gaster, R., Behemoth, Amazon Rising: Power and Seduction in the Age of Amazon, Incumetrics Press, Washington, 2020.
  • Genesereth, M., «What is Computational Law?», CodeX: The Stanford Center for Legal Informatics, 2021, https://law.stanford.edu/2021/03/10/what-is-computational-law/.
  • Gerards, J. and Xenidis, R., «Algorithmic discrimination in Europe: Challenges and Opportunities for EU equality law», European Futures, 3/12/2020, https://www.europeanfutures.ed.ac.uk/algorithmic-discrimination-in-europe-challenges-andopportunities-for-eu-equality-law/.
  • Gillespie, T., «Algorithm», in Peters, B. (ed.), Digital Keywords: A Vocabulary of Information Society and Culture, Princeton University Press, Princeton, NJ, 2016, DOI: 10.1515/9781400880553-004, and https://www.researchgate.net/publication/309964434_2_Algorithm.
  • Goodman, B. and Flaxman, S., «European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”», AI Magazine, vol. 38, no. 3, 2017, pp. 50-57, https://doi.org/10.1609/aimag.v38i3.2741.
  • Grove, W.M.; Zald, D.H.; Lebow, B.S.; Snitz, B.E. and Nelson, C., «Clinical versus mechanical prediction: a meta-analysis», Psychological Assessment, vol. 12, no. 1, 2000.
  • Gunning, D., «Explainable Artificial Intelligence (XAI)», 2017, https://www.cc.gatech.edu/~alanwags/DLAI2016/(Gunning)%20IJCAI-16%20DLAI%20WS.pdf.
  • Hajian, S.; Bonchi, F. and Castillo, C., «Algorithmic Bias: from Discrimination Discovery to Fairness-aware Data Mining», 2016, DOI: 10.1145/2939672.2945386, https://www.researchgate.net/publication/305997939.
  • Hildebrandt, M., «Algorithmic regulation and the rule of law», Philosophical Transactions of the Royal Society A, vol. 376, no. 2128, 2018, DOI: 10.1098/rsta.2017.0355.
  • Hildebrandt, M., «The issue of bias. The framing powers of ML», in Pelillo, M. and Scantamburlo, T. (eds.), Machines We Trust: Perspectives on Dependable AI, MIT Press, 2021, DOI: 10.2139/ssrn.3497597 (preprint version).
  • Ho, D.E. and Xiang, A., «Affirmative Algorithms: The Legal Grounds for Fairness as Awareness», The University of Chicago Law Review Online, 30/10/2020, https://lawreviewblog.uchicago.edu/2020/10/30/aaho-xiang/.
  • Kellogg, K.; Valentine, M. and Christin, A., «Algorithms at work: the new contested terrain of control», Academy of Management Annals, vol. 14, no. 1, 2020, pp. 366-410, https://doi.org/10.5465/annals.2018.0174.
  • Laaksonen, S.-M.; Haapoja, J.; Kinnunen, T.; Nelimarkka, M. and Pöyhtäri, R., «The Datafication of Hate: Expectations and Challenges in Automated Hate Speech Monitoring», Frontiers in Big Data, 5/2/2020, https://doi.org/10.3389/fdata.2020.00003.
  • Mackenzie, A., Machine Learners: Archaeology of Data Practice. Cambridge, MA: The MIT Press, 2017.
  • Wood, A.J., Algorithmic Management: Consequences for Work Organisation and Working Conditions, European Commission, Joint Research Centre (Seville), 2021, https://joint-research-centre.ec.europa.eu/system/files/2021-05/jrc124874.pdf.
  • Maitland, A., Work in the Age of Data, BBVAopenmind.com, 2019, pp. 150-159, https://www.bbvaopenmind.com/wp-content/uploads/2020/02/BBVA-OpenMind-book-2020-Work-in-the-Age-of-Data.pdf.
  • Makkonen, T., Multiple, Compound and Intersectional Discrimination: bringing the experiences of the most marginalized to the fore, Institute for Human Rights, Åbo Akademi University, 2002.
  • Mayer-Schönberger, V. and Cukier, K., Big Data, Turner, Madrid, 2013, http://catedradatos.com.ar/media/3.-Big-data.-La-revolucion-de-los-datos-masivos-Noema-SpanishEdition-Viktor-Mayer-Schonberger-Kenneth-Cukier.pdf.
  • Mayson, S., «Bias In, Bias Out», The Yale Law Journal, vol. 128, no. 8, 2019, https://www.yalelawjournal.org/article/bias-in-bias-out.
  • Miné, M., «The concepts of direct and indirect discrimination», conference paper: Fight against discrimination: the new 2000 Directives about Equality, 31/3-1/4/2003, Trier, http://www.era-comm.eu/oldoku/Adiskri/02_Key_concepts/2003_Mine_ES.pdf (the text is no longer available).
  • Nguyen, A., The Constant Boss: Work Under Digital Surveillance, Data & Society, 2021, https://datasociety.net/wp-content/uploads/2021/05/The_Constant_Boss.pdf.
  • ILO, The role of digital labour platforms in transforming the world of work, 2021, https://www.ilo.org/global/research/global-reports/weso/2021/WCMS_771749/lang--en/index.htm.
  • O'Neil, C., Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Penguin Books, London, 2017.
  • Pampouktsi, P.; Avdimiotis, K. and Avlonitis, M., «A 3-in-1 framework for human resources' selection and positioning based on machine learning tools», 2021, DOI: 10.1504/IJDATS.2021.10043787.
  • Pampouktsi, P.; Avdimiotis, K.; Maragoudakis, M. and Avlonitis, M., «Applied Machine Learning Techniques on Selection and Positioning of Human Resources in the Public Sector», Open Journal of Business and Management, vol. 9, 2021, DOI: 10.4236/ojbm.2021.92030.
  • European Parliament, Online advertising: the impact of targeted advertising on advertisers, market access and consumer choice, 2021.
  • Pasquale, F., New laws of robotics: defending human expertise in the age of AI, The Belknap Press, 2020.
  • Protasiewicz, J.; Pedrycz, W.; Kozłowski, M.; Dadas, S.; Stanisławek, T.M.; Kopacz, A. and Gałężewska, M., «A recommender system of reviewers and experts in reviewing problems», Knowledge-Based Systems, vol. 106, 2016, DOI: 10.1016/j.knosys.2016.05.041.
  • Pujol, O., «The concept of ‘artificial intelligence’. Opacity and societal impact», in García Mexía, P. and Pérez Bes, F. (eds.), Artificial Intelligence and the Law, Wolters Kluwer, Valencia, 2021.
  • Raub, M., «Bots, bias and big data: artificial intelligence, algorithmic bias and disparate impact liability in hiring practices», Arkansas Law Review, vol. 71, no. 2, 2018, pp. 529-570.
  • Rivas Vallejo, P. (ed.), Discriminación algorítmica en el ámbito laboral: perspectiva de género e intervención, Thomson Reuters Aranzadi, Cizur Menor, 2022.
  • Rivas Vallejo, P., La aplicación de la Inteligencia artificial al trabajo y su impacto discriminatorio, Aranzadi, Cizur Menor, 2020.
  • Roig, A., Las garantías frente a las decisiones automatizadas: del Reglamento General de Protección de Datos a la gobernanza algorítmica, Bosch, Barcelona, 2020.
  • Rosenblat, A., Uberland: How Algorithms Are Rewriting the Rules of Work. University of California Press, 2018.
  • Ruiz-Gallardón, I., «La equidad: una justicia más justa», Foro Nueva Época, vol. 20, no. 2, 2017, pp. 173-191, http://dx.doi.org/10.5209/FORO.59013.
  • Scantamburlo, T., «Non-empirical problems in fair machine learning», Ethics and Information Technology, vol. 23, 2021, https://doi.org/10.1007/s10676-021-09608-9.
  • Schiek, D. and Lawson, A. (eds.), European Union Non-Discrimination Law and Intersectionality: Investigating the triangle of racial, gender and disability discrimination, Routledge, London-New York, 2016.
  • Serra Cristóbal, R. (ed.), La discriminación múltiple en los ordenamientos jurídicos español y europeo, Tirant lo Blanch, Valencia, 2013.
  • Serra Cristóbal, R., «El reconocimiento de la discriminación múltiple por los tribunales», Teoría y derecho, no. 27, 2020, DOI: 10.36151/td.2020.008.
  • Spiegelhalter, D. and Harford, T., «Big data: are we making a big mistake?», The Financial Times, 28/3/2014, https://www.ft.com/content/21a6e7d8-b479-11e3-a09a-00144feabdc0.
  • Stoica, A.A.; Riederer, C. and Chaintreau, A., «Algorithmic glass ceiling in social networks: the effects of social recommendations on network diversity», Proceedings of the Web Conference 2018, Lyon, ACM, New York, pp. 923-932, https://doi.org/10.1145/3178876.3186140.
  • Torra, V., «La inteligencia artificial», Lychnos, Cuadernos de la Fundación General CSIC, no. 7, 2011, https://fgcsic.es/lychnos/es_es/articulos/inteligencia_artificial#DEST1 [last accessed 24/10/2022].
  • Umoja Noble, S., Algorithms of Oppression: How Search Engines Reinforce Racism, NYU Press, 2018.
  • Vantin, S., «Inteligencia artificial y derecho antidiscriminatorio», in Llano Alonso, F. and Garrido Martín, J. (eds.), Inteligencia artificial y derecho. El jurista ante los retos de la era digital, Thomson Reuters Aranzadi, Cizur Menor, 2021.
  • Wachter, S., «Affinity profiling and discrimination by association in online behavioural advertising», Berkeley Technology Law Journal, vol. 35, 2020, https://btlj.org/data/articles2020/35_2/01-Wachter_WEB_0325-21.pdf.
  • Wyner, A., «An ontology in OWL for legal case-based reasoning», Artificial Intelligence and Law, vol. 16, no. 4, 2008, pp. 361-387, DOI: 10.1007/s10506-008-9070-8.
  • Xenidis, R. and Senden, L., «EU non-discrimination law in the era of artificial intelligence: mapping the challenges of algorithmic discrimination», in Bernitz, U. et al. (eds.), General principles of EU law and the EU digital order, Kluwer Law International, 2020.
  • Xenidis, R., «Tuning EU equality law to algorithmic discrimination: three pathways to resilience», Maastricht Journal of European and Comparative Law, vol. 27, no. 6, 2020, https://doi.org/10.1177/1023263X20982173.