Neuromodulación para la mejora de la agencia moral: el neurofeedback

García Díaz, Paloma J.
Universidad de Granada, Granada, Spain (ROR: https://ror.org/04njjy449)

Journal: Dilemata

ISSN: 1989-7022

Year of publication: 2021

Issue title: Tecnologías socialmente disruptivas

Issue: 34

Pages: 105-119

Type: Article


Abstract

This article argues that the moral enhancement project requires greater attention to the rational and deliberative dimensions of moral agency. To that end, it presents the contribution of neurofeedback to the enhancement of such moral deliberations and of autonomy. This brain-computer interface is, moreover, taken as a possible component of a Socratic moral assistant (Lara and Deckers 2020) that ensures moral enhancement takes place through full interaction between moral agents and that assistant. This proposal departs from the project of moral bioenhancement through neuropharmacology, which centres on the enhancement of emotions. It likewise distances itself from the idea of delegating moral decision-making to artificial moral agents, which would amount to endorsing a continuist model of human and artificial moral agency.

References

  • Agar, N. (2013). “Why is it possible to enhance moral status and why doing so is wrong?”. Journal of Medical Ethics, 39, pp. 67-74.
  • Brey, P. (2018). “The strategic role of technology in a good society”. Technology in Society, 52, pp. 39-45.
  • Bryson, J. J. (2010). “Robots should be slaves”, in Wilks, Y. (Ed.), Close engagements with artificial companions: Key social, psychological, ethical and design issues. Amsterdam, John Benjamins, pp. 63–74.
  • Bryson, J. J. (2018). “Patiency is not a virtue: the design of intelligent systems and systems of ethics”. Ethics and Information Technology, 20, pp. 15–26.
  • Cervantes, J. A., López, S., Rodríguez, L. F., Cervantes, S., Cervantes, F. & Ramos, F. (2020). “Artificial Moral Agents: A Survey of the Current Status”. Science and Engineering Ethics, 26, pp. 501–532. DOI: https://doi.org/10.1007/s11948-019-00151-x
  • Coeckelbergh, M. (2020). “Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability”. Science and Engineering Ethics, 26, pp. 2051–2068. DOI: https://doi.org/10.1007/s11948-019-00146-8
  • Coeckelbergh, M. (2018). “Technology and the good society: A polemical essay on social ontology, political principles, and responsibility for technology”. Technology in Society, 52, pp. 4-9. DOI: https://doi.org/10.1016/j.techsoc.2016.12.002
  • Darby, R. R. & Pascual-Leone, A. (2017). “Moral Enhancement Using Non-invasive Brain Stimulation”. Frontiers in Human Neuroscience, 11 (77). DOI: https://doi.org/10.3389/fnhum.2017.00077
  • De Melo-Martín, I. & Salles, A. (2015). “Moral Bioenhancement: Much Ado About Nothing”. Bioethics, 29 (4), pp. 223–232. DOI: https://doi.org/10.1111/bioe.12100
  • DeGrazia, D. (2014). “Moral enhancement, freedom, and what we (should) value in moral behaviour”. Journal of Medical Ethics, 40 (6), pp. 361-368.
  • Douglas, T. (2008). “Moral enhancement”. Journal of Applied Philosophy, 25 (3), pp. 228-245.
  • Douglas, T. (2013). “Moral Enhancement via Direct Emotion Modulation: A Reply to John Harris”. Bioethics, 27 (3), pp. 160-168.
  • Dubljević, V. & Racine, E. (2017). “Moral Enhancement Meets Normative and Empirical Reality: Assessing the Practical Feasibility of Moral Enhancement Neurotechnologies”. Bioethics, 31 (5), pp. 338–348. DOI: https://doi.org/10.1111/bioe.12355
  • Enriquez-Geppert, S., Huster, R. J., Ros, R. J. & Wood, G. (2017a). “Neurofeedback”, in Colzato, L. (Ed.). Theory-Driven Approaches to Cognitive Enhancement. Cham, Switzerland, Springer, pp. 149-165. DOI: https://doi.org/10.1007/978-3-319-57505-6
  • Enriquez-Geppert, S., Huster, R. J. & Herrmann, C. S. (2017b). “EEG-Neurofeedback as a Tool to Modulate Cognition and Behavior: A Review Tutorial”. Frontiers in Human Neuroscience, 11 (51). DOI: https://doi.org/10.3389/fnhum.2017.00051
  • Fossa, F. (2018). “Artificial Moral Agents: Moral mentors or sensible tools”. Ethics and Information Technology, 20, pp. 115–126. DOI: https://doi.org/10.1007/s10676-018-9451-y
  • Floridi, L., & Sanders, J. W. (2004). “On the morality of artificial agents”. Minds and Machines, 14 (3), pp. 349–379.
  • Floridi, L. (2014). “Artificial Agents and Their Moral Nature”, in Kroes, P. & Verbeek, P. P. (Eds.). The Moral Status of Technical Artefacts, Philosophy of Engineering and Technology. Heidelberg, New York, Dordrecht, London, Springer, pp. 185-212. DOI: https://doi.org/10.1007/978-94-007-7914-3_2
  • Floridi, L. (2017). “Infraethics–on the Conditions of Possibility of Morality”. Philosophy & Technology, 30, pp. 391–394. DOI: https://doi.org/10.1007/s13347-017-0291-1
  • Fronda, G., Crivelli, D., & Balconi, M. (2019). “Neurocognitive enhancement: Applications and ethical issues”. NeuroRegulation, 6 (3), pp. 161–168. DOI: https://doi.org/10.15540/nr.6.3.161
  • Gruzelier, J. H. (2014a). “EEG-neurofeedback for optimising performance. I: A review of cognitive and affective outcome in healthy participants”. Neuroscience and Biobehavioral Reviews, 44, pp. 124-141. DOI: https://doi.org/10.1016/j.neubiorev.2013.09.015
  • Gruzelier, J. H. (2014b). “EEG-neurofeedback for optimising performance II: Creativity, the performing arts and ecological validity”. Neuroscience and Biobehavioral Reviews, 44, pp. 142-158. DOI: https://doi.org/10.1016/j.neubiorev.2013.11.004
  • Gunkel, D. J. (2012). The machine question: critical perspectives on AI, robots, and ethics. Cambridge, MIT Press.
  • Gunkel, D. J. (2020). “Mind the gap: responsible robotics and the problem of responsibility”. Ethics and Information Technology, 22, pp. 307–320. DOI: https://doi.org/10.1007/s10676-017-9428-2
  • Hammond, D. C. (2011). “What is Neurofeedback?: An Update”. Journal of Neurotherapy, 15, pp. 305–336. DOI: https://doi.org/10.1080/10874208.2011.623090
  • Harris, J. (2013a). “Ethics is for bad guys! Putting the ‘Moral’ into Moral Enhancement”. Bioethics, 27 (3), pp. 169–173. DOI: https://doi.org/10.1111/j.1467-8519.2011.01946.x
  • Harris, J. (2013b). “Moral Progress and Moral Enhancement”. Bioethics, 27 (5), pp. 285–290. DOI: https://doi.org/10.1111/j.1467-8519.2012.01965.x
  • Hauskeller, M. (2013). Better humans? Understanding the enhancement project. Durham, Acumen.
  • Himma, K. E. (2009). “Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent?” Ethics and Information Technology, 11, pp. 19–29. DOI: https://doi.org/10.1007/s10676-008-9167-5
  • Johnson, D. & Verdicchio, M. (2018). “Why robots should not be treated as animals”. Ethics and Information Technology, 20, pp. 291–301. DOI: https://doi.org/10.1007/s10676-018-9481-5
  • Lagrandeur, K. (2015). “Emotion, Artificial Intelligence, and Ethics”, in Romportl, J., Zackova, E. & Kelemen, J. (Eds.) Beyond Artificial Intelligence. The Disappearing Human-Machine Divide. Cham, Heidelberg, New York, Dordrecht, London, Springer, pp. 97-110. DOI: https://doi.org/10.1007/978-3-319-09668-1
  • Lara, F. & Deckers, J. (2020). “Artificial Intelligence as a Socratic Assistant for Moral Enhancement”. Neuroethics, 13, pp. 275–287. DOI: https://doi.org/10.1007/s12152-019-09401-y
  • Latour, B. (2002). “Morality and Technology. The End of Means”. Theory, Culture and Society, 19(6), pp. 247-260.
  • Maslen, H. & Savulescu, J. (2016). “Neurofeedback for Moral Enhancement: Irreversibility, Freedom, and Advantages Over Drugs”. AJOB Neuroscience, 7 (2), pp. 120-122. DOI: https://doi.org/10.1080/21507740.2016.1189976
  • Moor, J. H. (2006). “The nature, importance, and difficulty of machine ethics”. IEEE Intelligent Systems, 21(4), pp. 18–21.
  • Nakazawa, E., Yamamoto, K., Tachibana, K., Toda, S., Takimoto, Y. & Akabayashi, A. (2016). “Ethics of decoded neurofeedback in clinical research, treatment, and moral enhancement”. AJOB Neuroscience, 7 (2), pp. 110-117. DOI: https://doi.org/10.1080/21507740.2016.1172134
  • Parens, E. (2005). “Authenticity and Ambivalence: Toward Understanding the Enhancement Debate”. Hastings Center Report, 35 (3), pp. 34-41.
  • Persson, I. & Savulescu, J. (2014). Unfit for the Future: The Need for Moral Enhancement. Oxford, Oxford University Press.
  • Racine, E., Dubljević, V., Jox, R. J., Baertschi, B., Christensen, J. F., Farisco, M., Jotterand, F., Kahane, G. & Müller, S. (2017). “Can neuroscience contribute to practical ethics? A critical review and discussion of the methodological and translational challenges of the neuroscience of ethics”. Bioethics, 31 (5), pp. 328–337. DOI: https://doi.org/10.1111/bioe.12357
  • Sandel, M. (2009). “The Case Against Perfection: What’s Wrong with Designer Children, Bionic Athletes, and Genetic Engineering”, in Savulescu, J. & Bostrom, N. (Eds.) Human Enhancement. Oxford, Oxford University Press, pp. 71-89.
  • Savulescu, J. & Maslen, H. (2015). “Moral enhancement and Artificial Intelligence: Moral AI?”, in Romportl, J., Zackova, E. & Kelemen, J. (Eds.) Beyond Artificial Intelligence. The Disappearing Human-Machine Divide. Cham, Heidelberg, New York, Dordrecht, London, Springer, pp. 79-95. DOI: https://doi.org/10.1007/978-3-319-09668-1
  • Sharkey, A. (2020). “Can we program or train robots to be good?”. Ethics and Information Technology, 22, pp. 283–295.
  • Specker, J., Focquaert, F., Raus, K., Sterckx, S. & Schermer, M. (2014). “The ethical desirability of moral bioenhancement: a review of reasons”. BMC Medical Ethics, 15 (67).
  • Tachibana, K. (2017). “Neurofeedback-Based Moral Enhancement and the Notion of Morality”. The Annals of the University of Bucharest - Philosophy Series, 66 (2), pp. 25-41.
  • Tachibana, K. (2018a). “Neurofeedback-Based Moral Enhancement and Traditional Moral Education”. Humana Mente, 11 (33), pp. 19-42.
  • Tachibana, K. (2018b). “The Dual Application of Neurofeedback Technique and the Blurred Lines Between the Mental, the Social, and the Moral”. Journal of Cognitive Enhancement, 2, pp. 397–403. DOI: https://doi.org/10.1007/s41465-018-0112-1
  • van Wynsberghe, A. & Robbins, S. (2018). “Critiquing the reasons for making artificial moral agents”. Science and Engineering Ethics, 25 (3), pp. 719-735. DOI: https://doi.org/10.1007/s11948-018-0030-8
  • Verbeek, P. P. (2014). “Some Misunderstandings About the Moral Significance of Technology”, in Kroes, P. & Verbeek, P. P. (Eds.). The Moral Status of Technical Artifacts, Philosophy of Engineering and Technology, 17. Heidelberg, New York, Dordrecht, London, Springer, pp. 75-88. DOI: https://doi.org/10.1007/978-94-007-7914-3_5
  • Wallach, W. (2010). “Robot minds and human ethics: The need for a comprehensive model of moral decision making”. Ethics and Information Technology, 12 (3), pp. 243–250.
  • Zotev, V., Phillips, R., Yuan, H., Misaki, M. & Bodurka, J. (2014). “Self-regulation of human brain activity using simultaneous real-time fMRI and EEG neurofeedback”. NeuroImage, 85, pp. 985-995.