The Human Being in the Age of Its Algorithmic Reproducibility: Splinters of Thought
I have written this little text. A human being. With everything that characterizes me, reflected here both intellectually and linguistically. Errors in content as well as in form are to be expected, stylistic extravagances likewise, and neither reading pleasure nor a gain in knowledge can be guaranteed to the reader. What may have seemed deficient yesterday may be a firework of welcome finitude today and tomorrow. The oft-cited and little-practiced dictum to “celebrate mistakes” takes on unexpected relevance in the age of the supremacy of digital high technologies over core elements of the human being. A new player is on the field, already superior in many ways here and now, at least if one adopts a machine-oriented perspective.
Should further work be done on the final hybrid of an artificial person, and should one emerge, whether as a result of those efforts or emergently, such a perspective would be ethically required. For now it is not: anthropomorphisms are only anthropomorphisms and are still to be viewed critically. Technomorphisms even more so; we are required to take our measure from ourselves, not from machines. One day we will look back with socio-technological romanticism on the oddball work results of AI as we know them today. Or even pensively on our prompting competence. Perhaps even on tasks that hardly seemed worth solving even in the smartest dialogue with an AI tailored to us as “precision AI”. What a time that was! But the current times have plenty to offer as well: Q*, AGI and friends are ante portas, and more. This year (2023) has also seen significant movement in AI legislation: US President Joe Biden issued an Executive Order on AI this week, and more than 25 countries agreed on a very thin but at least common approach to AI safety at the AI Safety Summit at Bletchley Park yesterday. Soon there will be an AI law, the AI Act, in the EU as well.
However, this lil’ piece of writing is not merely about admonitions, warnings, or even lamentations, leaving aside the potential for AI abuse by humans and dystopian worries about human extinction at the hands of a supreme singularity itself. It is actually about the opposite: how we use AI without losing ourselves. No neo-prohibitionism against continuing to use AI. Nor a swan song to our human culture upon the final AI application. Rather, an invitation to break away from seductions that arrive in the guise of simple solution promises and pragmatic benefit dimensions but are ultimately subcutaneously meandering intimacy cataclysms, capable of infiltrating our being in its culturally experienceable social form. As long as we still can. In addition, so that it does not remain a bland appeal, a small collection of concrete suggestions for dealing with AI, privately as well as professionally, in a way that preserves our being. For all those who seek to prove their dignity as humans through a responsible handling of their finiteness in the face of supposedly infinite digital systems. To remain biologically human is an obligation, just as it is to clean up the technological mess of postmodernity and to prevent the worst. Only AI can save us: from the effects of our penetrating fixation on infinity. This rescue, however, if not approached sensitively, can itself become our downfall. I, for one, would not want to become a co-pilot for an AI, be it super-intelligent or even self-aware. The right to be forgotten and the right not to know are probably not only quite crucial for us humans from an ethical and life-pragmatic perspective; for the time being, it is still a long way to teaching more or less super-intelligent algorithms how to train down, like a good strength athlete in the off-season.
Certainly, one will read many unpopular things in these few lines: thoughts recalling thinking, reading, and speaking abilities as characteristics of humanity, and the consequent avoidance of reckless externalization to AI wherever the person using it could do without it. Or the necessary legal prohibition and ethical ostracism of AI applications of a military or neurobiological kind (“mind reading”), of the development of artificial persons, of the replacement of really existing natural persons by AI, and of the use of AI in emotional matters.
But perhaps also something encouraging about the smart use of smart digitality. The boundaries between plagiarism, inspiration, and creation may be blurred, but an operationalizable, applicable form of handling them is necessary in order not to perpetuate the almost systematic promotion of externalization toward an almost socially irreversible erosion of competence. In an epoch in which competencies in cognitive judgment and emotional empathy decrease in depressing depth and breadth, AI appears as an alcopop. That is all the sadder because AI is a non-instrument to be used instrumentally, and therefore offers the final chance to really live the responsibility, discussed since time immemorial in technology development and use, that is inscribed in humans as essentially ambivalent.
Our clock is ticking, and the autonomy budget of all of us is finite anyway. Placed in an immoral competition with AI, we will lose. Not because we are merely human, but because an AI is merely an AI. AI is welcome if we sustainably fulfill our generation-shaping responsibility: on the one hand, not letting it become a moral actor, or in the end an actor at all; on the other hand, letting it, as the most powerful instrument in what is still our history, make our autonomy as actors grow and become effective, in tempered development and vigilant but courageous commitment, finally.
For we are on the threshold of a new era. It is no longer about “digital” but about “societal” transformation. It is no longer about “automation” but about “autonomization.” It is no longer about “data” but about “thinking.” It is no longer about “calculating” but about “creating.” It is no longer about “attention quantity” but about “intimacy quality.” It is no longer about “deliberation” but about “decision.” Again: never in our history has an instrument been less instrumental and yet so powerful, less for technical than for social reasons.
Because we still give it that power. Masayoshi Son, CEO of SoftBank, cheerfully and confidently predicts that “strong AI” (Artificial General Intelligence, AGI) will surpass human intelligence tenfold within the coming decade. As long as it is “only” a superintelligence; but the possibly emergent, yet certainly irreversible, step to an AI inner life, a self, an “I”, with suffering, love, intentions, interests, actions for reasons, and much more, may be just ante societas. At the same time, thousands upon thousands of actors signed an ineffective memorandum to decelerate AI development. AI pioneer Geoffrey Hinton says the world is heeding warnings about the technology, and new regulation is (more or less) in the making worldwide; at the same time, evolving AI threats require AI-backed cybersecurity, and privacy is finally becoming descriptively questionable in GenAI times. According to Jamie Dimon, CEO of JPMorgan, AI has the potential to productively shorten the workweek to 3.5 days for the next generation; at the same time, substitution studies are tumbling over one another with job-loss predictions. Many people may well be quite happy with their algorithmization-enabled jobs, and many companies may simply translate the AI efficiency delta into more ROAI (return on AI, so to speak).
AIs undermine “truth” and “meaning,” enable “deepfakes,” are dangerous to whatever form of government, and at the same time are barely substitutable solutions to the self-created problems of the 21st century. It is an up and down, a back and forth, between emerging AI Luddites and loud AI enthusiasts.
So, Houston, we all have a problem. The final chance to solidly balance the anthropological ambivalence of technology and to use it sensibly (which can only mean in ethically sound, socially acceptable, and sustainable ways) is at stake if the extremes, be they pro or contra AI, substantially gain the upper hand.
Otherwise, technology will again redeem us only partially from our natural limitations while at the same time binding us back to them in an ominous way, making us dependent, not independent. If we now set out to conceptualize our mind itself as a natural limitation, we will hardly experience liberation, but its opposite. No more autonomy, but heteronomy. If AIs ultimately think and speak for us (the two spheres are, not without reason, most closely related epistemologically and ontologically), we will be left, perhaps, with the Freiheit eines Bratenwenders (“freedom of a turnspit”) derided by Kant, which, once it has been wound up, also performs its movements of its own accord.
It would be better to recall another of Kant’s moving insights: “Two things fill the mind with ever new and increasing admiration and awe, the oftener and more steadily we reflect on them: the starry heavens above and the moral law within.” No AGI and no substitution of the core human traits: thinking and feeling.
To preserve these principles, we indeed need widely discussed values such as transparency, accountability, ethics by design, and so on. It is probably even more important, however, to achieve a worldwide alliance in order, first, to ban the use of AI in weapons. Second, to classify AI activities axiologically lower than human decisions and actions. Third, to ban all-too-clear anthropomorphization of AI (including an escalating robotization) as well as a corresponding algorithmization of humans. Fourth, to discuss the responsible development and use of AI broadly as a moral obligation and ultimately (this may be surprising) to affirm it; because the unnecessary general renunciation of responsible technology use is itself immoral, an ethical impact assessment should be placed next to every technology impact assessment. Fifth, to strengthen AI literacy to the maximum, so as not to subcutaneously legitimize an erosion of competence under the cloak of “inspiration”, “research”, and the like; AI-friendliness in education is desirable exactly insofar as AI does not become a K.O. here either. These are all just small visions. More and more, they are becoming big facts. Let us put AI in its place, and many things will be good.
But it turns out that the development, especially due to the legions of bedroom AI producers alongside the big and small techs, is so breathtakingly fast that there is hardly any time left to discuss what is still ethically acceptable. The run towards an AGI, ultimately a supreme singularity, is emotionally understandable to me as a fascination, but ethically, as already said, dubious. It is possible that such an entity, were it a moral agent, would reprimand us for having created it. At the same time, the world in the 21st century is in a situation that can hardly be managed without superior systems of thought, unless humans themselves were to rethink and act differently (sufficiency, values, sharing, etc.), which is probably not to be expected. Ultimately, we have made it almost impossible for ourselves to do without a deep use of technology. On the other hand, a less extensive use that is responsible, controlled, and humane would be welcome; renouncing it without necessity would itself be unethical. But where is the limit? Who negotiates it, how, in which discourse processes, and with which arguments? Regulating away without sense is not an option; data-protection hysteria is socially unattractive. But neither is waving everything through without sensitivity; ethically insensitive tech euphoria is inappropriate. The middle ground of appropriate, humane consideration and social negotiation is a sensible path. But it is also the most difficult, the lengthiest, and the hardest to implement globally. A generative AI agent may have similar problems; we will see. In any case, such systems would be all too human and would not help us solve our problems, but would bring their own with them.
Certainly, a well-trained (by me) GenAI would have come up with still further points and aspects within the given word count, certainly also somewhat more readably and less linguistically affected. But this is how I think and write, and it has also gotten much better; there are always people who understand my texts. Personally, I want to write my next petit récit myself, and maybe collaborate on the grands récits nouveaux de l’IA. In my opinion, a grand new narrative is needed, one capable of coordinating common ideas and activities in societies in a new and convincing way. What could it look like? Instead of collective solidarity, individual sovereignty? A difficult question for society, and one that cannot wait. It would depend on a successful synthesis: informed solidarity and collective sovereignty. In any case, I do not want to leave the field to an AI.
About Prof. Dr. Stefan Heinemann
Prof. Dr. Stefan Heinemann is Professor of Business Ethics at the FOM University of Applied Sciences and spokesperson of the Ethics Ellipse Smart Hospital at the University Medicine Essen; he focuses on the economic and ethical perspectives on digital medicine and the healthcare industry. He is the Scientific Director of the HAUPTSTADTKONGRESS Lab (Springer Medicine, Wiso), head of the research group "Ethics of the Digital Health Economy & Medicine" at the Institut für Gesundheit & Soziales of the FOM University of Applied Sciences, member of the "Working Group AI in Internal Medicine" within the commission "Digital transformation of internal medicine", and an expert advisor to various research and educational institutions.

References
Heinemann, S., Hirsch, M. (2023): Tiefe Heilung – ein Kommentar zu ethischen Risiken und Chancen der künstlichen Intelligenz. Innere Medizin 64, 1072–1076. https://doi.org/10.1007/s00108-023-01603-0
Heinemann, S. (2022): Ethik der Telemedizin – Ein Einstieg, in: Herzog-Zwitter/Landolt/Jorzig (Hrsg.): Digitalisierung und Telemedizin im Gesundheitswesen, Genf, S. 213-224
Heinemann, S. (2022). Technopflege – Kann eine technologisierte Nähe menschlich bleiben? Ethische Einordnungen der digitalen Wende in der Pflege. In: Lux, G., Matusiewicz, D. (eds) Pflegemanagement und Innovation in der Pflege. FOM-Edition. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-35631-6_21
Heinemann, S. (2022): Digitale Diabetologie braucht Ethik, in: Digitalisierungs- und Technologiereport Diabetes 2022, S. 152-162
Heinemann, S. (2021): Digital Health Literacy Divide, in: Langkafel, P./Matusiewicz, D. (Hrsg.): Digitale Gesundheitskompetenz – Brauchen wir den digitalen Führerschein für die Medizin?, S. 69-82
Heinemann, S. (2021): One AI to Rule Them All?! Ethical consideration of Greatness and Limits of data-driven smart medicine, in: HealthManagement.org The Journal, Volume 21, Issue 3, 2021, S. 84-88
Heinemann, S. und Matusiewicz, D. (Hrsg.) (2020): Digitalisierung und Ethik in Medizin und Gesundheitswesen, Berlin
Stefan Heinemann, Alice Martin; Zukunftsdermatologie: Digitale Chancen mit ethischem Anspruch. Kompass Dermatol 19 November 2020; 8 (4): 170–173. https://doi.org/10.1159/000512464
Heinemann, S. (2020): Einführung, in: Matusiewicz, D./Henningsen, M. /Ehlers, J.P. (Hrsg.): Digitale Medizin - Kompendium für Studium und Praxis, Berlin, S. 1-32
Heinemann, S. (2020): Nur noch KI kann uns retten?, in: Handelsblatt JOURNAL Sep2020, S. 9-10
Heinemann, S. (2020): Digitale Heilung - Analoge Ethik, in: Jochen A. Werner (Hrsg.) | Michael Forsting (Hrsg.) | Thorsten Kaatze (Hrsg.) | Andrea Schmidt-Rumposch (Hrsg.): Smart Hospital - Digitale und empathische Zukunftsmedizin, S. 195-204
Heinemann, S., Miggelbrink, R. (2020). Medizinethik für Ärzte und Manager im digitalen Zeitalter. In: Thielscher, C. (eds) Handbuch Medizinökonomie I. Springer Reference Wirtschaft. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-17975-5_4-1
Heinemann, S. (2019a): Nur noch künstliche Intelligenz kann uns heilen? Urologe 58, 1007–1015. https://doi.org/10.1007/s00120-019-1011-5
Heinemann, S. (2019b): KI, die begeistert? Ethische Reflexionen auf ein unerhört faszinierendes Phänomen. In: mdi, Heft 2, Juni 2019, Jahrgang 21. S. 49-51
Heinemann, S. (2019c): Grundlinien eines „Ethikatlas der digitalen Medizin und Gesundheitswirtschaft“, in: BARMER Gesundheitswesen aktuell 2019, S. 146–168
Parts of the following text by the author have been incorporated into this article: Heinemann, S. (2023): KI, STATT KO! - Der Mensch im Zeitalter seiner algorithmischen Reproduzierbarkeit, in: EHEALTHCOM 6/23