Philosophy
Truth as a Function

Two children on a hill looking down on a bombed city.
A realistic AI-generated image, but one that could well be true. © Adobe

But the question of truth is much more than just a concern for fans of subtleties: truth has a fundamentally orientational function for human beings. We want and need to know what is true and what is untrue because we cannot rely on instinct alone; instead, we have to find our own way through this confusingly complex world by distinguishing the true from the untrue.

Truth Is a Characteristic of Statements, Not Things 

Obviously, the need for such a distinction presents itself with varying degrees of urgency: no one actually gives two hoots whether Taylor Swift can really sing, whereas it does matter whether Harvey Weinstein was rightfully sentenced. Because truth has this fundamental orientational function, we want to know, for instance, whether pictures reflect reality, whether they are genuine or fake. And the more something means to us emotionally, the greater the import, for example when we retrospectively realize that we believed something untrue. This is why it is so disastrous that truth is the first casualty of war.

But one thing is certain: Truth is a characteristic of statements, not of things. Things are real or unreal, but they are not true (or untrue); only statements about the reality of those things can be. Truth is therefore relational. In erudite language, the formula has remained the same since Thomas Aquinas: adaequatio intellectus ad rem, an appropriateness of cognition to reality. Whether an idea or a statement is true depends on whether it is appropriate to the thing. You can work out for yourself that this formula, which is, after all, a statement about when statements are true and leads to the question of whether there is any such thing as truth beyond language, has never been entirely satisfactory. But we cannot allow ourselves to get lost in the hundreds of meters of bookcases full of arguments and counterarguments that would fill any midsize university library, because we are supposed to be talking about something different here. What we are talking about is truth and artificial intelligence, and the question of whether a functional concept of truth can exist.

To Which Problem Is the Idea of Truth the Solution?

According to functionalism in sociology, the question to ask is what function a phenomenon fulfills in a society. What function is fulfilled, for example, by art, money, love, or digital media? Or, looking at it the other way around: What problem does a thing solve? For our context: To which problem is the idea of truth the solution? You might then arrive at the fairly startling realization that truth is quite simply the characteristic of those statements that people agree are applicable. In the words of the German sociologist and philosopher of social science Niklas Luhmann, and in the language of systems theory: “Truth is always present if and insofar as the communication partners reach a consensus that a reported selection is to be treated as an experience by both parties. In other words, it is to be ascribed to the world.”[1]

By “selection” in this context, Luhmann simply means a segment of reality. The communication partners choose a selection from the available “reality” on which they reach a consensus. They say: Such and such is so. Truth is thus the solution to the problem of complexity reduction and action coordination. If communication partners agree about what is true and what is untrue, they find it easier to decide on joint actions. The world becomes more straightforward if consensus exists about what is true. 

Something False Is Difficult to Establish as a Norm Throughout Society  

This does not correspond with the usual understanding of truth; we want something to be true (or untrue) regardless of whether five people or five million believe it. And it is a worrying thought that untruths, in the form of AI-generated deepfakes, for instance, might transform into truths just because enough people trust that whatever circulates on news channels is true simply because others believe it to be so. These fears are absolutely justified, and in light of them, the interpretation of truth in terms of systems theory, as a function, might seem absurd.

Yet a small but crucial detail may offer some consolation: Luhmann writes that a “certain untruth” is also “a success.”[2] What does he mean by success? Success means that a thing functions as it should, and it functions as it should because it is somehow correctly aligned with how the rest of society functions.

So the distinction between true and untrue is successful only if it can also be synchronized with the other sub-systems of society as a whole. Something that is false is difficult to establish as a norm throughout the entirety of society, even if it is seen as a success in a particular segment. From this perspective, it is a comfort (and a kind of truth insurance) that in today’s functionally differentiated society no principle exists that pervades the whole of society, and consequently the category of truth is attributed a different relevance in each sub-system: it is crucial in science and justice, of only limited importance in the economy and art (and of no importance whatsoever in share trading...).

Such an observation applies to democratic systems, in which forms of knowledge and content compete for success; in other words, success is granted to the most valuable contribution in evolutionary terms. It is a different matter in autocratic and authoritarian systems, which can impose whatever they want to establish as truth by force. But once this is the case, AI does not make things any worse. Things have then already been worsened much more effectively by tediously repeating lies, hijacking the media, bombarding the masses with nonsense on TV, and undermining the separation of powers. The question of what constitutes the connection between AI and truth has already become irrelevant at that point.

[1] Niklas Luhmann, Systemtheorie der Gesellschaft, Berlin 2017, 489 f.
[2] Ibid., 499.
