The recent success of transformer-based large language models (LLMs) has raised immediate and interesting questions about the extent to which (if any) these systems can be conceived of as models of human language cognition.
In this talk, we will present a very simple argument for the conclusion that the outputs of these systems are, perhaps surprisingly, fundamentally meaningless and, as such, not particularly interesting as models of human language cognition.
Some arguments for a similar conclusion have been proposed before, but unlike these prior arguments, ours does not rely on assumptions about essential reference relations between words and external objects.
For this reason, the conclusion of our argument cannot be resisted by appeal to semantic externalist considerations. Moreover, despite the prima facie implausibility of this conclusion, we contend that explaining why LLMs are nevertheless incredibly useful poses no real problem.