LANGUAGE GAMES AND THE EMERGENCE OF DISCOURSE

JEFFREY A. BARRETT AND JACOB VANDRUNEN

Abstract. Ludwig Wittgenstein (1958) used the notion of a language game to illustrate how language is interwoven with action. Here we consider how successful linguistic discourse of the sort he described might emerge in the context of a self-assembling evolutionary game. More specifically, we consider how discourse and coordinated action might self-assemble in the context of two generalized signaling games. The first game shows how prospective language users might learn to initiate meaningful discourse. The second shows how more subtle varieties of discourse might co-emerge with a meaningful language.

1. introduction

Ludwig Wittgenstein was concerned with how meaningful language was interwoven with action. As he put it, in learning a language “children are brought up to perform these actions, to use these words as they do so, and to react in this way to the words of others” (1958, 6).^1 Learning a language involves establishing an association between words and actions. To illustrate how meaningful language is interwoven with action, Wittgenstein described a simple language meant to serve for communication between a builder A and her assistant B:

A is building with building-stones: there are blocks, pillars, slabs and beams. B has to pass the stones, and that in the order in which A needs them. For this purpose they use a language consisting of the words “block”, “pillar”, “slab”, “beam”. A calls them out;— B brings the stone which he has learnt to bring at such-and-such a call.— Conceive this as a complete primitive language. (1958, 2)

In using the language, one agent calls out the words and the other acts on them. To know how to do so is to know the rules of a game. More generally, “[w]e can . . . think of the whole process of using words in [the builder-assistant language] as one of those games by means of which children learn their native language.” The language game associated with the primitive builder-assistant language is “the whole, consisting of language and the actions into which it is woven” (1958, 7).

Inasmuch as it involves learning how to use the language, a language game on Wittgenstein’s conception is an evolutionary game. As the players repeatedly interact, they update their strategies on the basis of what has happened in past plays. They are playing the game well when their language use facilitates successful action.

Our aim here is not to reconstruct Wittgenstein’s philosophical views generally nor his account of how one might learn an established language more specifically. We are concerned, rather, with how a language game, where words are inextricably

Date: August 5, 2021.

^1 The Wittgenstein references are to the numbered sections of his 1958 Philosophical Investigations.


interwoven with action, might emerge from prelinguistic interactions between potential language users. This forging of a language game might also be characterized as an evolutionary game. It is a game that shows how language and action might come to form an integrated whole in the first place. In particular, we will consider how this might happen in the context of generalized signaling games that allow for self-assembly.

David Lewis (1969) introduced the idea of a signaling game to show how linguistic conventions might be established. Brian Skyrms (2006) subsequently showed how the classical games Lewis described might be reformulated as evolutionary games that illustrate how even low-rationality learners might evolve simple signaling languages. Barrett and Skyrms (2017) have more recently shown how both simple and more generalized signaling games might self-assemble by means of ritualized interactions.^2

The generalized signaling games we consider here illustrate features of self-assembly. The self-assembly allows for the structure of the players’ interactions itself to evolve on repeated plays. These games show how (1) a meaningful language and (2) the structure of discourse in that language might coevolve to facilitate successful action. The first game shows how players might learn to initiate meaningful discourse by asking a question rather than immediately acting. The second game and a variant we will briefly consider show how agents may evolve to ask new questions with coordinated meanings that coevolve on repeated plays.^3 These games explain how language and the structure of discourse itself may come to be interwoven with action.

2. the emergence of discourse

We will start with something akin to Wittgenstein’s builder-assistant game. It shows how agents might come to be involved in meaningful discourse using an evolved language instead of simply acting in the first place. It also shows how they might learn to end discourse and to act instead of talking. While this game is very simple, it allows for self-assembly. It is this that determines the structure of discourse between the two agents.

The question game begins with nature randomly determining whether the builder needs a slab or a block with unbiased probabilities. The assistant knows that the builder needs a slab or a block but does not know which. He may guess at what the builder needs and hand her a slab S or a block B at random, or he may produce a signal Q.^4

^2 See also Barrett, Skyrms, and Mohseni (2018), and Barrett (2020). See Barrett, Skyrms, and Cochran (2020) and Steinert-Threlkeld (2020) for two accounts of how nontrivial linguistic compositionality might evolve in the context of signaling games under reinforcement learning.

^3 The second game might be thought of as a more general version of a game Skyrms described (2010, 154–5). On that game, the players are faced with different decision problems that require different information to solve. Nature presents one player with the decision problem. That player sends a signal to her friend who selects one of two partitions of nature to observe, then sends a reply. The first player then chooses an action to perform based on the return signal. This game illustrates how questions might evolve to make use of a fixed partition. In contrast, we show here how coordinated questions, partitions, and the structure of discourse itself might coevolve.

^4 At least initially, a signal is just a state of nature produced by the assistant to which the builder has epistemic access and on which she might consequently condition her actions. The various signals we will consider are assumed to be distinguishable from each other so that the listener can act in a way that depends on the particular signal sent.


Figure 1. A question game. Play begins in the top left and zig-zags towards the bottom right. The far right column of text represents nature’s play, while the middle column represents signals and/or actions taken by players. The boxes represent urns which the respective players draw from, conditioned on nature’s play and/or the signals received.

to act on states of nature and signals. Some of their actions will serve to structure their discourse. The question game on simple reinforcement learning proceeds as indicated in Figure 1 read from top to bottom. The events on each round of play are as follows.

question game (simple reinforcement):

assistant move i: Nature randomly determines whether the builder needs a slab or a block with unbiased probabilities. The assistant draws a ball from his start urn. This urn begins with one ball each of types Q, S, and B. If he draws S or B he just hands the builder a slab or a block. If the builder gets what she needs given the state of nature, then the assistant returns the ball he drew to the urn from which he drew it and adds a duplicate ball of the same type. Else, the assistant just returns the ball he drew to the urn from which it was drawn. But if the assistant draws Q, he signals Q.

builder move i: If the assistant signals Q, the builder in turn draws a ball from an urn corresponding to the building material she needs. Specifically, if nature tells her that she needs a slab, then she draws from her slab urn; and if nature tells her that she needs a block, then she draws from her block urn. Each of these urns initially contains one ball each of types A0 and A1. If the builder draws an A0 ball, she signals A0; and if she draws an A1 ball, she signals A1.

assistant move ii: When the assistant hears the builder’s reply, he draws from one of two reply urns, A0 and A1, each initially containing one ball each of types Q, S, and B, then either signals Q again, or hands a slab S or block B to the builder. If the builder gets the building material she needs, then the round was successful and both players return the balls they drew to the urns from which they drew them and add a duplicate ball. Else, each agent just returns his or her balls to the urns from which they were drawn.
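The urn dynamics just described can be sketched in a few lines of Python. This is a minimal illustration rather than the simulation code used for the reported results: the function names, the weighted-draw helper, and the choice to report accuracy over the final ten percent of plays are all implementation assumptions.

```python
import random

def weighted_draw(urn):
    """Draw a ball type with probability proportional to its count."""
    types = list(urn)
    return random.choices(types, weights=[urn[t] for t in types])[0]

def play_question_game(num_plays):
    """Simulate the question game under simple reinforcement.

    Urn contents follow the description in the text. Returns the
    fraction of successful rounds among the final 10% of plays
    (the accuracy window is our own choice).
    """
    start = {"Q": 1, "S": 1, "B": 1}                # assistant's start urn
    answer = {"S": {"A0": 1, "A1": 1},              # builder's urns, keyed by need
              "B": {"A0": 1, "A1": 1}}
    reply = {"A0": {"Q": 1, "S": 1, "B": 1},        # assistant's reply urns
             "A1": {"Q": 1, "S": 1, "B": 1}}

    successes = []
    for _ in range(num_plays):
        need = random.choice(["S", "B"])            # nature's unbiased choice
        drawn = []                                  # (urn, ball) pairs this round
        ball = weighted_draw(start)
        drawn.append((start, ball))
        while ball == "Q":                          # assistant asks; builder answers
            ans = weighted_draw(answer[need])
            drawn.append((answer[need], ans))
            ball = weighted_draw(reply[ans])        # reply urn may ask Q again
            drawn.append((reply[ans], ball))
        success = (ball == need)
        if success:                                 # simple reinforcement: duplicate
            for urn, b in drawn:                    # every ball drawn this round
                urn[b] += 1
        successes.append(success)
    tail = successes[-max(1, num_plays // 10):]
    return sum(tail) / len(tail)
```

On a typical seeded run the reported accuracy climbs well above the 0.5 baseline of blind guessing as the Q-path gets reinforced, mirroring the behavior described in the text.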



Figure 2. Final accuracies for the question game, displayed as an empirical CDF. Individual dots indicate the results of actual simulations, rank-ordered such that the corresponding value on the ordinate indicates the number of simulations out of 1000 which had less than or equal to the specified final accuracy. The blue distribution indicates the results for simple (+1, −0) reinforcement, while the orange distribution indicates the results for reinforcement with punishment (+1, −1).

A closely analogous description characterizes the question game under reinforcement with punishment. The difference is that here when a round leads to a successful action, each of the players returns the ball that he or she drew to the urn from which it was drawn and adds a copy of that ball. And when a round leads to failure, each of the players discards the ball that he or she drew unless it was the last ball of its type in the urn; in which case, he or she just returns the drawn ball to the urn from which it was drawn.

On simulation, the builder and her assistant begin by acting randomly, but on repeated plays, the assistant typically evolves to ask what the builder needs rather than just guess, and the builder’s reply coevolves to represent what she needs. With 10^7 plays per run, on simple reinforcement, the players end up with dispositions that are reliable more than 0.9 of the time on 0.895 of the runs. For (+1, −1) reinforcement with punishment, all of the runs were observed to yield a final reliability better than 0.9.

Figure 2 provides a more detailed sense of these results. The blue distribution represents the number of simulation runs out of 1000 where the final accuracy was less than or equal to the specified value on simple reinforcement learning. The
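The two update rules can be stated side by side in one small helper. The function name and signature below are our own; the "never discard the last ball of its type" clause transcribes the description above.

```python
def update_urn(urn, ball, success, punish=False):
    """Return a drawn ball to its urn and apply the learning rule.

    Simple reinforcement (+1, -0): add a duplicate ball on success,
    do nothing on failure.  Reinforcement with punishment (+1, -1):
    additionally discard the drawn ball on failure, unless it is the
    last ball of its type in the urn.
    """
    if success:
        urn[ball] += 1
    elif punish and urn[ball] > 1:
        urn[ball] -= 1

# Example: under punishment, failures remove the drawn ball,
# but the last ball of a type is always spared.
urn = {"Q": 1, "S": 2, "B": 1}
update_urn(urn, "S", success=True)                # S reinforced: 2 -> 3
update_urn(urn, "Q", success=False, punish=True)  # Q spared: still 1
```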


Figure 3. A dialogue game.


3. meaning and the structure of discourse

In the question game, the assistant’s one linguistic act evolves to serve as a prompt to get the builder to say something that might come to represent the material the builder needs. One can easily imagine a self-assembling dialogue game that allows for the evolution of more subtle language games.

Suppose that the builder needs one of four possible building materials on each round (red slab RS, red block RB, blue slab BS, or blue block BB) and that the assistant has two potential linguistic acts (Q0 and Q1) that may come to represent the same or different questions. There are a number of ways to fill in the details to characterize a particular self-assembling game. We will discuss one of these in some detail, then briefly describe what happens in a natural variant.

Part of filling in the details involves saying what options and resources each player has and what might affect each player’s actions at each step in the game. To begin, we will suppose that the builder has the same four responses available to answer each of her assistant’s two potential questions (A0, A1, A2, and A3) and that she conditions her actions on what she needs and the question that her assistant just asked and on nothing else. We will further suppose that the assistant conditions his actions on everything the builder has said so far in the round and on nothing else.^10

The dialogue game on simple reinforcement proceeds as indicated in Figure 3 read from top to bottom. The events on each round of play are as follows.

dialogue game (simple reinforcement):

assistant’s move i: Nature randomly determines what the builder needs with unbiased probabilities from among the four possible materials: red slab RS, red block RB, blue slab BS, or blue block BB. Her assistant then draws a ball from an urn that begins with one ball of each of the six types RS, RB, BS, BB, Q0, and Q1. If he draws RS, RB, BS, or BB, he simply hands the corresponding material to the builder. If it is what the builder needs, the round ends with success and the assistant returns his ball to the urn and adds a duplicate ball of the same type. If it is not what the builder needs, the play ends in failure and the assistant just returns the ball he drew to the urn. If the assistant draws Q0 or Q1, he sends the corresponding signal.

builder’s move i: The builder draws from an urn corresponding to her assistant’s signal and the building material she needs. She has eight urns for this purpose labeled Q0RS, Q1RS, Q0RB, ... each initially containing one ball each of the four types A0, A1, A2, and A3. The builder sends the signal corresponding to the type of ball drawn.

assistant’s move ii: The assistant observes the builder’s reply and draws a ball from one of four new urns A0, A1, A2, and A3, each corresponding to one of the possible replies. Each reply urn begins with one ball of each of the six types RS, RB, BS, BB, Q0, and Q1. If he draws RS, RB, BS, or BB, he hands the corresponding material to the builder. If the assistant gives the builder what she needs, the round

^10 One might consider games where the players condition on anything to which they have epistemic access. What they in fact find salient evolves by means of a learning dynamics that serves to ritualize whatever behavior leads to successful action. See Barrett and Skyrms (2017) and Barrett (2020) for details regarding how this works.
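The urn inventory for the dialogue game can be transcribed directly from the moves described above. The variable names below are our own; the structure simply restates the description.

```python
from itertools import product

MATERIALS = ["RS", "RB", "BS", "BB"]   # red/blue slab/block
QUESTIONS = ["Q0", "Q1"]
ANSWERS = ["A0", "A1", "A2", "A3"]

# Assistant's start urn: one ball of each of the six types.
start_urn = {t: 1 for t in MATERIALS + QUESTIONS}

# Builder's eight urns, one per (question heard, material needed) pair,
# each starting with one ball of each answer type.
builder_urns = {(q, m): {a: 1 for a in ANSWERS}
                for q, m in product(QUESTIONS, MATERIALS)}

# Assistant's four reply urns, one per possible answer, each starting
# with one ball of each of the six types.
reply_urns = {a: {t: 1 for t in MATERIALS + QUESTIONS} for a in ANSWERS}
```

Play then proceeds by weighted draws from these urns, with duplication on success, exactly as in the question game.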



Figure 4. Final accuracies for the dialogue game.

The agents are so successful in evolving an efficient language game on the (+1, −1) reinforcement with punishment self-assembling game, that there is not much to say. In contrast, quite different language games may evolve on the simple reinforcement self-assembling game.

On different runs of the simple reinforcement game, the players sometimes evolve language practice that does not require a second question. The assistant needs two bits of information to know what material the builder needs. As Figure 5 shows, the probability of the second question being asked at all in the language game resulting from a full run decreases as the evolved information content of the first answer increases. Note, however, that sometimes both questions get asked even when the first answer is fully informative. Indeed, the second question sometimes evolves to mean precisely the same thing as the first, as indicated by the answers it elicits and the subsequent actions it produces. There are a number of ways that this may happen. If the meaning of the first question were slower to evolve on a run than that of the second, such redundancy may have played a role in early success on the run. Since asking a second question is cost free in the present game, the evolution of this sort of redundancy is unsurprising. One would expect it to occur less frequently if there were a cost to asking a second question. In this spirit, there is a very high cost to asking a third question in the present game, and it comes to be almost never asked on either of the two dynamics.

Perhaps more interesting, the two questions sometimes evolve different meanings and only allow for optimal success by dint of the systematically interrelated meanings of the questions and the replies they elicit. Two examples of this can be seen in Figure 6. Both of these are from runs that produced language games that allow for nearly optimal success.
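The "information content of the first answer" can be made precise as the mutual information, in bits, between nature's state and the builder's first answer. A sketch of the computation follows; the urn weights in the examples are hypothetical stand-ins, not values taken from any particular run.

```python
from math import log2

def first_answer_information(answer_urns):
    """Mutual information I(state; answer), in bits, for the first exchange.

    answer_urns maps each (equiprobable) state to a dict of
    answer -> urn count; counts are normalized into the builder's
    answer probabilities.
    """
    states = list(answer_urns)
    p_state = 1.0 / len(states)
    # Conditional distributions P(answer | state) from urn proportions.
    cond = {}
    for s, urn in answer_urns.items():
        total = sum(urn.values())
        cond[s] = {a: n / total for a, n in urn.items()}
    # Marginal P(answer).
    marg = {}
    for s in states:
        for a, p in cond[s].items():
            marg[a] = marg.get(a, 0.0) + p_state * p
    # I(S; A) = sum over s, a of P(s) P(a|s) log2( P(a|s) / P(a) )
    info = 0.0
    for s in states:
        for a, p in cond[s].items():
            if p > 0:
                info += p_state * p * log2(p / marg[a])
    return info
```

A fully informative first answer (each of the four states eliciting its own answer) yields 2 bits, a pooling answer yields 0 bits, and an answer that only conveys, say, color yields 1 bit, matching the abscissa of Figure 5.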



Figure 5. The probability that the assistant asks a second question given the information (in bits) communicated by the first question. Once again, individual dots indicate actual simulations, with the blue dots corresponding to those simulations with final accuracies greater than 0.9, the orange dots to those with final accuracies between 0.9 and 0.75, and the green dots to those with final accuracies less than 0.75.

Figure 6. Two examples of evolved languages (left and right) in which each question alone elicits insufficient information but the two together precisely specify the material needed.


4. discussion

Wittgenstein used the notion of a language game to illustrate how language is interwoven with action. We have shown here how a systematically interrelated whole where the agents’ words and the structure of their discourse and actions are thoroughly integrated to facilitate successful cooperative action might be forged in the context of a simple learning dynamics. This explains how a simple language game like that described by Wittgenstein might come to be.

Self-assembly is essential to the agents’ success in each of the games we have considered. The agents cannot even begin to evolve a language without first getting involved in discourse. And they cannot benefit from having evolved a language that allows for reliable communication without learning when to stop talking and use what they have learned.

Importantly, the games here illustrate only a part of the full self-assembly process. Just as the assistant learns to ask and to stop asking questions in the present games, the builder may learn to reply to the assistant’s questions rather than to remain silent. To investigate how such a feature of discourse might evolve, one would give the builder the option of not replying at all or replying with one of a set of responses and see what happens under the learning dynamics. And so on for other aspects of the agents’ evolving structure of interactions.

What ultimately drives the evolution of discourse is the ritualization of successful action under the learning dynamics of the self-assembling game. It is this that structures the interactions of the agents and determines the significance of their actions at every step of their coevolved linguistic practice. It is this that explains how the agents might come to play a language game at all. The self-assembly of increasingly subtle language games allows for richer forms of meaningful discourse. In each evolved game, one’s language and actions are inextricably interwoven.


References

[1] Alexander, J. M., B. Skyrms, and S. L. Zabell (2012). Inventing new signals. Dynamic Games and Applications, 2, 129–145.
[2] Barrett, J. A. (2020). Self-assembling games and the evolution of salience. Forthcoming in The British Journal for the Philosophy of Science.
[3] Barrett, J. A. (2007a). Dynamic partitioning and the conventionality of kinds. Philosophy of Science, 74, 527–546.
[4] Barrett, J. A. (2007b). The evolution of coding in signaling games. Theory and Decision, 67(2), 223–237.
[5] Barrett, J. A. and Brian Skyrms (2017). Self-assembling games. The British Journal for the Philosophy of Science, 68(2), 329–
[6] Barrett, J. A., Brian Skyrms, and Aydin Mohseni (2018). Self-assembling networks. The British Journal for the Philosophy of Science, 70(1), 301–325. doi 10.1093/bjps/axx
[7] Barrett, J. A., Brian Skyrms, and Calvin Cochran (2020). On the evolution of compositional language. Philosophy of Science, 87(5), 910–920.
[8] Barrett, J. A., C. T. Cochran, S. Huttegger, and N. Fujiwara (2017). Hybrid learning in signaling games. Journal of Experimental & Theoretical Artificial Intelligence, 29(5), 1–9.
[9] Barrett, J. A. and N. Gabriel (2021). Reinforcement learning with punishment. Draft.
[10] Barrett, J. A. and K. Zollman (2009). The role of forgetting in the evolution and learning of language. Journal of Experimental and Theoretical Artificial Intelligence, 21(4), 293–309.
[11] Cochran, C. T. and J. A. Barrett (2021). The efficacy of human learning in Lewis-Skyrms signaling games. Draft.
[12] Cochran, C. T. and J. A. Barrett (2020). How signaling conventions are established. Forthcoming in Synthese.
[13] Erev, I. and A. E. Roth (1998). Predicting how people play games: reinforcement learning in experimental games with unique, mixed strategy equilibria. American Economic Review, 88, 848–881.
[14] Herrnstein, R. J. (1970). On the law of effect. Journal of the Experimental Analysis of Behavior, 13, 243–266.
[15] Houser, Nathan and Christian Kloesel (eds.) (1992). The Essential Peirce: Selected Philosophical Writings, Volume 1 (1867–1893). Bloomington and Indianapolis: Indiana University Press.
[16] Lewis, David (1969). Convention. Cambridge, MA: Harvard University Press.
[17] Roth, A. E. and I. Erev (1995). Learning in extensive form games: experimental data and simple dynamical models in the intermediate term. Games and Economic Behavior, 8, 164–
[18] Skyrms, Brian (2010). Signals: Evolution, Learning, & Information. New York: Oxford University Press.
[19] Skyrms, Brian (2006). Signals. Philosophy of Science, 75(5), 489–500.
[20] Steinert-Threlkeld, Shane (2020). Toward the emergence of nontrivial compositionality. Philosophy of Science, 87(5), 897–909.
[21] Wittgenstein, Ludwig (1958). Philosophical Investigations, G. E. M. Anscombe, translator. Oxford: Basil Blackwell Ltd.