---
authors:
  - givenNames:
      - Wataru
    familyNames:
      - Toyokawa
    type: Person
    emails:
      - wataru.toyokawa@uni-konstanz.de
    affiliations:
      - name: Department of Psychology, University of Konstanz
        address:
          addressCountry: Germany
          addressLocality: Konstanz
          type: PostalAddress
        type: Organization
  - givenNames:
      - Wolfgang
    familyNames:
      - Gaissmaier
    type: Person
    affiliations:
      - name: Department of Psychology, University of Konstanz
        address:
          addressCountry: Germany
          addressLocality: Konstanz
          type: PostalAddress
        type: Organization
      - name: Centre for the Advanced Study of Collective Behaviour, University of Konstanz
        address:
          addressCountry: Germany
          addressLocality: Konstanz
          type: PostalAddress
        type: Organization
editors:
  - givenNames:
      - Mimi
    familyNames:
      - Liljeholm
    type: Person
    affiliations:
      - name: United States
        type: Organization
datePublished:
  value: '2022-05-10'
  type: Date
dateReceived:
  value: '2021-11-05'
  type: Date
dateAccepted:
  value: '2022-04-01'
  type: Date
title: >-
  Conformist social learning leads to self-organised prevention against adverse
  bias in risky decision making
description: >-
  Given the ubiquity of potentially adverse behavioural bias owing to myopic
  trial-and-error learning, it seems paradoxical that improvements in
  decision-making performance through conformist social learning, a process
  widely considered to be bias amplification, still prevail in animal collective
  behaviour. Here we show, through model analyses and large-scale interactive
  behavioural experiments with 585 human subjects, that conformist influence can
  indeed promote favourable risk taking in repeated experience-based decision
  making, even though many individuals are systematically biased towards adverse
  risk aversion. Although strong positive feedback conferred by copying the
  majority’s behaviour could result in unfavourable informational cascades, our
  differential equation model of collective behavioural dynamics identified a
  key role for increasing exploration by negative feedback arising when a weak
  minority influence undermines the inherent behavioural bias. This ‘collective
  behavioural rescue’, emerging through coordination of positive and negative
  feedback, highlights a benefit of collective learning in a broader range of
  environmental conditions than previously assumed and resolves the ostensible
  paradox of adaptive collective behavioural flexibility under conformist
  influences.
isPartOf:
  volumeNumber: '11'
  isPartOf:
    title: eLife
    issns:
      - 2050-084X
    identifiers:
      - name: nlm-ta
        propertyID: https://registry.identifiers.org/registry/nlm-ta
        value: elife
        type: PropertyValue
      - name: publisher-id
        propertyID: https://registry.identifiers.org/registry/publisher-id
        value: eLife
        type: PropertyValue
    publisher:
      name: eLife Sciences Publications, Ltd
      type: Organization
    type: Periodical
  type: PublicationVolume
licenses:
  - url: http://creativecommons.org/licenses/by/4.0/
    content:
      - content:
          - 'This article is distributed under the terms of the '
          - content:
              - Creative Commons Attribution License
            target: http://creativecommons.org/licenses/by/4.0/
            type: Link
          - >-
            , which permits unrestricted use and redistribution provided that
            the original author and source are credited.
        type: Paragraph
    type: CreativeWork
keywords:
  - social learning
  - conformity
  - reinforcement learning
  - hot stove effect
  - risky decision making
  - collective behaviour
  - Human
identifiers:
  - name: publisher-id
    propertyID: https://registry.identifiers.org/registry/publisher-id
    value: '75308'
    type: PropertyValue
  - name: doi
    propertyID: https://registry.identifiers.org/registry/doi
    value: 10.7554/eLife.75308
    type: PropertyValue
  - name: elocation-id
    propertyID: https://registry.identifiers.org/registry/elocation-id
    value: e75308
    type: PropertyValue
fundedBy:
  - identifiers:
      - value: EXC 2117 - 422037984
        type: PropertyValue
    funders:
      - name: Deutsche Forschungsgemeinschaft
        type: Organization
    type: MonetaryGrant
about:
  - name: Computational and Systems Biology
    type: DefinedTerm
  - name: Physics of Living Systems
    type: DefinedTerm
genre:
  - Research Article
meta: {}
bibliography: elife-75308.references.bib
---

# Introduction

Collective intelligence, a self-organised improvement of decision making among socially interacting individuals, has been considered one of the key evolutionary advantages of group living ([@bib33]; [@bib41]; [@bib67]; [@bib74]). Although the information each individual can access may be uncertain, information transfer through the adaptive use of social cues filters out such ‘noise’ ([@bib42]; [@bib60]), making individual behaviour on average more accurate ([@bib34]; [@bib40]; [@bib64]). Evolutionary models ([@bib14]; [@bib38]; [@bib39]) and empirical evidence ([@bib71]; [@bib73]) have both shown that the benefit brought by the balanced use of both socially and individually acquired information is usually larger than the cost of possibly creating an alignment of suboptimal behaviour among individuals by herding ([@bib11]; [@bib29]; [@bib57]). This prediction holds as long as individual trial-and-error learning leads to higher accuracy than merely random decision making ([@bib26]). Copying a common behaviour exhibited by many others is adaptive if the output of these individuals is expected to be better than uninformed decisions.

However, both humans and non-human animals suffer not only from environmental noise but also commonly from systematic biases in their decision making (e.g. [@bib32]; [@bib35]; [@bib58]; [@bib59]). Under such circumstances, simply aggregating individual inputs does not guarantee collective intelligence because a majority of the group may be biased towards suboptimization. A prominent example of such a potentially suboptimal bias is risk aversion that emerges through trial-and-error learning with adaptive information-sampling behaviour ([@bib21]; [@bib46]). Because it is a robust consequence of decision making based on learning ([@bib35]; [@bib79]; [@bib77]; [@bib46]), risk aversion can be a major constraint of animal behaviour, especially when taking a high-risk high-return behavioural option is favourable in the long run. Therefore, the ostensible prerequisite of collective intelligence, that is, that individuals should be unbiased and more accurate than mere chance, may not always hold. A theory that incorporates dynamics of trial-and-error learning and the learnt risk aversion into social learning is needed to understand the conditions under which collective intelligence operates in risky decision making.

Given that behavioural biases are omnipresent and learning animals rarely escape from them, it may seem that social learning, especially the ‘copy-the-majority’ behaviour (also known as ‘conformist social learning’ or ‘positive frequency-based copying’; [@bib42]), whereby the most common behaviour in a group is disproportionately more likely to be copied ([@bib14]), would often lead to maladaptive herding, because recursive social interactions amplify the common bias (i.e. a positive feedback loop; [@bib22]; [@bib23]; [@bib25]; [@bib57]). Previous studies in humans have indeed suggested that individual decision-making biases are transmitted through social influences ([@bib15]; [@bib8]; [@bib69]; [@bib63]; [@bib37]; [@bib51]). Nevertheless, the collective improvement of decision accuracy through simple copying processes has been widely observed across different taxa ([@bib61]; [@bib62]; [@bib1]; [@bib67]; [@bib33]), including the very species known to exhibit learnt risk-taking biases, such as bumblebees ([@bib58]; [@bib59]), honeybees ([@bib24]), and pigeons ([@bib44]). Such observations may indicate, counter-intuitively, that social learning may not necessarily trap animal groups in suboptimization even when most of the individuals are suboptimally biased.

In this paper, we propose a parsimonious computational mechanism that accounts for the emerging improvement of decision accuracy among suboptimally risk-aversive individuals. In our agent-based model, we allow our hypothetical agents to compromise between individual trial-and-error learning and the frequency-based copying process, that is, a balanced reliance on social learning that has been repeatedly supported in previous empirical studies (e.g. [@bib19]; [@bib47]; [@bib48]; [@bib72]; [@bib73]). This is a natural extension of some previous models that assumed that individual decision making was regulated fully by others’ beliefs ([@bib22]; [@bib23]). Under such extremely strong social influence, exaggeration of individual bias was always the case because information sampling was always directed towards the most popular alternative, often resulting in a mismatch between the true environmental state and what individuals believed (‘collective illusion’; [@bib23]). By allowing a mixture of social and asocial learning processes within a single individual, the emergent collective behaviour is able to remain flexible ([@bib3]; [@bib73]), which may allow groups to escape from the suboptimal behavioural state.

We focused on a repeated decision-making situation where individuals updated their beliefs about the value of behavioural alternatives through their own action–reward experiences (experience-based task). Experience-based decision making is widespread in animals that learn in a range of contexts ([@bib35]). The time-depth interaction between belief updating and decision making may create a non-linear relationship between social learning and individual behavioural biases ([@bib12]), which we hypothesised is key in improving decision accuracy in self-organised collective systems ([@bib33]; [@bib67]).

In the study reported here, we first examined whether a simple form of conformist social influence can improve collective decision performance in a simple multi-armed bandit task, using an agent-based model simulation. We found that promotion of favourable risk taking can indeed emerge across different assumptions and parameter spaces, including individual heterogeneity within a group. This phenomenon, which we call _collective behavioural rescue_, apparently arises from the non-linear effect of social interactions. To disentangle the core dynamics behind this ostensibly self-organised process, we then analysed a differential equation model representing approximate population dynamics. Combining these two theoretical approaches, we identified a combination of positive and negative feedback loops underlying collective behavioural rescue, whose key mechanism is the promotion of information sampling by modest conformist social influence.

Finally, to investigate whether the assumptions and predictions of the model hold in reality, we conducted a series of online behavioural experiments with human participants. The experimental task was basically a replication of the task used in the agent-based model described above, although the parameters of the bandit tasks were modified to explore wider task spaces beyond the simplest two-armed task. Experimental results show that the human collective behavioural pattern was consistent with the theoretical prediction, and model selection and parameter estimation suggest that our model assumptions fit well with our experimental data. In sum, we provide a general account of the robustness of collective intelligence even under systematic risk aversion and highlight a previously overlooked benefit of conformist social influence.

# Results

## The decision-making task

The minimal task that allowed us to study both learnt risk aversion and conformist social learning was a two-armed bandit task where one alternative provided certain payoffs ${\pi }_{s}$ constantly (safe option $s$) and the other alternative provided a range of payoffs stochastically, following a Gaussian distribution ${\displaystyle {\pi }_{r}\sim \mathcal{N}(\mu ,s.d.)}$ (risky option $r$; [Figure 1a](#fig1)). Unless otherwise stated, we followed the same task setup as [@bib21], who mathematically derived the condition under which individual reinforcement learners would exhibit risk aversion. In the main analysis, we focus on the case where the risky alternative had a higher mean payoff than the safe alternative (i.e. producing more payoffs on average in the long run; positive risk premium \[positive RP\]), meaning that choosing the risky alternative was the optimal strategy for a decision maker to maximise accumulated payoffs. Unless otherwise stated, the total number of decision-making trials (time horizon) was set to $T=150$ in the main simulations described below.
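For illustration, the payoff-generating process of this task can be sketched as follows (a minimal Python sketch, not the simulation code used in this study; the constants simply restate the setup above and the function name is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

PI_SAFE = 1.0    # constant payoff of the safe option, pi_s
MU_RISKY = 1.5   # mean payoff of the risky option
SD_RISKY = 1.0   # payoff s.d. of the risky option

def payoff(option: str) -> float:
    """Draw a payoff for the chosen option ('safe' or 'risky')."""
    if option == "safe":
        return PI_SAFE
    return rng.normal(MU_RISKY, SD_RISKY)

# Positive risk premium: the risky option pays 0.5 more on average, so
# consistently choosing it maximises the expected total payoff over T = 150 trials.
```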

figure: Figure 1.
:::
![](elife-75308.xml.media/fig1.jpg)

### Mitigation of suboptimal risk aversion by social influence.

(**a**) A schematic diagram of the task. A safe option provides a constant reward ${\pi }_{s}=1$ whereas a risky option provides a reward randomly drawn from a Gaussian distribution with mean $\mu =1.5$ and $\text{s.d.}=1$. (**b, c**): The emergence of suboptimal risk aversion (the hot stove effect) depending on a combination of the reinforcement learning parameters; (**b**): under no social influence (i.e. the copying weight $\sigma =0$), and (**c**): under social influences with different values of the conformity exponents $\theta$ and copying weights $\sigma$. The dashed curve is the asymptotic equilibrium at which asocial learners are expected to end up choosing the two alternatives with equal likelihood (i.e. ${P}_{r,t\to \mathrm{\infty }}=0.5$), which is given analytically by $\beta =(2-\alpha )/\alpha$ ([@bib21]). The coloured background is a result of the agent-based simulation with total trials $T=150$ and group size $N=10$, showing the average proportion of choosing the risky option in the second half of the learning trials (${P}_{r,t>75}$) under a given combination of the parameters. (**d**): The differences between the mean proportion of risk aversion of asocial learners and that of social learners, highlighting regions in which performance is improved (orange) or undermined (purple) by social learning.
:::
{#fig1}

figure: Figure 1—figure supplement 1.
:::
![](elife-75308.xml.media/fig1-figsupp1.jpg)

### The simulation result with a wider parameter space.

The effect of the relationship between individual learning rate ($\alpha$) and individual inverse temperature ($\beta$) across the different combinations of social learning parameters on the mean proportion of choosing the risky alternative in the second half of the trials of the two-armed bandit task described in [Figure 1](#fig1) in the main text. The dashed curves give a set of parameter combinations with which asocial learners are expected to choose the risky alternative in the same proportion as they choose the safe alternative (i.e. ${P}_{r}^{\star }=0.5$) in the infinite time horizon $T\to \mathrm{\infty }$, given by $\beta =(2-\alpha )/\alpha$.
:::
{#fig1s1}

figure: Figure 1—figure supplement 2.
:::
![](elife-75308.xml.media/fig1-figsupp2.jpg)

### The results of the value-shaping social influence model.

The relationships between individual learning rate ($\alpha$) and individual inverse temperature ($\beta$) across different combinations of social learning parameters. The coloured background shows the average proportion of choosing the risky option in the second half of the learning trials (${P}_{r,t>75}$). Different social learning weights (${\sigma }_{vs}$) are shown from top to bottom (${\sigma }_{vs}\in \{0,0.1,0.25,0.5,1,2\}$). Different conformity exponents are shown from left to right ($\theta \in \{0.5,1,2\}$). The dashed curve is the asymptotic equilibrium at which asocial learners are expected to end up choosing both alternatives with equal likelihood (i.e. ${P}_{r}^{\star }=0.5$), given by $\beta =(2-\alpha )/\alpha$.
:::
{#fig1s2}

figure: Figure 1—figure supplement 3.
:::
![](elife-75308.xml.media/fig1-figsupp3.jpg)

### The simulation result with the negative risk premium.

The relationships between individual learning rate ($\alpha$) and individual inverse temperature ($\beta$) across different combinations of social learning parameters. The coloured background shows the average proportion of choosing the risky option in the second half of the learning trials (${P}_{r,t>75}$). Different social learning weights ($\sigma$) are shown from top to bottom ($\sigma \in \{0,0.25,0.5,0.75,0.9\}$). Different conformity exponents are shown from left to right ($\theta \in \{1,2,4,8\}$). The risk premium is negative ($\mu =-0.5$).
:::
{#fig1s3}

figure: Figure 1—figure supplement 4.
:::
![](elife-75308.xml.media/fig1-figsupp4.jpg)

### The simulation result with the Bernoulli noise distribution.

The relationships between individual learning rate ($\alpha$) and individual inverse temperature ($\beta$) across different combinations of social learning parameters. The coloured background shows the average proportion of choosing the risky option in the second half of the learning trials (${P}_{r,t>75}$). Different social learning weights ($\sigma$) are shown from top to bottom ($\sigma \in \{0,0.2,0.4,0.6,0.8\}$). Different conformity exponents are shown from left to right ($\theta \in \{1,2,4,8\}$). A binary payoff distribution was used in which the safe alternative always provides ${\pi }_{s}=1$ while the risky alternative provides ${\pi }_{r}=0$ with a 70% chance or ${\pi }_{r}=5$ with a 30% chance; the expected payoff of the risky alternative was therefore 1.5, giving a positive risk premium.
:::
{#fig1s4}

figure: Figure 1—figure supplement 5.
:::
![](elife-75308.xml.media/fig1-figsupp5.jpg)

### The simulation results under the positive risk premium experimental setups (a,d: the 1-risky-1-safe; b,e: the 1-risky-3-safe; c,f: the 2-risky-2-safe).

The relationships between individual learning rate ($\alpha$) and individual inverse temperature ($\beta$) across different combinations of social learning parameters. (a–c): The coloured background shows the average proportion of choosing the risky option in the second half of the learning trials (${P}_{r,t>75}$) under social influences with different values of the conformity exponents $\theta$ and copying weights $\sigma$. The dashed curve is the asymptotic equilibrium at which asocial learners are expected to end up choosing the two alternatives with equal likelihood (i.e. ${P}_{r}=0.5$). (d–f): The differences between the mean proportion of risk aversion of asocial learners and that of social learners, highlighting regions in which performance is improved (that is, risk seeking increases; orange) or undermined (that is, risk aversion is amplified; purple) by social learning.
:::
{#fig1s5}

figure: Figure 1—figure supplement 6.
:::
![](elife-75308.xml.media/fig1-figsupp6.jpg)

### The simulation results under the negative risk premium experimental setup.

The relationships between individual learning rate ($\alpha$) and individual inverse temperature ($\beta$) across different combinations of social learning parameters. (left): The coloured background shows the average proportion of choosing the (optimal) safe option in the second half of the learning trials under social influences with different values of the conformity exponents $\theta$ and copying weights $\sigma$. The dashed curve shows the proportion of choosing the safe option at ${P}_{s}=0.85$. (right): The differences between the mean proportion of risk aversion of asocial learners and that of social learners, highlighting regions in which social learning increases (suboptimal) risk seeking (orange) or (optimal) risk aversion (purple).
:::
{#fig1s6}

To maximise one’s own long-term individual profit under such circumstances, it is crucial to strike the right balance between exploiting the option that has seemed better so far and exploring the other options to seek informational gain. Because of the nature of adaptive information sampling under such exploration–exploitation trade-offs, lone decision makers often end up being risk averse, trying to reduce the chance of further failures once the individual has experienced an unfavourable outcome from the risky alternative ([@bib46]; [@bib21]; [@bib35]), a phenomenon known as the _hot stove effect_. Within the framework of this task, risk aversion is suboptimal in the long run if the risk premium is positive ([@bib20]).

## The baseline model

For the baseline asocial reinforcement learning, we assumed a standard, well-established model that is a combination of the Rescorla–Wagner learning rule and softmax decision making ([@bib68], see Materials and methods for the full details). There are two parameters, a _learning rate_ ($\alpha$) and an _inverse temperature_ ($\beta$). The larger the $\alpha$, the more weight is given to recent experiences, making the agent’s belief update more myopic. The parameter $\beta$ regulates how sensitive the choice probability is to the belief about the option’s value (i.e. controlling the proneness to explore). As $\beta \to 0$, the softmax choice probability approaches a uniformly random choice (i.e. highly explorative). Conversely, as $\beta \to +\mathrm{\infty }$, it asymptotes to a deterministic choice in favour of the option with the highest subjective value (i.e. highly exploitative).
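A minimal sketch of this baseline learner, combining the Rescorla–Wagner update with softmax choice, is shown below (illustrative Python only, not the code used in the study; the zero initial belief values are an assumption made for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(q: np.ndarray, beta: float) -> np.ndarray:
    """Softmax choice probabilities; beta is the inverse temperature."""
    z = beta * (q - q.max())      # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def rescorla_wagner(q: np.ndarray, choice: int, reward: float, alpha: float) -> np.ndarray:
    """Move the chosen option's value towards the obtained reward by a step alpha."""
    q = q.copy()
    q[choice] += alpha * (reward - q[choice])
    return q

# One asocial learning step in the two-armed task (0 = safe, 1 = risky):
alpha, beta = 0.5, 7.0
q = np.zeros(2)                   # initial beliefs (an illustrative assumption)
p = softmax(q, beta)              # asocial choice probabilities A_t
choice = rng.choice(2, p=p)
reward = 1.0 if choice == 0 else rng.normal(1.5, 1.0)
q = rescorla_wagner(q, choice, reward, alpha)
```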

Varying these two parameters systematically, it is possible to see under what conditions trial-and-error learning leads individuals to be risk averse ([Figure 1b](#fig1)). Suboptimal risk aversion becomes prominent when value updating in learning is myopic (i.e. when $\alpha$ is large) or action selection is exploitative (i.e. when $\beta$ is large) or both (the blue area of [Figure 1b](#fig1)). Under such circumstances, the hot stove effect occurs ([@bib21]): Experiences of low-value payoffs from the risky option tend to discourage decision makers from further choosing the risky option, trapping them in the safe alternative. In sum, whenever the interaction between the two learning parameters $\alpha (\beta +1)$ exceeds a threshold value, which was 2 in the current example, decision makers are expected to become averse to the risky option (the black solid lines in [Figure 2](#fig2)). The hot stove effect is known to emerge in a range of model implementations and has been widely observed in previous human experiments ([@bib46]; [@bib21]; [@bib35]).
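This susceptibility condition can be written as a one-line check; the threshold value of 2 is specific to the current task setup, and the dashed curve $\beta =(2-\alpha )/\alpha$ in [Figure 1](#fig1) is this condition solved for $\beta$ at equality (illustrative sketch):

```python
def susceptible_to_hot_stove(alpha: float, beta: float, threshold: float = 2.0) -> bool:
    """True if asocial learners are expected to end up risk averse,
    i.e. if alpha * (beta + 1) exceeds the task-specific threshold (2 here)."""
    return alpha * (beta + 1.0) > threshold

print(susceptible_to_hot_stove(0.5, 7.0))   # True: 0.5 * (7 + 1) = 4 > 2
print(susceptible_to_hot_stove(0.2, 7.0))   # False: 0.2 * 8 = 1.6 < 2
```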

figure: Figure 2.
:::
![](elife-75308.xml.media/fig2.jpg)

### The effect of social learning on average decision performance.

The _x_ axis is a product of two reinforcement learning parameters $\alpha (\beta +1)$, namely, the susceptibility to the hot stove effect. The _y_ axis is the mean probability of choosing the optimal risky alternative in the last 75 trials in a two-armed bandit task whose setup was the same as in [Figure 1](#fig1). The black solid curve is the analytical prediction of the asymptotic performance of individual reinforcement learning with infinite time horizon $T\to +\mathrm{\infty }$ ([@bib21]). The analytical curve shows a choice shift emerging at $\alpha (\beta +1)=2$; that is, individual learners ultimately prefer the safe to the risky option in the current setup of the task when ${\displaystyle \alpha (\beta +1)>2}$. The dotted curves are mean results of agent-based simulations of social learners with two different mean values of the copying weight $\sigma \in \{0.25,0.5\}$ (green and yellow, respectively) and asocial learners with $\sigma =0$ (purple). The difference between the agent-based simulation with $\sigma =0$ and the analytical result was due to the finite number of decision trials in the simulation, and hence, the longer the horizon, the closer they become ([Figure 2—figure supplement 1](#fig2s1)). Each panel shows a different combination of the inverse temperature $\beta$ and the conformity exponent $\theta$.
:::
{#fig2}

figure: Figure 2—figure supplement 1.
:::
![](elife-75308.xml.media/fig2-figsupp1.jpg)

### The effect of social learning on the average decision performance on the longer time horizon.

The _x_ axis is an interaction of two reinforcement learning parameters $\alpha (\beta +1)$, that is, the susceptibility to the hot stove effect. The _y_ axis is the mean probability of choosing the optimal risky alternative in the last 75 trials in the two-armed bandit task whose setup was the same as in [Figures 1](#fig1) and [2](#fig2) in the main text (i.e. $\mu =1.5$, s.d. = 1), except for the longer time horizon $T=1075$ compared to the time horizon used in the main text ($T=150$). The dotted curves are the mean result of agent-based simulations of groups of social learners with two different mean values of the copying weight $\sigma \in \{0.25,0.5\}$ or individual learners with $\sigma =0$. Each panel shows a different combination of the inverse temperature $\beta$ and the conformity exponent $\theta$. The black solid curve is the theoretical benchmark to which individual reinforcement learners were expected to asymptote as $T\to +\mathrm{\infty }$. Compared to [Figure 2](#fig2) in the main text, individual learners got closer to the benchmark. On the other hand, the performance of social learners still deviated from the benchmark, suggesting that social influence had a qualitative impact on the course of learning and decision making, rather than merely slowing the approach to the equilibrium of individual learning.
:::
{#fig2s1}

figure: Figure 2—figure supplement 2.
:::
![](elife-75308.xml.media/fig2-figsupp2.jpg)

### The effect of social learning on the time evolution of decision performance.

The _x_ axis is the number of trials. The _y_ axis is the mean proportion of choosing the optimal risky alternative. Each colour shows a different $\beta$. For the asocial learning condition (i.e. $\sigma =0$), the analytical benchmark to which reinforcement learners asymptote is shown as a horizontal line. Conformity exponent $\theta$ was 2. Group size was 8. The simulation was repeated 1000 times for each combination of parameters. Compared to asocial learning cases, social learning ($\sigma =0.3$) qualitatively alters the course of learning, rather than just speeding up or slowing down learning.
:::
{#fig2s2}

## The conformist social influence model

We next considered a collective learning situation in which a group of multiple individuals perform the task simultaneously and individuals can observe others’ actions. We assumed a simple frequency-based social cue specifying distributions of individual choices ([@bib47]; [@bib48]; [@bib72]; [@bib73]; [@bib19]). We assumed that individuals could not observe others’ earnings, ensuring that they could not obtain information about payoffs that were no longer available to them because of their own choice (i.e. forgone payoffs; [@bib21]; [@bib78]).

A realised payoff was independent of others’ decisions and was drawn solely from the payoff probability distribution specific to each alternative (and hence no externality was assumed), thereby ensuring there would be no direct social competition over the monetary reward ([@bib28]) nor normative pressure towards majority alignment ([@bib16]; [@bib45]). The value of social information was assumed to be only informational ([@bib26]; [@bib54]). Nevertheless, our model may apply to the context of normative social influences, because what we assumed here was modification in individual choice probabilities by social influences, irrespective of underlying motivations of conformity.

To model a compromise between individual trial-and-error learning and the frequency-based copying process, we formulated the social influences on reinforcement learning as a weighted average between the asocial ($A$) and social ($S$) processes of decision making, that is, ${P}_{i,t}=(1-\sigma ){A}_{i,t}+\sigma {S}_{i,t}$, where ${P}_{i,t}$ is the individual net probability of choosing an option $i\in \{r,s\}$ at time $t$ and $\sigma$ is a weight given to the social influence (_copying weight_).

In addition, the level of social frequency dependence was determined by another social learning parameter $\theta$ (_conformity exponent_), such that ${S}_{i,t}={N}_{i,t}^{\theta }/({N}_{r,t}^{\theta }+{N}_{s,t}^{\theta })$, where ${N}_{i}$ is the number of agents who chose option $i$ (see the Materials and methods for the exact formulation). The larger the $\theta$, the more strongly the net choice probability favours the alternative chosen by the majority of the group at that moment (a conformity bias; [@bib14]). Note that there is no actual social influence when $\theta =0$, because in this case the ‘social influence’ favours a uniformly random choice, irrespective of the distribution of others’ choices.
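Putting the two equations together, the net choice probability can be sketched as follows (an illustrative sketch only; the exact definition of the observed counts ${N}_{i,t}$ follows the Materials and methods rather than this snippet):

```python
import numpy as np

def net_choice_prob(asocial_p, n_choices, sigma: float, theta: float) -> np.ndarray:
    """P = (1 - sigma) * A + sigma * S, with S_i = N_i**theta / sum_j N_j**theta.

    asocial_p : softmax probabilities from individual learning (A)
    n_choices : numbers of observed group members choosing each option (N_i)
    sigma     : copying weight
    theta     : conformity exponent
    """
    n = np.asarray(n_choices, dtype=float)
    s = n**theta / (n**theta).sum()                  # conformist social frequency S
    return (1.0 - sigma) * np.asarray(asocial_p) + sigma * s

# Example: 7 of 9 observed others chose the safe option (index 0).
# With theta = 2 the majority is weighted disproportionately strongly;
# with theta = 0 the social term is uniform (0.5, 0.5), i.e. no real social influence.
print(net_choice_prob([0.4, 0.6], [7, 2], sigma=0.3, theta=2.0))
```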

Our model is a natural extension of both the asocial reinforcement learning and the model of ‘extreme conformity’ assumed in some previous models (e.g. [@bib23]), as these conditions can be expressed as a special case of parameter combinations. We explore the implications of this extension in the Discussion. The descriptions of the parameters are summarised in [Table 1](#table1).

table: Table 1.
:::
### Summary of the learning model parameters.

| Symbol | Meaning                                | Range of the value |
| ------ | -------------------------------------- | ------------------ |
| α      | Learning rate                          | \[0, 1\]            |
| β      | Inverse temperature                    | \[0, +∞\]           |
| α(β+1) | Susceptibility to the hot stove effect |                    |
| σ      | Copying weight                         | \[0, 1\]            |
| θ      | Conformity exponent                    | \[-∞, +∞\]          |
:::
{#table1}

## The collective behavioural rescue effect

Varying these two social learning parameters, $\sigma$ and $\theta$, systematically, we observed a mitigation of suboptimal risk aversion under positive frequency-based social influences. As shown in [Figure 1c](#fig1), even with a strong conformity bias (${\theta >1}$), social influence widened the region of parameter combinations where the majority of decision makers could escape from suboptimal risk aversion (the increase of the red area in [Figure 1c](#fig1)). The increment of the area of adaptive risk seeking was greater with $\theta =1$ than with $\theta =4$. When $\theta =1$, a large copying weight ($\sigma$) could eliminate almost all the area of risk aversion ([Figure 1c](#fig1); see also [Figure 1—figure supplement 1](#fig1s1) for a greater range of parameter combinations), whereas when $\theta =4$, there was also a region in which optimal risk seeking was weakened ([Figure 1d](#fig1)). On the other hand, such substantial switching of the majority to risk seeking did not emerge in the negative risk premium (negative RP) task ([Figure 1—figure supplement 3](#fig1s3)), although there was a parameter region where the proportion of suboptimal risk seeking increased relative to that of individual learners ([Figure 1—figure supplement 6](#fig1s6)). Naturally, as the copying weight approached 1 ($\sigma \to 1$), performance eventually approached chance level in both the positive and negative RP cases ([Figure 1—figure supplement 1](#fig1s1), [Figure 1—figure supplement 3](#fig1s3)). In sum, the simulations suggest that conformist social influence widely promoted risk seeking under the positive RP, and that such a promotion of risk seeking was less evident in the negative RP task.

[Figure 2](#fig2) highlights the extent to which risk aversion was relaxed through social influences. Individuals with a positive copying weight (${\sigma >0}$) could maintain a high proportion of risk seeking even in the region of high susceptibility to the hot stove effect (${\alpha (\beta +1)>2}$). Although social learners eventually fell into a risk-averse regime with increasing $\alpha (\beta +1)$, risk aversion was largely mitigated compared to the performance of individual learners who had $\sigma =0$. Interestingly, the probability of choosing the optimal risky option was maximised at an intermediate value of $\alpha (\beta +1)$ when the conformity exponent was large ($\theta =4$) and the copying weight was high ($\sigma =0.5$).

In the region of less susceptibility to the hot stove effect (${\displaystyle \alpha (\beta +1)<2}$), social influence could enhance individual optimal risk seeking up to the theoretical benchmark expected in individual reinforcement learning with an infinite time horizon (the solid curves in [Figure 2](#fig2)). A socially induced increase in risk seeking in the region ${\displaystyle \alpha (\beta +1)<2}$ was more evident with larger $\beta$, and hence with smaller $\alpha$ to satisfy ${\displaystyle \alpha (\beta +1)<2}$. The smaller the learning rate $\alpha$, the longer it would take to achieve the asymptotic equilibrium state, due to slow value updating. Asocial learners, as well as social learners with high $\sigma$ (=0.5) coupled with high $\theta$ (=4), were still far from the analytical benchmark, whereas social learners with weak social influence $\sigma =0.25$ were nearly able to converge on the benchmark performance, suggesting that social learning might affect the speed of learning. Indeed, a longer time horizon $T=1075$ reduced the advantage of weak social learners in this ${\displaystyle \alpha (\beta +1)<2}$ region because slow learners could now achieve the benchmark accuracy ([Figure 2—figure supplement 1](#fig2s1) and [Figure 2—figure supplement 2](#fig2s2)).

Approaching the benchmark with an elongated time horizon, and the concomitant reduction in the advantage of social learners, was also found in the high susceptibility region $\alpha (\beta +1)\gg 2$ especially for those who had a high conformity exponent $\theta =4$ ([Figure 2—figure supplement 1](#fig2s1)). Notably, however, facilitation of optimal risk seeking became further evident in the other intermediate region ${\displaystyle 2<\alpha (\beta +1)<4}$. This suggests that merely speeding up or slowing down learning could not satisfactorily account for the qualitative ‘choice shift’ emerging through social influences.

We obtained similar results across different settings of the multi-armed bandit task, such as a skewed payoff distribution in which either large or small payoffs were randomly drawn from a Bernoulli process ([@bib46]; [@bib21], [Figure 1—figure supplement 4](#fig1s4)) and increased option numbers ([Figure 1—figure supplement 5](#fig1s5)). Further, the conclusion still held for an alternative model in which social influences modified the belief-updating process (the value-shaping model; [@bib53]) rather than directly influencing the choice probability (the decision-biasing model) as assumed in the main text thus far (see Supplementary Methods; [Figure 1—figure supplement 2](#fig1s2)). One could posit many other, more complex social learning processes that may operate in reality; however, a comprehensive search of the possible model space is beyond the scope of the current study. Nevertheless, decision biasing fitted our experimental behavioural data better than value shaping ([Figure 6—figure supplement 2](#fig6s2)), leading us to focus our analysis on the decision-biasing model.

## The robustness of individual heterogeneity

We have thus far assumed no parameter variations across individuals in a group, to focus on the qualitative differences between social and asocial learners’ behaviour. However, individual differences in development, state, or experience, or variations in behaviour caused by personality traits, might either facilitate or undermine collective decision performance. Especially if a group is composed of both types of individuals, that is, those who are less susceptible to the hot stove effect (${\alpha (\beta +1)<2}$) and those who are more susceptible (${\alpha (\beta +1)>2}$), it remains unclear who benefits from the rescue effect: Is it only those individuals with ${\alpha (\beta +1)>2}$ who enjoy the benefit, or can collective intelligence benefit a group as a whole? For the sake of simplicity, here we considered groups of five individuals, which were composed of either homogeneous (yellow in [Figure 3](#fig3)) or heterogeneous (green, blue, purple in [Figure 3](#fig3)) individuals. Individual values of a focal behavioural parameter were varied across individuals in a group; other, non-focal parameters were identical across individuals within a group. The basic parameter values assigned to non-focal parameters were $\alpha =0.5$, $\beta =7$, $\sigma =0.3$, and $\theta =2$, which were chosen so that the homogeneous group could generate the collective rescue effect. The groups’ mean values of the various focal parameters were matched to these basic values.

figure: Figure 3.
:::
![](elife-75308.xml.media/fig3.jpg)

### The effect of individual heterogeneity on the proportion of choosing the risky option in the two-armed bandit task.

(**a**) The effect of heterogeneity of $\alpha$, (**b**) $\beta$, (**c**) $\sigma$, and (**d**) $\theta$. Individual values of a focal behavioural parameter were varied across individuals in a group of five. Other non-focal parameters were identical across individuals within a group. The basic parameter values assigned to non-focal parameters were $\alpha =0.5$, $\beta =7$, $\sigma =0.3$, and $\theta =2$, and groups’ mean values of the various focal parameters were matched to these basic values. We simulated three different heterogeneous compositions: the majority (3 of 5 individuals) potentially suffered the hot stove effect (${\alpha }_{i}({\beta }_{i}+1)>2$) (**a, b**) or had the highest diversity in social learning parameters (**c, d**; purple); the majority were able to overcome the hot stove effect (${\alpha }_{i}({\beta }_{i}+1)<2$) (**a, b**) or had moderate heterogeneity in the social learning parameters (**c, d**; blue); and all individuals had ${\alpha }_{i}({\beta }_{i}+1)>2$ but smaller heterogeneity (green). The yellow diamond shows the homogeneous groups’ performance. Lines are drawn through average results across the same compositional groups. Each round dot represents a group member’s mean performance. The diamonds are the average performance of each group for each composition category. For comparison, asocial learners’ performance, with which the performance of social learners can be evaluated, is shown in grey. For heterogeneous $\alpha$ and $\beta$, the analytical solution of asocial learning performance is shown as a solid-line curve. We ran 20,000 replications for each group composition.
:::
{#fig3}

[Figure 3a](#fig3) shows the effect of heterogeneity in the learning rate ($\alpha$). Heterogeneous groups performed better on average than a homogeneous group (represented by the yellow diamond). The heterogeneous groups owed this overall improvement to the large rescue effect operating for individuals who had a high susceptibility to the hot stove effect ($\alpha (\beta +1)\gg 2$). On the other hand, the performance of less susceptible individuals (${\displaystyle \alpha (\beta +1)<2}$) was slightly undermined compared to the asocial benchmark performance shown in grey. Notably, however, how large the detrimental effect was for the low-susceptibility individuals depended on the group’s composition: The undermining effect was largely mitigated when low-susceptibility individuals (${\displaystyle \alpha (\beta +1)<2}$) made up a majority of a group (3 of 5; the blue line), whereas they performed worse than the asocial benchmark when the majority were those with high susceptibility (purple).

The advantage of a heterogeneous group was also found for the inverse temperature ($\beta$), although the impact of the group’s heterogeneity was much smaller than that for $\alpha$ ([Figure 3b](#fig3)). Interestingly, no detrimental effect for individuals with ${\displaystyle \alpha (\beta +1)<2}$ was found in association with the $\beta$ variations.

On the other hand, individual variations in the copying weight ($\sigma$) had an overall detrimental effect on collective performance, although individuals in the highest diversity group could still perform better than the asocial learners ([Figure 3c](#fig3)). Individuals who had an intermediate level of $\sigma$ achieved relatively higher performance within the group than those who had either higher or lower $\sigma$. This was because individuals with lower $\sigma$ could benefit less from social information, while those with higher $\sigma$ relied so heavily on social frequency information that behaviour was barely informed by individual learning, resulting in maladaptive herding or collective illusion ([@bib23]; [@bib73]). As a result, the average performance decreased with increasing diversity in $\sigma$.

Such a substantial effect of individual differences was not observed in the conformity exponent $\theta$ ([Figure 3d](#fig3)), where individual performance was almost stable regardless of whether the individual was heavily conformist (${\theta }_{i}=8$) or even negatively dependent on social information (${\theta }_{i}=-1$). The existence of a few conformists in a group could not itself trigger positive feedback among the group unless other individuals also relied on social information in a conformist-biased way, because the flexible behaviour of non-conformists could keep the group’s distribution nearly flat (i.e. ${N}_{s}\approx {N}_{r}$). Therefore, the existence of individuals with small $\theta$ in a heterogeneous group could prevent the strong positive feedback from being immediately elicited, compensating for the potential detrimental effect of maladaptive herding by strong conformists.

Overall, the relaxation of, and possibly the complete rescue from, suboptimal risk aversion in repeated risky decision making emerged in a range of conditions in collective learning. It was not likely a mere speeding up or slowing down of the learning process ([Figure 2—figure supplement 1](#fig2s1) and [Figure 2—figure supplement 2](#fig2s2)), nor merely an averaging of the performances of risk-seeking and risk-averse individuals ([Figure 3](#fig3)). It depended neither on specific characteristics of the social learning model ([Figure 1—figure supplement 2](#fig1s2)) nor on the profile of the bandit task’s setups ([Figure 1—figure supplement 4](#fig1s4)). Instead, our simulation suggests that self-organisation may play a key role in this emergent phenomenon. To seek a general mechanism underlying the observed collective behavioural rescue, in the next section we present a reduced, approximated differential equation model that can provide qualitative insights into the collective decision-making dynamics observed above.

## The simplified population dynamics model

To obtain a qualitative understanding of self-organisation that seems responsible for the pattern of adaptive behavioural shift observed in our individual-based simulation, we made a reduced model that approximates temporal changes of behaviour of an ‘average’ individual, or in other words, average dynamics of a population of multiple individuals, where the computational details of reinforcement learning were purposely ignored. Such a dynamic modelling approach has been commonly used in population ecology and collective animal behaviour research and has proven highly useful in disentangling the factors underlying complex systems (e.g. [@bib9]; [@bib30]; [@bib62]; [@bib66]; [@bib33]).

Specifically, we considered a differential equation that focuses only on increases and decreases in the number of individuals who are choosing the risky option (${N}_{R}$) and the safe option (${N}_{S}$) with either a positive (+) or a negative (-) ‘attitude’ (or preference) towards the risky option ([Figure 4a](#fig4)). The part of the population that has a positive attitude (${N}_{S}^{+}$ and ${N}_{R}^{+}$) is more likely to move on to, and stay at, the risky option, whereas the other part of the population that has a negative attitude (${N}_{S}^{-}$ and ${N}_{R}^{-}$) is more likely to move on to, and stay at, the safe option. Note that movements in the opposite direction also exist, such as moving on to the risky option while holding a negative attitude (${P}_{R}^{-}$), but at a lower rate than ${P}_{S}^{-}$, as depicted by the thickness of the arrows in [Figure 4a](#fig4). We defined the probability of moving towards the option matching one’s attitude (${P}_{S}^{-}={P}_{R}^{+}={p}_{h}$) to be higher than that of moving in the opposite direction (${P}_{R}^{-}={P}_{S}^{+}={p}_{l}$), that is, ${p}_{h}>{p}_{l}$. The probabilities ${p}_{l}$ and ${p}_{h}$ can be seen approximately as the per capita rates of exploration and exploitation, respectively.

figure: Figure 4.
:::
![](elife-75308.xml.media/fig4.jpg)

### The population dynamics model.

(**a**) A schematic diagram of the dynamics. Solid arrows represent a change in population density between connected states at a time step. The thicker the arrow, the larger the per-capita rate of behavioural change. (**b, c**) The results of the asocial, baseline model where ${P}_{S}^{-}={P}_{R}^{+}={p}_{h}$ and ${P}_{R}^{-}={P}_{S}^{+}={p}_{l}$ (${p}_{h}>{p}_{l}$). Both figures show the equilibrium bias towards risk seeking (i.e. ${N}_{R}^{\star }-{N}_{S}^{\star }$) as a function of the degree of risk premium $e$ as well as of the per-capita probability of moving to the less preferred behavioural option ${p}_{l}$. (**b**) The explicit form of the curve is given by $-n({p}_{h}-{p}_{l})\left\{(1-e){p}_{h}-e{p}_{l}\right\}/\left[({p}_{h}+{p}_{l})\left\{(1-e){p}_{h}+e{p}_{l}\right\}\right]$. (**c**) The dashed curve is the analytically derived neutral equilibrium of the asocial system that results in ${N}_{R}^{\star }={N}_{S}^{\star }$, given by $e={p}_{h}/({p}_{h}+{p}_{l})$. (**d**) The equilibrium of the collective behavioural dynamics with social influences. The numerical results were obtained with ${N}_{S,t=0}^{-}={N}_{S,t=0}^{+}=5$, ${N}_{R,t=0}=10$, and ${p}_{h}=0.7$.
:::
{#fig4}

figure: Figure 4—figure supplement 1.
:::
![](elife-75308.xml.media/fig4-figsupp1.jpg)

### The result of the differential equation model.

The effect of both the per capita probability of exploration ${p}_{l}$ and $e$ (i.e. the proportion of risk-taking individuals who prefer behavioural state $R$) on the equilibrium degree of risk seeking (i.e. ${N}_{R}^{\star }-{N}_{S}^{\star }$), across the different combinations of social influence parameters. Different social influence weights are shown from top to bottom ($\sigma \in \{0,0.25,0.5,0.75\}$). Different conformity exponents are shown from left to right ($\theta \in \{1,2,10\}$). The dashed curve is $e={p}_{h}/({p}_{h}+{p}_{l})$. The numerical solution was obtained with conditions ${N}_{S,t=0}^{-}={N}_{S,t=0}^{+}=5$, ${N}_{R,t=0}=10$, and ${p}_{h}=0.7$.
:::
{#fig4s1}

An attitude can change when the risky option is chosen. We assumed that a proportion $e$ ($0\le e\le 1$) of the risk-taking part of the population would have a good experience, thereby holding a positive attitude (i.e. ${N}_{R}^{+}=e{N}_{R}$). On the other hand, the rest of the risk-taking population would have a negative attitude (i.e. ${N}_{R}^{-}=(1-e){N}_{R}$). This proportion $e$ can be interpreted as an approximation of the risk premium under the Gaussian payoff noise, because the larger $e$ is, the more individuals are expected to have a better experience with the risky option than they would with the safe choice. The full details are shown in the Materials and methods ([Table 2](#table2)).

table: Table 2.
:::
### Summary of the differential equation model parameters.

| Symbol        | Meaning                                                     | Range of the value                      |
| ------------- | ----------------------------------------------------------- | --------------------------------------- |
| ${N}_{R}^{+}$ | Density of individuals choosing $R$ and preferring $R$      | ${N}_{R}^{+}=e{N}_{R}$                  |
| ${N}_{R}^{-}$ | Density of individuals choosing $R$ and preferring $S$      | ${N}_{R}^{-}=(1-e){N}_{R}$              |
| ${N}_{S}^{+}$ | Density of individuals choosing $S$ and preferring $R$      |                                         |
| ${N}_{S}^{-}$ | Density of individuals choosing $S$ and preferring $S$      |                                         |
| ${p}_{l}$     | Per capita rate of moving to the unfavourable option        | $0\le {p}_{l}\le {p}_{h}\le 1$          |
| ${p}_{h}$     | Per capita rate of moving to the favourable option          | $0\le {p}_{l}\le {p}_{h}\le 1$          |
| $e$           | Per capita rate of becoming enchanted with the risky option | $[0,1]$                                 |
| $\sigma$      | Social influence weight                                     | $[0,1]$                                 |
| $\theta$      | Conformity exponent                                         | $[-\mathrm{\infty },+\mathrm{\infty }]$ |
:::
{#table2}

To confirm that this approximated model can successfully replicate the fundamental property of the hot stove effect, we first describe the asocial behavioural model without social influence. The baseline, asocial dynamic system has a locally stable non-trivial equilibrium that gives ${N}_{S}^{\star }\ge 0$ and ${N}_{R}^{\star }\ge 0$, where ${N}^{\star }$ means the equilibrium density at which the system stops changing ($d{N}_{S}^{\star }/dt=d{N}_{R}^{\star }/dt=0$). At equilibrium, the ratio between the number of individuals choosing the safe option $S$ and the number choosing the risky option $R$ is given by ${N}_{S}^{\star }:{N}_{R}^{\star }=e({p}_{l}/{p}_{h})+(1-e)({p}_{h}/{p}_{l}):1$, indicating that risk aversion (defined as the case where a larger part of the population chooses the safe option; ${\displaystyle {N}_{S}^{\star }>{N}_{R}^{\star }}$) emerges when the inequality ${\displaystyle e<{P}_{S}^{-}/({P}_{S}^{-}+{P}_{R}^{-})={p}_{h}/({p}_{h}+{p}_{l})}$ holds.
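Both the equilibrium ratio and this risk-aversion condition can be checked numerically; the following sketch uses the values ${p}_{h}=0.7$ and ${p}_{l}=0.2$ that also appear in the figures below:

```python
def equilibrium_ratio(e: float, p_l: float, p_h: float) -> float:
    """N_S* / N_R* at the asocial equilibrium."""
    return e * (p_l / p_h) + (1.0 - e) * (p_h / p_l)

e, p_l, p_h = 0.6, 0.2, 0.7
print(equilibrium_ratio(e, p_l, p_h) > 1.0)   # True: risk aversion despite e > 1/2
print(e < p_h / (p_h + p_l))                  # True: the equivalent condition e < p_h/(p_h + p_l)
```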

[Figure 4b](#fig4) visually shows that the population is indeed attracted to the safe option $S$ (that is, ${\displaystyle {N}_{S}^{\star }>{N}_{R}^{\star }}$) in a wide range of the parameter region even when there is a positive ‘risk premium’ defined as ${\displaystyle e>1/2}$. Although individuals choosing the risky option are more likely to become enchanted with the risky option than to be disappointed (i.e., ${\displaystyle e{N}_{R}={N}_{R}^{+}>(1-e){N}_{R}={N}_{R}^{-}}$), the risk-seeking equilibrium (defined as ${\displaystyle {N}_{S}^{\star }<{N}_{R}^{\star }}$) becomes less likely to emerge as the exploration rate ${p}_{l}$ decreases, consistent with the hot stove effect caused by asymmetric adaptive sampling ([@bib21]). Risk seeking never emerges when $e\le 1/2$, which is also consistent with the results of reinforcement learning.

This dynamics model provides an illustrative understanding of how the asymmetry of adaptive sampling causes the hot stove effect. Consider the case of high inequality between exploitation (${p}_{h}$) and exploration (${p}_{l}$), namely, ${p}_{h}\gg {p}_{l}$. Under such a condition, the state ${S}^{-}$, that is, choosing the safe option with the negative inner attitude (-), becomes a ‘dead end’ from which individuals can seldom escape once entered. However, if the inequality between ${p}_{h}$ and ${p}_{l}$ is not so large, so that a substantial fraction of the population comes back to ${R}^{-}$ from ${S}^{-}$, the increasing number of people belonging to ${R}^{+}$ (that is, ${N}_{R}^{+}$) could eventually exceed the number of people ‘spilling out’ into ${S}^{-}$. This illustrative analysis shows that the hot stove effect can be overcome if the number of people who get stuck in the dead end ${S}^{-}$ can somehow be reduced, which is possible if one can increase the ‘come-backs’ to ${R}^{-}$. In other words, if any mechanism can increase ${P}_{R}^{-}$ relative to ${P}_{S}^{-}$, the hot stove effect should be overcome.

Next, we assumed a frequency-dependent reliance on social information operating in this population dynamics. Specifically, we considered that the net per capita probability of choosing each option, $P$, is composed of a weighted average between the asocial baseline probability ($p$) and the social frequency influence ($F$), namely, $P=(1-\sigma )p+\sigma F$. Again, $\sigma$ is the weight of social influence, and we also assumed that there would be the conformity exponent $\theta$ in the social frequency influence $F$ such that $F={N}_{i}^{\theta }/({N}_{S}^{\theta }+{N}_{R}^{\theta })$ where $i\in \{S,R\}$ (see Materials and methods).
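The exact differential equations are given in the Materials and methods. As an illustration only, the following discrete-time sketch approximates the dynamics described above (per capita transitions at rates ${p}_{h}$ and ${p}_{l}$, instantaneous sorting of attitudes among risk takers by $e$, and the socially weighted probabilities $P=(1-\sigma )p+\sigma F$); with $\sigma =0$ it converges to the asocial equilibrium ratio derived above:

```python
def social_P(p_asocial: float, n_target: float, n_other: float,
             sigma: float, theta: float) -> float:
    """Net per-capita transition probability towards the target option:
    P = (1 - sigma) * p + sigma * F, with F the conformist frequency of the target."""
    f = n_target**theta / (n_target**theta + n_other**theta)
    return (1.0 - sigma) * p_asocial + sigma * f

def simulate(e=0.7, p_l=0.2, p_h=0.7, sigma=0.3, theta=2.0,
             n_r0=10.0, n_total=20.0, steps=20_000, dt=0.05):
    """Iterate the approximate population dynamics; returns (N_S*, N_R*)."""
    n_r = n_r0
    n_s_plus = n_s_minus = (n_total - n_r0) / 2.0
    for _ in range(steps):
        n_s = n_s_plus + n_s_minus
        n_r_plus, n_r_minus = e * n_r, (1.0 - e) * n_r      # N_R^+ = e*N_R, N_R^- = (1-e)*N_R
        # per-capita probabilities of moving towards R or S under social influence
        P_R_plus  = social_P(p_h, n_r, n_s, sigma, theta)   # R-preferring individuals -> risky
        P_R_minus = social_P(p_l, n_r, n_s, sigma, theta)   # S-preferring individuals -> risky
        P_S_plus  = social_P(p_l, n_s, n_r, sigma, theta)   # R-preferring individuals -> safe
        P_S_minus = social_P(p_h, n_s, n_r, sigma, theta)   # S-preferring individuals -> safe
        # flows between the safe and risky behavioural states
        to_r       = P_R_plus * n_s_plus + P_R_minus * n_s_minus
        to_s_plus  = P_S_plus * n_r_plus                    # leave R while preferring R
        to_s_minus = P_S_minus * n_r_minus                  # leave R while preferring S
        n_r       += dt * (to_r - to_s_plus - to_s_minus)
        n_s_plus  += dt * (to_s_plus - P_R_plus * n_s_plus)
        n_s_minus += dt * (to_s_minus - P_R_minus * n_s_minus)
    return n_s_plus + n_s_minus, n_r

print(simulate(sigma=0.0))   # N_S* > N_R*: risk aversion despite a positive risk premium (e = 0.7)
print(simulate(sigma=0.3))   # N_R* > N_S*: a moderate conformist influence flips the majority to risk
```

With a moderate copying weight and $\theta =2$, this sketch settles on a risk-seeking majority, in line with the numerical results reported below.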

Through numerical analyses, we have confirmed that social influence can indeed increase the flow-back rate ${P}_{R}^{-}$, which raises the possibility of a risk-seeking equilibrium ${N}_{R}^{\star }>{N}_{S}^{\star }$ ([Figure 4d](#fig4); see [Figure 4—figure supplement 1](#fig4s1) for a wider parameter region). For an approximation of the bifurcation analysis, we recorded the equilibrium density of the risky state ${N}_{R}^{\star }$ starting from various initial population distributions (that is, varying ${N}_{R,t=0}$ and ${N}_{S,t=0}=20-{N}_{R,t=0}$). [Figure 5](#fig5) shows the conditions under which the system ends up in the risk-seeking equilibrium. When the conformity exponent $\theta$ is not too large (${\theta <10}$), there is a region in which risk seeking is the unique equilibrium irrespective of the initial distribution, attracting the population even from an extremely biased initial distribution such as ${N}_{R,t=0}=0$ ([Figure 5](#fig5)).

figure: Figure 5.
:::
![](elife-75308.xml.media/fig5.jpg)

### The approximate bifurcation analysis.

The relationships between the social influence weight $\sigma$ and the equilibrium number of individuals in the risky behavioural state ${N}_{R}^{\star }$ across different conformity exponents $\theta \in \{0,1,2,10\}$ and different values of risk premium $e\in \{0.55,0.65,0.7,0.75\}$, are shown as black dots. The background colours indicate regions where the system approaches either risk aversion (${\displaystyle {N}_{R}^{\star }<{N}_{S}^{\star }}$; blue) or risk seeking (${\displaystyle {N}_{R}^{\star }>{N}_{S}^{\star }}$; red). The horizontal dashed line is ${N}_{R}={N}_{S}=10$. Two locally stable equilibria emerge when $\theta \ge 2$, which suggests that the system has a bifurcation when $\sigma$ is sufficiently large. The other parameters are set to ${p}_{h}=0.7$, ${p}_{l}=0.2$, and $N=20$.
:::
{#fig5}

figure: Figure 5—figure supplement 1.
:::
![](elife-75308.xml.media/fig5-figsupp1.jpg)

### The approximate bifurcation analysis.

The relationship between the social influence weight $\sigma$ and the equilibrium number of individuals choosing the risky alternative ${N}_{R}^{\star }$ across the different conformity exponents $\theta \in \{0,1,2,10\}$, shown as black dots. The triangular points shown in the background of each panel indicate regions in which the group approaches risk aversion (i.e. ${N}_{R}^{\star }<10$; blue) or the risk-seeking equilibrium (i.e. ${N}_{R}^{\star }>10$; red). Two different equilibria mean that the system has a bifurcation under a given $\sigma$. The direction of the background triangles indicates whether ${N}_{R}$ increases ($\mathrm{\Delta }$) or decreases ($\mathrm{\nabla }$) relative to its starting position. The other parameters are set to ${p}_{h}=0.7$, ${p}_{l}=0.2$.
:::
{#fig5s1}

Under the conformist bias $\theta \ge 2$, two locally stable equilibria exist. Strong positive feedback dominates the system when both $\sigma$ and $\theta$ are large. Therefore, the system can end up in either of the equilibria depending solely on the initial density distribution, consistent with the conventional view of herding ([@bib23]; [@bib73]). This is also consistent with a well-known result of collective foraging by pheromone trail ants, which react to social information in a conformity-like manner ([@bib9]; [@bib33]).

Notably, however, even with a positive conformist bias, such as $\theta =2$, there is a region with a moderate value of $\sigma$ where risk seeking remains the unique equilibrium when the risk premium is high ($e\ge 0.7$). In this regime, the benefit of collective behavioural rescue can dominate without any possibility of maladaptive herding.

It is worth noting that in the case of $\theta =0$, where individuals simply make a random choice at rate $\sigma$, risk aversion is also relaxed ([Figure 5](#fig5), the leftmost column), and the adaptive risky shift even emerges around ${\displaystyle 0.25<\sigma <1}$. However, this ostensible behavioural rescue is due solely to the additional random exploration, which reduces ${P}_{S}^{-}/({P}_{S}^{-}+{P}_{R}^{-})$ and thereby mitigates the stickiness of the dead-end state ${S}^{-}$. When $\sigma \to 1$ with $\theta =0$, therefore, the risky shift eventually disappears because individuals choose between $S$ and $R$ almost at random.

However, the collective risky shift observed under ${\displaystyle \theta >0}$ cannot be explained by the mere addition of exploration. A weak conformist bias (i.e. a linear response to the social frequency; $\theta =1$) monotonically increases the equilibrium density ${N}_{R}^{\star }$ with increasing social influence $\sigma$, going beyond the level of risky shift achieved by the addition of random choice ([Figure 5](#fig5)). Therefore, although the collective rescue may indeed owe part of the mitigation of the hot stove effect to increased exploration, the further enhancement of risk seeking cannot be explained by exploration alone.

The key is the interaction between negative and positive feedback. As discussed above, risk aversion is reduced if the ratio ${P}_{S}^{-}/({P}_{S}^{-}+{P}_{R}^{-})$ decreases, either by increasing ${P}_{R}^{-}$ or by reducing ${P}_{S}^{-}$. The per capita probability of choosing the safe option with the negative attitude, that is, ${P}_{S}^{-}=(1-\sigma ){p}_{h}+\sigma {N}_{S}^{\theta }/({N}_{R}^{\theta }+{N}_{S}^{\theta })$, becomes smaller than the baseline exploitation probability ${p}_{h}$ when ${\displaystyle {N}_{S}^{\theta }/({N}_{R}^{\theta }+{N}_{S}^{\theta })<{p}_{h}}$. Even though the majority of the population may still choose the safe alternative, and hence ${\displaystyle {N}_{S}>{N}_{R}}$, the inequality ${\displaystyle {N}_{S}^{\theta }/({N}_{R}^{\theta }+{N}_{S}^{\theta })<{p}_{h}}$ can nevertheless hold if $\theta$ is sufficiently small. Crucially, the reduction of ${\displaystyle {P}_{S}^{-}}$ leads to a further reduction of ${P}_{S}^{-}$ itself by decreasing ${\displaystyle {N}_{S}^{-}}$, thereby further weakening the social influence supporting the safe option. Such a negative feedback process undermines the concomitant risk aversion. Naturally, this negative feedback is maximised at $\theta =0$.
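
For illustration (a hypothetical numerical example, with ${p}_{h}=0.7$ as in the figures), suppose $N=20$ and a clear majority of ${N}_{S}=14$ individuals currently choose the safe option. With a weak conformity exponent such as $\theta =0.5$,

$$
\frac{{N}_{S}^{\theta }}{{N}_{R}^{\theta }+{N}_{S}^{\theta }}=\frac{14^{0.5}}{6^{0.5}+14^{0.5}}\approx 0.60<{p}_{h}=0.7,
$$

so ${P}_{S}^{-}=(1-\sigma ){p}_{h}+\sigma \cdot 0.60$ falls below ${p}_{h}$ for any ${\displaystyle \sigma >0}$, even though the safe option is still the majority choice.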

Once the negative feedback has weakened the underlying risk aversion, the majority of the population eventually choose the risky option, an effect evident in the case of $\theta =0$ ([Figure 5](#fig5)). What uniquely operates when ${\displaystyle \theta >0}$ is that, because ${N}_{R}$ is now the majority, positive feedback kicks in. Thanks to the conformist bias, the inequality ${\displaystyle {N}_{R}>{N}_{S}}$ is further _amplified_. In this phase, the larger the $\theta$, the stronger the concomitant relationship ${N}_{S}^{\theta }/({N}_{R}^{\theta }+{N}_{S}^{\theta })\ll {p}_{h}$. Such positive feedback never operates when $\theta \le 0$.

In conclusion, it is the synergy of negative and positive feedback that explains the full range of the adaptive risky shift. Neither positive nor negative feedback alone can account for both the accuracy and the flexibility emerging through collective learning and decision making. The results are qualitatively unchanged across a range of different combinations of $e$, ${p}_{l}$, and ${p}_{h}$ ([Figure 4—figure supplement 1](#fig4s1) and [Figure 5—figure supplement 1](#fig5s1)). It is worth noting that when ${\displaystyle e<0.5}$, this social frequency-dependent population tends to exhibit risk aversion ([Figure 5—figure supplement 1](#fig5s1)), consistent with the result of the agent-based simulation for the case where the mean payoff of the risky option was smaller than that of the safe option ([Figure 1—figure supplement 3](#fig1s3)). Therefore, the system does not mindlessly prefer risk seeking; it becomes risk prone only when doing so is favourable in the long run.

## An experimental demonstration

One hundred eighty-five adult human subjects performed the individual task without social interactions, while 400 subjects performed the task collectively with group sizes ranging from 2 to 8. We confirmed that the model predictions were qualitatively unchanged across the experimental settings used in the online experiments ([Figure 1—figure supplement 5](#fig1s5)).

We used four different task settings. Three of them were positive risk premium (positive RP) tasks that had an optimal risky alternative, while the other was a negative risk premium (negative RP) task that had a suboptimal risky alternative. On the basis of both the agent-based simulation ([Figure 1](#fig1) and [Figure 1—figure supplement 3](#fig1s3)) and the population dynamics ([Figure 5](#fig5) and [Figure 5—figure supplement 1](#fig5s1)), we hypothesised that conformist social influence promotes risk seeking to a lesser extent when the RP is negative than when it is positive. We also expected that whether the collective rescue effect emerges under positive RP settings depends on learning parameters such as ${\displaystyle {\alpha }_{i}({\beta }_{i}+1)}$ ([Figure 1—figure supplement 5d-f](#fig1s5)).

The Bayesian model comparison ([@bib65]) revealed that participants in the group condition were more likely to employ decision-biasing social learning than either asocial reinforcement learning or the value-shaping process ([Figure 6—figure supplement 2](#fig6s2)). Therefore, in the following analysis, we focus on results obtained from the decision-biasing model fit. Individual parameters were estimated using a hierarchical Bayesian method whose performance was validated by a parameter recovery test ([Figure 6—figure supplement 3](#fig6s3)).

Parameter estimation ([Table 3](#table3)) showed that individuals in the group condition across all four tasks were likely to use social information in their decision making at a rate ranging between 4% and 18% (Mean $\sigma$; [Table 3](#table3)), and that mean posterior values of $\theta$ were above 1 for all four tasks. These suggest that participants were likely to use a mix of individual reinforcement learning and conformist social learning.

table: Table 3.
:::
### Means and 95% Bayesian credible intervals (shown in square brackets) of the global parameters of the learning model.

The group condition and individual condition are shown separately. All parameters satisfied the Gelman–Rubin criterion ${\displaystyle \hat{R}<1.01}$. All estimates are based on over 500 effective samples from the posterior.

| Task category | Positive risk premium (positive RP) |                    |                    | Negative risk premium (negative RP) |
| ------------- | ----------------------------------- | ----------------------------------- | ------------------ | ------------------ |
| Task          | 1-risky-1-safe                      | 1-risky-3-safe                      | 2-risky-2-safe     | 1-risky-1-safe     |
| Group         | n = 123                             | n = 97                              | n = 87             | n = 93             |
| μ~logitα~     | –2.2 \[-2.8,–1.5\]                   | –1.8 \[-2.3,–1.4\]                   | –1.7 \[-2.1,–1.3\]  | –0.09 \[-0.7, 0.6\] |
| (Mean α)      | 0.10 \[0.06, 0.18\]                  | 0.14 \[0.09, 0.20\]                  | 0.15 \[0.11, 0.21\] | 0.48 \[0.3, 0.6\]   |
| μ~logitβ~     | 1.4 \[1.1, 1.6\]                     | 1.5 \[1.3, 1.8\]                     | 1.3 \[1.0, 1.5\]    | 1.2 \[1.0, 1.5\]    |
| (Mean β)      | 4.1 \[3.0, 5.0\]                     | 4.5 \[3.7, 6.0\]                     | 3.7 \[2.7, 4.5\]    | 3.3 \[2.7, 4.5\]    |
| μ~logitσ~     | –2.4 \[-3.1,–1.8\]                   | –2.1 \[-2.6,–1.6\]                   | –2.1 \[-2.5,–1.7\]  | –2.0 \[-2.7,–1.5\]  |
| (Mean σ)      | 0.08 \[0.04, 0.14\]                  | 0.11 \[0.07, 0.17\]                  | 0.11 \[0.08, 0.15\] | 0.12 \[0.06, 0.18\] |
| μ~θ~ = mean θ | 1.4 \[0.58, 2.3\]                    | 1.6 \[0.9, 2.4\]                     | 1.8 \[1.0, 2.9\]    | 1.6 \[0.9, 2.3\]    |
| Individual    | n = 45                              | n = 51                              | n = 64             | n = 25             |
| μ~logitα~     | –2.1 \[-3.1,–0.87\]                  | –2.1 \[-2.6,–1.6\]                   | –1.3 \[-2.1,–0.50\] | –1.3 \[-2.2,–0.4\]  |
| (Mean α)      | 0.11 \[0.04, 0.30\]                  | 0.11 \[0.07, 0.17\]                  | 0.21 \[0.11, 0.38\] | 0.2 \[0.1, 0.4\]    |
| μ~logitβ~     | 0.42 \[-0.43, 1.1\]                  | 0.91 \[0.63, 1.2\]                   | 0.76 \[0.42, 1.1\]  | 1.2 \[0.9, 1.4\]    |
| (Mean β)      | 1.5 \[0.65, 3.0\]                    | 2.5 \[1.9, 3.3\]                     | 2.1 \[1.5, 3.0\]    | 3.3 \[2.5, 4.1\]    |
:::
{#table3}

To address whether the behavioural data are well explained by our social learning model and whether collective rescue was indeed observed for social learning individuals, we conducted agent-based simulations of the fit computational model with the calibrated parameters, including 100,000 independent runs for each task setup (see Materials and methods).

The results of the agent-based simulations agreed with our hypotheses ([Figure 6](#fig6)). Overall, the 80% Bayesian credible intervals of the predicted performance in the group condition (shades of orange in [Figure 6](#fig6)) cover an area of more risk taking than that covered by the individual condition (shades of grey). As predicted, in the negative RP task, social learning promoted suboptimal risk taking for some values of $\alpha (\beta +1)$, but the magnitude appeared smaller than in the positive RP tasks. Additionally, increasing ${\sigma }_{i}$ led to an increasing probability of risk taking in the positive RP tasks ([Figure 6a–c](#fig6)), whereas in the negative RP task, increasing $\sigma$ did not always increase risk taking ([Figure 6d](#fig6)).

figure: Figure 6.
:::
![](elife-75308.xml.media/fig6.jpg)

### Prediction of the fit learning model.

Results of a series of agent-based simulations with individual parameters that were drawn randomly from the best fit global parameters. Independent simulations were conducted 100,000 times for each condition. Group size was fixed to six for the group condition. Lines are means (black-dashed: individual, coloured-solid: group) and the shaded areas are 80% Bayesian credible intervals. Mean performances of agents with different ${\sigma }_{i}$ are shown in the colour gradient. (**a**) A two-armed bandit task. (**b**) A 1-risky-3-safe (four-armed) bandit task. (**c**) A 2-risky-2-safe (four-armed) bandit task. (**d**) A negative risk premium two-armed bandit task.
:::
{#fig6}

figure: Figure 6—figure supplement 1.
:::
![](elife-75308.xml.media/fig6-figsupp1.jpg)

### Experimental results with the mixed logit model regression.

The black triangles are subjects in the individual learning condition; the orange dots are those in the group condition with group sizes ranging from 2 to 8. The solid lines are predictions from a mixed logit model for the individual condition (black) and for the group condition (orange), with the shaded area showing the 95% Bayesian credible intervals (CIs). (**a**) A two-armed bandit task ($N=168$). (**b**) A 1-risky-3-safe (four-armed) bandit task ($N=148$). (**c**) A 2-risky-2-safe (four-armed) bandit task ($N=151$). (**d**) A negative risk premium (RP) two-armed bandit task ($N=118$). The width of the CI for the individual condition in the negative RP task is due to the lack of data points in the region. The _x_ axis is ${\alpha }_{i}({\beta }_{i}+1)$, namely, the susceptibility to the hot stove effect. (**a**, **b**, and **d**) The _y_ axis is the mean proportion of choosing the risky alternative averaged over the second half of the trials. (**c**) The _y_ axis is the mean proportion of choosing the optimal risky alternative averaged over the second half of the trials. The horizontal lines show the chance-level probability.
:::
{#fig6s1}

figure: Figure 6—figure supplement 2.
:::
![](elife-75308.xml.media/fig6-figsupp2.jpg)

### Bayesian model comparison.

(**a**) The model recovery performance: model frequencies (dark shade) and exceedance probability (XP) for each pair of simulated and fitted models, calculated by the Widely Applicable Information Criterion (WAIC). (**b–d**) Model comparison results. The lengths of the bars indicate model frequencies. The exceedance probability (XP) of the decision-biasing model is shown.
:::
{#fig6s2}

figure: Figure 6—figure supplement 3.
:::
![](elife-75308.xml.media/fig6-figsupp3.jpg)

### The parameter recovery performance.

The top half and bottom half of the figure show the results of parameter recovery tests 1 and 2, respectively. The left column shows the global parameters fitted for each of the two four-armed bandit tasks, the 1-risky-3-safe task ($N=105$) and the 2-risky-2-safe task ($N=105$). The red points are the true values and the black points are the mean posterior (i.e. recovered) values. The 95% Bayesian credible intervals are shown with error bars. The middle and right columns show individual-level parameters across the two task conditions ($N=210$). The _x_ axis is the true value and the _y_ axis is the fitted (i.e. the mean posterior) individual value. The differences between the true and estimated values are shown in different colours (darker points indicate a closer fit). Pearson’s correlation coefficients between the true and fitted values are shown.
:::
{#fig6s3}

However, a complete switch of the majority’s behaviour from the suboptimal safe options to the optimal risky option (i.e. ${\displaystyle {P}_{r}>0.5}$ for the two-armed task and ${\displaystyle {P}_{r}>0.25}$ for the four-armed tasks) was not widely observed. This might be because of the low copying weight ($\sigma$), coupled with the lower ${\alpha }_{i}({\beta }_{i}+1)$ of individual learners (mean \[median\] = 0.8 \[0.3\]) compared with that of social learners (mean \[median\] = 1.1 \[0.5\]; [Table 3](#table3)). The weak average reliance on social learning (${\sigma }_{i}$) hindered a strong collective rescue effect because strong positive feedback was not robustly formed.

To quantify the effect size of the relationship between the proportion of risk taking and each subject’s best fit learning parameters, we analysed a generalised linear mixed model (GLMM) fitted with the experimental data (see Materials and methods; [Table 4](#table4)). Within the group condition, the GLMM analysis showed a positive effect of ${\sigma }_{i}$ on risk taking in every task condition ([Table 4](#table4)), which supports the simulated pattern. Also consistent with the simulations, in the positive RP tasks, subjects exhibited stronger risk aversion when they had a higher value of ${\displaystyle {\alpha }_{i}({\beta }_{i}+1)}$ ([Figure 6—figure supplement 1a-c](#fig6s1)). There was no such clear trend in the data from the negative RP task, although we cannot draw a strong inference because of the wide Bayesian credible interval ([Figure 6—figure supplement 1d](#fig6s1)). In the negative RP task, subjects were biased more towards the (favourable) safe option than subjects in the positive RP tasks (i.e. the intercept of the GLMM was lower in the negative RP task than in the others; [Table 2](#table2)).

table: Table 4.
:::
### Means and 95% Bayesian credible intervals (CIs; shown in square brackets) of the posterior estimations of the mixed logit model (generalised linear mixed model) predicting the probability of choosing the risky alternative in the second half of the trials ($t>35$).

All parameters satisfied the Gelman–Rubin criterion ${\displaystyle \hat{R}<1.01}$. All estimates are based on over 500 effective samples from the posterior. Coefficients whose CI is either below or above 0 are highlighted.

| Task category                                   | Positive Risk Premium (positive RP) |                   |                   | Negative Risk Premium (negative RP) |
| ----------------------------------------------- | ----------------------------------- | ----------------------------------- | ----------------- | ----------------- |
| Task                                            | 1-risky-1-safe                      | 1-risky-3-safe                      | 2-risky-2-safe    | 1-risky-1-safe    |
|                                                 | n = 168                             | n = 148                             | n = 151           | n = 118           |
| Intercept                                       | –0.1 \[-0.6, 0.3\]                   | –1.1 \[-1.5,–0.6\]                   | –0.8 \[-1.2,–0.4\] | –3.5 \[-4.4,–2.7\] |
| Susceptibility to the hot stove effect (α(β+1)) | –0.9 \[-1.3,–0.4\]                   | –1.0 \[-1.5,–0.5\]                   | –0.9 \[-1.3,–0.6\] | 0.6 \[-0.1, 1.4\]  |
| Group (no = 0/yes = 1)                          | 0.0 \[-0.7, 0.7\]                    | –0.2 \[-1.0, 0.7\]                   | 0.4 \[-0.5, 1.2\]  | 3.8 \[2.7, 4.9\]   |
| Group × α(β+1)                                  | 0.6 \[0.0, 1.1\]                     | 0.4 \[0.0, 0.9\]                     | 0.3 \[-0.1, 0.7\]  | –1.1 \[-1.9,–0.3\] |
| Group × copying weight σ                        | 1.4 \[0.5, 2.3\]                     | 1.9 \[0.8, 3.0\]                     | 2.2 \[0.4, 4.0\]   | 3.8 \[2.2, 5.3\]   |
| Group × conformity exponent θ                   | –0.7 \[-0.9,–0.5\]                   | 0.2 \[0.0, 0.5\]                     | –0.3 \[-0.5,–0.1\] | –1.8 \[-2.1,–1.5\] |
:::
{#table4}

In sum, the experimental data analysis supports our prediction that conformist social influence promotes favourable risk taking even if individuals are biased towards risk aversion. The GLMM generally agreed with the theoretical prediction, and the fitted computational model that was supported by the Bayesian model comparison confirmed that the observed pattern was indeed likely to be a product of the collective rescue effect by conformist social learning. As predicted, the key was the balance between individual learning and the use of social information. In the Discussion, we consider the effect of the experimental setting on human learning strategies, which can be explored in future studies.

# Discussion

We have demonstrated that frequency-based copying, one of the most common forms of social learning strategy, can rescue decision makers from committing to adverse risk aversion in a risky trial-and-error learning task, even though a majority of individuals are potentially biased towards suboptimal risk aversion. Although an extremely strong reliance on conformist influence can raise the possibility of getting stuck on a suboptimal option, consistent with the previous view of herding by conformity ([@bib57]; [@bib23]), the mitigation of risk aversion and the concomitant collective behavioural rescue could emerge in a wide range of situations under modest use of conformist social learning.

Neither the averaging process of diverse individual inputs nor the speeding up of learning could account for the rescue effect. Individual diversity in the learning rate (${\alpha }_{i}$) was beneficial for group performance, whereas diversity in the social learning weight (${\sigma }_{i}$) undermined the average decision performance, a pattern that cannot be explained simply by a monotonic relationship between diversity and the wisdom of crowds ([@bib43]). Self-organisation through the collective behavioural dynamics emerging from experience-based decision making must therefore be responsible for this seemingly counter-intuitive phenomenon of collective rescue.

Our simplified differential equation model has identified a key mechanism of the collective behavioural rescue: the synergy of positive and negative feedback. Despite conformity, the probability of choosing the suboptimal option can decrease below what would be expected from individual learning alone. Indeed, an inherent individual preference for the safe alternative, expressed by the softmax function ${\displaystyle {e}^{\beta {Q}_{s}}/({e}^{\beta {Q}_{s}}+{e}^{\beta {Q}_{r}})}$, is mitigated by the conformist influence ${\displaystyle {N}_{s}^{\theta }/({N}_{s}^{\theta }+{N}_{r}^{\theta })}$ as long as the former is larger than the latter. In other words, risk aversion was mitigated not because the majority chose the risky option, nor because individuals were simply attracted towards the majority. Rather, participants’ choices became riskier even though the majority chose the safer alternative at the outset. Under social influences (whether informational or normative in motivation), individuals become more explorative and are likely to continue sampling the risky option even after being disappointed by poor rewards. Once individual risk aversion is reduced, fewer individuals choose the suboptimal safe option, which further shrinks the majority supporting the safe option. This negative feedback facilitates individuals revisiting the risky alternative. Such an attraction to the risky option allows more individuals, including those who are currently sceptical about its value, to experience a large bonanza from the risky option, which ‘glues’ them to the risky alternative for a while. Once a majority of individuals are glued to the risky alternative, positive feedback from conformity kicks in, and optimal risk seeking is further strengthened.

Models of conformist social influences have suggested that influences from the majority on individual decision making can lead a group as a whole into a collective illusion in which individuals learn to prefer whichever behavioural alternative is supported by many other individuals ([@bib22]; [@bib23]). However, previous empirical studies have repeatedly demonstrated that collective decision making under frequency-based social influences is broadly beneficial and can maintain more flexibility than suggested by models of herding and collective illusion ([@bib73]; [@bib3]; [@bib9]; [@bib62]; [@bib33]; [@bib38]). For example, [@bib3] demonstrated that populations of great tits (_Parus major_) could switch their behavioural tradition after an environmental change even though individual birds were likely to have a strong conformist tendency. A similar phenomenon was also reported in humans ([@bib73]).

Although these studies did not focus on risky decision making, and hence individuals were not inherently biased, the experimentally induced environmental change created a situation in which a majority of individuals exhibited an outdated, suboptimal behaviour. However, as we have shown, a collective learning system could rescue their performance even though the individual distribution was strongly biased towards the suboptimal direction at the outset. The great tit and human groups were able to switch their tradition because of, rather than despite, the conformist social influence, thanks to the synergy of negative and positive feedback processes. Such a synergistic interaction between positive and negative feedback could not be predicted by collective illusion models in which individual decision making is fully determined by the majority influence, because no negative feedback can operate there.

Through online behavioural experiments using a risky multi-armed bandit task, we have confirmed our theoretical prediction that simple frequency-based copying could mitigate the risk aversion that many individual learners, especially those with higher learning rates, lower exploration rates, or both, would have exhibited as a result of the hot stove effect. The mitigation of risk aversion was also observed in the negative RP task, in which social learning slightly undermined the decision performance. However, because riskiness and expected reward are often positively correlated in a wide range of real-world decision-making environments ([@bib27]; [@bib56]), the detrimental effect of reducing optimal risk aversion when the risk premium is negative could be negligible in many ecological circumstances, making conformist social learning beneficial in most cases.

Yet a majority, albeit a smaller one, still showed risk aversion. The weak reliance on social learning, which affected less than 20% of decisions, was unable to facilitate strong positive feedback. This limited use of social information might have been due to the lack of normative motivations for conformity and to the stationarity of the task. In a stable environment, learners could eventually gather enough information as trials proceeded, which might have made them less curious about further information gathering, including social learning ([@bib60]). In reality, people might use more sophisticated social learning strategies whereby they flexibly change their reliance on social information over trials ([@bib19]; [@bib72]; [@bib73]). Future research should consider more strategic use of social information and identify the conditions that elicit heavier reliance on conformist social learning in humans, such as normative pressures to align with the majority, volatility in the environment, time pressure, or an increasing number of behavioural options ([@bib52]), coupled with much larger group sizes ([@bib73]).

The low learning rate $\alpha$, which was at most 0.2 for many individuals in all the experimental tasks except the negative RP task, should also have limited the potential benefits of collective rescue in our current experiment, because the benefit of mitigating the hot stove effect is minimal or hardly realised under such a small susceptibility to it. Although we believe that the simplest, stationary environment was a necessary first step in building our understanding of the collective behavioural rescue effect, we would suggest that future studies use a temporally unstable (‘restless’) bandit task to elicit both a higher learning rate and a heavier reliance on social learning, so as to investigate the possibility of a stronger effect. Indeed, previous studies with changing environments have reported learning rates as high as ${\displaystyle \alpha >0.5}$ ([@bib72]; [@bib73]; [@bib19]), under which individual learners should suffer the hot stove trap more often.

Information about others’ payoffs might also be available in addition to inadvertent social frequency cues in some social contexts ([@bib8]; [@bib13]). Knowing others’ payoffs allows one to use the ‘copy-successful-individuals’ strategy, which has been suggested to promote risk seeking irrespective of the risk premium because at least a subset of a population can be highly successful by sheer luck in risk taking ([@bib5]; [@bib6]; [@bib70]). Additionally, cooperative communications may further amplify the suboptimal decision bias if information senders selectively communicate their own, biased, beliefs ([@bib51]). Therefore, although communication may transfer information about forgone payoffs of other alternatives, which could mitigate the hot stove effect ([@bib21]; [@bib78]), future research should explore the potential impact of active sharing of richer information on collective learning situations ([@bib71]).

In contrast, previous studies suggested that competition or conflicts of interest among individuals can lead to better collective intelligence than fully cooperative situations ([@bib17]) and can promote adaptive risk taking ([@bib4]). Further research will identify conditions under which cooperative communication containing richer information can improve decision making and drive adaptive cumulative cultural transmission ([@bib18]; [@bib50]), when adverse biases in individual decision-making processes prevail.

The generality of our dynamics model should apply to various collective decision-making systems, not only to human groups. Because it is a fundamental property of adaptive reinforcement learning, risk aversion due to the hot stove effect should be widespread in animals ([@bib58]; [@bib76]; [@bib35]). Therefore, its solution, the collective behavioural rescue, should also operate broadly in collective animal decision making because frequency-based copying is one of the common social learning strategies ([@bib36]; [@bib31]). Future research should determine to what extent the collective behavioural rescue actually impacts animal decision making in wider contexts, and whether it influences the evolution of social learning, information sharing, and the formation of group living.

We have identified a previously overlooked mechanism underlying the adaptive advantages of frequency-based social learning. Our results suggest that an informational benefit of group living could exist well beyond simple informational pooling where individuals can enjoy the wisdom of crowds effect ([@bib74]). Furthermore, the flexibility emerging through the interaction of negative and positive feedback suggests that conformity could evolve in a wider range of environments than previously assumed ([@bib2]; [@bib55]), including temporally variable environments ([@bib3]). Social learning can drive self-organisation, regulating the mitigation and amplification of behavioural biases and canalising the course of repeated decision making under risk and uncertainty.

# Materials and methods

## The baseline asocial learning model and the hot stove effect

We assumed that the decision maker updates their value of choosing the alternative $i$ ($\in \{s,r\}$) at time $t$ following the Rescorla–Wagner learning rule: ${Q}_{i,t+1}\leftarrow (1-\alpha ){Q}_{i,t}+\alpha {\pi }_{i,t}$, where $\alpha$ ($0\le \alpha \le 1$) is a _learning rate_, manipulating the step size of the belief updating, and ${\pi }_{i,t}$ is a realised payoff from the chosen alternative $i$ at time $t$ ([@bib68]). The larger the $\alpha$, the more weight is given to recent experiences, making reinforcement learning more myopic. The $Q$ value for the unchosen alternative is unchanged. Before the first choice, individuals had no previous preference for either option (i.e. ${Q}_{r,1}={Q}_{s,1}=0$). Then $Q$ values were translated into choice probabilities through a softmax (or multinomial-logistic) function such that ${P}_{i,t}=\mathrm{exp}(\beta {Q}_{i,t})/(\mathrm{exp}(\beta {Q}_{s,t})+\mathrm{exp}(\beta {Q}_{r,t}))$, where $\beta$, the _inverse temperature_, is a parameter regulating how sensitive the choice probability is to the value of the estimate $Q$ (i.e. controlling the proneness to explore).

In such a risk-heterogeneous multi-armed bandit setting, reinforcement learners are prone to exhibiting suboptimal risk aversion ([@bib46]; [@bib21]; [@bib35]), even though they could have achieved high performance in a risk-homogeneous task where all options have an equivalent payoff variance ([@bib68]). [@bib21] mathematically derived the condition under which suboptimal risk aversion arises, depicted by the dashed curve in [Figure 1b](#fig1). In the main analysis, we focused on the case where the risky alternative had $\mu =1.5$ and $\text{s.d.}=1$ and the safe alternative generated ${\pi }_{s}=1$ unless otherwise stated, that is, where choosing the risky alternative was the optimal strategy for a decision maker in the long run.
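
The following minimal R sketch (parameter values and trial number chosen purely for illustration; this is not the simulation code used for the reported results) illustrates how a single Rescorla–Wagner learner with softmax choice can drift into risk aversion on this task:

```r
# Minimal sketch of the asocial baseline learner on the main-text task:
# risky arm ~ Normal(mu = 1.5, sd = 1), safe arm pays 1 deterministically.
set.seed(1)
simulate_asocial <- function(alpha, beta, n_trials = 150) {
  Q <- c(s = 0, r = 0)                              # initial values Q_s = Q_r = 0
  n_risky <- 0
  for (t in seq_len(n_trials)) {
    p_r <- exp(beta * Q["r"]) / (exp(beta * Q["s"]) + exp(beta * Q["r"]))
    choice <- if (runif(1) < p_r) "r" else "s"
    payoff <- if (choice == "r") rnorm(1, mean = 1.5, sd = 1) else 1
    Q[choice] <- (1 - alpha) * Q[choice] + alpha * payoff   # update the chosen arm only
    n_risky <- n_risky + (choice == "r")
  }
  n_risky / n_trials                                # proportion of risky choices
}
# A myopic, exploitative learner (alpha * (beta + 1) well above 2) tends to be risk averse,
# whereas a slower, more explorative learner stays closer to (or above) risk neutrality:
mean(replicate(500, simulate_asocial(alpha = 0.5, beta = 7)))
mean(replicate(500, simulate_asocial(alpha = 0.05, beta = 2)))
```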

## Collective learning and social influences

We extended the baseline model to a collective learning situation in which a group of 10 individuals completed the task simultaneously and individuals could obtain social information. For social information, we assumed a simple frequency-based social cue specifying distributions of individual choices ([@bib47]; [@bib48]; [@bib72]; [@bib73]; [@bib19]). Following the previous modelling of social learning in such multi-agent multi-armed bandit situations (e.g. [@bib3]; [@bib7]; [@bib47]; [@bib48]; [@bib72]; [@bib73]; [@bib19]), we assumed that social influences on reinforcement learning would be expressed as a weighted average between the softmax probability based on the $Q$ values and the conformist social influence, as follows:

$$
{P}_{i,t}=(1-\sigma )\frac{\mathrm{exp}(\beta {Q}_{i,t})}{\mathrm{exp}(\beta {Q}_{r,t})+\mathrm{exp}(\beta {Q}_{s,t})}+\sigma \frac{({N}_{i,t-1}+0.1)^{\theta }}{({N}_{s,t-1}+0.1)^{\theta }+({N}_{r,t-1}+0.1)^{\theta }}
$$

where $\sigma$ was a weight given to the social influence (_copying weight_) and $\theta$ was the strength of the conformist influence (_conformity exponent_), which determines how strongly the social frequency of choosing alternative $i$ at time $t-1$, that is, ${N}_{i,t-1}$, influences the choice. The larger the conformity exponent $\theta$, the more weight is given to an alternative chosen by more individuals, with non-linear conformist social influence arising when ${\displaystyle \theta >1}$. We added a small number, 0.1, to ${N}_{i,t-1}$ so that an option chosen by no one (i.e. ${N}_{i,t-1}=0$) could provide the highest social influence when ${\displaystyle \theta <0}$ (negative frequency bias). Although this additional 0.1 slightly reduces the conformist influence when ${\displaystyle \theta >0}$, we confirmed that the results were qualitatively unchanged. Note also that in the first trial, $t=1$, we assumed that the choice was determined solely by the asocial softmax function because no social information was yet available.

Note that when $\sigma =0$, there is no social influence, and the decision maker is considered an asocial learner. It is also worth noting that when $\sigma =1$ with ${\displaystyle \theta >1}$, individual choices become fully contingent on the group’s most common behaviour, which was assumed in some previous models of strong conformist social influences in sampling behaviour ([@bib23]). The descriptions of the parameters are shown in [Table 1](#table1). The simulations were run in R 4.0.2 (<https://www.r-project.org>) and the code is available at ([the author’s github repository](https://github.com/WataruToyokawa/ToyokawaGaissmaier2021/tree/main/dynamicsModel)).
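
As a minimal sketch (with illustrative values of $Q$, $\sigma$, $\theta$, and the previous-trial frequencies), the decision-biasing rule above can be written as a single R function that mixes the asocial softmax with the conformist frequency term:

```r
# Decision-biasing choice probabilities: a weighted average of the asocial softmax
# and the frequency-dependent (conformist) term with the 0.1 offset described above.
choice_prob <- function(Q, N_prev, beta, sigma, theta) {
  soft <- exp(beta * Q) / sum(exp(beta * Q))                 # asocial softmax
  conf <- (N_prev + 0.1)^theta / sum((N_prev + 0.1)^theta)   # conformist influence
  (1 - sigma) * soft + sigma * conf
}
# Example: the focal agent slightly prefers the safe option (Q_s > Q_r),
# but 8 of its 10 group mates chose the risky option in the previous trial.
choice_prob(Q = c(s = 1.0, r = 0.6), N_prev = c(s = 2, r = 8),
            beta = 4, sigma = 0.3, theta = 2)
```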

## The approximated dynamics model of collective behaviour

We assume a group of $N$ individuals who exhibit two different behavioural states: choosing a safe alternative $S$, exhibited by ${N}_{S}$ individuals; and choosing a risky alternative $R$, exhibited by ${N}_{R}$ individuals ($N={N}_{S}+{N}_{R}$). We also assume that there are two different ‘inner belief’ states, labelled ‘-’ and ‘+’. Individuals who possess the negative belief prefer the safe alternative $S$ to $R$, while those who possess the positive belief prefer $R$ to $S$. A per capita probability of choice shift from one behavioural alternative to the other is denoted by $P$. For example, ${P}_{S}^{-}$ means the individual probability of switching to the safe alternative from the risky alternative under the negative belief. Because there exist ${N}_{R}^{-}$ individuals who chose $R$ with belief -, the total number of individuals who ‘move on’ to $S$ from $R$ at one time step is given by ${\displaystyle {P}_{S}^{-}{N}_{R}^{-}}$. We assume that the probability of shifting to the more preferable option is larger than that of shifting to the less preferable option, that is, ${\displaystyle {P}_{S}^{-}>{P}_{R}^{-}}$ and ${\displaystyle {P}_{R}^{+}>{P}_{S}^{+}}$ ([Figure 4a](#fig4)).

We assume that the belief state can change through choosing the risky alternative. We define the per capita probability of entering the + state, that is, of coming to prefer the risky alternative, as $e$ ($0\le e\le 1$), and hence ${N}_{R}^{+}=e{N}_{R}$. The remaining individuals who choose the risky alternative enter the - belief state, that is, ${N}_{R}^{-}=(1-e){N}_{R}$.

We define ‘$e$’ so that it can be interpreted as a risk premium of the gambles. For example, imagine a two-armed bandit task equipped with one risky arm with Gaussian noise and one sure arm. The larger the mean expected reward of the risky option (i.e. the higher the risk premium), the more of those who choose the risky arm are expected to obtain a larger reward than the safe alternative would provide. Assuming ${\displaystyle e>1/2}$ therefore approximates a situation in which risk seeking is optimal in the long run.

Here, we focus only on the population dynamics: if more people choose $S$, ${N}_{S}$ increases, whereas if more people choose $R$, ${N}_{R}$ increases. As a consequence, the system may eventually reach an equilibrium state in which both ${N}_{S}$ and ${N}_{R}$ no longer change. If the equilibrium state of the population (denoted by \*) satisfies ${\displaystyle {N}_{R}^{\star }>{N}_{S}^{\star }}$, we say that the population exhibits risk seeking, escaping from the hot stove effect. For the sake of simplicity, we assumed ${p}_{l}={P}_{R}^{-}={P}_{S}^{+}$ and ${p}_{h}={P}_{R}^{+}={P}_{S}^{-}$, where $0\le {p}_{l}\le {p}_{h}\le 1$, for the asocial baseline model.

Considering ${N}_{R}^{+}=e{N}_{R}$ and ${N}_{R}^{-}=(1-e){N}_{R}$, the dynamics are written as the following differential equations:

$$
{\displaystyle \{\begin{array}{ll}{\displaystyle \frac{d{N}_{R}}{dt}={p}_{l}{N}_{S}^{-}-{p}_{h}(1-e){N}_{R}+{p}_{h}{N}_{S}^{+}-{p}_{l}e{N}_{R}}& \\ {\displaystyle \frac{d{N}_{S}^{-}}{dt}=-{p}_{l}{N}_{S}^{-}+{p}_{h}(1-e){N}_{R},}& \\ {\displaystyle \frac{d{N}_{S}^{+}}{dt}=-{p}_{h}{N}_{S}^{+}+{p}_{l}e{N}_{R}.}& \end{array}}
$$

Overall, our model crystallises the asymmetry emerging from adaptive sampling, which is considered a fundamental mechanism of the hot stove effect ([@bib21]; [@bib46]): once decision makers underestimate the expected value of the risky alternative, they start avoiding it and get no further chance to correct the error. In other words, although there would potentially be more individuals who come to prefer $R$ by choosing the risky alternative (i.e. ${\displaystyle e>0.5}$), this asymmetry arising from the adaptive balance between exploration and exploitation may constantly increase the number of people who prefer $S$ owing to underestimation of the risky alternative’s value. If our model captures these asymmetric dynamics properly, the relationship between $e$ (i.e. the potential goodness of the risky option) and ${p}_{l}/{p}_{h}$ (i.e. the exploration–exploitation balance) should account for the hot stove effect, as suggested by the previous learning-model analysis ([@bib21]). The equilibrium analysis was conducted in Mathematica ([code is available online](https://github.com/WataruToyokawa/ToyokawaGaissmaier2021/tree/main/dynamicsModel)). The results are shown in [Figure 4](#fig4).
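
The Mathematica code linked above performs the equilibrium analysis; as an illustrative alternative, a simple forward-Euler integration in R (step size and horizon chosen for convenience, not taken from the original analysis) shows the hot stove asymmetry for the main-text parameter values:

```r
# Forward-Euler integration of the asocial dynamics (no social influence).
# p_h, p_l, and N follow the main-text setting; e, dt, and t_max are illustrative.
run_baseline <- function(e, p_h = 0.7, p_l = 0.2, N = 20,
                         N_R0 = 10, dt = 0.01, t_max = 200) {
  N_R <- N_R0; N_Sm <- (N - N_R0) / 2; N_Sp <- (N - N_R0) / 2
  for (step in seq_len(round(t_max / dt))) {
    dN_R  <- p_l * N_Sm - p_h * (1 - e) * N_R + p_h * N_Sp - p_l * e * N_R
    dN_Sm <- -p_l * N_Sm + p_h * (1 - e) * N_R
    dN_Sp <- -p_h * N_Sp + p_l * e * N_R
    N_R  <- N_R  + dt * dN_R
    N_Sm <- N_Sm + dt * dN_Sm
    N_Sp <- N_Sp + dt * dN_Sp
  }
  c(N_R = N_R, N_S = N_Sm + N_Sp)
}
run_baseline(e = 0.7)  # N_R* < N_S* despite the positive risk premium (e > 0.5)
```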

## Collective dynamics with social influences

For social influences, we assumed that the behavioural transition rates, ${P}_{S}$ and ${P}_{R}$, would depend on the number of individuals ${N}_{S}$ and ${N}_{R}$ as follows:

$$
{\displaystyle \{\begin{array}{ll}{\displaystyle {P}_{S}^{-}=(1-\sigma ){p}_{h}+\sigma \frac{{N}_{S}^{\theta }}{{N}_{R}^{\theta }+{N}_{S}^{\theta }},}& \\ {\displaystyle {P}_{R}^{-}=(1-\sigma ){p}_{l}+\sigma \frac{{N}_{R}^{\theta }}{{N}_{R}^{\theta }+{N}_{S}^{\theta }},}& \\ {\displaystyle {P}_{S}^{+}=(1-\sigma ){p}_{l}+\sigma \frac{{N}_{S}^{\theta }}{{N}_{R}^{\theta }+{N}_{S}^{\theta }},}& \\ {\displaystyle {P}_{R}^{+}=(1-\sigma ){p}_{h}+\sigma \frac{{N}_{R}^{\theta }}{{N}_{R}^{\theta }+{N}_{S}^{\theta }},}& \end{array}}
$$

where $\sigma$ is the weight of social influence and $\theta$ is the strength of the conformist bias, corresponding to the agent-based learning model ([Table 1](#table1)). The other assumptions were the same as in the baseline dynamics model; the baseline dynamics model is a special case of this social influence model with $\sigma =0$. Because the system was not analytically tractable, we obtained numeric solutions across different initial distributions of ${N}_{S,t=0}$ and ${N}_{R,t=0}$ for various combinations of the parameters.
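
In the spirit of the approximate bifurcation analysis ([Figure 5](#fig5)), the following R sketch finds the equilibrium ${N}_{R}^{\star }$ by forward-Euler integration from a given initial condition; the particular values of $\sigma$, $\theta$, $e$, and the integration settings below are illustrative choices, not the grid used for the reported figures.

```r
# Equilibrium N_R under conformist social influence, found by forward-Euler integration.
equilibrium_NR <- function(sigma, theta, e, p_h = 0.7, p_l = 0.2, N = 20,
                           N_R0 = 10, dt = 0.01, t_max = 500) {
  N_R <- N_R0; N_Sm <- (N - N_R0) / 2; N_Sp <- (N - N_R0) / 2
  for (step in seq_len(round(t_max / dt))) {
    N_S  <- N_Sm + N_Sp
    f_R  <- N_R^theta / (N_R^theta + N_S^theta)        # conformist weight on R
    P_Sm <- (1 - sigma) * p_h + sigma * (1 - f_R)      # P_S^-
    P_Rm <- (1 - sigma) * p_l + sigma * f_R            # P_R^-
    P_Sp <- (1 - sigma) * p_l + sigma * (1 - f_R)      # P_S^+
    P_Rp <- (1 - sigma) * p_h + sigma * f_R            # P_R^+
    dN_R  <- P_Rm * N_Sm + P_Rp * N_Sp - P_Sm * (1 - e) * N_R - P_Sp * e * N_R
    dN_Sm <- -P_Rm * N_Sm + P_Sm * (1 - e) * N_R
    dN_Sp <- -P_Rp * N_Sp + P_Sp * e * N_R
    N_R <- N_R + dt * dN_R; N_Sm <- N_Sm + dt * dN_Sm; N_Sp <- N_Sp + dt * dN_Sp
  }
  N_R
}
# With sigma = 0.6, a weak (linear) conformist bias, and a high risk premium,
# the population settles above N / 2 = 10, even when it starts with nobody choosing R:
equilibrium_NR(sigma = 0.6, theta = 1, e = 0.7)
equilibrium_NR(sigma = 0.6, theta = 1, e = 0.7, N_R0 = 0)
```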

## The online experiments

The experimental procedure was approved by the Ethics Committee at the University of Konstanz (‘Collective learning and decision-making study’). Six hundred nineteen English-speaking subjects \[294 self-identified as women, 277 as men, 1 as other, and the remaining 47 unspecified; mean (minimum, maximum) age = 35.2 (18, 74) years\] participated in the task through the online experimental recruiting platform [Prolific Academic](https://www.prolific.co). We excluded from our computational model-fitting analysis those subjects who disconnected from the online task before completing at least the first 35 rounds, resulting in 585 subjects (the detailed distribution of subjects across conditions is shown in [Table 3](#table3)). A parameter recovery test suggested that this sample size was sufficient to reliably estimate individual parameters using a hierarchical Bayesian fitting method (see below; [Figure 6—figure supplement 3](#fig6s3)).

### Design of the experimental manipulations

The group size was manipulated by randomly assigning different capacities to a ‘waiting lobby’ where subjects had to wait until other subjects arrived. When the lobby capacity was 1, which happened with probability 0.1, the individual condition started upon the first subject’s arrival. Otherwise, the group condition started when there were more than three people in the lobby 3 min after it opened (see Appendix 1 Supplementary Methods). If there were only two or fewer people in the lobby at this stage, the subjects were each assigned to the individual condition. Note that some groups in the group condition ended up with only two individuals because one individual dropped out during the task.

We used three different tasks: a 1-risky-1-safe task, a 1-risky-3-safe task, and a 2-risky-2-safe task, in which one risky option was expected to give a higher payoff than the other options on average (that is, tasks with a positive risk premium \[positive RP\]). To confirm our prediction that a risky shift would not strongly emerge when the risk premium was negative (i.e. when risk seeking was suboptimal), we also conducted another 1-risky-1-safe task with a negative risk premium (the negative RP task). Participants’ goal was to gather as much payoff as possible, as monetary incentives were tied to individual performance; in the negative RP task, risk aversion was therefore favourable. All tasks had 70 decision-making trials. The task proceeded on a trial basis; that is, trials of all individuals in a group were synchronised. Subjects in the group condition could see social frequency information, namely, how many people chose each alternative in the preceding trial. No social information was available in the first trial. These tasks were assigned randomly as a between-subjects condition, and subjects were allowed to participate in one session only.

We employed a skewed payoff probability distribution rather than a normal distribution for the risky alternative, and we conducted not only a two-armed task but also four-armed bandit tasks, because our pilot study had suggested that subjects tended to have a small susceptibility to the hot stove effect (${\alpha }_{i}({\beta }_{i}+1)\ll 2$), and hence we needed more difficult settings than the conventional Gaussian-noise binary-choice task to elicit risk aversion from individual decision makers. Running agent-based simulations, we confirmed that the task setups used in the experiment could elicit the collective rescue effect ([Figure 1—figure supplement 5](#fig1s5) and [Figure 1—figure supplement 6](#fig1s6)).

The details of the task setups are as follows:

#### The 1-risky-1-safe task (positive RP)

The optimal risky option produced either 50 or 550 points at probability 0.7 and 0.3, respectively (the expected payoff was 200). The safe option produced 150 points (with a small amount of Gaussian noise with s.d. = 5).

#### The 1-risky-3-safe task (positive RP)

The optimal risky option produced either 50 or 425 points at probability 0.6 and 0.4, respectively (the expected payoff was 200). The three safe options each produced 150, 125, and 100 points, respectively, with a small Gaussian noise with s.d. = 5.

#### The 2-risky-2-safe task (positive RP)

The optimal risky option produced either 50 or 425 points at probability 0.6 and 0.4, respectively (the expected payoff was 200). The two safe options each produced 150 and 125 points, respectively, with a small Gaussian noise with s.d. = 5. The suboptimal risky option, whose expected value was 125, produced either 50 or 238 points at probability 0.6 and 0.4, respectively.

#### The 1-risky-1-safe task (negative RP)

The setting was the same as in the 1-risky-1-safe positive RP task, except that the expected payoff from the risky option was smaller than that of the safe option: the risky option produced either 50 or 220 points at probability 0.7 and 0.3, respectively (the expected payoff was 101).
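
For concreteness, the four payoff schedules described above can be written as simple samplers. The following R sketch is an illustration of the schedules as described, not the code that served the actual experiment; the arm indices are an arbitrary convention introduced here.

```r
# Payoff samplers (points per choice); safe options carry Gaussian noise with s.d. = 5.
payoff_1risky1safe_pos <- function(arm) {   # arm 1 = risky (E = 200), arm 2 = safe (E = 150)
  if (arm == 1) sample(c(50, 550), 1, prob = c(0.7, 0.3)) else rnorm(1, 150, 5)
}
payoff_1risky3safe_pos <- function(arm) {   # arm 1 = risky (E = 200), arms 2-4 = safe
  if (arm == 1) sample(c(50, 425), 1, prob = c(0.6, 0.4))
  else rnorm(1, c(150, 125, 100)[arm - 1], 5)
}
payoff_2risky2safe_pos <- function(arm) {   # arms 1-2 = risky (E = 200, 125), arms 3-4 = safe
  if (arm == 1) sample(c(50, 425), 1, prob = c(0.6, 0.4))
  else if (arm == 2) sample(c(50, 238), 1, prob = c(0.6, 0.4))
  else rnorm(1, c(150, 125)[arm - 2], 5)
}
payoff_1risky1safe_neg <- function(arm) {   # negative RP: risky E = 101 < safe E = 150
  if (arm == 1) sample(c(50, 220), 1, prob = c(0.7, 0.3)) else rnorm(1, 150, 5)
}
```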

We have confirmed through agent-based model simulations that the collective behavioural rescue could emerge in tasks equipped with the experimental settings ([Figure 1—figure supplement 5](#fig1s5)). We have also confirmed that risk seeking does not always increase when risk premium is negative ([Figure 1—figure supplement 6](#fig1s6)). With the four-armed tasks we aimed to demonstrate that the rescue effect is not limited to binary-choice situations. Other procedures of the collective learning task were the same as those used in our agent-based simulation shown in the main text. The experimental materials including illustrated instructions can be found in [Video 1](#video1) (individual condition) and [Video 2](#video2) (group condition).

figure: Video 1.
:::
![](elife-75308.xml.media/elife-75308-video1.mp4)

### A sample screenshot of the online experimental task (Individual condition).

This video was taken for demonstration purposes only and is therefore not associated with any actual participant’s behaviour.
:::
{#video1}

figure: Video 2.
:::
![](elife-75308.xml.media/elife-75308-video2.mp4)

### A sample screenshot of the online experimental task with N = 3 (group condition).

This video was taken for demonstration purposes only and is therefore not associated with any actual participant’s behaviour. Note also that actual participants could see only their own browser window during the experimental sessions.
:::
{#video2}

## The hierarchical Bayesian model fitting

To fit the mixed logit model (GLMM) as well as the learning model, we used a hierarchical Bayesian method. For the learning model, we estimated the global means (${\mu }_{\alpha }$, ${\mu }_{\beta }$, ${\mu }_{\sigma }$, and ${\mu }_{\theta }$) and global variances (${v}_{\alpha }$, ${v}_{\beta }$, ${v}_{\sigma }$, and ${v}_{\theta }$) for each of the four experimental conditions, separately for the individual and group conditions. For the individual condition, we assumed $\sigma =0$ for all subjects and hence estimated no social learning parameters. Full details of the model-fitting procedure and prior assumptions are given in the Supplementary Methods. The R and Stan code used in the model fitting is available from [an online repository](https://github.com/WataruToyokawa/ToyokawaGaissmaier2021).

### The GLMM

We conducted a mixed logit model analysis to investigate the relationship between the proportion of choosing the risky option in the second half of the trials (${\displaystyle {P}_{r,t>35}}$) and the fit learning parameters (${\alpha }_{i}({\beta }_{i}+1)$, ${\sigma }_{i}$, and ${\theta }_{i}$). Because no social learning parameters exist in the individual condition, a dummy variable for the group condition was included (${G}_{i}=1$ if individual $i$ was in the group condition and 0 otherwise). The formula used is ${\displaystyle logit({P}_{r,t>35})}$ = ${\displaystyle {\gamma }_{0}+{\gamma }_{1}{\alpha }_{i}({\beta }_{i}+1)+{\gamma }_{2}{G}_{i}+{\gamma }_{3}{G}_{i}{\alpha }_{i}({\beta }_{i}+1)+{\gamma }_{4}{G}_{i}{\sigma }_{i}+{\gamma }_{5}{G}_{i}{\theta }_{i}+{ϵ}_{i}+{ϵ}_{g}}$, where ${ϵ}_{i}$ and ${ϵ}_{g}$ are the random effects of individual and group, respectively. The MCMC model-fitting procedure was the same as that used for the computational model fitting, and the code is available from the repository shown above.
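
As an illustration of how this formula maps fitted parameters onto a predicted probability, the following R sketch uses the posterior mean coefficients for the 1-risky-1-safe positive RP task from [Table 4](#table4), ignoring the random effects and posterior uncertainty; the covariate values plugged in are hypothetical.

```r
# Point prediction of P(choose risky | t > 35) from the mixed logit model,
# using the Table 4 posterior means for the 1-risky-1-safe positive RP task.
coefs <- c(intercept = -0.1, hot_stove = -0.9, group = 0.0,
           group_x_hot_stove = 0.6, group_x_sigma = 1.4, group_x_theta = -0.7)
predict_risk_taking <- function(hot_stove, sigma = 0, theta = 0, group = 1) {
  eta <- coefs["intercept"] + coefs["hot_stove"] * hot_stove + coefs["group"] * group +
    group * (coefs["group_x_hot_stove"] * hot_stove +
             coefs["group_x_sigma"] * sigma + coefs["group_x_theta"] * theta)
  plogis(unname(eta))                       # inverse-logit
}
# Within the group condition, a larger copying weight sigma raises the prediction,
# reflecting the positive Group x sigma coefficient:
predict_risk_taking(hot_stove = 1, sigma = 0.05, theta = 1.5)
predict_risk_taking(hot_stove = 1, sigma = 0.30, theta = 1.5)
```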

### Model and parameter recovery, and post hoc simulation

To assess the adequacy of the hierarchical Bayesian model-fitting method, we tested how well the hierarchical Bayesian method (HBM) could recover ‘true’ parameter values that were used to simulate synthetic data. We simulated artificial agents’ behaviour assuming that they behave according to the social learning model under each parameter setting. We generated ‘true’ parameter values for each simulated agent based on the experimentally fit global parameters ([Table 1](#table1); parameter recovery test 1). In addition, we ran another recovery test using arbitrary global parameters that deviated from the experimentally fit values (parameter recovery test 2), to confirm that our fitting procedure was not simply ‘attracted’ to the fit values. We then simulated synthetic behavioural data and recovered their parameter values using the HBM described above. Both parameter recovery tests showed that all the recovered individual parameters were positively correlated with the true values, with correlation coefficients all larger than 0.5. We also confirmed that 30 of the 32 global parameters were recovered within the 95% Bayesian credible intervals, and that even the two non-recovered parameters (${\mu }_{\beta }$ for the 2-risky-2-safe task in parameter recovery test 1 and ${\mu }_{\alpha }$ for the 1-risky-3-safe task in parameter recovery test 2) did not deviate substantially from the true values ([Figure 6—figure supplement 3](#fig6s3)).

We compared the baseline reinforcement learning model, the decision-biasing model, and the value-shaping model (see Supplementary Methods) using Bayesian model selection ([@bib65]). The model frequency and exceedance probability were calculated based on the Widely Applicable Information Criterion (WAIC) values for each subject ([@bib75]). We confirmed accurate model recovery by simulations using our task setting ([Figure 6—figure supplement 2](#fig6s2)).

We also ran a series of individual-based model simulations using the calibrated global parameter values for each condition. First, we randomly sampled a set of agents whose individual parameter values were drawn from the fit global parameters. Second, we let this synthetic group of agents perform the task for 70 rounds. We repeated these steps 100,000 times for each task setting and for each individual and group condition.

# Appendix 1

## Supplementary methods

### An analytical result derived by [@bib21]

In the simplest setup of the two-armed bandit task, [@bib21] derived an explicit form for the asymptotic probability of choosing the risky alternative ${P}_{r}^{\star }$ (as $t\to \mathrm{\infty }$) as follows:

$$
{P}_{r}^{\star }=\frac{1}{1+\mathrm{exp}\left[\frac{\alpha {\beta }^{2}{\text{s.d.}}^{2}}{2(2-\alpha )}-\beta (\mu -{\pi }_{s})\right]}.
$$

[Equation 4](#equ4) identifies a condition under which reinforcement learners exhibit risk aversion. In fact, when there is no risk premium (i.e. $\mu \le {\pi }_{s}$), the condition of risk aversion always holds, that is, ${\displaystyle {P}_{r}^{\star }<0.5}$. Consider the case where risk aversion is suboptimal, that is, ${\displaystyle \mu >{\pi }_{s}}$. [Equation 4](#equ4) suggests that suboptimal risk aversion emerges when learning is myopic (i.e. when $\alpha$ is large) and/or decision making is less explorative (i.e. when $\beta$ is large). For instance, when the payoff distribution of the risky alternative is set to $\mu ={\pi }_{s}+0.5$ and ${\text{s.d.}}^{2}=1$, the condition of risk aversion, ${\displaystyle {P}_{r}^{\star }<0.5}$, holds under ${\displaystyle \beta >(2-\alpha )/\alpha }$, which corresponds to the area above the dashed curve in [Figure 1b](#fig1) in the main text. Risk aversion becomes more prominent when the risk premium $\mu -{\pi }_{s}$ is small and/or the payoff variance ${\text{s.d.}}^{2}$ is large.
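
A direct R transcription of [Equation 4](#equ4) (the parameter values below are illustrative) makes this boundary easy to check numerically:

```r
# Asymptotic probability of choosing the risky option (Equation 4);
# the risky arm is Normal(mu, sd) and the safe arm pays pi_s.
p_risky_asymptotic <- function(alpha, beta, mu, pi_s = 1, sd = 1) {
  1 / (1 + exp(alpha * beta^2 * sd^2 / (2 * (2 - alpha)) - beta * (mu - pi_s)))
}
# With mu = pi_s + 0.5 and sd = 1, risk aversion (P_r* < 0.5) sets in
# exactly where beta exceeds (2 - alpha) / alpha:
alpha <- 0.4
p_risky_asymptotic(alpha, beta = (2 - alpha) / alpha, mu = 1.5)      # 0.5 at the boundary
p_risky_asymptotic(alpha, beta = (2 - alpha) / alpha + 1, mu = 1.5)  # below 0.5 above it
```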

### The online experiments

#### Subjects

The positive risk premium (positive RP) tasks were conducted between August and October 2020 (recruiting 492 subjects), while the negative risk premium (negative RP) task was conducted in September 2021 (recruiting 127 subjects) in response to the comments from peer reviewers. All subjects declared their residence in the United Kingdom, the United States, Ireland, or Australia. All subjects consented to participation through an online consent form at the beginning of the task. We excluded from our computational model-fitting analysis those subjects who disconnected from the online task before completing at least the first 35 rounds, resulting in 467 subjects for the positive RP tasks and 118 subjects for the negative RP task (the detailed distribution of subjects across conditions is shown in [Table 1](#table1) in the main text). The task was available only to English-speaking subjects aged 18 years or older. Only subjects who passed a comprehension quiz at the end of the instructions could enter the task. Subjects were paid 0.8 GBP as a show-up fee as well as an additional bonus payment depending on their performance in the decision-making task. In the positive RP tasks, 500 artificial points were converted to 8 pence, while in the negative RP task, 500 points were converted to 10 pence so as to compensate for the less productive environment, resulting in a bonus ranging between £1.0 and £3.5.

#### Sample size

Our original target sample size for the positive RP tasks was 50 subjects for the individual condition and 150 subjects for the group condition where our target average group size was 5 individuals per group. For the negative RP task, we aimed to recruit 30 individuals for the individual condition and 100 individuals (that is, 20 groups of 5) for the group condition. Subjects each completed 70 trials of the task. The sample size and the trial number had been justified by a model recovery analysis of a previous study ([@bib73]).

Because of the nature of the ‘waiting lobby’, which was available only for 3 min, we could not fully control the exact size of each experimental group. Therefore, we set the maximum capacity of a lobby to 8 individuals for the 1-risky-1-safe task, which was conducted in August 2020, so as to buffer potential dropouts during the waiting period. Since dropouts happened far less often than we had originally expected, we reduced the lobby capacity to 6 for both the 1-risky-3-safe and the 2-risky-2-safe tasks, which were conducted in October 2020. As a result, we had 20 groups (mean group size = 6.95), 21 groups (mean group size = 4.7), 19 groups (mean group size = 4.3), and 21 groups (mean group size = 4.4) for the 1-risky-1-safe, 1-risky-3-safe, and 2-risky-2-safe tasks and the negative risk premium two-armed task, respectively. Although we could not achieve the targeted sample size, partly because of dropouts during the task and a fatal error in the experimental server during the first few sessions of the four-armed tasks, the parameter recovery test with $N=105$ suggested that the current sample size should be reliable enough to estimate social influences for each subject ([Figure 6—figure supplement 3](#fig6s3)).

### The hierarchical Bayesian parameter estimation

We used the hierarchical Bayesian method (HBM) to estimate the free parameters of our learning model. The HBM allowed us to estimate individual differences while constraining this individual variation by the group-level (i.e. hyper-) parameters. To do so, we used the following non-centred reparameterisation (the ‘Matt trick’):

$$
\text{logit}({\alpha }_{i})={\mu }_{\alpha }+{v}_{\alpha }\ast {\alpha }_{\text{raw},i}
$$

where ${\mu }_{\alpha }$ is a global mean of $\text{logit}({\alpha }_{i})$ and ${v}_{\alpha }$ is a global scale parameter of the individual variation, which is multiplied by a standardised individual random variable ${\alpha }_{\text{raw},i}$. We used a standard normal prior distribution centred on 0 for ${\mu }_{\alpha }$ and an exponential prior for ${v}_{\alpha }$. The same method was applied to the other learning parameters ${\beta }_{i}$, ${\sigma }_{i}$, and ${\theta }_{i}$.
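
As a minimal sketch, the prior-predictive side of this reparameterisation can be written directly in R; the exponential rate and the number of subjects below are illustrative assumptions, not the exact prior scale used in the fitting.

```r
# Non-centred ('Matt trick') parameterisation: draw global parameters, then
# standardised individual deviations, and transform back to the unit interval.
set.seed(2)
n_subjects <- 100
mu_alpha  <- rnorm(1, mean = 0, sd = 1)              # standard normal prior on the global mean
v_alpha   <- rexp(1, rate = 1)                       # exponential prior on the global scale (rate assumed)
alpha_raw <- rnorm(n_subjects, mean = 0, sd = 1)     # standardised individual random variables
alpha_i   <- plogis(mu_alpha + v_alpha * alpha_raw)  # logit(alpha_i) = mu + v * raw
summary(alpha_i)
```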

We assumed that the ‘raw’ values of individual random variables (${\alpha }_{\text{raw},i}$, ${\beta }_{\text{raw},i}$, ${\sigma }_{\text{raw},i}$, ${\theta }_{\text{raw},i}$) were drawn from a multivariate normal distribution. The correlation matrix was estimated using a Cholesky decomposition with a weakly informative Lewandowski–Kurowicka–Joe prior that gave a low likelihood to very high or very low correlations between the parameters ([@bib49]; [@bib19]).

#### Model fitting

All models were fitted using the Hamiltonian Monte Carlo engine CmdStan 2.25.0 (<https://mc-stan.org/cmdstanr/index.html>) in R 4.0.2 (<https://www.r-project.org>). Each model was run with at least six parallel chains, and we confirmed convergence of the MCMC using both the Gelman–Rubin criterion $\widehat{R}\le 1.01$ and effective sample sizes greater than 500. The R and Stan code used in the model fitting is available from [an online repository](https://github.com/WataruToyokawa/ToyokawaGaissmaier2021).

### The value-shaping social influence model

We considered another implementation of social influences in reinforcement learning, namely, a value-shaping ([@bib53]) (or ‘outcome-bonus’ [@bib10]) model rather than the decision-biasing process assumed in our main analyses. In the value-shaping model, social influence modifies the $Q$ value’s updating process as follows:

$$
{Q}_{i,t+1}\leftarrow (1-\alpha ){Q}_{i,t}+\alpha \left({\pi }_{i,t}+{\sigma }_{vs}\overline{\pi }\frac{{N}_{i,t-1}^{\theta }}{{N}_{s,t-1}^{\theta }+{N}_{r,t-1}^{\theta }}\right)
$$

where the social frequency cue acts as an additional ‘bonus’ added to the realised payoff, weighted by ${\sigma }_{vs}$ (${\displaystyle {\sigma }_{vs}>0}$) and scaled by $\overline{\pi }$, the expected payoff from choosing randomly among all alternatives. Here we assumed no direct social influence on the action-selection process (i.e. $\sigma =0$ in our main model). We confirmed that the collective behavioural rescue could emerge when the inverse temperature $\beta$ was sufficiently small ([Figure 1—figure supplement 2](#fig1s2)). Whether other types of models would fit the human data better than those considered in this study is beyond the focus of this article, but it is an interesting question for future research. For such an attempt, see [@bib53].
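
A minimal R sketch of this update (illustrative values; $\overline{\pi }=1.25$ corresponds to choosing uniformly at random between the main-text risky arm with $\mu =1.5$ and the safe arm paying 1):

```r
# Value-shaping update: the social frequency cue enters as a payoff bonus
# scaled by sigma_vs and by pi_bar (expected payoff of a uniformly random choice).
update_value_shaping <- function(Q, chosen, payoff, N_prev,
                                 alpha, sigma_vs, theta, pi_bar) {
  bonus <- sigma_vs * pi_bar * N_prev[chosen]^theta / sum(N_prev^theta)
  Q[chosen] <- (1 - alpha) * Q[chosen] + alpha * (payoff + bonus)
  Q
}
# Example: the risky arm was chosen by 8 of 10 group mates in the previous trial.
update_value_shaping(Q = c(s = 1.0, r = 0.5), chosen = "r", payoff = 1.8,
                     N_prev = c(s = 2, r = 8), alpha = 0.3,
                     sigma_vs = 0.3, theta = 1, pi_bar = 1.25)
```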