Abstract
Decision-making biases can be features of normal behaviour, or deficits underlying neuropsychiatric symptoms. We used behavioural psychophysics, spiking-circuit modelling and pharmacological manipulations to explore decision-making biases during evidence integration. Monkeys showed a pro-variance bias (PVB): a preference to choose options with more variable evidence. The PVB was also present in a spiking circuit model, revealing a potential neural mechanism for this behaviour. To model possible effects of NMDA receptor (NMDAR) antagonism on this behaviour, we simulated the effects of NMDAR hypofunction onto either excitatory or inhibitory neurons in the model. These were then tested experimentally using the NMDAR antagonist ketamine, a pharmacological model of schizophrenia. Ketamine yielded an increase in subjects’ PVB, consistent with lowered cortical excitation/inhibition balance from NMDAR hypofunction predominantly onto excitatory neurons. These results provide a circuit-level mechanism that bridges across explanatory scales, from the synaptic to the behavioural, in neuropsychiatric disorders where decision-making biases are prominent.
Introduction
A major challenge in computational psychiatry is to relate changes that occur at the synaptic level to the cognitive computations that underlie neuropsychiatric symptoms (Wang and Krystal, 2014; Huys et al., 2016). For example, one line of research has implicated N-methyl-D-aspartate receptor (NMDAR) hypofunction in the pathophysiology of schizophrenia (Nakazawa et al., 2012; Kehrer et al., 2008; Olney and Farber, 1995). Some of the strongest evidence in support of this hypothesis comes from the observation that subanaesthetic doses (~0.1–0.5 mg/kg) of the NMDAR antagonist ketamine produce psychotomimetic effects in humans, especially in their cognitive aspects (Krystal et al., 1994; Umbricht et al., 2000; Malhotra et al., 1996; see Frohlich and Van Horn, 2014 for review). But how do we link our understanding of the pharmacological actions of ketamine to its effects on cognition? One strategy to bridge across these different scales is to consider behaviour at the intermediate level of the cortical microcircuit.
Circuit models present a promising avenue for addressing the challenges of neuropsychiatric research, owing to their biophysically detailed mechanisms. By perturbing the circuit model at the synaptic level, specific behavioural and neural predictions can be made. For example, NMDARs have long been argued to play a central role in temporally extended cognitive processes such as working memory, supported by studies of the effects of NMDAR antagonism in prefrontal cortical microcircuits (Wang et al., 2013). Using a cortical circuit model, a precise pattern of working memory deficits can be predicted from NMDAR hypofunction, which shifts the excitation-inhibition balance (E/I ratio) (Murray et al., 2014). These predicted changes in behaviour are consistent with those observed in healthy volunteers administered ketamine (Murray et al., 2014), and also in patients with schizophrenia (Starc et al., 2017). Yet it remains unclear whether this approach might generalise to explain the behavioural consequences of NMDAR antagonism in other temporally extended cognitive processes.
A cognitive process closely related to working memory is evidence accumulation – the decision process whereby multiple samples of information are combined over time to form a categorical choice (Gold and Shadlen, 2007). Recent research has advanced our understanding of how such evidence accumulation decisions are made in the healthy brain. Of particular relevance to psychiatric research, it has been possible to disentangle systematic biases in decision-making and reveal the mechanisms through which they occur. For instance, when choosing between two series of bars with distinct heights, people prefer the option whose evidence is more broadly distributed across samples (Tsetsos et al., 2016; Tsetsos et al., 2012). Although this ‘pro-variance bias’ may appear irrational, and would not be captured by many normative decision-making models, it becomes the optimal strategy when the accumulation process is contaminated by noise (Tsetsos et al., 2016). These behaviours have been well characterised using algorithmic-level descriptions of decision formation, yet to understand how such decision biases might be affected by NMDAR hypofunction, a mechanistic explanation is needed.
As with working memory, an influential technique used to investigate evidence accumulation at the mechanistic level has been biophysically grounded computational modelling of cortical circuits (Wang, 2002; Wong and Wang, 2006; Murray et al., 2017). Through strong recurrent connections between similarly tuned pyramidal neurons, and NMDAR-mediated synaptic transmission, these circuits can facilitate the integration of evidence across long timescales. Crucially, these neural circuit models bridge synaptic and behavioural levels of understanding by predicting both choices and their underlying neural activity. These predictions reproduce key experimental phenomena, mirroring the behavioural and neurophysiological data recorded from macaque monkeys performing evidence accumulation tasks (Wang, 2002; Wong et al., 2007). Whether neural circuit models can provide a mechanistic implementation of the pro-variance bias, and other systematic biases associated with evidence accumulation, is currently unknown. Moreover, while NMDAR antagonists have been tested during various decision-making tasks (Shen et al., 2010; Evans et al., 2012), the role of the NMDAR in shaping the temporal process of evidence accumulation has not been characterised experimentally.
Here, we used a psychophysical behavioural task in macaque monkeys, in combination with spiking cortical circuit modelling and pharmacological manipulations, to gain new insights into decision-making biases in both health and disease. We trained two subjects to perform a challenging decision-making task requiring the combination of multiple samples of information with distinct magnitudes. Replicating observations from humans, monkeys showed a pro-variance bias. The pro-variance bias was also present in the spiking circuit model, revealing an explanation of how it may arise through neural dynamics. We then investigated the effects of NMDAR hypofunction in the circuit model by perturbing NMDAR function at distinct synaptic sites. Perturbations could either elevate or lower the E/I ratio, thereby strengthening or weakening recurrent circuit dynamics, with each effect making dissociable predictions for evidence accumulation behaviour. These model predictions were tested experimentally by administering monkeys a subanaesthetic dose of the NMDAR antagonist ketamine (0.5 mg/kg, intramuscular injection). Ketamine produced decision-making deficits consistent with a lowering of the cortical E/I ratio.
Results
To study evidence accumulation behaviour in non-human primates, we developed a novel two-alternative perceptual decision-making task (Figure 1A). Subjects were presented with two series of eight bars (evidence samples), one on either side of central fixation. Their task was to decide which evidence stream had the taller/shorter average bar height, and indicate their choice contingent on a contextual cue shown at the start of the trial. The individual evidence samples were drawn from Gaussian distributions, which could have different variances for different options (Figure 1B). This task design had several advantages over evidence accumulation paradigms previously employed with animal subjects. Subjects were given eight evidence samples with distinct magnitudes (Figure 1C) – encouraging a temporal integration decision-making strategy. Precise experimental control of the stimuli facilitated analytical approaches probing the influence of evidence variability and time course on choice, and allowed us to design specific trials that attempted to induce systematic biases in choice behaviour.
Two monkeys (Macaca mulatta) completed 29,726 trials (Monkey A: 10,748; Monkey H: 18,978). Despite the challenging nature of the task, subjects were able to perform it with high accuracy (Figure 2A–B, Figure 2—figure supplement 1A–B). The precise control of the discrete stimuli allowed us to evaluate the impact of evidence presented at each time point on the final behavioural choice, via logistic regression (see Materials and methods). Stimuli presented at a time point with a larger regression coefficient have a stronger impact on the choice than those at time points with smaller coefficients. We found that the subjects utilised all eight stimuli throughout the trial to inform their decision, and demonstrated a primacy bias such that early stimuli had stronger temporal weights than later stimuli (Figure 2C–D, Figure 2—figure supplement 1C–D). A primacy bias has been reported in prior studies in monkeys, and is consistent with a decision-making strategy of bounded evidence integration (Kiani et al., 2008; Nienborg and Cumming, 2009; Wimmer et al., 2015). As it was clear both monkeys could accurately perform the task, all subsequent figures are presented with data collapsed across subjects for conciseness, but results separated by subjects are consistent (see supplementary figures).
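The logic of this temporal-weighting analysis can be sketched in a few lines of Python. This is a minimal illustration with a hypothetical primacy-weighted generative model and a bare-bones logistic fit, not the analysis pipeline used here (which is specified in Materials and methods):

```python
import math
import random

random.seed(0)

N_SAMPLES = 8

def simulate_trials(n_trials=1500):
    # Hypothetical generative model (not the monkeys' actual process):
    # choices are driven by a primacy-weighted sum of the per-sample
    # left-minus-right evidence differences, plus decision noise.
    true_w = [1.0 - 0.09 * t for t in range(N_SAMPLES)]
    X, y = [], []
    for _ in range(n_trials):
        diffs = [random.gauss(0.0, 1.0) for _ in range(N_SAMPLES)]
        drive = sum(w * d for w, d in zip(true_w, diffs))
        drive += random.gauss(0.0, 1.0)
        X.append(diffs)
        y.append(1 if drive > 0 else 0)
    return X, y

def fit_logistic(X, y, lr=0.5, n_iter=150):
    # Plain gradient ascent on the logistic log-likelihood (no intercept).
    w = [0.0] * N_SAMPLES
    for _ in range(n_iter):
        grad = [0.0] * N_SAMPLES
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(N_SAMPLES):
                grad[j] += (yi - p) * xi[j]
        for j in range(N_SAMPLES):
            w[j] += lr * grad[j] / len(X)
    return w

X, y = simulate_trials()
weights = fit_logistic(X, y)
```

Because early samples carry more weight in the simulated decision rule, the recovered coefficients decline across the eight sample positions, reproducing a primacy profile.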
We next probed the influence of evidence variability on choice. We designed choice options with different levels of standard deviation across samples, in an attempt to replicate the pro-variance bias previously reported in human subjects (see Materials and methods) (Tsetsos et al., 2016; Tsetsos et al., 2012). On each trial, one option was allocated a narrow distribution of bar heights, and the other a broad distribution. In different conditions, either the broad or the narrow stimulus stream could be the correct choice (‘Broad Correct’ or ‘Narrow Correct’ trials), or there could be no clear correct answer (‘Ambiguous’ trials) (Figure 3A, Figure 3—figure supplements 1 and 2). If subjects chose optimally, and only the mean bar height influenced their choice, their accuracy would be the same in ‘Broad Correct’ and ‘Narrow Correct’ trials and they would be indifferent to the variance of the distributions in ‘Ambiguous’ trials. The monkeys deviated from such behaviour: they were more accurate on ‘Broad Correct’ trials than on ‘Narrow Correct’ trials (Figure 3B, Figure 3—figure supplements 1 and 2). Furthermore, in the ‘Ambiguous’ trials, the monkeys demonstrated a preference for the broadly distributed stream, which has greater variability across samples (Figure 3C, Figure 3—figure supplements 1 and 2). This pro-variance pattern of decision behaviour is similar to that found in human subjects (Tsetsos et al., 2016; Tsetsos et al., 2012; Figure 3D–E).
To further probe the pro-variance bias, we studied choices from a larger pool of ‘Regular’ trials, in which the mean evidence and variability of the two streams were set independently on each trial (Figure 4A–B, Figure 4—figure supplements 1 and 2). ‘Regular’ trials allowed us to explore the pro-variance bias across a greater range of choice difficulties (Figure 4C) and to quantitatively characterise its effect using regression analysis. On ‘Regular’ trials, subjects also demonstrated a preference for options with broadly distributed evidence. Regression analysis confirmed that evidence variability was a significant predictor of choice (Figure 4D; see Materials and methods).
In addition, we defined the pro-variance bias (PVB) index as the ratio of the regression coefficient for evidence standard deviation to the regression coefficient for mean evidence. Evidence standard deviation was irrelevant in determining the correct option in the task, and we do not suggest that it is explicitly computed by the monkeys; rather, the sensitivity of choice behaviour to evidence standard deviation could arise as a byproduct of the neural computations that evaluate the task-relevant mean evidence (as shown later in the Results). The PVB index thus serves as a unitless, descriptive measure quantifying the subjects’ sensitivity to evidence standard deviation relative to their sensitivity to mean evidence. A PVB index of 0 indicates no pro-variance bias, whereas a PVB index of 1 indicates the subject is as sensitive to evidence standard deviation as to mean evidence. A key motivation for defining the PVB index is as a potentially useful measure for assessing changes in decision-making behaviour, such as under pharmacological perturbation (performed in later experiments in this paper). For example, if a perturbation simply weakened the overall sensitivity of choice to stimulus information, this would presumably downscale the mean and standard deviation regression coefficients proportionally, yielding no change in the PVB index as a ratio. In contrast, if a perturbation differentially impacted how evidence mean versus standard deviation affect choice, this would be reflected as a change in the PVB index. From the ‘Regular’ trials, the PVB index across both monkeys was 0.173 (Monkey A = 0.230; Monkey H = 0.138).
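As a concrete sketch, the PVB index and its invariance to a uniform loss of sensitivity can be expressed as follows (the coefficient values here are illustrative, not the fitted values reported in this paper):

```python
def pvb_index(beta_mean, beta_sd):
    # Ratio of the evidence-SD regression weight to the mean-evidence weight.
    return beta_sd / beta_mean

# Hypothetical regression coefficients for a 'control' condition.
control = pvb_index(beta_mean=2.0, beta_sd=0.35)

# A perturbation that only scales overall sensitivity shrinks both
# coefficients proportionally and leaves the index unchanged...
uniform_loss = pvb_index(beta_mean=1.0, beta_sd=0.175)

# ...whereas a perturbation acting selectively on the mean-evidence
# sensitivity changes the index.
selective = pvb_index(beta_mean=1.0, beta_sd=0.35)
```

This makes explicit why the PVB index, as a ratio, dissociates a selective change in the decision computation from an overall loss of sensitivity.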
Recent work has suggested that when traditional evidence accumulation tasks are performed, it is hard to dissociate whether subjects are combining information across samples, or whether conventional analyses may be disguising a simpler heuristic (Waskom and Kiani, 2018; Stine et al., 2020). In particular, an alternative decision-making strategy, which does not involve temporal accumulation of evidence, is to detect the single most extreme sample. Because the extreme sample will occur at different times in each trial, if a subject employed this strategy, the choice regression weights across time points would be distributed as in Figure 2C–D. Such findings could therefore be mistakenly interpreted as reflecting evidence accumulation. We wanted to quantitatively confirm that subjects were using the strategy we envisioned when designing our task, namely evidence accumulation. Additionally, we wanted to further investigate the relative contributions of mean evidence and evidence variability to choices. A logistic regression approach probed the influence upon choice of mean evidence, evidence variability, first/last samples, and the most extreme samples within each stream (Figure 4—figure supplements 1E,H and 2C,F,I,L, see Materials and methods). A cross-validation approach revealed choice was principally driven by the mean evidence, verifying that subjects performed the task using evidence accumulation (Supplementary file 1, see Materials and methods).
Although this analysis revealed choices were not primarily driven by an ‘extreme sample detection’ strategy, another concern was whether partially employing this strategy could explain the pro-variance effect we observed. To address this, we compared the influence of ‘evidence variability’ versus the influence of ‘extreme samples’ on subjects’ choices. Cross-validation revealed that choices were better described by a model incorporating evidence variability rather than the extreme sample values (Supplementary file 2). We also demonstrated that including evidence variability as a co-regressor improved the performance of all combinations of nested models (Supplementary file 3). In summary, although subjects integrated across samples, they were additionally influenced by sample variability.
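The cross-validation logic can be sketched as follows: simulate choices from a hypothetical generative rule that genuinely uses evidence variability, fit competing two-regressor logistic models on training trials, and compare held-out log-likelihoods. The coefficients and distributions below are illustrative assumptions, not the fitted values from our data:

```python
import math
import random

random.seed(2)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def make_trial():
    # Two streams of eight samples, each with an independently drawn
    # generative mean and spread.
    streams = []
    for _ in range(2):
        mu = random.uniform(-0.5, 0.5)
        sd = random.uniform(0.2, 1.2)
        streams.append([random.gauss(mu, sd) for _ in range(8)])
    left, right = streams
    mean = lambda s: sum(s) / len(s)
    std = lambda s: (sum((x - mean(s)) ** 2 for x in s) / len(s)) ** 0.5
    d_mean = mean(left) - mean(right)
    d_sd = std(left) - std(right)
    d_max = max(left) - max(right)
    # Hypothetical choice rule: mean evidence plus a genuine
    # pro-variance term (illustrative coefficients).
    choice = 1 if random.random() < sigmoid(2.0 * d_mean + 1.0 * d_sd) else 0
    return (d_mean, d_sd, d_max), choice

def fit(X, y, lr=0.5, n_iter=200):
    # Two-regressor logistic fit by gradient ascent.
    w = [0.0, 0.0]
    for _ in range(n_iter):
        g = [0.0, 0.0]
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] * xi[0] + w[1] * xi[1])
            g[0] += (yi - p) * xi[0]
            g[1] += (yi - p) * xi[1]
        w = [wj + lr * gj / len(X) for wj, gj in zip(w, g)]
    return w

def heldout_ll(w, X, y):
    # Log-likelihood of held-out choices under the fitted model.
    return sum(math.log(sigmoid(w[0] * xi[0] + w[1] * xi[1])) if yi
               else math.log(1.0 - sigmoid(w[0] * xi[0] + w[1] * xi[1]))
               for xi, yi in zip(X, y))

trials = [make_trial() for _ in range(3000)]
train, test = trials[:2000], trials[2000:]

def features(trial_set, keep):
    # keep=1 selects the SD regressor, keep=2 the extreme-sample regressor.
    X = [[t[0][0], t[0][keep]] for t in trial_set]
    y = [t[1] for t in trial_set]
    return X, y

# Model 1: mean + SD; Model 2: mean + extreme sample.
ll_sd = heldout_ll(fit(*features(train, 1)), *features(test, 1))
ll_max = heldout_ll(fit(*features(train, 2)), *features(test, 2))
```

Under this generative rule the mean + SD model achieves the higher held-out log-likelihood, mirroring the direction of our model comparison.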
Previous studies have revealed that a ‘frequent winner’ bias – whereby subjects prefer options that provide the stronger evidence on a greater number of the simultaneously presented sample pairs – coexists with the pro-variance bias (Tsetsos et al., 2016). Furthermore, both of these biases may arise from the same selective accumulation mechanism (Tsetsos et al., 2016). We therefore next analysed whether our subjects’ choices were also influenced by a ‘frequent winner’ bias (Figure 4—figure supplement 3; Materials and methods). After controlling for the influence of mean evidence on choices, we found that neither subject demonstrated a ‘frequent winner’ bias.
Existing algorithmic-level proposals for generating a pro-variance bias in human decision-making rely on sensory information being disregarded, depending on its salience, before it enters the accumulation process (Tsetsos et al., 2016). To investigate a possible alternative basis for the pro-variance bias, at the level of neural implementation, we sought to characterise decision-making behaviour in a biophysically plausible spiking cortical circuit model (Figure 5A–B, Figure 5—figure supplement 1; Wang, 2002; Lam, 2017). In the circuit architecture, two groups of excitatory pyramidal neurons are assigned to the left and right options, such that high activity in one group signals the response to the respective option. Excitatory neurons within each group are recurrently connected to each other via AMPA and NMDA receptors, and this recurrent excitation supports ramping activity and evidence accumulation. Both groups of excitatory neurons are jointly connected to a group of inhibitory interneurons, resulting in feedback inhibition and winner-take-all competition (Wang, 2002; Wong and Wang, 2006). The two groups of excitatory neurons receive separate inputs, with each group receiving information about one of the two options (i.e. Group A receives I_{A} reflecting the left option; Group B receives I_{B} reflecting the right option). Specifically, we assume the bar heights from each stream are remapped, upstream of the simulated decision-making circuit, to evidence for the corresponding option depending on the cued context. Therefore, taller bars correspond to larger inputs in a ‘Choose-Tall’ trial and smaller inputs in a ‘Choose-Short’ trial. Combined, this synaptic architecture endows the circuit model with decision-making functions.
The spiking circuit model was tested with the same trial types as the monkey experiment. Importantly, not only could the circuit model perform the evidence accumulation task, it also demonstrated a pro-variance bias comparable to the monkeys (Figure 5C–F). Regression analysis showed that the circuit model utilised a strategy similar to the monkeys to solve the decision-making task (Figure 5—figure supplement 1B). The temporal process of evidence integration in the circuit model disproportionately weighted early stimuli over late stimuli (Figure 5G), similar to the evidence integration patterns observed in both monkeys. However, the circuit model demonstrated an initial ramp-up in stimulus weights, due to the time needed for it to reach an integrative state.
To understand the origin of the pro-variance bias in the spiking circuit, we mathematically reduced the circuit model to a mean-field model (Figure 6A), which demonstrated similar decision-making behaviour to the spiking circuit (Figure 6B–C, Figure 6—figure supplement 1). The mean-field model, with two variables representing the integrated evidence for the two choices, allowed phase-plane analysis to further investigate the pro-variance bias. We considered a simplified case in which the broad and narrow streams have the same mean evidence, and the stimulus evidence varies over time in the broad stream but not the narrow stream (i.e. ${\mathrm{\sigma}}_{\mathrm{N}}$ = 0) (Figure 6E–H). This example provides an intuitive explanation for the pro-variance bias: a momentarily strong stimulus has an asymmetrically greater influence upon the decision-making process than a momentarily weak stimulus. Input streams with larger variability, and thus a higher chance of displaying both strong and weak inputs, can leverage this asymmetry more than input streams with small variability, resulting in a pro-variance bias. The asymmetry arises from the expansive nonlinearities of the firing rate profiles (Figure 6D, see Materials and methods).
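A toy calculation makes this asymmetry concrete. Assuming a hypothetical expansive nonlinearity f(x) = max(x, 0)², standing in for the model's actual firing rate profile, a stream alternating between equally strong and weak inputs produces a larger time-averaged drive than a constant stream with the same mean:

```python
def f(x):
    # Toy expansive nonlinearity standing in for the neural f-I curve
    # (illustrative only; the circuit model's rate function differs).
    return max(x, 0.0) ** 2

mu, delta = 1.0, 0.5

# Narrow stream: constant input at the mean.
narrow_drive = f(mu)

# Broad stream: alternates between equally strong and weak samples.
broad_drive = 0.5 * (f(mu + delta) + f(mu - delta))
```

Here f(1.5) = 2.25 gains more above f(1.0) = 1.0 than f(0.5) = 0.25 loses below it, so the broad stream's average drive (1.25) exceeds the narrow stream's (1.0): the strong sample pushes more than the weak sample pulls.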
To explore whether this explanation may account for the pro-variance bias in the circuit model and monkey behaviour (Figures 4 and 5), we reanalysed the data by separating trials into two halves: those with more or less total evidence (summed across both streams) (Figure 5—figure supplement 2). The circuit model demonstrated a smaller PVB index (larger mean evidence and smaller evidence standard deviation regression weights) for trials with more total evidence than for trials with less total evidence (Figure 5—figure supplement 2C). This was consistent with the prediction from the F-I nonlinearity: trials with more total evidence, and thus larger total input, more strongly drive the neurons into the near-linear regime of the firing rate profile, where the effect of the expansive nonlinearity is weaker (Figure 6D). Similar analysis of the monkey behavioural data revealed a similar trend of smaller PVB index (larger mean evidence and smaller evidence standard deviation regression weights) for trials with more total evidence than with less total evidence, though the effect was not statistically significant (Figure 5—figure supplement 2G). In addition, distinct temporal weightings of stimuli were observed in both the circuit model and experimental data, for trials with more versus less total evidence (Figure 5—figure supplement 2D,H).
An advantage of the circuit model over existing algorithmic-level explanations of the pro-variance bias is that it can be used to make testable behavioural predictions in response to different synaptic or cellular perturbations, including excitation-inhibition (E/I) imbalance. In turn, perturbation experiments can constrain and refine model components. We therefore studied the behavioural effects of distinct E/I perturbations, and of an upstream sensory deficit, on decision-making and, in particular, the pro-variance bias (Figure 7, Figure 7—figure supplement 1). Three perturbations were introduced to the circuit model: lowered E/I balance (via NMDAR hypofunction on excitatory pyramidal neurons), elevated E/I balance (via NMDAR hypofunction on inhibitory interneurons), or sensory deficit (a weakened scaling of external inputs to stimulus evidence) (Figure 7A).
While all circuit models were capable of performing the task (Figure 7B–E), the choice accuracy of each perturbed model was reduced compared to the control model, as quantified by the regression coefficient of mean evidence (Figure 7F). In addition, the regression coefficient for evidence standard deviation was reduced for each perturbed model in comparison to the control model, indicating a lesser influence of evidence variability on choice (Figure 7G). Finally, in a dissociation between the three model perturbations, the PVB index was increased by lowered E/I, decreased by elevated E/I, and roughly unaltered by sensory deficits (Figure 7H). Further regression analyses indicated no obvious shift in utilised strategies relative to the control model (Figure 7—figure supplement 1). Crucially, the effects of E/I and sensory perturbations on the PVB index and regression coefficients were generally robust to the strength and pathway of perturbation (Figure 7—figure supplements 2 and 3).
Disease- and pharmacology-related perturbations likely alter multiple sites concurrently, for instance NMDARs of both excitatory and inhibitory neurons. We thus parametrically induced NMDAR hypofunction on both excitatory and inhibitory neurons in the circuit model. The net effect on the E/I ratio depended on the relative perturbation strength at the two populations (Lam, 2017). Stronger NMDAR hypofunction on excitatory neurons lowered the E/I ratio, while stronger NMDAR hypofunction on inhibitory neurons elevated it. Notably, proportional reduction at both pathways preserved E/I balance and did not lower the mean evidence regression coefficient (a proxy of performance) (Figure 7—figure supplement 2A). On the other hand, decision-making performance was maximally susceptible to perturbations in the orthogonal direction, along the E/I axis (Lam, 2017). Furthermore, along this axis the PVB index monotonically increased with lowered E/I ratio and decreased with elevated E/I ratio, demonstrating a robust prediction from our circuit model (Figure 7—figure supplement 2C). Sensory deficit perturbations did not significantly alter the PVB index, until the limit where decision-making performance was greatly impaired (Figure 7—figure supplement 4). Finally, the temporal weightings were distinctly altered by the elevated and lowered E/I perturbations (Figure 7I). The circuit model thus provided the basis for dissociable predictions of the effects of E/I-balance-perturbing pharmacological agents.
Choice accuracy depends on the E/I ratio along an inverted-U shape, with the control, E/I-balanced model sitting just beside the (slightly lowered-E/I) peak (Lam, 2017); both elevating and lowering the E/I ratio therefore drive the model away from the peak, resulting in a lowered mean evidence regression weight. The evidence standard deviation regression weight similarly follows an inverted-U shape, but with its peak at a more strongly lowered E/I ratio (Figure 7—figure supplement 2). As such, elevating the E/I ratio consistently lowers the evidence standard deviation regression weight, whereas lowering the E/I ratio by a small amount initially increases this weight, and only decreases it once the peak of the inverted-U is passed at greater perturbation strengths. Notably, regardless of the magnitude by which the E/I ratio is lowered, the PVB index is consistently increased, providing a robust measure of the pro-variance bias.
To test these predictions experimentally, we collected behavioural data from both monkeys following the administration of a subanaesthetic dose (0.5 mg/kg, intramuscular injection) of the NMDAR antagonist ketamine (see Materials and methods, Figure 8, Figure 8—figure supplement 1). After a baseline period of the subjects performing the task, either ketamine or saline was injected intramuscularly (Monkey A: 13 saline sessions, 15 ketamine sessions; Monkey H: 17 saline sessions, 18 ketamine sessions). Ketamine had behavioural effects for around 30 min in both subjects. The data collected during this period formed a behavioural database of 8521 completed trials (Monkey A Saline: 1710; Monkey A Ketamine: 2276; Monkey H Saline: 2669; Monkey H Ketamine: 1866). Following ketamine administration, subjects’ choice accuracy was markedly decreased (Figure 8A), without a significant shift in their strategies (Figure 8—figure supplement 1, Supplementary file 4).
To understand the nature of this deficit, we studied the effect of drug administration on the pro-variance bias (Figure 8B–F). Although subjects were less accurate following ketamine injection, they retained a pro-variance bias (Figure 8C). Regression analysis confirmed that ketamine caused choices to be substantially less driven by mean evidence (Figure 8D), but still strongly influenced by the standard deviation of evidence across samples (Figure 8E). The PVB index was significantly higher following ketamine administration than saline (permutation test, p=8×10^{−6}, Figure 8F). Of all the circuit model perturbations, this was only consistent with lowered E/I balance (Figure 7H).
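The permutation-test logic can be sketched as follows, using illustrative per-session PVB indices rather than the actual session values:

```python
import random

random.seed(1)

def permutation_p(a, b, n_perm=5000):
    # Two-sided permutation test on the difference of group means:
    # shuffle the condition labels and count resampled differences at
    # least as extreme as the observed one.
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical per-session PVB indices (illustrative values only).
ketamine = [0.58, 0.72, 0.55, 0.81, 0.66, 0.70]
saline = [0.15, 0.22, 0.12, 0.25, 0.18, 0.20]
p_value = permutation_p(ketamine, saline)
```

Because the test makes no distributional assumptions, it is well suited to small numbers of sessions with non-Gaussian index values.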
In further analysis, we also controlled for the influence of ketamine on the subjects’ lapse rate – that is, the propensity for the animals to respond randomly regardless of trial difficulty. We modelled this lapse rate using an additional term that bounded the logistic function at Y_{0} and (1 − Y_{0}), rather than 0 and 1 (Figure 8—figure supplement 2, see Materials and methods, Equation 9). In other words, the lapse rate refers to the asymptotic error rate in the limit of strong evidence. Consistent with the psychometric function (Figure 8C), we found that ketamine significantly increased the subjects’ lapse rate (Subject A: Lapse_{(Saline)}=1.49×10^{−11}, Lapse_{(Ketamine)}=0.118, permutation test, p<0.0001; Subject H: Lapse_{(Saline)}=0.012, Lapse_{(Ketamine)}=0.0684, permutation test, p=0.019). Crucially, however, the PVB effect was still present in the regression model that included the effect of lapses. This confirms that the change in lapse rate was not responsible for the behavioural effects of ketamine outlined above. We also investigated the time course of ketamine’s influence on the PVB index (Figure 8—figure supplement 3). This confirmed that the rise in PVB index was a consistent behavioural deficit throughout the duration of ketamine’s action.
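The lapse-bounded psychometric function described above can be sketched as follows (the parameter values are illustrative; the fitted form is given by Equation 9 in Materials and methods):

```python
import math

def lapse_psychometric(evidence, beta, lapse):
    # Logistic choice function with asymptotes compressed to lapse and
    # 1 - lapse by a lapse term.
    return lapse + (1.0 - 2.0 * lapse) / (1.0 + math.exp(-beta * evidence))

# With no lapses the curve saturates at 0 and 1...
p_no_lapse = lapse_psychometric(5.0, beta=2.0, lapse=0.0)

# ...while a 10% lapse rate caps accuracy near 0.9 even on easy trials.
p_lapse = lapse_psychometric(5.0, beta=2.0, lapse=0.1)
```

Fitting the lapse parameter alongside the slope separates a flattening of the curve caused by random responding from a genuine loss of evidence sensitivity.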
Additional observations further supported the lowered-E/I hypothesis for the effect of ketamine on monkey choice behaviour. Quantitative model comparison, using cosine similarity, Euclidean distance, and Kullback–Leibler (KL) divergence, revealed that the effect of ketamine injection on monkey choice behaviour was better explained by lowered-E/I perturbations in the circuit model than by sensory deficit or elevated-E/I perturbations (Figure 8—figure supplements 4–6). A very strong sensory deficit may also increase the PVB index, but only with minimal decision-making performance and a psychometric function very different from the monkey data (Figure 7—figure supplement 4, Figure 8—figure supplement 7). In addition, we investigated the effect of ketamine on the time course of evidence weighting (Figure 8G). It caused a general downward shift of the temporal weights, but had no strong effects on how each stimulus was weighted relative to the others in the stream. This shifting of the weights could reflect a sensory deficit, but given the results of the pro-variance analysis, collectively the behavioural effects of ketamine are most consistent with lowered E/I balance and weakened recurrent connections. Notably, the saline data demonstrate a U-shaped pattern different from the primacy pattern observed in non-drug experiments (Figure 2C,D) and spiking circuit models (Figure 7I). This may be due to task modifications for the ketamine/saline experiments compared with the non-drug experiments, but could also potentially arise from distinct regimes of decision-making attractor dynamics (e.g. see Prat-Ortega et al., 2020).
To quantify the effect of lapse rate on evidence sensitivity and regression weights more generally, we examined the effect of a lapse mechanism downstream of the spiking circuit models (Figure 8—figure supplements 8–9). Using the lapse rates fitted to the experimental data collected from the two monkeys, we assigned the corresponding proportion of trials in each circuit model to have randomly selected choices, and repeated the analysis to obtain psychometric functions and regression weights. Crucially, while the psychometric function, as well as the evidence mean and standard deviation regression weights, were suppressed, the findings on the PVB index were not qualitatively altered in the circuit models, further supporting the conclusion that the lapse rate does not account for the changes in PVB under ketamine.
Discussion
Previous studies have shown that human participants exhibit choice biases when options differ in the standard deviation of the evidence samples, preferring choice options drawn from a more variable distribution (Tsetsos et al., 2016; Tsetsos et al., 2012). By utilising a behavioural task with precise experimenter control over the distributions of time-varying evidence, we show that macaque monkeys exhibit a pro-variance bias akin to that of human participants. This pro-variance bias was also present in a spiking circuit model, which demonstrated a neural mechanism for this behaviour. We then introduced perturbations at distinct synaptic sites of the circuit, which revealed dissociable predictions for the effects of NMDAR antagonism. Ketamine produced decision-making deficits consistent with a lowering of the cortical excitation-inhibition balance.
Biophysically grounded neural circuit modelling is a powerful tool for linking cellular-level observations to behaviour. Previous studies have shown that recurrent cortical circuit models reproduce normative decision-making and working memory behaviour, and replicate the corresponding neurophysiological activity (Wang et al., 2013; Murray et al., 2014; Wang, 2002; Wong and Wang, 2006; Murray et al., 2017; Wong et al., 2007; Wimmer et al., 2014). However, whether they are also capable of reproducing idiosyncratic cognitive biases had not previously been explored. Here we demonstrated pro-variance and primacy biases in a spiking circuit model. The primacy bias results from the formation of attractor states before all of the evidence has been presented. This neural implementation of bounded evidence accumulation corresponds with previous algorithmic explanations (Kiani et al., 2008).
The results from our spiking circuit modelling also provide a parsimonious candidate mechanism for the pro-variance bias within the evidence accumulation process. Specifically, strong evidence in favour of an option pushes the network towards an attractor state more than symmetrically weak evidence pushes it away. Previous explanations for the pro-variance bias proposed computations at the level of sensory processing, upstream of evidence accumulation. In particular, a ‘selective integration’ model proposed that information for the momentarily weaker option is discarded before it enters the evidence accumulation process (Tsetsos et al., 2016). Conceptually, our model is analogous to these previous models in that weak evidence is weighted less relative to strong evidence. However, there are key differences between the two models. In ‘selective integration’ and similar models of sensory processing, an asymmetric filter is applied to the stimuli before they are evaluated by the decision-making process, in some upstream area that could potentially be modulated by task demands. In contrast, in our circuit model the pro-variance bias arose from the nonlinear activity profile (Figure 6D, see Materials and methods) of model neurons. In that sense, the pro-variance bias was an intrinsic property of the evidence integration process in our circuit model.
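The core intuition can be illustrated with a toy expansive nonlinearity (our own simplification, not the model's actual transfer function): for a convex rate function, a strong sample advances an option's activity more than an equally weak sample retards it, so a more variable evidence stream gains ground on average.

```python
import numpy as np

def rate(current):
    """Toy expansive (convex) transfer function standing in for the
    model neurons' input-output curve; illustrative only."""
    return np.maximum(current, 0.0) ** 2

base, delta = 1.0, 0.5
push = rate(base + delta) - rate(base)   # effect of a strong sample
pull = rate(base) - rate(base - delta)   # effect of a symmetrically weak sample
assert push > pull  # variable evidence yields a net advantage
```

In the actual circuit model the transfer function is state-dependent (see below), so this asymmetry interacts with the attractor dynamics rather than acting as a fixed filter.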
Despite the conceptual analogy between our circuit model and the ‘selective integration’ model, in which weak stimuli are asymmetrically weighted, our circuit model cannot be directly mapped onto the latter. In the ‘selective integration’ model, the asymmetry is realised as a discounting of the momentarily weaker stimulus by a constant factor. In our circuit model, the asymmetry arose from the nonlinearity of the transfer function. However, the transfer function was not static, but dynamically evolved with the state of the model (e.g. in the mean-field model, the transfer function depended on the two decision variables; see Materials and methods). Due to this complexity, the asymmetry of the circuit model cannot be reduced to one simple expression, and was instead closely entangled with the attractor dynamics of the system.
Crucially, our circuit model generated dissociable predictions for the effects of NMDAR hypofunction on the pro-variance bias (PVB) index, which were tested by follow-up ketamine experiments. While it remains an open question where and how in the brain the selective integration process takes place, our modelling results suggest that purely sensory deficits may not capture the alterations in choice behaviour observed under ketamine, in contrast to E/I perturbations in decision-making circuits (Figure 7H). Multiple complementary processes may simultaneously contribute to the pro-variance bias during decision making, especially in complex behaviours over longer timescales. Future work will aim to contrast these two models using neurophysiological data recorded while monkeys perform this task.
On the other hand, there may also be limits to the extent to which our findings can be directly compared with those from previous studies in humans (Tsetsos et al., 2016; Tsetsos et al., 2012). For example, human studies have revealed that a ‘frequent winner’ bias coexists with the pro-variance bias and may arise from the same selective integration mechanism. Unlike previous studies, our subjects did not exhibit a ‘frequent winner’ bias. Furthermore, although both studies demonstrate a PVB, the temporal weighting of evidence in the previous human studies exhibits recency, unlike the primacy found in the present study. This may be in part due to differences in the underlying computational regimes used for evidence integration, or to more trivial differences between the experimental paradigms – for example, different paradigms have identified primacy (Kiani et al., 2008), recency (Cheadle et al., 2014) or noiseless sensory evidence integration (Brunton et al., 2013). A stronger test will be to record neurophysiological data while monkeys perform our task; this would help to distinguish between the ‘selective integration’ hypothesis and the cortical circuit mechanism proposed here.
The PVB index, the ratio of the standard deviation to mean evidence regression weights, serves as a conceptually useful measure for interpreting changes in pro-variance bias due to ketamine perturbation in this study. Given that the model does not feature any explicit processes mediating pro-variance bias, PVB should be understood as an emergent phenomenon arising from the decision-making process. In this context, a sensory-deficit perturbation, which downscales the incoming evidence strength without perturbing the decision-making process, should proportionally downscale the evidence mean and standard deviation regression weights, thus maintaining the PVB index. In contrast, lowering and elevating the E/I ratio distinctly alter the dynamics of the decision-making process and thus differentially perturb the PVB index. It is also important to study how changes in the PVB index are driven by changes in the mean vs. standard deviation regression coefficients, as considering the PVB index alone can obscure these effects. For instance, based on the model, the increase in PVB index under lowered E/I is generally due to a stronger decrease in the mean regression coefficient than in the standard deviation regression coefficient (Figure 7—figure supplement 2). However, small lowered-E/I perturbations may actually increase the PVB index through an increase in the standard deviation regression coefficient alongside a decrease in the mean regression coefficient. In support of this model finding, while both monkeys demonstrated a significant decrease in the mean regression weight under ketamine, one monkey showed a trend towards a decreased standard deviation regression weight, whereas the other showed a trend towards an increase (Figure 8—figure supplement 2).
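The arithmetic of the index can be made concrete with hypothetical regression weights (the numbers below are illustrative, not fitted values):

```python
def pvb_index(beta_mean, beta_std):
    """Pro-variance bias index: standard-deviation regression weight
    divided by the mean-evidence regression weight."""
    return beta_std / beta_mean

# A pure sensory deficit scales both weights by the same gain g,
# leaving the index unchanged:
g = 0.5
assert pvb_index(g * 2.0, g * 0.4) == pvb_index(2.0, 0.4)

# Lowered E/I that shrinks the mean weight more than the SD weight
# raises the index:
assert pvb_index(1.0, 0.35) > pvb_index(2.0, 0.4)
```

This is why the sensory-deficit and E/I perturbations make dissociable predictions despite all three suppressing overall performance.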
The two monkeys, whose data were both interpreted as reflecting a lowered E/I ratio using the model-based approach in this study, may therefore experience slightly different degrees of E/I reduction when administered ketamine, as shown through concurrent changes in NMDAR conductances in the circuit model (Figure 7—figure supplement 2).
In this study we did not undertake quantitative fitting of the circuit model parameters to match the empirical data. Rather, we took a previously developed circuit model and only manually adjusted input strengths to be loosely in the regime of the experimental behaviour. There are technical and theoretical challenges in quantitatively fitting psychophysical behaviour with biophysically-based circuit models, including reduced mean-field models, which have impeded such applications in the field. Critical challenges include the computational cost of simulation, a large number of parameters with unknown effective degeneracies on behaviour, and the treatment of noise in mean-field reductions. Future work, beyond the scope of the present study, is needed to bridge these gaps in relating circuit models to psychophysical behaviour.
Instead of direct model fitting, here we studied biophysically-based spiking circuit models for two primary purposes: to examine whether a behavioural phenomenon, such as pro-variance bias, can emerge from a regime of circuit dynamics, and through what dynamical circuit mechanisms; and to characterise how the phenomenon and its underlying dynamics are altered by modulation of neurobiologically grounded parameters, such as NMDAR conductance. The circuit modelling in this study demonstrates a set of mechanisms that is sufficient to produce the phenomenon of interest. This bottom-up mechanistic approach, which makes links to the physiological effects of pharmacology and makes testable predictions for neural recordings and perturbations, is complementary to top-down algorithmic modelling approaches.
Our pharmacological intervention experimentally verified the significance of NMDAR function for decision-making. In the spiking circuit model, NMDARs expressed on pyramidal cells are necessary for reverberatory excitation, without which evidence cannot be accumulated and stable working memory activity cannot be maintained. NMDARs on interneurons are necessary for maintaining background inhibition and preventing the circuit from reaching an attractor state prematurely (Murray et al., 2014; Wang, 2002). Administering ketamine, an NMDAR antagonist, induced specific short-term deficits in choice behaviour, which were consistent with a lowering of the cortical excitation/inhibition balance in the circuit model. This suggests that the NMDAR antagonist we administered systemically was primarily acting to inhibit neurotransmission onto pyramidal cells and weaken the recurrent connection strength across neurons. It is important to note that, in addition to its main role as an NMDAR antagonist, ketamine might also target other receptor sites (Chen et al., 2009; Zanos et al., 2016; Moaddel et al., 2013). However, of all receptors, ketamine has by far the highest affinity for the NMDAR (Frohlich and Van Horn, 2014). The effects of synaptic perturbations can be interpreted in terms of their net effect on E/I balance, at least to first order (Murray et al., 2014; Lam, 2017). For instance, in the circuit model, proportional NMDAR hypofunction on both E and I neurons maintains E/I balance and minimally impairs circuit computation, while the effect of disproportionate NMDAR hypofunction on E and I neurons is well captured by the direction of net change in E/I ratio (Figure 7—figure supplements 2 and 3). Given ketamine's high affinity for NMDARs, the effect of NMDAR hypofunction should predominantly determine the direction of E/I imbalance, and should not be counterbalanced by the effects of other perturbations.
Finally, systemic ketamine administration likely also alters other receptors and brain areas, which are beyond the scope of the microcircuit model in this study.
The physiological effects of NMDAR antagonism on in vivo cortical circuits remain an unresolved question. A number of studies have proposed a net cortical disinhibition through NMDAR hypofunction on inhibitory interneurons (Nakazawa et al., 2012; Krystal et al., 2003; Lisman et al., 2008; Lewis et al., 2012). The disinhibition hypothesis is supported by studies finding that NMDAR antagonists mediate an increase in the firing of prefrontal cortical neurons, in rodents (Jackson et al., 2004; Homayoun and Moghaddam, 2007) and monkeys (Ma et al., 2018; Ma et al., 2015; Skoblenick and Everling, 2012; Skoblenick et al., 2016). On the other hand, the effects of NMDAR antagonists on E/I balance may vary across neuronal subcircuits within a brain area. For instance, in a working memory task, ketamine was found to increase the spiking activity of response-selective cells, but decrease the activity of the task-relevant delay-tuned cells in primate prefrontal cortex (Wang et al., 2013). Such specificity might explain why several studies reported less conclusive effects of NMDAR antagonists on overall prefrontal firing rates in monkeys (Wang et al., 2013; Zick et al., 2018). In vitro work has also revealed that the excitatory postsynaptic potentials (EPSPs) of prefrontal pyramidal neurons are much more reliant on NMDAR conductance than those of parvalbumin interneurons (Rotaru et al., 2011). Other investigators combining neurophysiological recordings with modelling approaches have also concluded that the action of NMDAR antagonists is primarily upon pyramidal cells (Wang et al., 2013; Moran et al., 2015). Our present findings, integrating pharmacological manipulation of behaviour with biophysically-based spiking circuit modelling, suggest that the ketamine-induced behavioural biases are most consistent with a lowering of excitation/inhibition balance and a weakening of recurrent dynamics.
Future work with electrophysiological recordings during the performance of our task, under pharmacological interventions, could potentially dissociate the effect of ketamine on E/I balance specifically in cortical neurons exhibiting decision-related signals. Notably, the decision-making behaviours in our circuit model arise from attractor dynamics relying on unstructured interneurons to provide lateral feedback inhibition. Recent experiments found that, in mouse parietal cortex during a decision-making task, inhibitory parvalbumin (PV) interneurons, thought to provide feedback inhibition, may be as selective as excitatory pyramidal neurons (Najafi et al., 2020). Depending on the pattern and connectivity of their feedback projections to pyramidal neurons, such a circuit structure supports different forms of evidence accumulation in cortical circuits (Lim and Goldman, 2013). It remains to be seen how the pro-variance bias effect and the current predictions extend to circuit models with selective inhibitory interneurons.
The minutes-long timescale of the NMDAR-mediated decision-making deficit we observed was also consistent with the psychotomimetic effects of subanaesthetic doses of ketamine in healthy humans (Krystal et al., 1994; Krystal et al., 2003). As NMDAR hypofunction is hypothesised to play a role in the pathophysiology of schizophrenia (Kehrer et al., 2008; Olney and Farber, 1995; Krystal et al., 2003; Lisman et al., 2008), our findings have important clinical relevance. Previous studies have demonstrated impaired perceptual discrimination in patients with schizophrenia performing the random-dot motion (RDM) decision-making task (Chen et al., 2003; Chen et al., 2004; Chen et al., 2005). Although RDM tasks have been extensively used to study evidence accumulation (Gold and Shadlen, 2007), this performance deficit in schizophrenia was previously interpreted as reflecting a diminished representation of sensory evidence in visual cortex (Chen et al., 2003; Butler et al., 2008). Based on our task, with its precise temporal control of the stimuli, our findings suggest that NMDAR antagonism alters the decision-making process in association cortical circuits. Dysfunction in these association circuits may therefore make an important contribution to cognitive deficits, one that is potentially complementary to upstream sensory impairment. Crucially, our task uniquely allowed us to rigorously verify that the subjects used an accumulation strategy to guide their choices (cf. previous animal studies [Gold and Shadlen, 2007; Roitman and Shadlen, 2002; Hanks et al., 2015; Morcos and Harvey, 2016; Katz et al., 2016]), with these analyses suggesting the strategy our subjects employed was largely consistent with findings in human participants. This consistency further supports the translation of our findings across species, in particular to clinical populations.
Another related line of schizophrenia research has identified a decision-making bias known as jumping to conclusions (JTC) (Ross et al., 2015; Huq et al., 1988). The JTC bias has predominantly been demonstrated in the ‘beads task’, a paradigm in which participants are shown two jars of beads, one mostly pink and the other mostly green (typically 85%). The jars are hidden, and the participants are presented with a sequence of beads drawn from a single jar. Following each draw, they are asked whether they are ready to commit to a decision about which jar the beads are being drawn from. Patients with schizophrenia typically make decisions based on fewer beads than controls. Importantly, this JTC bias has been proposed as a mechanism for delusion formation. Based on the JTC literature, one plausible hypothesis for behavioural alteration under NMDAR antagonism in our task would be a strong increase in the primacy bias, whereby only the initially presented bar samples would be used to guide the subjects’ decisions. However, following ketamine administration, we did not observe strong primacy – instead, all samples received roughly the same weighting. There are important differences between our task and the beads task. In our task, the stimulus presentation is shorter (2 s, compared with slower sampling across bead draws) and is of fixed duration rather than terminated by the subject’s choice, and therefore may not involve the perceived sampling cost of the beads task (Ermakova et al., 2019).
Our precise experimental paradigm and complementary modelling approach allowed us to quantify exactly how monkeys weight time-varying evidence and to robustly dissociate sensory and decision-making deficits – unlike prior studies using the RDM and beads tasks. Our approach can be readily applied in experimental and clinical studies to yield insights into the nature of cognitive deficits and their potential underlying E/I alterations in pharmacological manipulations and pathophysiologies across neuropsychiatric disorders, such as schizophrenia (Wang and Krystal, 2014; Huys et al., 2016) and autism (Wang and Krystal, 2014; Yizhar et al., 2011; Lee et al., 2017; Marín, 2012). Finally, our study highlights how precise task design, combined with computational modelling, can yield translational insights across species, including through pharmacological perturbations, and across levels of analysis, from synapses to cognition.
Materials and methods
Subjects
Two adult male rhesus monkeys (M. mulatta), subjects A and H, were used. The subjects weighed 12–13.3 kg, and both were ~6 years old at the start of the data collection period. We regulated their daily fluid intake to maintain motivation in the task. All experimental procedures were approved by the UCL Local Ethical Procedures Committee and the UK Home Office (PPL Number 70/8842), and carried out in accordance with the UK Animals (Scientific Procedures) Act.
Behavioural protocol
Subjects sat head-restrained in a primate behavioural chair facing a 19-inch computer screen (1280 × 1024 px screen resolution, 60 Hz refresh rate) in a dark room. The monitor was positioned 59.5 cm away from their eyes, with the height set so that the centre of the screen aligned with neutral eye level for the subject. Eye position was tracked using an infrared camera (ISCAN ETL-200) sampled at 240 Hz. The behavioural paradigm was run in the MATLAB-based toolbox MonkeyLogic (http://www.monkeylogic.net/, Brown University) (Asaad and Eskandar, 2008a; Asaad and Eskandar, 2008b; Asaad et al., 2013). Eye position data were relayed to MonkeyLogic for use online during the task, and were recorded for subsequent offline analysis. Following successful trials, juice reward was delivered to the subject using a precision peristaltic pump (ISMATEC IPC). Subjects performed two types of behavioural sessions: standard and pharmacological. In pharmacological sessions, following a baseline period, either an NMDAR antagonist (ketamine) or saline was administered via intramuscular injection. Monkey A completed 41 standard sessions and 28 pharmacological sessions (15 ketamine; 13 saline). Monkey H completed 68 standard sessions and 35 pharmacological sessions (18 ketamine; 17 saline).
Injection protocol
Typically, two pharmacological sessions were performed each week, at least 3 days apart. Subjects received either a saline or ketamine injection into the trapezius muscle while seated in the primate chair. Approximately 12 min into the session, local anaesthetic cream was applied to the muscle. At 28 min, the injection was administered. The task was briefly paused for this intervention (64.82 ± 10.85 secs). Drug dose was determined through extensive piloting, and a review of the relevant literature (Wang et al., 2013; Blackman et al., 2013). The dose used was 0.5 mg/kg.
Task
Subjects were trained to perform a two-alternative value-based decision-making task. A series of bars, each with different heights, were presented on the left and right side of the computer monitor. Following a post-stimulus delay, subjects were rewarded for saccading towards the side with either the taller or shorter average bar-height, depending upon a contextual cue displayed at the start of the trial (see Figure 1A inset). The number of pairs of bars in each series was either four (‘4-Sample-Trial’) or eight (‘8-Sample-Trial’) during trials in each standard behavioural session. In this report, we only consider the results from the eight-sample trials, though similar results were obtained from the four-sample trials. The number of pairs of bars was always six during pharmacological sessions.
The bars were presented inside fixed-height rectangular placeholders (width, 84px; height, 318px). The placeholders had a black border (thickness, 9px) and a grey centre where the stimuli were presented (width, 66px; height, 300px). The bar heights could take discrete percentiles, occupying between 1% and 99% of the grey space. The height of the bar was indicated by a horizontal black line (thickness, 6px). Beneath the black line, there was 45° grey Gabor shading.
An overview of the trial timings is outlined in Figure 1A. Subjects initiated a trial by maintaining their gaze on a central red fixation point for 750 ms. After this fixation was completed, one of four contextual cues (see Figure 1A inset) was centrally presented for 350 ms. Subjects had previously learned that two of these cues instructed them to choose the side with the taller average bar-height (‘Choose-Tall’ trial), and the other two instructed them to choose the side with the shorter average bar-height (‘Choose-Short’ trial). Next, two black masks (width, 84px; height, 318px) were presented for 200 ms in the location of the forthcoming bar stimuli. These were positioned either side of the fixation spot (6° visual angle from centre). Each bar stimulus was presented for 200 ms, followed by a 50 ms inter-stimulus interval where only the fixation point remained on the screen. Once all of the bar stimuli had been presented, the mask stimuli returned for a further 200 ms. There was then a post-stimulus delay (250–750 ms, uniformly sampled across trials). Following this, the colour of the fixation point changed to green (go cue), and two circular saccade targets appeared on each side of the screen where the bars had previously been presented. This cued the subject to indicate their choice by making a saccade to one of the targets. Once the subject reported their decision, there were two stages of feedback. Immediately following choice, the green go cue was extinguished, and the contextual cue was re-presented centrally, along with the average bar heights of the two series of stimuli previously presented. The option the subject chose was indicated by a purple outline surrounding the relevant bar placeholder (width, 3.8°; height, 10°). After 500 ms, the second stage of feedback began. The correct answer was indicated by a white outline surrounding the bar placeholder (width, 5.7°; height, 15°).
On correct trials, the subject was rewarded for a length of time proportional to the average height of the chosen option (directly proportional on a ‘Choose-Tall’ trial, inversely proportional on a ‘Choose-Short’ trial). On incorrect trials, there was no reward. Regardless of the reward amount, the second feedback stage lasted 1200 ms. This was followed by an inter-trial interval (1.946 ± 0.051 secs, for standard sessions, across all completed included trials). The inter-trial interval was longer on ‘4-Sample-Trials’ than ‘8-Sample-Trials’, so that trials were of equal duration, facilitating a similar reward rate between the two conditions.
Subjects were required to maintain central fixation from the fixation period until they indicated their choice. If the initial fixation period was not completed, or fixation was subsequently broken, the trial was aborted and the subject received a 3000 ms timeout (Trials in standard sessions: Monkey A – 22.46%, Monkey H – 15.27%). On the following trial, the experimental condition was not repeated. If subjects failed to indicate their choice within 8000 ms, a 5000 ms timeout was initiated (Trials in standard sessions: Monkey A – 0%, Monkey H – 0%).
Experimental conditions were blocked according to the contextual cue and evidence length. This produced four block types: Choose-Tall-4-Sample (T4), Choose-Tall-8-Sample (T8), Choose-Short-4-Sample (S4), and Choose-Short-8-Sample (S8). At the start of each session, subjects performed a short block of memory-guided saccades (MGS) (Hikosaka and Wurtz, 1983), completing 10 trials. Data from these trials are not presented in this report. Following the MGS block, the first block of decision-making trials was selected at random. After the subject completed 15 trials in a block, a new block was selected without replacement. Each new block had to have either the same evidence length or the same contextual cue as the previous block. After all four blocks had been completed, there was another interval of MGS trials. A new starting block of evidence accumulation trials was then randomly selected. As there were four block types, and either the evidence length or the contextual cue had to be preserved across a block switch, there were two ‘sequences’ in which the blocks could transition (i.e. T4→T8→S8→S4, or T4→S4→S8→T8, if starting from T4). Following the intervening MGS trials, the blocks transitioned in the opposite sequence to that used previously, starting from the new randomly chosen block. This block-switching protocol was continued throughout the session. At the start of each block, the background of the screen was changed for 5000 ms to indicate the evidence length of the forthcoming block: a burgundy colour indicated an eight-sample block was beginning; a teal colour indicated a four-sample block was beginning.
Trial generation
The heights of the bars on each trial were precisely controlled. On the majority of trials (‘Regular’ trials; completed trials in standard sessions: Monkey A – 76.67%, Monkey H – 76.23%), the heights of each option were generated from independent Gaussian distributions (Figure 4A,B). There were two levels of variance for the distributions, designated ‘Narrow’ and ‘Broad’. The mean of each distribution, μ, was calculated as $\mu = 50 + Z \times \sigma$, where $Z \sim \mathcal{U}(-0.25, 0.25)$, and $\sigma$ was either 12 or 24 for narrow and broad stimulus streams respectively. The individual bar heights were then drawn from $\mathcal{N}(\mu, \sigma)$. The trial generation process was constrained so that the samples reasonably reflected the generative parameters. These restrictions required bar heights to range from 1 to 99, and the actual σ of each stream to be no more than 4 from the generative value. On any given trial, subjects could be presented with two narrow streams, two broad streams, or one of each. The evidence variability was therefore independent between the two streams. For post-hoc analysis (Figure 4) we defined one stream as the ‘Lower SD’ option on each trial, and the other as the ‘Higher SD’ option, based upon the sampled/actual $\sigma$.
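The generative procedure above can be sketched as a rejection sampler. This is our own reconstruction for illustration: the function name, the use of the sample standard deviation, and the exact rejection details are assumptions, not the original implementation.

```python
import numpy as np

def generate_stream(sigma, n_samples=8, rng=None, max_tries=10_000):
    """Draw one evidence stream under the 'Regular'-trial constraints:
    mu = 50 + Z*sigma with Z ~ U(-0.25, 0.25), bar heights in [1, 99],
    and sample SD within 4 of the generative sigma (sketch only)."""
    rng = rng or np.random.default_rng()
    for _ in range(max_tries):
        mu = 50 + rng.uniform(-0.25, 0.25) * sigma
        heights = rng.normal(mu, sigma, n_samples)
        if (heights.min() >= 1 and heights.max() <= 99
                and abs(heights.std(ddof=1) - sigma) <= 4):
            return np.round(heights)  # heights are rounded for display
    raise RuntimeError("no valid stream found")

rng = np.random.default_rng(0)
narrow = generate_stream(12, rng=rng)  # 'Narrow' stream
broad = generate_stream(24, rng=rng)   # 'Broad' stream
```

Because the two streams are generated independently, a trial can pair any combination of narrow and broad streams, which is what allows the post-hoc 'Lower SD' vs 'Higher SD' labelling.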
A proportion of ‘decision-bias trials’ were also specifically designed to elucidate the effects of evidence variability on choice, and whether subjects displayed primacy/recency biases (Tsetsos et al., 2012). These trials occurred in equal proportions within all four block types. Only one of these decision-bias trial types was tested in each behavioural session.
Narrow-broad trials (completed trials in standard sessions: Monkey A – 14.87%, Monkey H – 15.78%) probed the effect of evidence variability on choice (Tsetsos et al., 2012). Within this category of trials, there were three conditions (Figure 3A). In each, the bar heights of one alternative were associated with a narrow Gaussian distribution ($\mathcal{N}(\mu_{N}, 12)$), and the bar heights of the other with a broad Gaussian distribution ($\mathcal{N}(\mu_{B}, 24)$). In the first two conditions, ‘Narrow Correct’ ($\mu_{N} \sim \mathcal{U}(48, 60)$, $\mu_{B} = \mu_{N} - 8$) and ‘Broad Correct’ ($\mu_{B} \sim \mathcal{U}(48, 60)$, $\mu_{N} = \mu_{B} - 8$), there was a clear correct answer. In the third condition, ‘Ambiguous’ ($\mu_{B} \sim \mathcal{U}(44, 56)$, $\mu_{N} = \mu_{B}$), there was only weak evidence in favour of the correct answer. In all of these conditions, the generated samples had to have a standard deviation within 4 of the generating σ. Furthermore, on ‘Narrow Correct’ and ‘Broad Correct’ trials the difference between the mean evidence of the intended correct and incorrect streams had to range from +2 to +14. On the ‘Ambiguous’ trials, the mean evidence in favour of one option over the other was constrained to be < 4. A visualisation of the net evidence in each of these trial types is displayed (Figure 3A). For the purposes of illustration, the probability density was smoothed by a sliding window of ±1, within the generating constraints described above (‘Narrow Correct’ and ‘Broad Correct’ trials have net evidence for the correct option within [2, 14]; ‘Ambiguous’ trials have net evidence within [−4, 4]). A very small number of trials were excluded from this visualisation because their net evidence fell marginally outside the constraints. This was because bar heights were rounded to the nearest integer (due to the limited number of pixels on the computer monitor) after the generating procedure, and the plot reflects the presented bar heights.
Half-half trials (completed trials in standard sessions: Monkey A – 8.46%, Monkey H – 8.00%) probed the effect of temporal weighting biases on choice (Tsetsos et al., 2012). The heights of each option were generated using the same Gaussian distribution ($X \sim \mathcal{N}(\mu_{HH}, 12)$, where $\mu_{HH} \sim \mathcal{U}(40, 60)$). This distribution was truncated to form two distributions: $X_{Tall}$, on $[\mathrm{mean}(X) - 0.5 \cdot \mathrm{SD}(X), \infty)$, and $X_{Short}$, on $(-\infty, \mathrm{mean}(X) + 0.5 \cdot \mathrm{SD}(X)]$. On each trial, one option was designated ‘Tall-First’, where the first half of bar heights was drawn from $X_{Tall}$ and the second half from $X_{Short}$. This process was also constrained so that the mean of the samples drawn from $X_{Tall}$ had to be at least 7.5 greater than that of those taken from $X_{Short}$. The other option was ‘Short-First’, where the samples were drawn from the two distributions in the reverse order.
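A sketch of the half-half generation, again our own reconstruction: the truncated distributions are approximated here by rejection from a large Gaussian sample, and the function name and pool size are illustrative assumptions.

```python
import numpy as np

def half_half_stream(n_samples=8, rng=None):
    """Sketch of 'Tall-First' half-half generation: draw from one
    Gaussian truncated half an SD below/above its mean, keeping
    streams where the tall half exceeds the short half by >= 7.5."""
    rng = rng or np.random.default_rng()
    while True:
        mu = rng.uniform(40, 60)
        x = rng.normal(mu, 12, 10_000)
        tall_pool = x[x >= mu - 6]    # X_Tall: truncated at mean - 0.5*SD
        short_pool = x[x <= mu + 6]   # X_Short: truncated at mean + 0.5*SD
        half = n_samples // 2
        tall = rng.choice(tall_pool, half)
        short = rng.choice(short_pool, half)
        if tall.mean() - short.mean() >= 7.5:
            # tall samples first, then short: the 'Tall-First' option
            return np.concatenate([tall, short])
```

The 'Short-First' option simply reverses the order of the two halves, so both options share the same generative mean while differing in when the taller samples arrive.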
Task modifications for pharmacological sessions
Minor adjustments were made to the task during the pharmacological sessions to maximise the trial counts available for statistical analysis. Trial length was fixed at six pairs of samples. The block was switched between ‘Choose-Tall-6-Sample’ and ‘Choose-Short-6-Sample’ after 30 completed trials, without intervening MGS trials. From our pilot data, it was clear that ketamine reduced choice accuracy. In order to maintain subject motivation, the most difficult ‘Regular’ and ‘Half-Half’ trials were not presented. Following the trial generation procedures described above, in pharmacological sessions these trials were additionally required to have a mean difference in evidence strength > 4. Of the ‘Narrow-Broad’ trials, only ‘Ambiguous’ conditions were used, with no further constraints applied to these trials. In some sessions, a small number of control trials were used, in which the bar heights for each option were fixed across all of the samples. All analyses utilised ‘Regular’, ‘Half-Half’, and ‘Narrow-Broad’ trials. Monkey H did not always complete sufficient trials once ketamine was administered. Sessions where the number of completed trials was fewer than the minimum recorded in the saline sessions were discarded (6 of 18 sessions). Following ketamine administration, Monkey A never completed fewer trials in a session than the minimum recorded in a saline session.
Behavioural data analysis
To assess decision-making accuracy during standard sessions, we initially fitted a psychometric function (Kiani et al., 2008; Roitman and Shadlen, 2002) to subjects’ choices pooled across ‘Regular’ and ‘Narrow-Broad’ trials (Figure 2A–B). This defines the choice accuracy ($P$) as a function of the difference in mean evidence in favour of the correct choice (evidence strength, $x$):

$$P = 1 - 0.5\,\mathrm{exp}\left(-\left(\frac{x}{\alpha}\right)^{\beta}\right)$$
where α and β are respectively the discrimination threshold and order of the psychometric function, and $\mathrm{exp}$ is the exponential function. To illustrate the effect of pro-variance bias, we also fitted a three-parameter psychometric function to the subjects’ probability of choosing the higher SD option (${P}_{HSD}$) in the ‘Regular’ trials, as a function of the difference in mean evidence in favour of the higher SD option on each trial (${x}_{HSD}$):
where $\delta$ is the psychometric function shift, and $\mathrm{sign}$ returns +1 and −1 for positive and negative inputs respectively. To be explicitly clear, on ‘Choose-Tall’ trials, the mean evidence in favour of the higher SD option was calculated by subtracting the mean bar height of the lower SD option from that of the higher SD option. On ‘Choose-Short’ trials, it was calculated by subtracting [100 − mean bar height of the lower SD option] from [100 − mean bar height of the higher SD option].
In both cases, the psychometric function was fitted using the method of maximum-likelihood estimation (MLE), with the estimator

$$\hat{\alpha}, \hat{\beta} = \underset{\alpha, \beta}{\operatorname{argmax}} \sum_{i} \left[\mathbb{1}_{i} \log P\left(x_{i}\right) + \left(1 - \mathbb{1}_{i}\right) \log\left(1 - P\left(x_{i}\right)\right)\right]$$
(and similarly for ${P}_{HSD}$ and ${x}_{HSD}$), where $i$ is summed across trials. $\mathbb{1}_{i}=1$ if the correct (higher SD) option is chosen in trial $i$ and 0 otherwise.
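For illustration, this MLE fit can be sketched in a few lines of Python. The Weibull-style form $P(x) = 1 - 0.5\,e^{-(x/\alpha)^{\beta}}$ is assumed from the parameter definitions above, the data are synthetic, and scipy's Nelder-Mead search stands in for the actual fitting routine:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def p_correct(x, alpha, beta):
    # Assumed two-parameter psychometric: threshold alpha, order beta
    return 1.0 - 0.5 * np.exp(-(x / alpha) ** beta)

x = rng.uniform(0.5, 20, size=4000)                   # evidence strengths
choices = rng.random(4000) < p_correct(x, 6.0, 1.3)   # simulated choices

def neg_log_lik(params):
    alpha, beta = np.abs(params)   # keep parameters positive during search
    p = np.clip(p_correct(x, alpha, beta), 1e-9, 1 - 1e-9)
    return -np.sum(choices * np.log(p) + (~choices) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[4.0, 1.0], method="Nelder-Mead")
alpha_hat, beta_hat = np.abs(fit.x)
```

With 4000 simulated trials the generative parameters (α = 6, β = 1.3) are recovered to within sampling error.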
The temporal weights of stimuli were calculated using logistic regression. This function defined the probability (${P}_{L}$) of choosing the left option:

$$P_{L} = \frac{1}{1 + \mathrm{exp}\left(-\left({\beta}_{0}^{\text{'}} + \sum_{n=1}^{N} {\beta}_{n}^{\text{'}}\left(L_{n} - R_{n}\right)\right)\right)}$$
where ${\beta}_{0}^{\text{'}}$ is a bias term, ${\beta}_{n}^{\text{'}}$ reflects the weighting given to the $n$th pair of stimuli, and ${L}_{n}$ and ${R}_{n}$ reflect the evidence for the left and right option at each time point.
Regression analysis was used to probe the influence of evidence mean and evidence variability on choice during the ‘Regular’ trials (Figures 4D, 5F, 6C, 7F–H and 8D–F, Figure 4—figure supplement 1D,G, Figure 8—figure supplement 1C,H). This function defined the probability (${P}_{L}$) of choosing the left option:

$$P_{L} = \frac{1}{1 + \mathrm{exp}\left(-\left({\beta}_{0} + {\beta}_{1}\left(\mathrm{mean}_{L} - \mathrm{mean}_{R}\right) + {\beta}_{2}\left(\mathrm{SD}_{L} - \mathrm{SD}_{R}\right)\right)\right)}$$
where ${\beta}_{0}$ is a bias term, ${\beta}_{1}$ reflects the influence of evidence mean, and ${\beta}_{2}$ reflects the influence of standard deviation of evidence (evidence variability).
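A sketch of this regression on synthetic choices, recovering the two coefficients by maximum likelihood (scipy's BFGS stands in for the actual fitting routine; the generative coefficients are made up). The ratio of the variability coefficient to the mean coefficient is reported as a pro-variance measure, in the spirit of the PVB index analysed later:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 6000
d_mean = rng.normal(0, 5, n)     # left - right mean evidence
d_sd = rng.normal(0, 3, n)       # left - right evidence SD
b0, b1, b2 = 0.1, 0.30, 0.12     # generative "true" coefficients (made up)
p_left = 1 / (1 + np.exp(-(b0 + b1 * d_mean + b2 * d_sd)))
chose_left = rng.random(n) < p_left

def nll(beta):
    # Negative log-likelihood of the logistic choice model
    z = beta[0] + beta[1] * d_mean + beta[2] * d_sd
    p = np.clip(1 / (1 + np.exp(-z)), 1e-9, 1 - 1e-9)
    return -np.sum(chose_left * np.log(p) + (~chose_left) * np.log(1 - p))

beta_hat = minimize(nll, x0=np.zeros(3), method="BFGS").x
pvb_index = beta_hat[2] / beta_hat[1]   # variability weight / mean weight
```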
This approach was extended to probe other potential influences on the decisionmaking process. An expanded regression model was defined as follows:
where ${\beta}_{0}$ is a bias term, ${\beta}_{1}$ reflects the influence of the evidence mean of the left samples, ${\beta}_{2}$ the influence of the evidence variability of the left samples, ${\beta}_{3}$ the influence of the maximum left sample, ${\beta}_{4}$ the influence of the minimum left sample, ${\beta}_{5}$ the influence of the first left sample, and ${\beta}_{6}$ the influence of the last left sample. ${\beta}_{7}$ to ${\beta}_{12}$ reflect the same attributes for samples on the right side of the screen. Due to strong correlations among evidence standard deviation, maximum, and minimum, the regression model without ${\beta}_{2}$ and ${\beta}_{8}$ was used to evaluate the contribution of regressors other than evidence mean and standard deviation to the decision-making process (Figure 4—figure supplement 1E,H, Figure 5—figure supplement 1B, Figure 6—figure supplement 1B, Figure 7—figure supplement 1B, Figure 8—figure supplement 1D,I).
To explore whether the subjects demonstrated a frequent-winner bias (Tsetsos et al., 2012), whereby they prefer to choose options that more frequently have the greater evidence across samples, we used a regression approach (Figure 4—figure supplement 3). The regression equation defined the probability (${P}_{L}$) of choosing the left option:
where ${\beta}_{0}$ is a bias term, ${\beta}_{1}$ reflects the influence of evidence mean, and ${\beta}_{2}$ reflects the influence of local winners (frequent-winner bias). The number of local wins for each option ranges between 0 and 8, and is the number of times that the momentary evidence is stronger for that option. As an example, consider a trial where the evidence values were Left: [50 55 56 48 80 45 30 50], Right: [55 48 90 34 70 50 50 70]. Here, there would be 3 local wins for the left option and 5 for the right option.
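The worked example translates directly into a minimal counter:

```python
# Evidence values from the worked example in the text
left = [50, 55, 56, 48, 80, 45, 30, 50]
right = [55, 48, 90, 34, 70, 50, 50, 70]

def local_wins(a, b):
    """Number of samples where option `a` has the stronger momentary evidence."""
    return sum(ai > bi for ai, bi in zip(a, b))

wins_left, wins_right = local_wins(left, right), local_wins(right, left)
# -> 3 local wins for the left option, 5 for the right, as in the text
```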
To control for possible lapse effects induced by ketamine, whereby the animal responded randomly regardless of trial difficulty, the behavioural models described above were extended to include an extra ‘lapse parameter’, ${Y}_{0}$. The purpose of this parameter was to quantify the frequency of lapses, and to isolate the effect of lapsing from our other analyses of interest (i.e. the effect of ketamine on PVB index). In other words, the lapse rate refers to the asymptotic error rate in the limit of strong evidence. Equations 4–6 were extended as follows:
The models including a lapse term (Equations 8–10) were fitted via maximum-likelihood estimation (using the fminsearch algorithm in MATLAB), using the following cost function:
where $i$ is summed across trials. $\mathbb{1}_{i}=1$ if the left option is chosen in trial $i$ and 0 otherwise. $\lambda$ is an L2 regularisation constant, which was set to 0.01. Bootstrapping was used to generate error estimates for the parameters of these models (10,000 iterations). As our analyses demonstrate that the animals very rarely lapsed when administered saline, we did not deem it necessary to apply the lapsing models to the standard session experiment (i.e. Figures 2–6).
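A sketch of such a cost function in Python. The mixture form $P' = Y_0/2 + (1 - Y_0)P$ (lapses produce a random guess) and penalising only the regression weights with the L2 term are assumptions about how the equations were extended; λ = 0.01 follows the text:

```python
import numpy as np

def lapse_cost(params, d_mean, d_sd, chose_left, lam=0.01):
    """Negative log-likelihood of a lapse-mixed logistic model plus an
    L2 penalty on the regression weights (lambda = 0.01, as in the text)."""
    y0, b0, b1, b2 = params
    p = 1 / (1 + np.exp(-(b0 + b1 * d_mean + b2 * d_sd)))
    p = y0 / 2 + (1 - y0) * p            # on a lapse, the animal guesses
    p = np.clip(p, 1e-9, 1 - 1e-9)
    nll = -np.sum(chose_left * np.log(p) + (1 - chose_left) * np.log(1 - p))
    return nll + lam * (b0**2 + b1**2 + b2**2)

# Toy check: with strong evidence, a non-zero lapse rate worsens the fit
d_mean = np.array([10.0, -10.0])
d_sd = np.zeros(2)
chose_left = np.array([1.0, 0.0])
cost_no_lapse = lapse_cost((0.0, 0.0, 1.0, 0.0), d_mean, d_sd, chose_left)
cost_lapse = lapse_cost((0.5, 0.0, 1.0, 0.0), d_mean, d_sd, chose_left)
```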
To visualise the influence of lapsing upon the psychometric functions, and to allow a comparison between the monkey behaviour and circuit model performance, we extended Equation 2:
Here, Y_{0} was a fixed parameter according to the lapse rate calculated from the relevant monkey’s behavioural data.
The goodness-of-fit of various regression models, with combinations of the predictors in the full model (Equation 6), was compared using a 10-fold cross-validation procedure (Supplementary files 1–4). Trials were initially divided into 10 groups. Data from 9 of the groups were used to train each regression model and calculate regression coefficients. The likelihood of the subjects’ choices in the left-out group (testing group), given the regression coefficients, could then be determined. The log-likelihood was then summed across these left-out trials. This process was repeated so that each of the 10 groups acted as the testing group. The whole cross-validation procedure was performed 100 times, and the average log-likelihood values were taken.
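One round of this procedure can be sketched as follows. For brevity the "model" here is a one-parameter logistic fitted by a crude grid search on synthetic data, standing in for the full regression models of the text:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(0, 5, n)
y = (rng.random(n) < 1 / (1 + np.exp(-0.4 * x))).astype(float)

def fit_b(x_tr, y_tr):
    """Crude grid-search MLE for a one-parameter logistic model."""
    grid = np.linspace(-1, 1, 201)
    p = np.clip(1 / (1 + np.exp(-np.outer(grid, x_tr))), 1e-9, 1 - 1e-9)
    ll = (y_tr * np.log(p) + (1 - y_tr) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(ll)]

folds = np.arange(n) % 10          # assign each trial to one of 10 groups
rng.shuffle(folds)
cv_ll = 0.0
for k in range(10):
    train, test = folds != k, folds == k
    b = fit_b(x[train], y[train])                      # train on 9 groups
    p = np.clip(1 / (1 + np.exp(-b * x[test])), 1e-9, 1 - 1e-9)
    # sum the held-out log-likelihood across the testing group
    cv_ll += np.sum(y[test] * np.log(p) + (1 - y[test]) * np.log(1 - p))
```

In the actual analysis this whole procedure was repeated 100 times and the resulting log-likelihoods averaged.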
To initially explore the time course of drug effects on decision-making, we plotted choice accuracy (combined across ‘Regular’, ‘Half-Half’ and ‘Narrow-Broad’ trials) relative to drug administration (Figure 8A). Trials were binned relative to the time of injection. Within each session, choice accuracy was estimated at every minute, using a 6 min window around the bin centre. Accuracy was then averaged across sessions. To further probe the influence of drug administration on decision-making, we defined an analysis window based upon the time course of behavioural effects. All trials before the time of injection were classified as ‘pre-drug’. All trials beginning 5–30 min after injection were defined as ‘on-drug’ trials. These trials were then analysed using the same methods as described for the standard sessions.
To quantify the effect of ketamine administration on the PVB index (Figure 8F, Figure 8—figure supplement 1C,H), we performed a permutation test. Trials collected during ketamine administration were compared with those collected during saline administration. The test statistic was the difference between the PVB index in the ketamine and saline conditions. For each permutation, trials from the two sets of data were pooled together, before two shuffled sets with the same numbers of trials as the original ketamine and saline data were extracted. Next, the PVB index was computed in each permuted set, and the difference between the two PVB indices calculated. The difference measure for each permutation was used to build a null distribution with 1,000,000 entries. The difference measure from the true data was compared with the null distribution to calculate a p-value. For the models including a lapse term (Figure 8—figure supplement 2), the same test was performed with 10,000 permutations.
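The permutation logic can be sketched as follows; for brevity the statistic here is a simple mean difference on synthetic per-trial values standing in for the PVB index, and only 2000 permutations are used:

```python
import numpy as np

rng = np.random.default_rng(4)
ketamine = rng.normal(1.0, 1.0, 200)   # stand-in per-trial measures (synthetic)
saline = rng.normal(0.0, 1.0, 200)
true_diff = ketamine.mean() - saline.mean()

pooled = np.concatenate([ketamine, saline])
null = np.empty(2000)
for i in range(2000):
    rng.shuffle(pooled)                # pool, then re-split into two sets
    null[i] = pooled[:200].mean() - pooled[200:].mean()
# p-value: fraction of permuted differences at least as extreme as the true one
p_value = np.mean(np.abs(null) >= np.abs(true_diff))
```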
We later revisited the time course of drug effects by running our regression analyses at each of the binned windows described above (Figure 8—figure supplement 3). To calculate the time window in which a parameter differed between ketamine and saline conditions, we used a cluster-based permutation test (Nichols and Holmes, 2002; Cavanagh et al., 2018; Cavanagh et al., 2016). These tests allowed us to correct for multiple comparisons while assessing the significance of time series data. The difference in the parameter of interest (PVB index) was calculated in the true data for each timepoint. All consecutive timepoints at which this statistic exceeded a threshold ($\mathrm{PVB}_{\mathrm{Saline}} - \mathrm{PVB}_{\mathrm{ketamine}} \ge 0.15$) were designated as a ‘cluster’. The sizes of the clusters were compared to a null distribution constructed using a permutation test. The drug administered (ketamine or saline) in each session was randomly permuted 10,000 times and the cluster analysis was repeated for each permutation. The size of the largest cluster for each permutation was entered into the null distribution. The true cluster size was significant at the p < 0.05 level if the true cluster length exceeded the 95th percentile of the null distribution.
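The cluster-finding step reduces to locating runs of consecutive supra-threshold timepoints; the null distribution is then built by applying the same function to label-shuffled data. A sketch with an illustrative time series and the 0.15 threshold from the text:

```python
import numpy as np

def largest_cluster(exceeds):
    """Length of the longest run of consecutive True values."""
    best = run = 0
    for e in exceeds:
        run = run + 1 if e else 0
        best = max(best, run)
    return best

# Illustrative PVB-difference time series (not real data)
stat = np.array([0.0, 0.2, 0.3, 0.18, 0.1, 0.16, 0.2, 0.05])
true_cluster = largest_cluster(stat >= 0.15)   # longest supra-threshold run
```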
Spiking circuit model
A biophysically-based spiking circuit model was used to replicate decision-making dynamics in a local association cortical microcircuit. The model was based on Wang, 2002, but with minor modifications from a previous study (Lam, 2017). The current model had one extra change in the input representation of the stimulus, described in detail below.
The circuit model consisted of ${N}_{E}=1600$ excitatory pyramidal neurons and ${N}_{I}=400$ inhibitory interneurons, all simulated as leaky integrateandfire neurons. All neurons were recurrently connected to each other, with NMDA and AMPA conductances mediating excitatory connections, and GABA_{A} conductances mediating inhibitory connections. All neurons also received background inputs, while selective groups of excitatory neurons (see below) received stimulus inputs. Both background and stimulus inputs were mediated by AMPA conductances with Poisson spike trains.
Within the population of excitatory neurons were two non-overlapping groups of size ${N}_{E,G}=240$. Neurons within the two groups received separate inputs reflecting the left and right stimulus streams. Neurons in the same group preferentially connected to each other (with a multiplicative factor ${w}_{+}>1$ applied to the connection strength), allowing integration of the stimulus input. The connection strength to any other excitatory neuron was reduced by a factor ${w}_{-}<1$ in a manner that preserved the total connection strength. Due to lateral inhibition mediated by interneurons, excitatory neurons in the two different groups competed with each other. Inhibitory neurons, as well as excitatory neurons outside the two groups, were insensitive to the presented stimuli and were non-selective toward either choice or the respective neuron groups.
Momentary bar-stimulus evidence was modelled as Poisson inputs (from an upstream sensory area) to the two groups of excitatory neurons (Figure 5A). The mean rate of Poisson input for either group, µ, scaled linearly with the corresponding stimulus evidence:

$$\mu = {\mu}_{0} + {\mu}^{\prime}\left(h - 50\right)$$
where $h\in \left[0,100\right]$ represented the momentary stimulus evidence, equal to the bar height in ‘Choose-Tall’ trials, and 100 minus the bar height in ‘Choose-Short’ trials. ${\mu}_{0}=30\,\mathrm{Hz}$ was the input strength when $h=50$, and ${\mu}^{\prime}=1\,\mathrm{Hz}$. For simplicity, we assumed each bar stimulus lasted 250 ms, rather than 200 ms with a subsequent 50 ms inter-stimulus interval as in the experiment.
The circuit model simulation outputs spike data for the two excitatory populations, which are then converted to population activity smoothed with a 0.001 s time step via a causal exponential filter. In particular, for each spike of a given neuron, the histogram bins corresponding to times before that spike receive no weight, while the histogram bins corresponding to times after the spike receive a weight of $\frac{1}{{\tau}_{\text{filter}}}\mathrm{exp}\left(-\frac{\mathrm{\Delta}t}{{\tau}_{\text{filter}}}\right)$, where $\mathrm{\Delta}t$ is the time of the histogram bin after the spike, and ${\tau}_{\text{filter}}=20\,\mathrm{ms}$.
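This filter is equivalent to convolving the binned spike train with a one-sided exponential kernel, sketched below with toy spike times:

```python
import numpy as np

tau = 0.020                          # 20 ms filter constant, as in the text
dt = 0.001                           # 1 ms histogram bins
kernel_t = np.arange(0, 10 * tau, dt)
kernel = (1.0 / tau) * np.exp(-kernel_t / tau)   # (1/tau)*exp(-dt/tau) weights

spike_times = np.array([0.050, 0.052, 0.120])    # seconds (toy data)
bins = np.zeros(300)
bins[np.round(spike_times / dt).astype(int)] += 1
# Causal convolution: a spike contributes only to later (and its own) bins
rate = np.convolve(bins, kernel)[: len(bins)]
```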
From the population activity of the two excitatory populations, a choice is selected 2 s after stimulus offset, based on the population with the higher activity. Stimulus inputs generally drive categorical, winner-take-all competition, such that the winning population ramps up its activity to a high attractor state (>30 Hz, in comparison to an approximately 1.5 Hz baseline firing rate), while suppressing the activity of the other population below baseline via lateral inhibition (Figure 5B). It is also possible that neither population reaches the high-activity state; both populations then remain at the spontaneous state with similarly low activities, such that the decision read-out is random.
In addition to the control model, three perturbed spiking circuit models were considered (Murray et al., 2014; Lam, 2017): lowered E/I balance, elevated E/I balance, and sensory deficit. E/I perturbations were implemented through hypofunction of NMDARs (Figure 7A), as this is a leading hypothesis in the pathophysiology of schizophrenia (Nakazawa et al., 2012; Kehrer et al., 2008; Lisman et al., 2008). NMDAR antagonists such as ketamine also provide a leading pharmacological model of schizophrenia (Krystal et al., 1994; Krystal et al., 2003). NMDAR hypofunction on excitatory neurons (reduced ${G}_{E\to E}$) resulted in a lowered E/I ratio, whereas NMDAR hypofunction on interneurons (reduced ${G}_{E\to I}$) resulted in an elevated E/I ratio due to disinhibition (Lam, 2017). Sensory deficit was implemented as weakened scaling of external inputs to stimulus evidence, that is a reduced ${\mu}^{\prime}$. For the exact parameters, the lowered E/I model reduced ${G}_{E\to E}$ by 1.3125%, the elevated E/I model reduced ${G}_{E\to I}$ by 2.625%, and the sensory deficit model had a sensory deficit of 20% (such that ${\mu}^{\prime}$ was reduced by 20%) (Figure 7, Figure 7—figure supplement 1). The ${G}_{E\to E}$ reduction parameter was chosen as the perturbation strength that best fitted the effect of ketamine on the monkeys’ behavioural alteration (Figure 8—figure supplements 4, 5). The ${G}_{E\to I}$ reduction and sensory deficit parameters were chosen to match the reduction of the mean evidence regression coefficient in the ${G}_{E\to E}$ perturbation (Figure 7—figure supplements 2, 4).
The control circuit model completed 94,000 ‘Regular’ trials, where both streams were narrow in 25% of the trials, both streams were broad in 25% of the trials, and one stream was narrow and one was broad in 50% of the trials (Figure 5, Figure 5—figure supplements 1 and 2). All trials were generated identically as in standard session experiments. The control model also completed 47,000 standard session NarrowBroad trials. To evaluate the effect of circuit perturbations, the control model, the lowered E/I model, the elevated E/I model, and the sensory deficit model all completed an identical set of 40,000 ‘Regular’ trials, where both streams were narrow in 25% of the trials, both streams were broad in 25% of the trials, and one stream was narrow and one was broad in 50% of the trials (Figure 7, Figure 7—figure supplement 1). The same permutation test described earlier for comparing PVB index between ketamine and saline conditions was also used to quantify whether various perturbed circuit models have different PVB indices relative to the control model (Figure 7H).
Testing the versatility of model predictions
To examine the versatility of the model predictions for perturbation effects on the pro-variance bias, we parametrically reduced both ${G}_{E\to E}$ and ${G}_{E\to I}$ concurrently, by {0%, 0.4375%, 0.875%, 1.3125%, 1.75%, 2.1875%, 2.625%} for ${G}_{E\to E}$ and {0%, 0.875%, 1.75%, 2.625%, 3.5%, 4.375%, 5.25%} for ${G}_{E\to I}$ (Figure 7—figure supplements 2, 3). In addition, we also parametrically varied the sensory deficit, using values of {0%, 5%, 10%, 15%, 20%, 25%, 30%, 35%, 40%, 45%, 50%} (Figure 7—figure supplement 4). 12,000 ‘Regular’ trials were completed for each condition in the parameter scans, with the same distribution of narrow/broad streams as in the four main circuit models.
The effect of various perturbations to the circuit model was compared to the ketamine effect on the choice behaviour of the two monkeys, using coefficients from the regression model with the left–right differences in mean evidence and evidence standard deviation as regressors (Equation 5). In particular, for each perturbation condition, we computed the relative difference in the mean evidence regression coefficient between the perturbed circuit model and the control model, and the relative difference in the evidence standard deviation regression coefficient between the perturbed circuit model and the control model. Similarly, the relative differences in the two regression coefficients between the monkey data under ketamine versus saline injection were also computed (with lapse rate accounted for). The direction of alteration was mapped onto the two-dimensional space of relative coefficient differences for mean evidence and evidence standard deviation, and compared between the perturbations to the model and monkey choice behaviour using cosine similarity (CS) and Euclidean distance (ED) (Figure 8—figure supplements 4 and 5):
where the subscript denoted the regression coefficient (mean evidence or evidence standard deviation), the superscript denoted the data of the regression analysis (monkey under ketamine injection, monkey under saline injection, control circuit model, or the model with the perturbation condition of interest). A higher cosine similarity (and lower Euclidean distance) meant the relative extent (and direction) of alteration, to the regression coefficients of mean evidence and evidence standard deviation, was more similar between the perturbations in the circuit model and the monkey data.
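These two measures reduce to a few lines; the coefficient-change vectors below are illustrative numbers, not the fitted values:

```python
import numpy as np

# Relative changes in the (mean evidence, evidence SD) regression coefficients
delta_monkey = np.array([-0.45, 0.10])   # ketamine vs saline (made up)
delta_model = np.array([-0.40, 0.15])    # perturbed vs control model (made up)

# Cosine similarity: agreement in the direction of the alteration
cs = np.dot(delta_monkey, delta_model) / (
    np.linalg.norm(delta_monkey) * np.linalg.norm(delta_model)
)
# Euclidean distance: agreement in both direction and extent
ed = np.linalg.norm(delta_monkey - delta_model)
```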
In contrast to the two measures above, which evaluate the effect of a perturbation (e.g. by ketamine), Kullback–Leibler (KL) divergence allows direct comparison between monkey behaviour under saline or ketamine injection and the various model conditions. More explicitly, for each monkey’s data collected under the influence of ketamine or saline, and for each model condition in the parameter space, we computed the KL divergence of the choice behaviours from the model to the monkey data:

$$D_{KL} = \sum_{{x}_{HSD}} \left[{P}_{monkey}\,\mathrm{log}\left(\frac{{P}_{monkey}}{{P}_{model}}\right) + \left(1 - {P}_{monkey}\right)\mathrm{log}\left(\frac{1 - {P}_{monkey}}{1 - {P}_{model}}\right)\right]$$
where ${x}_{HSD}$ is summed over the range of net evidence strengths in favour of the higher SD option, while ${P}_{monkey}$ and ${P}_{model}$ are the probabilities for the monkey and the model, respectively, to choose the broad option given ${x}_{HSD}$.
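Since the choice at each evidence level is binary, the divergence is a sum of Bernoulli KL terms over evidence levels, sketched here with illustrative probabilities:

```python
import numpy as np

# Illustrative psychometric curves over net evidence levels (not real data)
x_levels = np.array([-8, -4, 0, 4, 8])
p_monkey = np.array([0.15, 0.30, 0.55, 0.80, 0.95])
p_model = np.array([0.10, 0.28, 0.50, 0.78, 0.90])

def bernoulli_kl(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), elementwise."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

kl = np.sum(bernoulli_kl(p_monkey, p_model))   # summed over evidence levels
```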
Mean-field model
The current spiking circuit model was mathematically reduced to a mean-field model, as outlined in Niyogi and Wong-Lin, 2013, in the same manner as the reduction from Wang, 2002 to Wong and Wang, 2006. The mean-field model consisted of two variables (${S}_{1}$, ${S}_{2}$), namely the NMDAR gating variables of the two groups of excitatory neurons representing the integrated evidence for the two choices. The two gating variables evolved according to:

$$\frac{d{S}_{i}}{dt} = -\frac{{S}_{i}}{{\tau}_{NMDA}} + \left(1 - {S}_{i}\right)\gamma {r}_{i}$$
for $i=1,2$. ${\tau}_{NMDA}=100\,\mathrm{ms}$ and $\gamma =0.641$ were the synaptic time constant and saturation factor for NMDARs. ${r}_{1}, {r}_{2}$ were the firing rates of the two populations, and were quasi-statically computed from the transfer function based on the total input currents ${I}_{1}, {I}_{2}$ (Figure 6D). The input currents

$${I}_{1} = {\alpha}_{1}{S}_{1} + {\alpha}_{2}{S}_{2} + {\beta}_{1}{r}_{1} + {\beta}_{2}{r}_{2} + {I}_{1}^{ext} \quad (18)$$

$${I}_{2} = {\alpha}_{1}{S}_{2} + {\alpha}_{2}{S}_{1} + {\beta}_{1}{r}_{2} + {\beta}_{2}{r}_{1} + {I}_{2}^{ext} \quad (19)$$
arose from the NMDARs of the same population (e.g. ${\alpha}_{1}{S}_{1}$ in Equation 18) and the competing population (e.g. ${\alpha}_{2}{S}_{2}$ in Equation 18), the AMPARs of the same population (e.g. ${\beta}_{1}{r}_{1}$ in Equation 18) and the competing population (e.g. ${\beta}_{2}{r}_{2}$ in Equation 18), and external inputs (e.g. ${I}_{1}^{ext}$ in Equation 18). GABARs were also expressed in ${\alpha}_{i}$ and ${\beta}_{i}$ to account for lateral inhibition. Using the change of variables ${x}_{1}={\alpha}_{1}{S}_{1}+{\alpha}_{2}{S}_{2}+{I}_{1}^{ext}$, ${x}_{2}={\alpha}_{1}{S}_{2}+{\alpha}_{2}{S}_{1}+{I}_{2}^{ext}$, the transfer function can be written as
where $a, b, d$ were constants that depended on ${\beta}_{1}$, and $f$ was a function of ${x}_{i}$ that depended on ${\beta}_{2}$. We omit the expressions for ${\alpha}_{i}, {\beta}_{1}, a, b, d, f, {I}_{i}^{ext}$ for the sake of simplicity; please see Wong and Wang, 2006 for details. The resulting transfer function (Figure 6D) is such that a small input below threshold generates no response, while a very large input generates a linear response (note that Figure 6D shows ${r}_{1}$ as a function of ${x}_{1}$, with ${x}_{2}=0$). This results in an expansive nonlinearity between the two limits, allowing strong inputs to drive the system more strongly than weak inputs. Input streams with large variability, which have a higher chance of containing both strong and weak inputs, can thus leverage this asymmetry better than input streams with small variability, resulting in the pro-variance bias (Figure 6).
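The role of the expansive nonlinearity can be illustrated numerically. As a stand-in for the exact transfer function of this reduction, the sketch below uses the Wong and Wang (2006) f-I curve $\phi(x) = (ax - b)/(1 - e^{-d(ax - b)})$ with its published parameter values (a = 270 Hz/nA, b = 108 Hz, d = 0.154 s); the input values are illustrative:

```python
import numpy as np

a, b, d = 270.0, 108.0, 0.154   # Wong & Wang (2006) reduced-model parameters

def phi(x):
    """Expansive f-I transfer function (removable singularity at a*x = b)."""
    x = np.asarray(x, dtype=float)
    z = a * x - b
    with np.errstate(divide="ignore", invalid="ignore"):
        r = z / (1.0 - np.exp(-d * z))
    return np.where(np.abs(z) < 1e-9, 1.0 / d, r)   # limit value 1/d at z = 0

x_mean = 0.40                         # nA, a mid-range input (illustrative)
x_varied = np.array([0.35, 0.45])     # same mean input, higher variability
rate_of_mean = float(phi(x_mean))
mean_of_rates = float(phi(x_varied).mean())
```

Because the curve is convex in this range, the more variable input stream drives a higher average response (`mean_of_rates > rate_of_mean`), which is the signature of the pro-variance bias.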
${r}_{i}$, as a (transfer) function of ${I}_{i}$, was sensitive to NMDAR hypofunction through ${\alpha}_{1}, {\alpha}_{2}$ in the first two terms of Equations 18 and 19. Through ${\alpha}_{1}$ and ${\alpha}_{2}$, NMDAR hypofunction altered the transfer function and thus the expansive nonlinearity, thereby altering the pro-variance bias. In addition, the parameter changes due to NMDAR hypofunction (${\alpha}_{1}$ and ${\alpha}_{2}$) also altered the attractor dynamics of the circuit model, such that the perturbed circuit had different dynamics and ranges of ${S}_{1}$ and ${S}_{2}$, resulting in a second, indirect effect on the pro-variance bias.
The mean-field model completed 94,000 standard session ‘Regular’ trials, in the same manner as the circuit models. We only generated control circuits for the mean-field model. Predictions of perturbations from the spiking circuit models generally held for the mean-field model. However, due to detailed distinctions between the dynamics of the spiking circuit model and those of the mean-field model, perturbation-induced decision deficits arose from different mechanisms in the two sets of models (Lam, 2017). This complicated the translatability between the two sets of models, so we focused on the control circuit.
Code and data availability
Stimuli generation and data analysis for the experiment were performed in MATLAB. The spiking circuit model was implemented using the Python-based Brian2 neural simulator (Goodman and Brette, 2008), with a simulation time step of 0.02 ms. Further analyses for both experimental and model data were completed using custom-written Python and MATLAB code. Data and analysis scripts to reproduce figures from the paper will be made publicly available for download from an online repository upon publication. Data have been uploaded to Dryad under the doi:10.5061/dryad.pnvx0k6k3. Code is available on GitHub at https://github.com/normanlam1217/CavanaghLam2020CodeRepository (copy archived at https://github.com/elifesciences-publications/CavanaghLam2020CodeRepository; Lam, 2020).
Data availability
Stimuli generation and data analysis for the experiment were performed in MATLAB. The spiking circuit model was implemented using the Python-based Brian2 neural simulator, with a simulation time step of 0.02 ms. Further analyses for both experimental and model data were completed using custom-written Python and MATLAB code. Data have been uploaded to Dryad under the doi:10.5061/dryad.pnvx0k6k3. Code is available on GitHub at https://github.com/normanlam1217/CavanaghLam2020CodeRepository (copy archived at https://github.com/elifesciences-publications/CavanaghLam2020CodeRepository).

Dryad Digital Repository. Data from: A circuit mechanism for decision-making biases and NMDA receptor hypofunction. https://doi.org/10.5061/dryad.pnvx0k6k3
References

High-performance execution of psychophysical tasks with complex visual stimuli in MATLAB. Journal of Neurophysiology 109:249–260. https://doi.org/10.1152/jn.00527.2012

A flexible software tool for temporally-precise behavioral control in MATLAB. Journal of Neuroscience Methods 174:245–258. https://doi.org/10.1016/j.jneumeth.2008.07.014

Achieving behavioral control with millisecond resolution in a high-level programming environment. Journal of Neuroscience Methods 173:235–240. https://doi.org/10.1016/j.jneumeth.2008.06.003

Visual perception and its impairment in schizophrenia. Biological Psychiatry 64:40–47. https://doi.org/10.1016/j.biopsych.2008.03.023

Processing of global, but not local, motion direction is deficient in schizophrenia. Schizophrenia Research 61:215–227. https://doi.org/10.1016/S0920-9964(02)00222-0

Compromised late-stage motion processing in schizophrenia. Biological Psychiatry 55:834–841. https://doi.org/10.1016/j.biopsych.2003.12.024

HCN1 channel subunits are a molecular substrate for hypnotic actions of ketamine. Journal of Neuroscience 29:600–609. https://doi.org/10.1523/JNEUROSCI.3481-08.2009

Cost evaluation during decision-making in patients at early stages of psychosis. Computational Psychiatry 3:18–39. https://doi.org/10.1162/cpsy_a_00020

Performance on a probabilistic inference task in healthy subjects receiving ketamine compared with patients with schizophrenia. Journal of Psychopharmacology 26:1211–1217. https://doi.org/10.1177/0269881111435252

Reviewing the ketamine model for schizophrenia. Journal of Psychopharmacology 28:287–302. https://doi.org/10.1177/0269881113512909

The neural basis of decision making. Annual Review of Neuroscience 30:535–574. https://doi.org/10.1146/annurev.neuro.29.051605.113038

Brian: a simulator for spiking neural networks in Python. Frontiers in Neuroinformatics 2:5. https://doi.org/10.3389/neuro.11.005.2008

Probabilistic judgements in deluded and non-deluded subjects. The Quarterly Journal of Experimental Psychology Section A 40:801–812. https://doi.org/10.1080/14640748808402300

Computational psychiatry as a bridge from neuroscience to clinical applications. Nature Neuroscience 19:404–413. https://doi.org/10.1038/nn.4238

Altered excitatory-inhibitory balance in the NMDA-hypofunction model of schizophrenia. Frontiers in Molecular Neuroscience 1:6. https://doi.org/10.3389/neuro.02.006.2008

Excitation/inhibition imbalance in animal models of autism spectrum disorders. Biological Psychiatry 81:838–847. https://doi.org/10.1016/j.biopsych.2016.05.011

Cortical parvalbumin interneurons and cognitive dysfunction in schizophrenia. Trends in Neurosciences 35:57–67. https://doi.org/10.1016/j.tins.2011.10.004

Balanced cortical microcircuitry for maintaining information in working memory. Nature Neuroscience 16:1306–1314. https://doi.org/10.1038/nn.3492

Ketamine-induced changes in the signal and noise of rule representation in working memory by lateral prefrontal neurons. The Journal of Neuroscience 35:11612–11622. https://doi.org/10.1523/JNEUROSCI.1839-15.2015

Ketamine alters lateral prefrontal oscillations in a rule-based working memory task. The Journal of Neuroscience 38:2482–2494. https://doi.org/10.1523/JNEUROSCI.2659-17.2018

NMDA receptor function and human cognition: the effects of ketamine in healthy volunteers. Neuropsychopharmacology 14:301–307. https://doi.org/10.1016/0893-133X(95)00137-3

Interneuron dysfunction in psychiatric disorders. Nature Reviews Neuroscience 13:107–120. https://doi.org/10.1038/nrn3155

Losing control under ketamine: suppressed cortico-hippocampal drive following acute ketamine in rats. Neuropsychopharmacology 40:268–277. https://doi.org/10.1038/npp.2014.184

History-dependent variability in population dynamics during evidence accumulation in cortex. Nature Neuroscience 19:1672–1681. https://doi.org/10.1038/nn.4403

Working memory and decision-making in a frontoparietal circuit model. The Journal of Neuroscience 37:12167–12186. https://doi.org/10.1523/JNEUROSCI.0343-17.2017

GABAergic interneuron origin of schizophrenia pathophysiology. Neuropharmacology 62:1574–1583. https://doi.org/10.1016/j.neuropharm.2011.01.022

Glutamate receptor dysfunction and schizophrenia. Archives of General Psychiatry 52:998–1007. https://doi.org/10.1001/archpsyc.1995.03950240016004

Jumping to conclusions about the beads task? A meta-analysis of delusional ideation and data-gathering. Schizophrenia Bulletin 41:1183–1191. https://doi.org/10.1093/schbul/sbu187

Beneficial effects of the NMDA antagonist ketamine on decision processes in visual search. Journal of Neuroscience 30:9947–9953. https://doi.org/10.1523/JNEUROSCI.6317-09.2010

Neural circuit dynamics underlying accumulation of time-varying evidence during perceptual decision making. Frontiers in Computational Neuroscience 1:6. https://doi.org/10.3389/neuro.10.006.2007

A recurrent network mechanism of time integration in perceptual decisions. Journal of Neuroscience 26:1314–1328. https://doi.org/10.1523/JNEUROSCI.3733-05.2006
Decision letter

Tobias H Donner, Reviewing Editor; University Medical Center Hamburg-Eppendorf, Germany

Michael J Frank, Senior Editor; Brown University, United States

Konstantinos Tsetsos, Reviewer; University Medical Center Hamburg-Eppendorf, Germany

Valentin Wyart, Reviewer; École normale supérieure, PSL University, INSERM, France
In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.
Acceptance summary:
This study uses a combination of neural circuit modeling with pharmacological intervention and behavioral psychophysics in monkeys to dissect the mechanisms of decision-making. It implicates the N-methyl-D-aspartate (NMDA) receptor in the accumulation of decision evidence, linking NMDA-mediated recurrent excitation of pyramidal neurons to a well-known behavioral phenomenon: a bias to choose options exhibiting larger variations in value. The approach opens up new perspectives for the mechanistic assessment of decision computations in the brain.
Decision letter after peer review:
Thank you for submitting your article "A circuit mechanism for decision-making irrationalities and NMDAR hypofunction: behaviour, modelling and pharmacology" for consideration by eLife. Your article has been reviewed by three peer reviewers, and the evaluation has been overseen by Tobias Donner as Reviewing Editor and Michael Frank as the Senior Editor. The following individuals involved in the review of your submission have agreed to reveal their identity: Konstantinos Tsetsos (Reviewer #1); Valentin Wyart (Reviewer #2).
The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.
While editors and reviewers found your work interesting in principle, all reviewers raised some substantial concerns that would need to be addressed before we can reach a final decision on your paper. The essential revisions are listed below. Indeed, it seems possible that the results of these requested analyses will require a substantial toning-down of several of your claims pertaining to E/I balance, in a way that could undermine the specificity of conclusions and the suitability of your paper for eLife. Even so, we agreed to give you the chance to address the concerns, for which two months should be a realistic time frame.
Summary:
This manuscript reports a computational and pharmacological study in monkeys into a question of interest to a broad research community: the role of the NMDA receptor in evidence accumulation and decision-making. The authors used a protocol developed and tested in humans by Tsetsos and colleagues, in which subjects compare the average length of two sequences of visual bar stimuli. The monkeys exhibit a so-called “pro-variance bias” (PVB) toward choosing the more variable stream, although the monkey behavior differs from humans in other aspects (see below). The authors show that a neural spiking circuit model of bounded evidence accumulation shows a similar PVB, and that a lowered E/I ratio simultaneously decreases accuracy and increases PVB. Finally, they report that intramuscular injection of ketamine transiently decreases accuracy and increases PVB, as predicted by a lowered E/I ratio. The authors interpret their findings in the context of the previous work on PVB as well as pseudo-psychotic effects of ketamine in human subjects.
Essential revisions:
1) Specificity of the pharmacological claim within the circuit model.
You should show that the ketamine behavioural effects are robustly obtained under the lowered E/I hypothesis (e.g. for various magnitudes of E/I reduction) and, crucially, incompatible with a) a sensory deficit, b) elevated E/I, c) concurrent changes in NMDARs on both neuron types. Practically, this means the following.
a) For each hypothesis, model predictions should be shown by varying the relevant model parameter(s) gradually within a range.
b) The similarity between model predictions and behavioural data should always be quantified using a goodness-of-fit metric (currently this is done by eyeballing). Please focus on perturbations that provide a good quantitative fit to the data.
c) Perturbations appear to be implemented in the same fashion as in Lam et al., 2017. There, the authors also changed other parameters besides the relevant synaptic weights, in order to maintain stability in the model dynamics. It is not clear if and how these extra changes could be pharmacologically induced by ketamine. Please clarify this aspect and derive predictions when stability adjustments are not performed.
2) Effect of drug on lapses.
Please test for a ketamine effect (sedation) on lapse rates. The psychometric functions under ketamine indicate a large change in the lapse rate which is currently not taken into account. All descriptive analyses (logistic regressions) and model simulations should take into account lapse rates. Can an increase in lapse rates explain away the changes in the PVB effect, psychometric curves, and kernels?
3) Validity of circuit model.
Currently, the circuit model is presented as a black box. You devote a couple of sentences to describing how the expansive nonlinearities in the F-I curve give rise to the pro-variance effect. This part is not very well developed. One way to test whether the nonlinearities are indeed crucial for the pro-variance effect the monkeys show is to separately analyse trials with "high" (total sum of both streams high) vs. "low" (total sum of both streams low) evidence and see if the PVB effect changes. Or add the total sum as a regressor and compare the regression weights in the model and in the data. In addition to nonlinearities in F-I curves, the attractor dynamics of the circuit model may (or may not) promote the PVB effect. Are these dynamics even necessary to produce the pro-variance effect in the model? And is there any link between signatures of attractor dynamics (e.g. kernel shapes) and the PVB effect in the data? If dynamics were redundant in the model, would this undermine the claim that the PVB can be diagnostic of E/I balance?
This relates to the question concerning the way the PVB is quantified: in the model, how can the PVB index change even if the F-I nonlinearity remains unchanged? It thus seems that the PVB index is sensitive to the overall signal-to-noise ratio associated with the model and is not a pure marker of the pro-variance propensity.
Please clarify what the PVB index stands for.
4) Results for both task framings.
Please present separate results for the two framings, i.e. "select higher" and "select lower" trials, which is interesting from an empirical viewpoint. Also: Have you mislabelled the "high-variance correct" and "low-variance correct" trials in the "select the lower" conditions? (If not, then the quantification of the PVB may be wrong.)
5) Generalizability of findings to humans.
Reviewers raised doubts about the suggested analogy between monkey and human performance, and the underlying computations: showing that both humans and monkeys have a PVB is not sufficient to establish a cross-species link. In the human work by Tsetsos et al. (PNAS, 2012, 2016), the temporal weighting of evidence on choice exhibits recency, in sharp contrast to the primacy found here in monkeys. What does this imply in terms of the relationship at a mechanistic level? This point needs to be discussed.
6) Link to schizophrenia.
Reviewers remarked that the link to schizophrenia is very loose: no patients are tested and overall behavioral signatures are different even from healthy human subjects (see point 3). Reviewers agreed that this point should at least be toned down substantially or dropped altogether. This tentative link could be brought up as speculation in Discussion, but not used as the basis for setting up the study.
7) Discuss limitations of pharmacological protocol.
a) The physiological effects of ketamine on cortical circuits remain speculative. The drug is unlikely to have a single, simple effect, as assumed in the model. This should be acknowledged in Discussion. Also, what happens in the model when NMDA hypofunction is implemented in both neuron types?
b) The use of an intramuscular injection of ketamine at 0.5 mg/kg (about an order of magnitude stronger than what would be used in humans) produces a massive transient effect on task behavior, which has potentially important drawbacks. First, the effect is massive, with decision accuracy dropping from about 85% correct to less than 60% correct after 5 minutes, followed by a sustained recovery over the next 30 minutes. This effect of ketamine is so strong that it is hard to know whether it is truly NMDA receptor hypofunction that produces the behavioral deficit, or task disengagement due to the substantial decrease in reward delivery (for example). The time window chosen for the analysis is also strongly non-stationary, and it is difficult to assess how well an average taken over this window truly depicts a common behavioral deficit throughout this time period (where accuracy goes from 60% correct to 80% correct). Again, the presence of possible attentional lapses should be accounted for (and reported in the manuscript) in all model fits and analyses, given the strength of ketamine-induced deficits triggered by this pharmacological protocol. We realize that this aspect of the study cannot be changed at this point, but it should be acknowledged as an important limitation.
[Editors' note: further revisions were suggested prior to acceptance, as described below.]
Thank you for resubmitting your article "A circuit mechanism for decision making biases and NMDA receptor hypofunction" for consideration by eLife. Your revised article has been reviewed by 2 peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Michael Frank as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Konstantinos Tsetsos (Reviewer #1); Valentin Wyart (Reviewer #2).
The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.
We would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). Specifically, when editors judge that a submitted work as a whole belongs in eLife but that some conclusions require a modest amount of new analyses, as they do with your paper, we are asking that the manuscript be revised to either limit claims to those supported by data in hand, or to explicitly state that the relevant conclusions require additional supporting analyses.
Our expectation is that the authors will eventually carry out the additional analyses and report on how they affect the relevant conclusions either in a preprint on bioRxiv or medRxiv, or if appropriate, as a Research Advance in eLife, either of which would be linked to the original paper.
Summary:
The authors have provided an extensive response to the reviewers' comments based on several additional analyses of their data; they have successfully addressed a large subset of the comments. Specifically, they have performed several additional analyses to (i) test alternative hypotheses as well as the robustness of the favored hypothesis, (ii) examine lapses under ketamine, (iii) unpack the workings of the circuit model, and (iv) examine the frequent-winner effect in the data so they can assess the generalizability of this study to humans. We acknowledge that all these analyses have led to a significant improvement. Nevertheless, we remain uncertain about the validity of the overall conclusion, that ketamine induces NMDAR hypofunction in excitatory neurons, and that this effect is behaviorally manifested as an increase in a pro-variance bias.
Revisions for this paper:
1) Motivate modeling approach.
Given that you opted not to fit the model (which would be done with the mean-field reduction), or to tune its parameters so that it matches the above behavioral patterns, we believe you should unpack the reasoning underlying this particular modeling approach.
2) Plot model predictions along with data.
As we pointed out in our first review, there seem to be some discrepancies between the data and the model, which we remain concerned about:
i) Ketamine data asymptote at a lower than 100% level. The lapse rates are still not plugged into the circuit model so as to bring the model predictions closer to the data.
ii) The control kernel in Figure 7I and the monkey kernels in Figure 8C look different. In the model, there is a primacy pattern (except for the first item) but in the data we see a flat/U-shaped pattern. Plotting those together could reveal the degree of discrepancy.
iii) In the control condition, the psychometric functions in Figure 7B and in Figure 8B look different (for example in terms of convergence of the light and dark coloured lines). The elevated E/I plot in Figure 7D appears to be closer to the saline psychometric curve.
Such discrepancies, if true, matter: if the "baseline" model does not capture behavior in the control condition well, we cannot be confident about the validity of the subsequent perturbations performed to emulate the ketamine effect. To allow for better assessing the match, we strongly encourage you to always plot model predictions with the data.
Ideally, you would also assess the goodness of fit using maximum likelihood. (A certain parametrization could exhibit similarity with the data in terms of the logistic regression weights (PVB) but at the same time largely fail to capture the psychometric function.) We believe this would be straightforward, given the simulations you have already performed, but leave the decision to you whether or not to do this.
We realize that this point was not explicitly raised in the previous round. Then, reviewers had asked for a quantification of the goodness of fit. The approach you chose (logistic regression) is specific to the PVB index (not applied to psychometric functions and kernels) and did not fully convince reviewers.
3) Assess effects of concurrent NMDA blockade on E and I neurons.
You establish that an E/I increase reduces the PVB index while an E/I decrease has the opposite effect. However, you have not examined the effect of concurrent changes of NMDARs on both E and I cells, which we had suggested. Please comment on the fact that concurrent changes could mimic the effect of the E→E reduction (Figure 7—figure supplement 2: moving the purple point up diagonally would result in equivalent behavior). Unless there is strong support in favor of the selective NMDA change over the concurrent change (assessed via maximum likelihood), the conclusions should be reframed.
4) Add a lapse rate downstream from circuit model.
You have now assessed lapse rates in your analysis, but reviewers remarked that you do not report the best-fitting lapse rates. This makes it impossible to judge just how much lapses contribute to the decrease in task performance in the initial period following ketamine injection (which is included in all analyses). We are concerned that this massive performance drop under ketamine is not only triggered by a PVB increase, but also (perhaps largely) by an increase in lapses and a decrease in evidence sensitivity.
We would expect a lapse mechanism to be in play in the circuit model when emulating the ketamine effect. You could use the fraction of lapses best fitted to psychometric curves (which clearly do not saturate at p(correct) = 1) for the circuit model simulations. It seems conceivable that allowing the circuit model to lapse will reduce the weight applied on the mean evidence.
5) Different quantification of pro-variance bias.
We do not understand the motivation for compressing sensitivity to mean and to variance into a single PVB index. Our reading is that the pro-variance effect, quantified as a higher probability of choosing a more variable stream (see Tsetsos et al., 2012), can just be directly mapped onto the variance regressor. Combining the weights into a PVB index and framing the general discussion around this index seems unnecessary. The main behavioral result of ketamine can be parsimoniously summarized as a reduced sensitivity to the mean evidence. Relatedly, please discuss if and how the ketamine-induced increase in the PVB effect, the way you quantified it, rides on top of a strong decrease in sensitivity to mean evidence under ketamine.
It does seem to be the case that sensitivity to variance remains statistically indistinguishable between saline and ketamine (if anything, it is slightly reduced). The E/I increase model consistently predicts that the variance regressor is reduced. This is not the case with the E/I decrease model, which occasionally predicts increases in sensitivity to the variance (see yellow grids in Figure 7—figure supplement 2). This feature of the E/I decrease model should be discussed, as it seems to undermine the statement that the E/I perturbation produces robust predictions regardless of perturbation magnitude (i.e. depending on the strength of E/I reduction, the model can produce a decrease or an increase in variance sensitivity, and the relationship is non-monotonic). Overall, we believe that combining sensitivity to mean and variance obscures the interpretation of the data and model predictions.
Again, we realize that this point appears to be new. But reviewers feel they could not really make a strong case regarding this metric without seeing the more detailed model predictions (in a 2D grid) that you have presented in your revision.
https://doi.org/10.7554/eLife.53664.sa1
Author response
Essential revisions:
1) Specificity of the pharmacological claim within the circuit model.
You should show that the ketamine behavioural effects are robustly obtained under the lowered E/I hypothesis (e.g. for various magnitudes of E/I reduction) and, crucially, incompatible with a) a sensory deficit, b) elevated E/I, c) concurrent changes in NMDARs on both neuron types. Practically, this means the following.
a) For each hypothesis, model predictions should be shown by varying the relevant model parameter(s) gradually within a range.
b) The similarity between model predictions and behavioural data should always be quantified using a goodness-of-fit metric (currently this is done by eyeballing). Please focus on perturbations that provide a good quantitative fit to the data.
c) Perturbations appear to be implemented in the same fashion as in Lam et al., 2017. There, the authors also changed other parameters besides the relevant synaptic weights, in order to maintain stability in the model dynamics. It is not clear if and how these extra changes could be pharmacologically induced by ketamine. Please clarify this aspect and derive predictions when stability adjustments are not performed.
We thank the reviewers for this comment. We agree it is important to demonstrate the robustness of our model predictions. We have therefore included a 2-dimensional parameter scan with simultaneous NMDAR hypofunction on excitatory (which lowers E/I) and inhibitory (which elevates E/I) neurons in the circuit model. We have also included a 1-dimensional parameter scan of the sensory deficit perturbation strength. Crucially, these parameter scans demonstrate robust effects of the perturbations on the PVB index and the majority of the regression coefficients, in the three directions of lowered E/I, elevated E/I, and sensory deficit (new Figure 7—figure supplements 2, 3, and 4). In particular, the PVB index is consistently increased by lowered E/I, decreased by elevated E/I, and unaltered by sensory deficit. An extremely strong sensory deficit resulted in an increase in PVB index, but this effect occurred at the limit where the model can barely perform the task (Figure 7—figure supplement 4), with a psychometric function qualitatively different from the monkey behaviour under ketamine (Figure 8—figure supplement 6).
To address comment 1b, we need to define an appropriate measure to quantify the degree to which the perturbation in the model alters decision-making behaviour in a similar manner as does ketamine in the monkeys. Importantly, the control parameters of the biophysically based spiking circuit model were not at all fit to the monkeys' baseline behaviour (which is typical for spiking circuit modelling), and instead were the same as in Lam et al., 2017. Despite differences between model and monkey in control psychometric performance, we can quantify whether a perturbation produces a similar change in performance. The same could be applied to the two monkeys – despite baseline differences, does ketamine alter behaviour similarly between them?
Here, we focused on two key aspects of behavioural alteration: the relative changes in the (i) evidence mean and (ii) evidence standard deviation regression weights. We then quantify the comparison between two sets of change (e.g., model to monkey, or between two monkeys) as the cosine similarity (CS) of the two vectors composed of these relative changes (Figure 8—figure supplement 4A). Applying this measure to compare between the two monkeys, we find CS = 0.94, corresponding to an angle of 20.1 degrees, which shows the consistency of ketamine effects between the monkeys.
We applied this analysis to quantify the similarity between a monkey’s behaviour change under ketamine and the model under a range of parameter perturbations (2D sweeps of NMDAR hypofunction, and sensory deficit) (Figure 8—figure supplement 4B–I). These analyses found that the lowered E/I perturbation robustly yielded a similar performance change as measured in the monkeys under ketamine, with higher CS values than elevated E/I or sensory deficit perturbations. Specifically, the 1D sweep of lowered E/I yielded maximum CS values of 0.9972 and 0.9968 for Monkeys A and H, respectively (comparable to the between-monkey CS of 0.9391). These results were replicated by model comparison analysis using Euclidean distance, as a metric which also accounts for the magnitude of the vectors (Figure 8—figure supplement 5).
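For concreteness, the cosine-similarity comparison described above can be sketched as follows. The function name and the example vectors of relative changes in the (mean evidence, evidence SD) regression weights are illustrative, not the fitted values from the study:

```python
import numpy as np

def cosine_similarity(delta_a, delta_b):
    """Cosine similarity between two vectors of relative regression-weight changes."""
    a = np.asarray(delta_a, dtype=float)
    b = np.asarray(delta_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical relative changes under drug, as (mean weight, SD weight) -- not real data
monkey_a = [-0.60, -0.10]
monkey_h = [-0.50, -0.25]

cs = cosine_similarity(monkey_a, monkey_h)
# Equivalent angle between the two change vectors, in degrees
angle_deg = np.degrees(np.arccos(np.clip(cs, -1.0, 1.0)))
```

Because the metric is scale-invariant, it captures the *direction* of behavioural alteration; the Euclidean-distance analysis mentioned above additionally accounts for its magnitude.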
It is important to note that our modelling results support the hypothesis of lowered E/I in decision-making circuits contributing to the pro-variance effect, but cannot exclude possible contributions from sensory deficits (which will not alter the pro-variance bias in our model). For the same reason, we did not consider a 2-dimensional parameter scan with both lowered E/I and sensory deficit perturbations, as no dissociable predictions can be inferred from that analysis.
The cosine similarity and Euclidean distance analyses, motivated by comment 1b, informed us that a moderately weaker lowered E/I perturbation (by ~25%) yielded a better fit to the pattern of behavioural alteration observed under ketamine than the perturbation strength in our original submission (Figure 8—figure supplement 4D,G and 5D,G). We have therefore updated the main Figure 7 with a lowered E/I perturbation strength that is a better fit by this measure, along with other perturbations matched to the reduction in the evidence mean regression weight.
Regarding comment 1c, we would like to clarify that the control circuit model in the current study is identical to that in Lam et al., 2017. The only parameter which is different is $\mu$, which scales the input current as a function of the visual stimulus; given the different task paradigms, we believe it is reasonable to retune $\mu$ to better match the observed experimental data. The control circuit models in both the current study and in Lam et al., 2017 are different from the model presented in Wang, 2002. As originally noted in Lam et al., 2017, adjustments were made to the Wang, 2002 parameters to ensure stability of baseline and memory states under a wider range of E/I perturbations. (We note that all of the same qualitative effects of altered E/I can be observed with the Wang, 2002 parameters, but within a smaller range of perturbation strengths.)
Importantly, in both the current study and in Lam et al., 2017, we considered the control circuit model as the default state, corresponding to no pharmacological E/I perturbation. Therefore, the adjustments to control parameters from Wang, 2002 to the present study are not part of the simulated effects of pharmacological perturbation. The simulated effect of the perturbation on the local circuit, corresponding to ketamine, is solely mediated by reducing the conductance of recurrent NMDA receptors.
For the reviewers’ convenience, we included additions to the manuscript in response to this comment. In response to comment 1a, we added the following text to the Results:
“While all circuit models were capable of performing the task (Figure 7B–E), the choice accuracy of each perturbed model was reduced when compared to the control model. […] Together, the circuit model thus provided the basis of dissociable predictions by E/I-balance-perturbing pharmacological agents.”
We also added the details of the parameter scans used to test the robustness of the model predictions in the Materials and methods, in response to comments 1a and 1b:
“[…] For the exact parameters, the lowered E/I model reduced ${G}_{E\to E}$ by 1.3125%, the elevated E/I model reduced ${G}_{E\to I}$ by 2.625%, and the sensory deficit model had a sensory deficit of 20% (such that ${\mu}^{\prime}$ was reduced by 20%) (Figure 7, Figure 7—figure supplement 1). […] A higher cosine similarity (and lower Euclidean distance) meant the relative extent (and direction) of alteration to the regression coefficients of mean evidence and evidence standard deviation was more similar between the perturbations in the circuit model and the monkey data.”
In response to comment 1b, we added the following text to the Results:
“Additional observations further supported the lowered E/I hypothesis for the effect of ketamine on monkey choice behaviour. […] This shifting of the weights could reflect a sensory deficit, but given the results of the provariance analysis, collectively the behavioural effects of ketamine are most consistent with lowered E/I balance and weakened recurrent connections.”
2) Effect of drug on lapses.
Please test for a ketamine effect (sedation) on lapse rates. The psychometric functions under ketamine indicate a large change in the lapse rate which is currently not taken into account. All descriptive analyses (logistic regressions) and model simulations should take into account lapse rates. Can an increase in lapse rates explain away the changes in the PVB effect, psychometric curves, and kernels?
Thank you for raising this important point. As the term lapse rate is slightly ambiguous, we will initially provide some clarification. Lapse rate may refer to the rate at which incomplete trials occur (i.e. due to the subject not responding, or breaking fixation). Alternatively, it may refer to the animal responding randomly, regardless of the trial difficulty, on a certain proportion of trials. Our response below will address both of these factors.
Firstly, in our initial submission, all incomplete trials (i.e. those where the animal did not commit to a choice, or broke fixation) were excluded from the analyses. The only trials included in the analyses were those where the animal completed a choice. Hence, any change in our accuracy measure (i.e. as in Figure 8A) relates specifically to changes in their actual choices, rather than task engagement. It is also important to stress that these “incomplete trials” occurred rarely, even when the animals were administered with ketamine.
The second type of lapsing, random responses, are an important consideration that our initial submission did not address. As the reviewers suggest, it is possible that an increase in these types of lapses could account for the animals’ reduction in accuracy when administered with ketamine. To address this point, we extended our existing logistic regression models to incorporate an extra parameter which could account for these lapses. The benefits of including this parameter were twofold:
1) To quantify the lapse rate
2) To control for lapsing, and isolate its effect from our other analyses (i.e. PVB index, kernels).
The updated models are listed below (description taken from the revised Materials and methods):
“To control for possible lapse effects induced by ketamine, where the animal responded randomly regardless of the trial difficulty, the behavioural models described above were extended to include an extra “lapse parameter”, $Y_{0}$. […] Bootstrapping was used to generate error estimates for the parameters of these models (10,000 iterations). As our analyses demonstrate that the animals very rarely lapse when administered with saline, we did not deem it necessary to apply the lapsing models to the standard session experiment (i.e. Figures 2–6).”
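A minimal sketch of such a lapse-augmented logistic choice model, assuming the standard mixture parameterisation (with probability equal to the lapse parameter, here playing the role of $Y_{0}$, the subject responds at random; otherwise it follows the logistic rule). The two-regressor design is illustrative, not the exact model from the Materials and methods:

```python
import numpy as np

def p_choose_left(X, beta, lapse):
    """Lapse-augmented logistic model: with probability `lapse` the subject
    responds at random (p = 0.5), otherwise follows the logistic rule."""
    p_logistic = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return lapse / 2.0 + (1.0 - lapse) * p_logistic

# Illustrative regressors: (mean-evidence difference, evidence-SD difference)
X = np.array([[1.5, 0.3], [-0.8, 0.1]])
beta = np.array([1.0, 0.5])
probs = p_choose_left(X, beta, lapse=0.1)
```

Because the mixture compresses predicted probabilities towards 0.5, fitting `lapse` jointly with `beta` separates random responding from genuine changes in evidence sensitivity, which is the isolation step described above.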
Crucially, our existing analyses of the ketamine data were not affected when controlling for lapses. It was clear that accounting for lapse rates did not explain away the changes in the PVB effect or the kernels. We have included these new results as a supplementary figure to the main Figure 8. See Figure 8—figure supplement 2.
For the reviewers’ convenience, we have also included Author response image 2 which compares the results from the original submission with the updated results utilising the lapsing model:
As the reviewers implied, the subjects’ lapsing did increase with ketamine. Whilst we have robustly established this is not the cause of our behavioural effects, we felt this was an important point to include in the manuscript. We have therefore updated the main text in the Results section together with changes from comment 1b:
“To understand the nature of this deficit, we studied the effect of drug administration on the pro-variance bias (Figure 8B–F). […] This confirmed that the rise in PVB was an accurate description of a common behavioural deficit throughout the duration of ketamine administration.”
As mentioned in the Materials and methods, our analyses demonstrate that the animals very rarely lapse when administered with saline. As such, we did not deem it necessary to apply the lapsing models to the standard session experiment (i.e. Figures 2–6). With regards to the psychometric functions (e.g. Figure 8B,C), these have not been updated. This is because the three parameters in this model (Equation 2) are already sufficient to capture lapsing behaviour. Regardless, these psychometrics are purely illustrative and are not used in any of the statistical reporting.
3) Validity of circuit model.
Currently, the circuit model is presented as a black box. You devote a couple of sentences to describing how the expansive nonlinearities in the F-I curve give rise to the pro-variance effect. This part is not very well developed. One way to test whether the nonlinearities are indeed crucial for the pro-variance effect the monkeys show is to separately analyse trials with "high" (total sum of both streams high) vs. "low" (total sum of both streams low) evidence and see if the PVB effect changes. Or add the total sum as a regressor and compare the regression weights in the model and in the data. In addition to nonlinearities in F-I curves, the attractor dynamics of the circuit model may (or may not) promote the PVB effect. Are these dynamics even necessary to produce the pro-variance effect in the model? And is there any link between signatures of attractor dynamics (e.g. kernel shapes) and the PVB effect in the data? If dynamics were redundant in the model, would this undermine the claim that the PVB can be diagnostic of E/I balance?
This relates to the question concerning the way the PVB is quantified: in the model, how can the PVB index change even if the F-I nonlinearity remains unchanged? It thus seems that the PVB index is sensitive to the overall signal-to-noise ratio associated with the model and is not a pure marker of the pro-variance propensity.
Please clarify what the PVB index stands for.
We thank the reviewers for raising these important issues, and for suggesting an interesting analysis which we now include. We agree the mechanism by which the decision-making process generates the pro-variance effect could be further analysed and explained, especially regarding the expansive nonlinearities in the F-I curve. We have now expanded the Results, Materials and methods, and Discussion to discuss how the evidence integration process can generate a pro-variance effect. We also discussed the relation of this mechanism to attractor dynamics, the comparison of this mechanism with the selective integration model (Tsetsos et al., 2016), and how E/I balance disruption may change the F-I nonlinearity in the mean-field model and thus impact the PVB index.
In particular, regarding the reviewers’ comment on how attractor dynamics may contribute to a pro-variance bias, we want to highlight that in recurrent circuit models there is not a clean separation between attractor dynamics and the other factors impacting evidence integration, e.g. that would allow their contributions to PVB to be disentangled. This is in contrast to the Tsetsos et al., 2016 model, which has separable stages, from the nonlinear transformation of evidence to the process of integrating that transformed evidence. Figure 6E–H illustrates that in the recurrent circuit, the temporal change of the system’s state (${S}_{1}$, ${S}_{2}$) depends on the current state (${S}_{1}$, ${S}_{2}$) itself, exhibiting an attractor landscape. Furthermore, Figure 6D–H shows that this attractor landscape itself reconfigures dynamically as the stimulus input changes. In a sense, the “gain” of how the stimulus impacts the state (i.e. how it is integrated) varies dynamically as a function of both the stimulus and the stochastically evolving state of the system (see Materials and methods). This is why these factors cannot be disentangled. These points are now included in the Discussion. Nonetheless, we do agree that future theoretical analysis would be useful to help link biophysical circuit models, reduced as nonlinear dynamical systems, to more tractable evidence accumulation models (e.g. selective integration). Such algorithmic models may allow us to unveil how various signatures of attractor dynamics are linked to the PVB effect, as raised by the reviewers. For instance, the short integration timescale demonstrated by elevated E/I circuits (Figure 7I) would prevent within-trial variability of the stimulus from being inferred, especially when only one or two bars are integrated.
Based on the suggestion for a new analysis, we tested for differential effects of “high” vs. “low” amounts of total evidence in both the model and the monkeys (Figure 5—figure supplement 2). In the circuit model, trials with more total evidence drive the neurons more strongly into the near-linear regime of the F-I curve, and thus have a smaller PVB index than trials with less total evidence. Interestingly, the monkeys also demonstrated a consistent trend, though this effect did not achieve statistical significance. The temporal regression weights also differed between trials with more vs. less total evidence, consistently between model and monkeys. The Results section is now expanded to discuss how this analysis supports the F-I nonlinearity and, more generally, attractor dynamics.
Finally, in relation to the question of how the PVB index can change even if the F-I nonlinearity remains unchanged, we now include more details on our mean-field model in the Materials and methods section, in order to explain how E/I balance disruption may lead to changes in the PVB index. The transfer function expressed in terms of the variables ${x}_{1}$ and ${x}_{2}$ (Equations 18, 19) is unchanged across the circuit models. However, ${x}_{1}$ and ${x}_{2}$ can be expressed in terms of the underlying input currents, and the transfer function thus expressed as a function of the synaptic currents (${I}_{1}$ and ${I}_{2}$) depends on NMDAR-mediated recurrent interactions. As a result, the effective transfer function on the stimulus input is actually altered by E/I perturbation (because E/I perturbation changes the recurrent contributions to the synaptic currents). As such, NMDAR hypofunction alters the PVB index, both through changes in the NMDAR coupling strengths (${\alpha}_{1}$ and ${\alpha}_{2}$), and through the distinct dynamics and ranges of ${S}_{1}$ and ${S}_{2}$ that result from different ${\alpha}_{1}$ and ${\alpha}_{2}$.
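For readers unfamiliar with this class of reduced model, the following is a minimal sketch of the kind of two-variable mean-field reduction referred to here, in the spirit of Wong and Wang (2006) and using their published parameter values. It is an illustration only: the manuscript’s actual reduction (Equations 18, 19) may differ in details such as the noise process, the stimulus-to-current mapping, and the treatment of AMPA/GABA currents, and the stimulus values below are hypothetical.

```python
import numpy as np

def H(x, a=270.0, b=108.0, d=0.154):
    """Transfer function: firing rate (Hz) as a function of total input
    current x (nA). Parameters a, b, d are the standard values from the
    Wong & Wang (2006) reduction."""
    z = a * x - b
    return z / (1.0 - np.exp(-d * z))

def run_trial(mu1, mu2, T=2.0, dt=5e-4, sigma=0.0, seed=0):
    """Euler-integrate the two NMDA gating variables S1, S2.
    mu1, mu2: stimulus currents (nA) to the two populations (hypothetical).
    The state-dependent 'gain' discussed above arises because dS/dt depends
    on S itself through the recurrent currents J_same*S_i - J_diff*S_j."""
    rng = np.random.default_rng(seed)
    gamma, tau_s = 0.641, 0.1                    # kinetics (dimensionless, s)
    J_same, J_diff, I0 = 0.2609, 0.0497, 0.3255  # nA, Wong & Wang (2006)
    s1 = s2 = 0.1
    for _ in range(int(T / dt)):
        x1 = J_same * s1 - J_diff * s2 + I0 + mu1 + sigma * rng.standard_normal()
        x2 = J_same * s2 - J_diff * s1 + I0 + mu2 + sigma * rng.standard_normal()
        s1 += dt * (-s1 / tau_s + (1.0 - s1) * gamma * H(x1))
        s2 += dt * (-s2 / tau_s + (1.0 - s2) * gamma * H(x2))
    return s1, s2
```

With a small input asymmetry favouring population 1 and no noise, the winner-take-all attractor dynamics pull ${S}_{1}$ above ${S}_{2}$.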
The updated texts are included below for the reviewers’ convenience.
In Results:
“To understand the origin of the pro-variance bias in the spiking circuit, we mathematically reduced the circuit model to a mean-field model (Figure 6A), which demonstrated decision-making behaviour similar to the spiking circuit (Figure 6B,C, Figure 6—figure supplement 1). […] In addition, distinct temporal weightings of stimuli were observed in both the circuit model and experimental data, for trials with more versus less total evidence (Figure 5—figure supplement 2D,H).”
In Materials and methods:
“The current spiking circuit model was mathematically reduced to a mean-field model, as outlined in Niyogi and Wong-Lin, 2013, in the same manner as the reduction from Wang, 2002 to Wong and Wang, 2006. […] This complicated the translatability between the two sets of models, so we focused on the control circuit.”
In Discussion:
“The results from our spiking circuit modelling also provided a parsimonious explanation for the cause of the pro-variance bias within the evidence accumulation process. […] While other phenomenological models may also explain the pro-variance bias, their links to our circuit model are similarly indirect, and were beyond the scope of this study.”
4) Results for both task framings.
Please present separate results for the two framings, i.e. "select higher" and "select lower" trials, which is interesting from an empirical viewpoint. Also: have you mislabelled the "high-variance correct" and "low-variance correct" trials in the "select the lower" conditions? (If not, then the quantification of the PVB may be wrong.)
Thank you for this suggestion. In response to this point, we have included three additional supplementary figures (Figure 2—figure supplement 1, Figure 3—figure supplement 2, Figure 4—figure supplement 2). It is clear from these figures that very similar results are attained for all analyses regardless of the task framing.
Unfortunately, we are slightly unclear what the reviewers meant with regards to the mislabelling of conditions. To clarify, the quantification of the PVB is determined by Equation 5:
where ${P}_{L}$ refers to the probability of choosing the left option, ${\beta}_{0}$ is a bias term, ${\beta}_{1}$ reflects the influence of the evidence mean, and ${\beta}_{2}$ reflects the influence of the standard deviation of the evidence (evidence variability). Author response table 1 outlines how this relates to the bar heights in each of the conditions:
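The regression described here can be sketched as follows. This is a hypothetical reimplementation, not the authors’ code: we assume the form implied by the surrounding description, logit(${P}_{L}$) = ${\beta}_{0}$ + ${\beta}_{1}$·ΔMean + ${\beta}_{2}$·ΔSD, with the PVB index taken as the ratio ${\beta}_{2}/{\beta}_{1}$.

```python
import numpy as np

def fit_logistic_irls(X, y, n_iter=25, ridge=1e-8):
    """Maximum-likelihood logistic regression via iteratively reweighted
    least squares (Newton-Raphson)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        weights = p * (1.0 - p)                       # Bernoulli variance per trial
        hess = X.T @ (X * weights[:, None]) + ridge * np.eye(X.shape[1])
        grad = X.T @ (y - p)
        w = w + np.linalg.solve(hess, grad)
    return w

def pvb_index(d_mean, d_sd, chose_left):
    """Fit logit P(left) = b0 + b1*d_mean + b2*d_sd and return (b1, b2, b2/b1).
    d_mean, d_sd: left-minus-right differences in evidence mean and SD."""
    X = np.column_stack([np.ones(len(d_mean)), d_mean, d_sd])
    b0, b1, b2 = fit_logistic_irls(X, np.asarray(chose_left, float))
    return b1, b2, b2 / b1
```

On synthetic choices generated with known coefficients, the fit recovers a positive SD weight (the pro-variance bias) alongside the mean-evidence weight.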
In the main paper (i.e. Figure 4D), the analysis is not calculated separately for the “select higher” and “select lower” conditions. Furthermore, it does not depend on whether a trial is labelled as “high-variance correct” or “low-variance correct”. The purpose of these labels was only for visualisation as part of the psychometric plots (Figure 4C).
We believe some of this confusion may result from the terminology we were using. To address this, we have updated references to “select higher” and “select lower” to “select taller” and “select shorter”. For example,
“Subjects were presented with two series of eight bars (evidence samples), one on either side of central fixation. Their task was to decide which evidence stream had the taller/shorter average bar height, and indicate their choice contingent on a contextual cue shown at the start of the trial.”
“Subjects had previously learned that two of these cues instructed them to choose the side with the taller average bar height (“ChooseTallTrial”), and the other two instructed them to choose the side with the shorter average bar height (“ChooseShortTrial”).”
We have also added the following sentences to the Materials and methods section to clarify how this ties in with the illustrative psychometric plots of the pro-variance bias:
“To illustrate the effect of the pro-variance bias, we also fitted a three-parameter psychometric function to the subjects’ probability of choosing the higher-SD option (${P}_{\text{HSD}}$) in the “Regular” trials, as a function of the difference in mean evidence in favour of the higher-SD option on each trial (${x}_{\text{HSD}}$). […] On “ChooseShortTrial” trials, the mean evidence in favour of the higher-SD option was calculated by subtracting (100 – mean bar height of the lower-SD option) from (100 – mean bar height of the higher-SD option).”
To clarify, it is not necessary to split the results for the two framings for the circuit model data. This is because the inputs to the circuit model are the transformed evidence values (i.e. bar height on “Select Higher” trials; 100 – bar height on “Select Lower” trials). Therefore, the circuit model will not show any difference in results between the two task framings.
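A minimal illustration of this input transformation (a hypothetical helper, not the authors’ code):

```python
import numpy as np

def model_inputs(bar_heights, frame):
    """Transform raw bar heights (0-100) into the evidence values fed to the
    circuit model. In the 'taller' framing, evidence is the height itself; in
    the 'shorter' framing, it is 100 minus the height, so the model always
    integrates evidence for the instructed option in the same direction."""
    h = np.asarray(bar_heights, dtype=float)
    return h if frame == "taller" else 100.0 - h
```

Because both streams pass through the same transformation, the model receives identically distributed inputs under either framing.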
5) Generalizability of findings to humans.
Reviewers raised doubts about the suggested analogy of monkey and human performance, and the underlying computations: showing that both humans and monkeys have a PVB is not sufficient to establish a cross-species link. In the human work by Tsetsos et al. (PNAS, 2012, 2016), the temporal weighting of evidence on choice exhibits recency, in sharp contrast to the primacy found here in monkeys. What does this imply in terms of the relationship at a mechanistic level? This point needs to be discussed.
Thanks for raising this point. We agree that there are differences between the primacy bias found in our paradigm and the recency bias found in the previous Tsetsos papers. We now discuss this point in the Discussion:
“Crucially, our circuit model generated dissociable predictions for the effects of NMDAR hypofunction on the pro-variance bias (PVB) index that were tested by follow-up ketamine experiments. […] A stronger test will be to record neurophysiological data while monkeys are performing our task; this would help to distinguish between the “selective integration” hypothesis and the cortical circuit mechanism proposed here.”
6) Link to schizophrenia.
Reviewers remarked that the link to schizophrenia is very loose: no patients are tested and overall behavioral signatures are different even from healthy human subjects (see point 3). Reviewers agreed that this point should at least be toned down substantially or dropped altogether. This tentative link could be brought up as speculation in Discussion, but not used as the basis for setting up the study.
Thanks for this comment. We agree that our previous version focussed too heavily on the potential link to schizophrenia, and that it is indeed unreasonable for us to do this without including data from patients or human volunteers. As such, we have extensively rewritten the Abstract, significance statement, and Introduction to tone them down substantially. In particular, we have removed most of the references to schizophrenia that were found throughout the previous version.
On the other hand, we think that it is reasonable to discuss the relationship between NMDA-receptor hypofunction and its effects on cognition and behaviour (we directly manipulate/measure these in the present study). We also feel that it is important to motivate this with an initial reference to the (vast) literature on NMDAR antagonism via ketamine administration as an acute model of schizophrenia in humans. This was, after all, one of the main motivating factors for wanting to characterise the effects of ketamine in the present task.
We have therefore kept an initial reference to this relationship at the beginning of the Introduction, and then in the rest of the Introduction have limited our discussion to those of mechanisms of action of ketamine and NMDAR hypofunction, rather than schizophrenia. We hope that the reviewers find this to be a reasonable compromise.
7) Discuss limitations of pharmacological protocol.
a) The physiological effects of ketamine on cortical circuits remain speculative. The drug is unlikely to have the single, simple effect assumed in the model. This should be acknowledged in Discussion. Also, what happens in the model when NMDA hypofunction is implemented in both neuron types?
We thank the reviewers for this excellent point and agree that we should address the complex effect of ketamine on the brain. We now discuss that point in Discussion (see below). Regarding the effects when NMDA hypofunction is implemented in both neuron types, this is covered in our response to major comment 1.
“Our pharmacological intervention experimentally verified the significance of NMDAR function for decisionmaking. […] Finally, receptors of other brain areas might also be altered by intramuscular ketamine injection, which is beyond the scope of the microcircuit model in this study.”
b) The use of an intramuscular injection of ketamine at 0.5 mg/kg (about an order of magnitude stronger than what would be used in humans) produces a massive transient effect on task behavior, which has potentially important drawbacks. First, the effect is massive, with decision accuracy dropping from about 85% correct to less than 60% correct after 5 minutes, followed by a sustained recovery over the next 30 minutes. This effect of ketamine is so strong that it is hard to know whether it is truly NMDA receptor hypofunction that produces the behavioral deficit, or task disengagement due to the substantial decrease in reward delivery (for example). The time window chosen for the analysis is also strongly non-stationary, and it is difficult to assess how much an average taken over this window is truly an accurate depiction of a common behavioral deficit throughout this time period (where accuracy goes from 60% correct to 80% correct). Again, the presence of possible attentional lapses should be accounted for (and reported in the manuscript) in all model fits and analyses, given the strength of ketamine-induced deficits triggered by this pharmacological protocol. We realize that this aspect of the study cannot be changed at this point, but it should be acknowledged as an important limitation.
Thank you for this comment. We have structured our response to first address the reviewers’ concerns regarding the drug dose and administration route. Then we address the reviewers’ point about task disengagement. Finally, we address the point regarding the analysis time window. We have previously addressed accounting for attentional lapses in our response to reviewer comment 2.
i) Firstly, we acknowledge that an intravenous infusion approach would have advantages over intramuscular injections. However, this was not possible because it was not within the remit of the ethical approval granted by the local ethical procedures committee and UK Home Office. Despite this, it is important to stress that intramuscular injection of ketamine at around 0.5 mg/kg has been the standard approach used in several previous non-human primate studies (see Author response table 2). We are not aware of any non-human primate cognitive neuroscience studies that have used an infusion approach.
Secondly, as stated in our original submission, we extensively piloted different doses ranging from 0.1–1.0 mg/kg before data collection began. 0.5 mg/kg was chosen as it consistently induced a performance deficit while not causing significant task disengagement.
Finally, with regards to the chosen dose, we respectfully disagree that it is an order of magnitude stronger than that used in humans. Although it is slightly difficult to compare with relevant human studies, as the vast majority of these have used infusion approaches, we will consider one such protocol (Anticevic, Gancsos et al., 2012; Corlett, Honey et al., 2006). In these studies, the authors gave an initial intravenous bolus of 0.23 mg/kg over 1 minute, followed by a subsequent continuous target-controlled infusion (0.58 mg/kg over 1 h; plasma target, 200 ng/mL). This dose is relatively similar to what could be expected shortly after a 0.5 mg/kg intramuscular injection. Furthermore, in the most relevant intramuscular study we could find, Ghoneim, Hinrichs et al. (1985) used intramuscular injections of ketamine at 0.25–0.5 mg/kg to study its cognitive effects in humans.
ii) With regards to task disengagement, we did not find evidence of a significant increase in incomplete trials (see response to reviewer comment 2, Author response image 1). Although we did find the animals lapse more frequently when administered ketamine, our behavioural effects were still present when controlling for this (see response to reviewer comment 2).
iii) The reviewers make a good point with regards to the analysis time window. Firstly, a similar approach of averaging across all trials after an intramuscular injection has been used in previous nonhuman primate studies (Blackman, Macdonald et al., 2013; Ma, Skoblenick et al., 2018; Ma, Skoblenick et al., 2015; K. Skoblenick and Everling, 2012, 2014; K. J. Skoblenick, Womelsdorf et al., 2016; M. Wang, Yang et al., 2013). However, we agree that it would be beneficial to investigate this further. To determine the time course of ketamine’s influence on the PVB index, we ran a sliding regression analysis:
“We later revisited the time course of drug effects by running our regression analyses at each of the binned windows described above (Figure 8—figure supplement 3). […] The true cluster size was significant at the p < 0.05 level if the true cluster length exceeded the 95th percentile of the null distribution.”
This reveals that the increase in PVB index is present when data from individual time periods are analysed. We believe this should allay the reviewers’ concern that the increase in PVB index is not a common behavioural deficit throughout this time period.
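The cluster-based permutation logic quoted above can be sketched generically as follows. This is our own illustration under stated assumptions (bin-wise Welch-style statistics, a fixed threshold, and session-label shuffling), not necessarily the authors’ exact procedure:

```python
import numpy as np

def max_run(mask):
    """Length of the longest run of consecutive True values."""
    best = cur = 0
    for m in mask:
        cur = cur + 1 if m else 0
        best = max(best, cur)
    return best

def cluster_perm_test(drug, saline, n_perm=500, z_thresh=2.0, seed=0):
    """Cluster-length permutation test on binned behavioural measures.
    drug, saline: (n_sessions, n_bins) arrays, e.g. a regression weight per
    time bin. Returns the longest supra-threshold cluster in the true data
    and its permutation p-value."""
    rng = np.random.default_rng(seed)
    data, n_a = np.vstack([drug, saline]), len(drug)

    def max_cluster(x, y):
        se = np.sqrt(x.var(0, ddof=1) / len(x) + y.var(0, ddof=1) / len(y))
        t = (x.mean(0) - y.mean(0)) / se
        return max_run(np.abs(t) > z_thresh)

    true_len = max_cluster(drug, saline)
    null = np.empty(n_perm, dtype=int)
    for i in range(n_perm):
        idx = rng.permutation(len(data))      # shuffle session labels
        null[i] = max_cluster(data[idx[:n_a]], data[idx[n_a:]])
    p = (1 + np.sum(null >= true_len)) / (1 + n_perm)
    return true_len, p
```

On synthetic data with a sustained drug effect spanning a block of bins, the true cluster exceeds anything produced by label shuffling.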
iv) The presence of possible attentional lapses has been accounted for in all the drug-day analyses. This was covered in our response to reviewer comment 2 above.
[Editors' note: further revisions were suggested prior to acceptance, as described below.]
Summary:
The authors have provided an extensive response to the reviewers' comments based on several additional analyses of their data; they have successfully addressed a large subset of the comments. Specifically, they have performed several additional analyses to (i) test alternative hypotheses as well as the robustness of the favored hypothesis, (ii) examine lapses under ketamine, (iii) unpack the workings of the circuit model, and (iv) examine the frequent-winner effect in the data so they can assess the generalizability of this study to humans. We acknowledge that all these analyses have led to a significant improvement. Nevertheless, we remain uncertain about the validity of the overall conclusion, that ketamine induces NMDAR hypofunction in excitatory neurons, and that this effect is behaviorally manifested as an increase in a pro-variance bias.
Thank you for the summary of our revisions, and the opportunity to incorporate this new round of feedback. We believe these revisions, which include new figures and text, address the reviewers’ concerns and improve the manuscript through increased clarity. Importantly, we believe we have provided strong evidence to further support our main conclusion that ketamine induces NMDAR hypofunction to lower E/I balance (by acting predominantly, but not necessarily exclusively, on excitatory neurons), and that this effect is behaviorally manifested as an increase in the pro-variance bias.
Revisions for this paper:
1) Motivate modeling approach.
Given that you opted not to fit the model (which would be done with the mean-field reduction), or tune its parameters so that it matches the above behavioral patterns, we believe you should unpack the reasoning underlying this particular modeling approach.
We agree that further text would help to explain the reasoning behind our modeling approach.
We did not include direct fitting of the psychophysical data with circuit models for several reasons:
 We are not aware of any prior literature which has quantitatively fit this class of circuit model – for either the spiking model or the mean-field reduction – directly to psychophysical behavior. We believe that developing approaches to do so is an important methodological challenge, but that it is beyond the scope of the present paper.
 Simulation of the spiking circuit model is too computationally expensive for model fitting.
 Fitting via the mean-field model reduction is a potentially tractable strategy. However, there are issues with the mean-field model, related to its reduction, which make it less than ideal. In particular, the effective noise parameter is added back by hand, as a free parameter, after the reduction. As such, this mean-field model does not derive what the magnitude of that noise parameter should be, nor how the strength of effective noise changes under a parameter perturbation. For this reason we do not use the mean-field model to examine E/I perturbations, as there is no way to derive how the effective noise should vary across E/I perturbations, which we expect would be important. (Instead, we used the mean-field model to examine the circuit mechanisms of the PVB phenomenon for a generic circuit.)
 Both spiking and mean-field models have a large number of parameters. It is not clear which parameters should be free and fitted vs. fixed. Even toward the conservative end, the number of plausibly fittable parameters is well over 10. Numerical simulation of the mean-field model needed for model fitting is still too computationally expensive in such a high-dimensional parameter space, and there is no principled reason to fit over only 2 dimensions. (In contrast, we did a 2D sweep over the NMDAR conductances, motivated by ketamine as a perturbation, to characterize their impact.)
 The parameterization of the mean-field model is not amenable to model fitting. Within the large number of parameters there is a high degree of degeneracy, or “sloppiness”, in how a parameter impacts psychophysical behavior, and parameters can effectively trade off against each other, at least locally. This is because the model is parameterized for biophysical mechanism rather than for parameter parsimony at the level of behavioral output. This poses important – and largely unexplored – challenges for parameter identifiability and estimation, which are beyond the scope of the current study. Given these challenges, even if a model could be fit in the high-dimensional parameter space, it would be unclear how to interpret the set of fitted parameter values in light of potential degeneracies and how they may map onto lower-dimensional effective parameters (e.g., related to E/I ratio).
Although not well suited to model fitting of empirical behavioral data, biophysically based circuit modeling can be fruitfully applied and interpreted for at least two purposes, which is why we chose this approach for this particular study:
 To examine whether, and through what dynamical circuit mechanism, a behavioral phenomenon (here, the pro-variance bias) can emerge in biophysical circuit models within a particular dynamical regime (here, one previously developed to study decision making).
 To characterize how modulation of a biophysical parameter (here, NMDAR conductance, motivated by the pharmacological actions of ketamine) changes an emergent phenomenon (here, choice behavior) within a dynamical circuit regime.
Circuit modeling can demonstrate that a set of mechanisms is sufficient to produce a phenomenon. Furthermore, the pharmacological component of our study with ketamine naturally raises the question of how NMDAR hypofunction within this influential circuit model of decision making (Wang, 2002) impacts the behavioral phenomena studied here, which we examined from a bottom-up approach. Such a bottom-up approach is complementary to more top-down approaches of fitting behavior with computational and algorithmic-level models. We believe that this modeling approach thereby provides useful insights even without behavioral model fitting, and furthermore it generates circuit-level predictions which can be investigated in future studies through experimental methods including electrophysiology and brain perturbations.
We have now added the following paragraph to the Discussion to note these issues with model fitting and the reasoning underlying our modeling approach:
“In this study we did not undertake quantitative fitting of the circuit model parameters to match the empirical data. […] The bottom-up mechanistic approach in this study, which makes links to the physiological effects of pharmacology and makes testable predictions for neural recordings and perturbations, is complementary to top-down algorithmic modeling approaches.”
2) Plot model predictions along with data.
As we pointed out in our first review, there seem to be some discrepancies between the data and the model, which we remain concerned about:
i) Ketamine data asymptote at a lower than 100% level. The lapse rates are still not plugged into the circuit model so as to bring the model predictions closer to the data.
We thank the reviewers for bringing up these issues. For clarity, lapse rate here is defined as the asymptotic error rate at strong evidence. (Note that lapse rate is measured on completed trials, and therefore does not reflect uncompleted trials.) First, we would like to clarify our methods in the previous revision (previous Figure 8—figure supplements 4 and 5), which perhaps did not clearly emphasize how they accounted for lapses. The model comparisons to monkey data did indeed account for lapse rates for each monkey. Specifically, these analyses compared regression β weights between models and monkeys. Preceding that comparison, the regression β weights for the monkeys were calculated from a regression model that includes a lapse rate term (see Equations 8–11). Therefore, the model β weights were compared to lapse-corrected monkey β weights.
We see that, for greater clarity, it would be beneficial to visualize model data that includes lapse rates at empirically set levels, to facilitate direct comparison to the ketamine data. We have also decided to combine our response to this point with the related suggestion from comment 4 below to visualize the circuit model with the empirical lapse rate, whereby we added a lapse mechanism downstream of the spiking circuit model. Specifically, we select a random subset of trials in the model, in a proportion matching the monkey’s empirical lapse rate, and then randomize the responses for those trials.
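This downstream lapse mechanism amounts to the following sketch (a hypothetical reimplementation, not the authors’ code):

```python
import numpy as np

def add_lapses(choices, lapse_rate, rng):
    """Randomize responses on a random subset of model trials, mimicking a
    lapse process downstream of the decision circuit. `choices` is an array
    of 0/1 responses; `lapse_rate` is the empirically measured lapse rate."""
    lapsed = np.array(choices)                      # copy, leave input intact
    n_lapse = int(round(lapse_rate * len(lapsed)))
    idx = rng.choice(len(lapsed), size=n_lapse, replace=False)
    lapsed[idx] = rng.integers(0, 2, size=n_lapse)  # coin-flip responses
    return lapsed
```

Because lapsed trials are answered at chance, a lapse rate λ caps asymptotic accuracy at 1 − λ/2, reproducing the below-100% asymptote seen in the ketamine data.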
Finally, we have chosen to keep Figure 7 without empirically set lapses. We believe this is most logically consistent, with Figure 7 appearing chronologically first in the paper as a model prediction based on non-drug results, before lapses are demonstrated in the ketamine data in Figure 8. Instead, the spiking circuit models with added empirically set lapse rates are demonstrated in new supplementary figures (new Figure 8—figure supplements 8 and 9), for direct visual comparison to the empirical ketamine results. In addition, we have added the empirically derived lapse rates to the results already presented in Figure 7—figure supplement 1.
ii) The control kernel in Figure 7I and the monkey kernels in Figure 8C look different. In the model there is a primacy pattern (except for the first item), but in the data we see a flat/U-shaped pattern. Plotting those together could reveal the degree of discrepancy.
Following the reviewers’ suggestion, we now include the juxtaposition of model and empirical plots for the ketamine data as new supplementary figures (new Figure 8—figure supplements 8 and 9; please see the prior comment above for details).
We also want to emphasize that the comparison of temporal weights might be more informative between the control model (Figure 7I) and the non-drug data (Figure 2C,D), which both show a primacy effect. It is also interesting, and potentially important, to note that although the kernels differ somewhat between the control data in Figure 2 and the saline data in Figure 8 – namely, between showing more primacy vs. a flat/U-shaped profile – both datasets show a similar and robust pro-variance bias, which suggests that the precise shape of the kernel is not determinative of the pro-variance bias phenomenon.
The saline data (Figure 8G, Figure 8—figure supplement 1) might demonstrate a flat/U-shaped pattern, distinct from both the model and the non-drug experimental data, for other reasons. For instance, the task structure for the saline/ketamine trials differs from that of the non-drug (and model) trials, with 6 instead of 8 stimuli, and was also made easier to keep the monkeys motivated (please see “Task Modifications for Pharmacological Sessions” in Materials and methods for details). The difference between Figure 2C,D and Figure 8G might instead be due in part to such task modifications. Furthermore, we note that Figure 2 is based on about 7 times more trials than the saline data in Figure 8 and should therefore be more reliable. On the other hand, it is also possible that the U-shaped pattern in the saline data reflects dynamical regimes in circuit models distinct from the one considered here.
Motivated by this comment, we have added the following text to the Results:
“Additional observations further supported the lowered E/I hypothesis for the effect of ketamine on monkey choice behaviour. […] This may be due to task modifications for the ketamine/saline experiments compared with the non-drug experiments, but could also potentially arise from distinct regimes of decision-making attractor dynamics (e.g. see Ortega et al., 2020).”
iii) In the control condition, the psychometric functions in Figure 7B and in Figure 8B look different (for example in terms of convergence of the light and dark coloured lines). The elevated E/I plot in Figure 7D appears to be closer to the saline psychometric curve.
Such discrepancies, if true, matter: if the "baseline" model does not capture behavior in the control condition well, we cannot be confident about the validity of the subsequent perturbations performed to emulate the ketamine effect. To allow better assessment of the match, we strongly encourage you to always plot model predictions together with the data.
Ideally, you would also assess the goodness of fit using maximum likelihood. (A certain parametrization could exhibit similarity with the data in terms of the logistic regression weights (PVB) but at the same time largely miss capturing the psychometric function.) We believe this would be straightforward, given the simulations you have already performed, but leave the decision of whether or not to do this to you.
We realize that this point was not explicitly raised in the previous round. Then, reviewers had asked for a quantification of the goodness of fit. The approach you chose (logistic regression) is specific to the PVB index (not applied to psychometric functions and kernels) and did not fully convince reviewers.
We thank the reviewers for raising these important issues. We agree it will be beneficial to add a more direct comparison of the model behavior to each of the saline and ketamine datasets, in both visualization and quantitative measures, in parallel to those in the last revision (which compared the effect of perturbation).
In brief, in the last revision, we focused primarily on comparing the model to the monkey ketamine behavior in terms of characterizing the change in behavioral measures of interest: namely, the mean weight, SD weight, and PVB index. We believe this is especially of interest because the change under ketamine is also important for comparing the two monkeys: for instance, two subjects may have very different baseline psychophysical performance, yet show a very consistent change from that baseline under ketamine. The same perspective applies to comparing the model to the monkeys, by focusing on the similarity of their change in behavior under a perturbation.
Nonetheless, we agree it is also of interest to assess how well the models’ psychometric performance agrees with the monkeys’ performance in the saline and ketamine conditions. We have thus added a new supplementary figure which computes the Kullback–Leibler (KL) divergence between the saline and ketamine data of both monkeys and models under various perturbations (new Figure 8—figure supplement 6; NB Figure 8—figure supplement 6 from the previous submission has moved to Figure 8—figure supplement 7). We chose KL divergence instead of likelihood because it is a more robust measure, less sensitive to extreme responses in the behavior for which the model had negligible likelihood (e.g., an error at strong evidence).
We note that we do not approach this as model fitting (for reasons elaborated in the response to the reviewers’ first comment above), but rather as providing a quantitative measure of psychometric similarity.
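A divergence of this kind can be sketched as follows, for binned Bernoulli choice distributions. This is our own generic formulation under stated assumptions (per-bin choice probabilities, optional trial-count weighting, and eps-clipping to keep rare extreme responses finite); the manuscript’s exact computation may differ.

```python
import numpy as np

def psychometric_kl(p_data, p_model, weights=None, eps=1e-3):
    """KL divergence between binned Bernoulli choice distributions.
    p_data, p_model: P(choose option A) per evidence bin. Probabilities are
    clipped to [eps, 1-eps] so that an error at strong evidence, where the
    model's likelihood would be negligible, does not blow up the measure."""
    p = np.clip(np.asarray(p_data, float), eps, 1.0 - eps)
    q = np.clip(np.asarray(p_model, float), eps, 1.0 - eps)
    kl = p * np.log(p / q) + (1.0 - p) * np.log((1.0 - p) / (1.0 - q))
    if weights is None:
        w = np.full(len(p), 1.0 / len(p))          # equal weight per bin
    else:
        w = np.asarray(weights, float) / np.sum(weights)
    return float(np.sum(w * kl))
```

The divergence is zero for identical psychometric curves and grows as the model’s curve departs from the data, which is what makes it usable as a quantitative measure of psychometric similarity.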
In this new analysis, we demonstrated that the saline data for each monkey are more similar to the control model (green symbol) than to the lowered E/I model (purple) (Figure 8—figure supplement 6C,F), consistent with the previous conclusion. Importantly, while the elevated E/I plot in Figure 7D may appear visually more similar to the saline plot (combined across monkeys) in Figure 8B (as the reviewers pointed out), quantitatively comparing the model data to the saline plots (separated between the two monkeys) using KL divergence shows that the control model is more similar to the saline data (Figure 8—figure supplement 6C,F). Finally, we would like to reiterate that the key model comparison determining the perturbation parameters in Figure 7 is still done with the previous perturbed vs. baseline comparison (Figure 8—figure supplements 4, 5 and 7), which we believe is at least as critical as the comparison in Figure 8—figure supplement 6. We have also changed the text to mention KL divergence wherever model comparison is mentioned.
In the new text, we now include these KL divergence results, alongside the measures of change in behavioral features. We have also expanded the “Testing the versatility of model predictions” section of Materials and methods to include KL divergence.
3) Assess effects of concurrent NMDAblockade on E and I neurons.
You establish that an E/I increase reduces the PVB index while an E/I decrease has the opposite effect. However, you have not examined the effect of concurrent changes to the NMDARs of both E and I cells, which we had suggested. Please comment on the fact that concurrent changes could mimic the effect of the E-to-E reduction (Figure 7, suppl. 2: moving the purple point up diagonally would result in equivalent behavior). Unless there is strong support in favor of the selective NMDA change over the concurrent change (assessed via maximum likelihood), the conclusions should be reframed.
We would like to clarify that we did examine the effect of concurrent changes of NMDARs on both E and I cells (e.g. please see Figure 7—figure supplements 2, 3, Figure 8—figure supplements 4, 5, and the newly added Figure 8—figure supplement 7, where we explicitly computed the resulting E/I ratio across E- and I-cell perturbation conditions). These 2D sweep analyses illustrate that the net effect on E/I ratio is a key effective parameter, and that concurrent changes can cause the same effects illustrated by the ‘pure’ perturbations. We have also decided to emphasize more strongly the discussion of concurrent changes of NMDARs on both E and I cells, with a focus on the net effect on E/I ratio, which can arise from concurrent changes. We take care not to suggest that our findings support a pure perturbation acting on a single cell type; such purity is not required for there to be a preferential impact on one cell type (e.g. via differential NMDAR subunits) yielding a net impact on E/I ratio. We further added the following text in the Results section to justify why our main figures primarily considered perturbation to either E or I cells, but not both:
“[…] Crucially, the effects of E/I and sensory perturbations on PVB index and regression coefficients were generally robust to the strength and pathway of perturbation (Figure 7—figure supplements 2, 3).
Disease and pharmacologyrelated perturbations likely concurrently alter multiple sites, for instance NMDARs of both excitatory and inhibitory neurons. We thus parametrically induced NMDAR hypofunction on both excitatory and inhibitory neurons in the circuit model. The net effect on E/I ratio depended on the relative perturbation strength to the two populations^{27}. Stronger NMDAR hypofunction on excitatory neurons lowered the E/I ratio, while stronger NMDAR hypofunction on inhibitory neurons elevated the E/I ratio. Notably, proportional reduction to both pathways preserved E/I balance and did not lower the mean evidence regression coefficient (a proxy of performance) (Figure 7—figure supplement 2A). […]”
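The diagonal structure described in the quoted text above can be sketched with a linearised toy computation, in which the net E/I change is taken as proportional to the difference between the NMDAR hypofunction applied to I cells (disinhibition, raising E/I) and to E cells (lowering E/I). The linear form and the sweep range are illustrative assumptions, not the circuit model's actual parameters:

```python
import numpy as np

# Fractional NMDAR conductance reduction applied to each population
# (illustrative range, not the values used in the spiking circuit).
hypo_E = np.linspace(0.0, 0.3, 4)   # hypofunction on excitatory cells
hypo_I = np.linspace(0.0, 0.3, 4)   # hypofunction on inhibitory cells

# 2D sweep over all combinations of concurrent perturbations.
dE, dI = np.meshgrid(hypo_E, hypo_I, indexing="ij")

# Linearised net effect: hypofunction on I cells elevates E/I,
# hypofunction on E cells lowers it.
net_ei_change = dI - dE   # > 0: elevated E/I; < 0: lowered E/I

# Proportional reduction on both pathways (the diagonal of the sweep)
# preserves E/I balance, mirroring the quoted Results text.
print(np.diag(net_ei_change))   # zeros along the diagonal
```

The off-diagonal cells of this grid correspond to concurrent perturbations whose net effect matches one of the 'pure' perturbations of the same net E/I change, which is why the 2D sweeps in the figure supplements collapse onto the single effective E/I axis.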
We have also added an additional clarification to the Abstract to make clear that our main conclusion is that ketamine induces NMDAR hypofunction to lower E/I balance (by acting predominantly, but not necessarily exclusively, on excitatory neurons):
“[…] Ketamine yielded an increase in subjects' PVB, consistent with lowered cortical excitation/inhibition balance from NMDAR hypofunction predominantly onto excitatory neurons.”
4) Add a lapse rate downstream from circuit model.
You have now assessed lapse rates in your analysis, but reviewers remarked that you do not report the best-fitting lapse rates. This makes it impossible to judge just how much lapses contribute to the decrease in task performance in the initial period following ketamine injection (which is included in all analyses). We are concerned that this massive performance drop under ketamine is triggered not only by a PVB, but also (perhaps largely) by an increase in lapses and a decrease in evidence sensitivity.
We would expect a lapse mechanism to be in play in the circuit model when emulating the ketamine effect. You could use the fraction of lapses best fitted to psychometric curves (which clearly do not saturate at p(correct) = 1) for the circuit model simulations. It seems conceivable that allowing the circuit model to lapse will reduce the weight applied on the mean evidence.
We thank the reviewers for this excellent suggestion. We have added two supplementary figures in which we incorporated a ‘downstream’ lapse mechanism into the circuit model, with the lapse rate fitted to the two monkeys’ ketamine data (Figure 8—figure supplements 8, 9). We believe this allows readers to better evaluate the effect of lapses using the circuit models.
We would also like to clarify that we reported the best-fitted lapse rates in the previous submission. As shown there, accounting for such lapse rates did not significantly change the regression weights or evidence sensitivities when analyzing the subjects’ behavior (Figure 8—figure supplement 2). Our further analyses have also shown that the regression weights and evidence sensitivities of the circuit models’ behavior are unaffected when empirical lapses are incorporated (Figure 8—figure supplements 8, 9; please also see the new Figure 7—figure supplement 1). For clarity, we have further expanded our explanation of the lapse rate.
“In further analysis, we also controlled for the influence of ketamine on the subjects’ lapse rate – i.e. the propensity for the animals to respond randomly regardless of trial difficulty. […] This confirmed that the rise in PVB was an accurate description of a common behavioral deficit throughout the duration of ketamine administration.”
Figure 8—figure supplements 8 and 9 can be found above in the response to comment 2. We also added the following text together with Figure 8—figure supplements 8 and 9:
“To quantify the effect of lapse rate on evidence sensitivity and regression weights in general, we examined the effect of a lapse mechanism downstream of the spiking circuit models (Figure 8—figure supplements 8, 9). Using the lapse rate fitted to the experimental data collected from the two monkeys, we assigned that proportion of trials to have randomly selected choices for each circuit model, and repeated the analysis to obtain psychometric functions and the various regression weights. Crucially, while the psychometric function as well as the evidence mean and standard deviation regression weights were suppressed, the findings on PVB index were not qualitatively altered in the circuit models, further supporting the finding that the lapse rate does not account for changes in PVB under ketamine.”
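A minimal sketch of such a downstream lapse mechanism, applied to model-generated choices, is shown below. The function name and the unbiased coin-flip assumption are ours for illustration; the lapse rate itself would be the value fitted to the monkeys' data:

```python
import numpy as np

rng = np.random.default_rng(1)

def apply_lapse(choices, lapse_rate):
    """Downstream lapse: on a random fraction of trials, replace the
    model's binary choice (0/1) with an unbiased coin flip,
    regardless of the evidence on that trial."""
    choices = np.asarray(choices).copy()
    lapse_trials = rng.random(choices.size) < lapse_rate
    choices[lapse_trials] = rng.integers(0, 2, lapse_trials.sum())
    return choices

# On very easy trials where the circuit is always correct (choice = 1),
# a lapse rate L caps accuracy at roughly 1 - L/2, so the psychometric
# function no longer saturates at p(correct) = 1.
easy = np.ones(100_000, dtype=int)
lapsed = apply_lapse(easy, 0.2)
print(lapsed.mean())   # ≈ 0.9
```

Because lapses dilute all conditions equally, they suppress the mean and SD regression weights together, which is why the PVB index (their ratio) is largely unaffected.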
5) Different quantification of pro-variance bias.
We do not understand the motivation for compressing sensitivity to mean and to variance into a single PVB index. Our reading is that the pro-variance effect, quantified as a higher probability of choosing a more variable stream (see Tsetsos et al., 2012), can just be directly mapped onto the variance regressor. Combining the weights into a PVB index and framing the general discussion around this index seems unnecessary. The main behavioral result of ketamine can be parsimoniously summarized as a reduced sensitivity to the mean evidence. Relatedly, please discuss if and how the ketamine-induced increase in the PVB effect, the way you quantified it, rides over a strong decrease of the sensitivity to mean evidence under ketamine.
It does seem to be the case that sensitivity to variance remains statistically indistinguishable between saline and ketamine (if anything, it is slightly reduced). The E/I increase model consistently predicts that the variance regressor is reduced. This is not the case with the E/I decrease model, which occasionally predicts increases in the sensitivity to the variance (see yellow grids in Figure 7—figure supplement 2). This feature of the E/I decrease model should be discussed, as it seems to undermine the statement that the E/I perturbation produces robust predictions regardless of perturbation magnitude (i.e. depending on the strength of the E/I reduction, the model can produce a decrease or an increase in variance sensitivity, and the relationship is non-monotonic). Overall, we believe that combining sensitivity to mean and variance obscures the interpretation of the data and model predictions.
Again, we realize that this point appears to be new. But reviewers feel they could not really have a strong case regarding this metric without seeing the more detailed model predictions (in a 2D grid) that you have presented in your revision.
We thank the reviewers for raising this point. We agree that further explanation of our choice to define the PVB measure as a ratio would improve the paper. We also agree that it is important to clearly report the effects of the mean and variance terms separately as well, to be explicit about what drives the change in the ratio measure, the PVB index.
First, as the reviewers note, there is a downside to reporting a ratio, which is that it can obscure how a change in the ratio is driven by changes in the numerator (SD) or denominator (mean) terms. Therefore, to accommodate the reviewers’ suggestion, we believe that the best solution for clarity is to report the changes in SD and mean weights, individually, alongside where changes in the ratio PVB index are reported. We have now included this information throughout the text, wherever a change in PVB index is reported for the model or monkeys, so that readers can readily keep track of how mean and SD terms are impacted, alongside the PVB index.
We believe that describing the PVB index as the ratio of SD to mean weights is conceptually useful when interpreting changes in these behavioral sensitivities (as here by ketamine).
A key motivation relates to a point raised by the reviewers in the previous round of review: “You should stress that the model does not feature any explicit PVB, and that PVB emerges through sample-by-sample competition between the two streams.” We agree that PVB should be understood as an emergent phenomenon arising from the decision-making process.
In evidence accumulation models, it is a non-trivial problem how to reduce the sensitivity to the mean of the evidence without a proportional reduction in the sensitivity to its SD. The simplest way to reduce the mean sensitivity would be to downscale the incoming evidence strength, but this would presumably downscale the SD sensitivity by the same factor. Indeed, this is what our “upstream deficit” perturbation demonstrates: the mean weight is reduced, and the SD weight is reduced by the same proportion, leaving the PVB index unchanged. This is therefore a useful feature of our definition of the PVB index: under a ‘reference’ proportional change, the SD weight changes by the same proportion as the mean weight, resulting in no change in PVB index.
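This proportional-scaling intuition can be checked with a simple simulation: generate choices from a hypothetical logistic read-out of the mean and SD regressors, downscale the overall evidence strength, and verify that both fitted weights shrink together while their ratio (the PVB index) is unchanged. The generative coefficients below are arbitrary illustrative values, not fits to our data:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_choices(scale, n_trials=20_000):
    """Choices from a logistic read-out of (z-scored) evidence mean and SD.
    True sensitivities 2.0 and 0.4 give a 'true' PVB index of 0.2;
    'scale' emulates an upstream downscaling of evidence strength."""
    mean_ev = rng.normal(0.0, 1.0, n_trials)
    sd_ev = rng.normal(0.0, 1.0, n_trials)
    logit = scale * (2.0 * mean_ev + 0.4 * sd_ev)
    choice = rng.random(n_trials) < 1.0 / (1.0 + np.exp(-logit))
    return mean_ev, sd_ev, choice.astype(float)

def fit_logistic(X, y, n_iter=50):
    """Newton-Raphson maximum likelihood for logistic regression
    (no intercept, for simplicity)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (y - p)
        hess = (X * (p * (1 - p))[:, None]).T @ X
        w += np.linalg.solve(hess, grad)
    return w

for scale in (1.0, 0.5):   # 0.5 emulates the 'upstream deficit'
    m, s, choice = simulate_choices(scale)
    w_mean, w_sd = fit_logistic(np.column_stack([m, s]), choice)
    print(f"scale={scale}: mean weight={w_mean:.2f}, "
          f"SD weight={w_sd:.2f}, PVB index={w_sd / w_mean:.3f}")
```

Both fitted weights roughly halve when the evidence is downscaled, but the fitted PVB index stays near its generative value of 0.2 in both cases, mirroring the unchanged PVB index under the upstream-deficit perturbation.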
All three of our circuit perturbations (lowered E/I, elevated E/I, upstream deficit) reduce the sensitivity to the mean, so consideration of the mean evidence regressor alone is not sufficient to dissociate the three circuit perturbations in the model. The qualitative behavioral dissociation between the three is their impact on the PVB index: increased for lowered E/I, decreased for elevated E/I, and unchanged for the upstream deficit. Therefore, the key question for dissociating these three circuit perturbations is how the SD weight changes relative to the mean weight, which is captured by their ratio.
The reviewers are correct in pointing out that elevating the E/I ratio consistently predicts a reduced evidence standard deviation regressor, while lowering the E/I ratio can predict an increase in the sensitivity to the variance (as shown in Figure 7—figure supplement 2). This is, in fact, a non-trivial property of the circuit model. In the strongly recurrent regime, the circuit model’s decision-making choice accuracy follows an inverted-U shape as a function of E/I ratio (e.g. see Lam et al., 2017). The control model is not at the peak of this inverted-U shape, but slightly to one side; the peak occurs at a weakly lowered E/I ratio. Similar to choice accuracy, the evidence standard deviation regression weight also follows an inverted-U shape as a function of E/I ratio (Figure 7—figure supplement 2B). In contrast to choice accuracy, however, the control model sits even further from this peak, which occurs at an E/I ratio lower than that of the peak for choice accuracy (e.g. compare Figure 7—figure supplement 2A and B).
If we consider these distinct locations of the control model on the inverted-U curves for choice accuracy and for the evidence standard deviation regression weight, the effects of elevating and lowering the E/I ratio on the mean evidence regressor, the standard deviation regressor, and the PVB index become clearer and more interpretable. Elevating the E/I ratio always moves the model down both inverted-U curves, producing a consistent effect regardless of the scale of the perturbation. In contrast, weakly lowering the E/I ratio drives the model up the inverted-U curve for the standard deviation regression weight while driving it down the curve for choice accuracy, yielding a decrease in the mean evidence regression weight but an increase in the standard deviation weight. Only if the perturbation is strong enough to push the model across the peak (of the standard deviation regression weight) does lowering the E/I ratio decrease both the mean and standard deviation regressors. Notably, the PVB index increases in both regimes. This is another reason we utilized the PVB index: it is a robust measure of E/I ratio that spares readers the detailed changes and mechanisms of the mean and standard deviation regression weights. (As an aside, for completeness: an even weaker lowered-E/I perturbation would drive the control model up toward the peaks of both inverted-U curves. However, this range is smaller than the perturbation strengths used in this study, and the resulting slight increase in the mean evidence regression weight, bounded by the nearby choice accuracy peak, would be dominated by the larger increase in the standard-deviation weight, for which the control model is further from its peak and far less bounded. Therefore, this remains consistent with our argument that E/I perturbations produce robust effects regardless of perturbation magnitude.)
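The geometry of this argument can be sketched schematically with two hypothetical quadratic inverted-U curves whose peaks sit at slightly and more strongly lowered E/I, respectively. The peak locations, widths, and quadratic form are purely illustrative assumptions, not quantities derived from the circuit model:

```python
# Schematic only: hypothetical inverted-U curves for choice accuracy and
# for the SD regression weight, as functions of E/I ratio change relative
# to the control model at 0.
def accuracy(ei):
    return 1.0 - (ei + 0.05) ** 2    # peak at a slightly lowered E/I (-0.05)

def sd_weight(ei):
    return 1.0 - (ei + 0.30) ** 2    # peak at a more strongly lowered E/I (-0.30)

control = 0.0
for d_ei in (+0.2, -0.2, -0.7):      # elevated, weakly lowered, strongly lowered
    da = accuracy(control + d_ei) - accuracy(control)
    ds = sd_weight(control + d_ei) - sd_weight(control)
    print(f"dE/I = {d_ei:+.1f}: d(accuracy) = {da:+.3f}, d(SD weight) = {ds:+.3f}")
```

The schematic reproduces the three regimes described above: elevating E/I lowers both quantities, weakly lowering E/I lowers accuracy while raising the SD weight, and strongly lowering E/I (past the SD-weight peak) lowers both.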
We note that of our two monkey subjects, both showed strong, significant decreases in mean weight, while only one showed a significant decrease in SD weight; yet both showed a consistent proportional change in PVB index. This is consistent with, and can be explained by, the aforementioned inverted-U description.
Finally, another attractive property of presenting the PVB index as a ratio is that it is a dimensionless quantity, which facilitates comparisons between the monkeys and the model.
In the new revision, we have added text noting changes in mean and SD weights wherever changes in PVB index are noted. We have also expanded the text introducing the SD/mean ratio as the PVB index, to better motivate why it is an interesting measure of this phenomenon:
“In addition, we defined the pro-variance bias (PVB) index as the ratio of the regression coefficient for evidence standard deviation over the regression coefficient for mean evidence. […] From the ‘Regular’ trials, the PVB index across both monkeys was 0.173 (Monkey A = 0.230; Monkey H = 0.138).”
We have also added the following text in the Results to explain how the inverted-U phenomenon relates to the PVB index:
“Since decision-making choice accuracy depends on E/I ratio along an inverted-U shape – where the control, E/I-balanced model is right next to the (slightly lowered E/I) peak (Lam et al., 2017) – both elevating and lowering the E/I ratio drive the model away from the peak, resulting in a lowered mean evidence regression weight. […] Notably, regardless of the magnitude with which the E/I ratio is lowered, the PVB index is consistently increased, providing a robust measure of pro-variance bias.”
We have also added the following text in Discussion to provide further motivation of the PVB index as a useful measure:
“The PVB index, as the ratio of standard deviation to mean evidence regression weights, serves as a conceptually useful measure for interpreting changes in pro-variance bias due to ketamine perturbation in this study. […] The two monkeys, both interpreted as having lowered E/I ratio using the model-based approach in this study, may therefore experience slightly different degrees of E/I reduction when administered ketamine, as shown through concurrent changes in NMDAR conductances in the circuit model (Figure 7—figure supplement 2).”
https://doi.org/10.7554/eLife.53664.sa2

Article and author information
Author details
Funding
National Institute of Mental Health (R01MH112746)
 John D Murray
Wellcome (098830/Z/12/Z)
 Laurence Tudor Hunt
Wellcome (208789/Z/17/Z)
 Laurence Tudor Hunt
Brain and Behavior Research Foundation
 Laurence Tudor Hunt
National Institute for Health Research Oxford Health Biomedical Research Centre
 Laurence Tudor Hunt
Middlesex Hospital Medical School General Charitable Trust
 Sean Edward Cavanagh
NSERC (PGSD2-502866-2017)
 Norman H Lam
Wellcome (096689/Z/11/Z)
 Steven Wayne Kennerley
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: All experimental procedures were approved by the UCL Local Ethical Procedures Committee and the UK Home Office (PPL Number 70/8842), and carried out in accordance with the UK Animals (Scientific Procedures) Act.
Senior Editor
 Michael J Frank, Brown University, United States
Reviewing Editor
 Tobias H Donner, University Medical Center Hamburg-Eppendorf, Germany
Reviewers
 Konstantinos Tsetsos, University Medical Center Hamburg-Eppendorf, Germany
 Valentin Wyart, École normale supérieure, PSL University, INSERM, France
Publication history
 Received: November 15, 2019
 Accepted: August 19, 2020
 Version of Record published: September 29, 2020 (version 1)
 Version of Record updated: October 1, 2020 (version 2)
 Version of Record updated: October 8, 2020 (version 3)
Copyright
© 2020, Cavanagh et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.