I am a Ph.D. candidate in economics at New York University. My primary interests are in behavioral economics, experimental economics and applied econometrics. Most of my projects study the interaction between attention, information and choice. I am on the economics job market in the 2018-2019 academic year and will be available for interviews at the AEA Annual Meeting in Atlanta.
New York University
19 W 4th Street, 6th Floor
New York, NY 10012
Job Market Paper
I study how attentional constraints affect elasticity and substitution patterns in demand. I identify "attentional complementarity", whereby goods that are substitutes in utility appear as complements in behavior due to limits on attention. Adopting the framework of rational inattention, I identify the channels through which attention influences observed substitution patterns. The in-sample fit of these models can be similar to that of standard discrete choice models, yet counterfactual predictions differ substantially when attentional frictions are accounted for. I provide conditions for identification of the general rational inattention model from standard stochastic choice data. A key obstacle in estimation is the genericity of corner solutions, which endogenously give rise to consideration sets; the estimation strategy I employ is robust to this feature. In an empirical application, I show that models not accounting for attentional frictions underestimate elasticities. I conduct an experiment allowing for the detailed analysis of attention strategies and find that subjects allocate their attention in line with the theory.
Paper | Replication files
Rational Inattention, Competitive Supply, and Psychometrics
Revise & Resubmit at The Quarterly Journal of Economics
with Andrew Caplin, John Leahy and Oded Nov
Costs of attention, while central to choice behavior, have proven hard to measure. We introduce a simple method of recovering them from choice data. Our recovery method rests on the observation that costs of attention play precisely the same role in consumer choice as do a competitive firm's costs of production in its supply decision. This analogy extends to welfare analysis: consumer welfare net of attention costs is measured in precisely the same way as the profits of a competitive firm. We implement our recovery method in a purpose-built experiment. We quantitatively assess the trade-off between reward level and task complexity. Estimated attention costs are highly correlated with decision time, an important common input in process-based models of attention.
Paper | Replication files
Range Effects in Multi-attribute Choice: An Experiment
with Tommaso Bondi and Evan K. Friedman
Several behavioral theories suggest that choices between multi-attribute goods are systematically affected by the range of values in each attribute. Two theories provide such predictions explicitly in terms of attribute ranges. According to the theory of Focusing (Kőszegi and Szeidl, 2013), attributes with larger ranges receive more attention. On the other hand, Relative Thinking (Bushong, Rabin, and Schwartzstein, 2017) posits that fixed differences look smaller when the range is large. It is as if attributes with larger ranges are over- and under-weighted, respectively. Since the two theories make opposing predictions, it is important to understand what features of the environment affect their relative prevalence. We conduct an experiment designed to test for both of these opposing range effects in different environments. In a setting of choice under risk, we employ a two-by-two design defined by high or low stakes and high or low dimensionality (as measured by the number of attributes). In the aggregate, we find evidence of focusing in low-dimensional treatments. Classifying subjects into focusers and relative thinkers, we find that focusers are associated with quicker response times and that types are more stable when the stakes are high.
Learning with Misspecified Models
with Bálint Szőke
We consider Bayesian learning about a stable environment when the learner's entertained probability distributions (likelihoods) are all misspecified. We evaluate likelihoods according to the long-run average payoff of the policy function they induce. We then show that, generically, the value that the Bayesian learner attains in the long run is lower than what would be achievable with her misspecified set of likelihoods. We introduce two kinds of indifference curves over the learner's set: one based on the likelihoods' induced long-run average payoff, and another capturing their statistical similarity. Under misspecification, we show that misalignment of these curves can lead the Bayesian learner to focus on payoff-irrelevant features of the environment. Under correct specification, by contrast, this misalignment has no bite. We provide conditions under which it is feasible to construct an exponential family that allows the learner to implement the best attainable policy in the long run, irrespective of misspecification. We demonstrate applications of the introduced concepts through examples.
Some projects I have worked on under QuantEcon.
with Thomas J. Sargent and Bálint Szőke
This notebook explores the Black-Litterman model of mean-variance portfolio choice theory. We discuss the relative difficulty of estimating means compared to covariances, and how this leads the baseline model to recommend extreme long-short positions. We modify the baseline model using robust control theory and Bayesian estimation techniques, which help regularize the model's extreme recommendations.
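The mechanism can be sketched in a few lines (an illustrative example, not the notebook's code; the covariance matrix, market weights, and shrinkage weight below are all made-up numbers): mean-variance weights are very sensitive to the estimated mean vector, and shrinking that estimate toward market-implied equilibrium returns, in the spirit of Black-Litterman, pulls the portfolio back toward the market portfolio.

```python
import numpy as np

def mean_variance_weights(mu, Sigma, risk_aversion=2.0):
    """Unconstrained mean-variance portfolio: w = (risk_aversion * Sigma)^{-1} mu."""
    return np.linalg.solve(risk_aversion * Sigma, mu)

rng = np.random.default_rng(0)
Sigma = np.array([[0.04, 0.018],
                  [0.018, 0.09]])                 # hypothetical asset covariance
w_mkt = np.array([0.6, 0.4])                      # market-cap weights
mu_eq = 2.0 * Sigma @ w_mkt                       # equilibrium returns implied by w_mkt
mu_hat = mu_eq + rng.normal(scale=0.05, size=2)   # noisy sample estimate of the mean

w_naive = mean_variance_weights(mu_hat, Sigma)    # reacts strongly to estimation noise
tau = 0.2                                         # weight placed on the noisy estimate
w_shrunk = mean_variance_weights(tau * mu_hat + (1 - tau) * mu_eq, Sigma)
```

Because `mu_eq` is constructed to rationalize the market portfolio exactly, the shrunk weights land a fraction `tau` of the way from `w_mkt` toward the naive weights, damping the extreme positions that a noisy mean estimate induces.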
with Spencer Lyon
rvlib is a Python package for probability distributions. The aim of rvlib is to mimic the user-friendly API of the Distributions.jl package while improving upon the performance of scipy.stats by exploiting numba.
with Bálint Szőke
This is a collection of notes we took, mostly to organize our own thoughts and catalogue some interesting concepts that we encountered while reading about statistical learning theory and econometrics. Some of the concepts are demonstrated through short simulations using Python.