I am an economist with primary interests in behavioral economics and econometrics.
I currently work at QuantCo at the intersection of data science and economics.
I received my PhD from New York University in 2019.

CV

Contact

csaba.daniel@gmail.com


Publications

Rational Inattention, Competitive Supply, and Psychometrics

with Andrew Caplin, John Leahy and Oded Nov

The Quarterly Journal of Economics, Volume 135, Issue 3, August 2020, Pages 1681–1724

We introduce a simple method of recovering attention costs from choice data. Our method rests on a precise analogy with production theory. Costs of attention determine consumer demand and consumer welfare just as a competitive firm’s technology determines its supply curve and profits. We implement our recovery method experimentally, outline applications, and link our work to the broader literature on inattention and mistaken decisions.
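To fix ideas on the production-theory analogy the abstract alludes to, here is the textbook duality in standard producer-theory notation (notation mine, not the paper's): a competitive firm's cost function can be recovered from its supply behavior, just as attention costs are recovered here from choice behavior.

```latex
\begin{align*}
  q(p) &\in \arg\max_{q \ge 0} \{\, p\,q - C(q) \,\}, \qquad \pi(p) = p\,q(p) - C(q(p)), \\
  \pi'(p) &= q(p) \quad \text{(envelope theorem)}, \\
  C(q(p)) &= p\,q(p) - \pi(\underline{p}) - \int_{\underline{p}}^{p} q(s)\,ds .
\end{align*}
```

In the attention problem, attention plays the role of supply and the scale of incentives plays the role of the price.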

Paper | Replication files

Working Papers

Attention Elasticities and Invariant Information Costs

We consider a generalization of rational inattention problems by measuring costs of information through the information radius (Sibson, 1969; Verdú, 2015) of statistical experiments. We introduce a notion of attention elasticity measuring the sensitivity of attention strategies with respect to changes in incentives. We show how the introduced class of cost functions controls attention elasticities, whereas the Shannon model restricts the attention elasticity to unity. We explore further differences and similarities relative to the Shannon model in relation to invariance, posterior separability, consideration sets, and the ability to learn events with certainty. Lastly, we provide an efficient alternating minimization method, analogous to the Blahut-Arimoto algorithm, to obtain optimal attention strategies.
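For readers unfamiliar with the Blahut-Arimoto-style alternating minimization mentioned above, here is a minimal sketch for the standard Shannon (mutual-information) cost; it is illustrative only and does not implement the information-radius cost studied in the paper.

```python
import numpy as np

def shannon_ri(u, mu, lam, tol=1e-10, max_iter=10_000):
    """Blahut-Arimoto-style iteration for a rational inattention problem
    with Shannon cost lam * I(action; state).

    u   : (n_states, n_actions) payoff matrix
    mu  : (n_states,) prior over states
    lam : marginal cost of information
    Returns conditional choice probabilities p[state, action] and
    unconditional action probabilities q[action].
    """
    w = np.exp(u / lam)                      # transformed payoffs
    q = np.full(u.shape[1], 1.0 / u.shape[1])
    for _ in range(max_iter):
        p = q * w                            # p(a|s) proportional to q(a) exp(u/lam)
        p /= p.sum(axis=1, keepdims=True)
        q_new = mu @ p                       # q(a) = sum_s mu(s) p(a|s)
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    return p, q

# Toy example: two equally likely states, three actions
u = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])
mu = np.array([0.5, 0.5])
p, q = shannon_ri(u, mu, lam=0.3)
print(np.round(q, 3))   # actions with q close to 0 drop out of the consideration set
```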

Paper | Code

Learning with Misspecified Models

with Bálint Szőke

We consider likelihood-based learning when the learner's entertained set of likelihoods is misspecified. We focus on the welfare implications of misspecified sets through the limit point of learning and the associated best-responding policy. Building on the best-responding policies, we define consistency requirements for sets of likelihoods that a utility-maximizing agent would find desirable. We characterize a class of decision problems for which one can construct exponential families of likelihoods, using derived payoff-relevant moments as sufficient statistics, that satisfy our consistency requirements, thereby guaranteeing the implementation of optimal policies irrespective of the data generating process.
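For reference, an exponential family built on a vector of payoff-relevant moments, as in the last sentence of the abstract, has the generic form (notation mine):

```latex
f(y \mid \theta) = h(y)\,\exp\!\bigl(\theta^{\top} T(y) - A(\theta)\bigr),
\qquad
A(\theta) = \log \int h(y)\, e^{\theta^{\top} T(y)}\, dy .
```

Here T(y) is the sufficient statistic, played in the paper by the derived payoff-relevant moments.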

Paper

Attentional Complements

I study how attentional constraints affect elasticity and substitution patterns in demand. I identify "attentional complementarity", whereby goods that are substitutes in utility appear as complements in behavior due to limits on attention. Adopting the framework of rational inattention, I characterize the channels through which attention influences observed substitution patterns. The in-sample fit of these models can be similar to that of standard discrete choice models, yet counterfactual predictions differ substantially once attentional frictions are accounted for. I provide conditions for identification of the model from standard stochastic choice data. A key obstacle in estimation is the genericity of corner solutions, which endogenously give rise to consideration sets; the estimation strategy I employ is robust to this feature. In an empirical application, I show that elasticities are underestimated by models that do not account for attentional frictions. I conduct an experiment allowing for a detailed analysis of attention strategies and find that subjects allocate their attention in line with the theory.
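For context on how corner solutions produce consideration sets, recall the standard result for Shannon-cost rational inattention (Matějka and McKay, 2015): conditional choice probabilities take a weighted logit form,

```latex
P(a \mid v) \;=\; \frac{P^{0}(a)\, e^{v_{a}/\lambda}}{\sum_{b} P^{0}(b)\, e^{v_{b}/\lambda}} ,
```

where the unconditional probabilities P^0 solve a fixed-point problem and are zero for some alternatives at corner solutions; those alternatives are never chosen, which is the endogenous consideration set referred to above. (This is background on the general framework, not this paper's specific model.)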

Paper | Replication files

Range Effects in Multi-attribute Choice: An Experiment

with Tommaso Bondi and Evan Friedman

Several behavioral theories suggest that, when decision-makers choose between multi-attribute goods, their choices are systematically affected by the range of values in each attribute. Two theories make such predictions explicitly in terms of attribute ranges. According to the theory of Focusing (Kőszegi and Szeidl, 2013), attributes with larger ranges receive more attention. Relative thinking (Bushong, Rabin, and Schwartzstein, 2017), on the other hand, posits that fixed differences look smaller when the range is large. It is as if attributes with larger ranges are over- and under-weighted, respectively. Since the two theories make opposing predictions, it is important to understand which features of the environment affect their relative prevalence. We conduct an experiment designed to test for both of these opposing range effects in different environments. Using choices under risk, we employ a two-by-two design defined by high or low stakes and high or low dimensionality (as measured by the number of attributes). In the aggregate, we find evidence of focusing in low-dimensional treatments. Classifying subjects into focusers and relative thinkers, we find that focusers are associated with quicker response times and that types are more stable when the stakes are high.
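A stylized way to see the two opposing predictions (simplified notation of my own, not the exact specifications of either paper): an option x with K attributes is evaluated by a range-weighted sum,

```latex
U(x) \;=\; \sum_{k=1}^{K} g(R_k)\, u_k(x_k),
\qquad
R_k \;=\; \max_{y \in \mathcal{C}} u_k(y_k) \;-\; \min_{y \in \mathcal{C}} u_k(y_k),
```

where the range R_k is taken over the choice set C; Focusing corresponds to g increasing in R_k, Relative thinking to g decreasing.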

Paper


Teaching

Instructor

Industrial Organization (Syllabus) NYU, 2016, 2017 Summer
Python Data Bootcamp NYU Stern, 2017 Spring
Reinforcement in Calculus and Statistics Barcelona GSE, 2012 Fall

Teaching Assistant

Microeconomics NYU Stern, 2018, 2017 Fall, 2016 Spring
Introduction to Microeconomics NYU, 2017, 2016 Fall
Statistics NYU, 2015 Fall
Topics in Econometrics NYU, 2015 Spring
Game Theory (graduate) UAB, 2012 Fall

Miscellaneous Projects

Some projects I worked on for QuantEcon.

Black-Litterman Model

with Thomas J. Sargent and Bálint Szőke

This lecture explores the Black-Litterman model of mean-variance portfolio choice theory. We discuss the relative difficulty of estimating means compared to covariances and how this difficulty leads mean-variance optimization to recommend extreme long-short positions. We modify the baseline model using robust control theory and Bayesian estimation techniques, which help regularize the model's extreme recommendations.
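For reference, a minimal numpy sketch of the baseline Black-Litterman posterior mean that the lecture starts from (parameter values are illustrative; see the lecture itself for the robust-control and Bayesian modifications):

```python
import numpy as np

def black_litterman_mean(Sigma, w_mkt, P, Q, delta=2.5, tau=0.05, Omega=None):
    """Baseline Black-Litterman posterior mean of excess returns.

    Sigma : (n, n) covariance matrix of excess returns
    w_mkt : (n,) market-capitalization weights
    P, Q  : (k, n) view-picking matrix and (k,) view returns
    delta : risk-aversion coefficient, tau : prior-uncertainty scale
    Omega : (k, k) view-uncertainty covariance (default: tau * P Sigma P')
    """
    pi = delta * Sigma @ w_mkt                 # equilibrium (implied) returns
    if Omega is None:
        Omega = tau * P @ Sigma @ P.T
    A = np.linalg.inv(tau * Sigma)
    B = P.T @ np.linalg.inv(Omega)
    # Precision-weighted average of equilibrium returns and views
    return np.linalg.solve(A + B @ P, A @ pi + B @ Q)

# Toy example: two assets, one view saying asset 0 outperforms asset 1 by 2%
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
w_mkt = np.array([0.6, 0.4])
P = np.array([[1.0, -1.0]])
Q = np.array([0.02])
print(black_litterman_mean(Sigma, w_mkt, P, Q))
```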

rvlib

with Spencer Lyon

rvlib is a Python package for probability distributions. The aim of rvlib is to mimic the user-friendly API of the Distributions.jl package while improving upon the performance of scipy.stats by exploiting Numba.
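To give a flavor of the performance idea, here is a minimal sketch using Numba directly; this is not rvlib's own API (which follows Distributions.jl naming conventions), just an illustration of compiling a density with Numba and checking it against scipy.stats.

```python
import numpy as np
from numba import njit
from scipy import stats

@njit
def norm_pdf(x, mu, sigma):
    """Normal density evaluated elementwise, compiled with Numba for speed."""
    out = np.empty_like(x)
    c = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    for i in range(x.shape[0]):
        z = (x[i] - mu) / sigma
        out[i] = c * np.exp(-0.5 * z * z)
    return out

x = np.linspace(-4.0, 4.0, 1_000_000)
assert np.allclose(norm_pdf(x, 0.0, 1.0), stats.norm.pdf(x, 0.0, 1.0))
```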

Notes on Econometrics

with Bálint Szőke

This is a collection of notes we took mostly to organize our own thoughts and catalogue some interesting concepts we encountered while reading about statistical learning theory and econometrics. Some of the concepts are demonstrated through short simulations in Python.