June 27th, 2019

The Politics of Machine Learning, pt. I

Terminology like "machine learning," "artificial intelligence," "deep learning," and "neural nets" is pervasive: businesses, universities, intelligence agencies, and political parties are all anxious to maintain an edge in the use of these technologies. Statisticians might be forgiven for thinking that this hype simply reflects the success of the marketing speak of Silicon Valley entrepreneurs vying for venture capital. All these fancy new terms, after all, just describe something statisticians have been doing for at least two centuries.

But recent years have indeed seen impressive new achievements for various prediction problems, which are finding applications in ever more consequential aspects of society: advertising, incarceration, insurance, and war are all increasingly defined by the capacity for statistical prediction. And there is a crucial thread that ties these widely disparate applications of machine learning together: the use of data on individuals to treat different individuals differently. In this two-part post, Max Kasy surveys the politics of the machine learning landscape.

May 31st, 2019

Copyright Humanism

It's by now common wisdom that American copyright law is burdensome and excessive, and that it fails to promote the ideals that protection ought to serve. Too many things, critics argue, are subject to copyright protections, and the result is an inefficient legal morass that offers few benefits to society and has failed to keep up with the radical transformations in technology and culture of the last several decades. To reform and streamline our copyright system, the thinking goes, we need to get rid of our free-for-all regime of copyrightability and institute reasonable barriers to protection.

But what if these commentators are missing the forest for the trees, and America's frequently derided copyright regime is actually particularly well-suited to the digital age? Could copyright protections—applied universally at the moment of authorship—provide a level of autonomy that matches the democratization of authorship augured by the digital age?

March 28th, 2019

Experiments for Policy Choice

Randomized experiments have become part of the standard toolkit for policy evaluation, and are usually designed to give precise estimates of causal effects. But, in practice, their actual goal is to pick good policies. These two goals are not the same.

Is this the best way to go about things? Can we maybe make better policy choices, with smaller experimental budgets, by doing things a little differently? This is the question that Anja Sautmann and I address in our new work on “Adaptive experiments for policy choice.” If we wish to pick good policies, we should run experiments adaptively, shifting toward better policies over time. This gives us the highest chance to pick the best policy after the experiment has concluded.
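The adaptive idea can be sketched with a toy simulation. The code below uses a Thompson-sampling-style assignment rule over three candidate policies with made-up success rates; note that the rates, sample size, and sampling rule here are illustrative assumptions, not the paper's actual "exploration sampling" procedure, which modifies the assignment probabilities to target policy choice rather than in-experiment welfare. The adaptive logic, however, is the same: shift participants toward policies that look better as data accumulates.

```python
import random

# Hypothetical success rates of three candidate policies
# (assumed for simulation purposes only).
true_rates = [0.30, 0.50, 0.55]

# Beta(1, 1) priors: track successes and failures per policy.
successes = [0, 0, 0]
failures = [0, 0, 0]

random.seed(42)
for _ in range(2000):
    # Draw a plausible success rate from each policy's posterior...
    draws = [random.betavariate(1 + s, 1 + f)
             for s, f in zip(successes, failures)]
    # ...and assign the next participant to the most promising policy.
    arm = draws.index(max(draws))
    if random.random() < true_rates[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

# After the experiment concludes, pick the policy with the
# highest posterior mean success rate.
means = [(1 + s) / (2 + s + f) for s, f in zip(successes, failures)]
best = means.index(max(means))
print(best)
```

Because assignment shares drift toward the better-performing arms, most of the experimental budget ends up spent discriminating between the close competitors, which is exactly where precision matters for the final policy choice.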

March 22nd, 2019

The Emerging Monopsony Consensus

Early on in The Wealth of Nations, Adam Smith asked who had the edge in negotiations between bosses and wage laborers. His answer: the bosses. In the case of a stalemate, landlords and manufacturers “could generally live a year or two” on their accumulated wealth, while among workers, “few could subsist a month, and scarce any a year, without employment.” Thus, concluded Smith in 1776, “masters must generally have the advantage.”

As economic thought progressed over subsequent centuries, however, Smith’s view of labor markets gave way to the reassuring image of perfect competition. In recent years, a model more in line with Smith’s intuitions has grown to challenge the neoclassical ideal. Under the banner of monopsony, economists have built up an impressive catalog of empirical work that offers a more plausible baseline model for labor markets.

March 19th, 2019

Ideology in AP Economics

When the media talks about ideological indoctrination in education, it is usually assumed to refer to liberal arts professors pushing their liberal agenda. Less discussed is the very different strain of ideology found in economics. The normative import is harder to spot here, as economics presents itself as a science: it provides an empirical study of the economy, just as mechanical engineering provides an empirical study of certain physical structures. When economists offer advice on matters of policy, it’s taken to be normatively neutral expert testimony, on a par with the advice of engineers on bridge construction. However, tools from the philosophy of explanation, in particular the work of Alan Garfinkel, show how explanations that appear purely empirical can in fact carry significant normative assumptions.1 With this, we will uncover the ideology embedded in economics.

More specifically, we'll look at the ideology embedded in the foundations of traditional economics—as found in a typical introductory microeconomics class. Economics as a whole is diverse and sprawling, such that no single ideology could possibly be attributed to the entire discipline, and many specialized fields avoid many of the criticisms I make here. Despite this, if there are ideological assumptions in the standard introductory course, that is of great significance.

March 1st, 2019

The Case for an Unconditional Safety Net

The 'magic bucket' of universal cash transfers

Imagine a system where everyone had a right to basic material safety, and could say “no” to abuse and exploitation. Sounds utopian? I argue that it would be quite feasible to get there, and that it would make eminent economic, moral, and political sense.

In my paper, I discuss four sets of arguments for why it would make economic, moral, and political sense to transition from the current system of subsidizing low-wage work to a system providing an unconditional safety net.

January 24th, 2019

Why Rational People Polarize

U.S. politics is beset by increasing polarization. Ideological clustering is common; partisan antipathy is increasing; extremity is becoming the norm (Dimock et al. 2014). This poses a serious collective problem. Why is it happening? There are two common strands of explanation.

The first is psychological: people exhibit a number of “reasoning biases” that predictably lead them to strengthen their initial opinions on a given subject matter (Kahneman et al. 1982; Fine 2005). They tend to interpret conflicting evidence as supporting their opinions (Lord et al. 1979); to seek out arguments that confirm their prior beliefs (Nickerson 1998); to become more confident of the opinions shared by their subgroups (Myers and Lamm 1976); and so on.

The second strand of explanation is sociological: the modern information age has made it easier for people to fall into informational traps. They are now able to use social media to curate their interlocutors and wind up in “echo chambers” (Sunstein 2017; Nguyen 2018); to customize their web browsers to construct a “Daily Me” (Sunstein 2009, 2017); to uncritically consume exciting (but often fake) news that supports their views (Vosoughi et al. 2018; Lazer et al. 2018; Robson 2018); and so on.

So we have two strands of explanation for the rise of American polarization. We need both. The psychological strand on its own is not enough: in its reliance on fully general reasoning tendencies, it cannot explain what has changed, leading to the recent rise of polarization. But neither is the sociological strand enough: informational traps are only dangerous for those susceptible to them. Imagine a group of people who were completely impartial in searching for new information, in weighing conflicting studies, in assessing the opinions of their peers, etc. The modern internet wouldn’t force them to end up in echo chambers or filter bubbles—in fact, with its unlimited access to information, it would free them to form opinions based on ever more diverse and impartial bodies of evidence. We should not expect impartial reasoners to polarize, even when placed in the modern information age.

November 9th, 2018

Banking with Imprecision

In 1596, Spanish troops under the leadership of the Duke of Medina-Sidonia set fire to their own ships in the waters near Cadiz. The sinking of these thirty-two vessels was a tactical necessity: a joint Anglo-Dutch navy had annihilated the slapdash defenses of the city, driving the Spanish ships off to nearby Puerto Real. The Spanish had preferred to see their ships sunk rather than captured by the enemy. Cadiz itself was occupied and sacked, and its most prominent civilians were held for ransom. War, as the Spanish were acutely aware, was very costly. Later that very year, Philip II, King of Spain, would declare bankruptcy.1

Though he was one of the most powerful monarchs of the era, it is difficult not to sympathize with the sheer magnitude of the work with which King Philip II of Spain had to contend. Not only did he have to protect his Iberian possessions, but he also had to prosecute a war against the recalcitrant Dutch in the Low Countries, outmaneuver the Protestants in France, and maintain a bulwark against the Turks in the Mediterranean.2

In their book, Lending to the Borrower from Hell, Drelichman and Voth have done a remarkable job of illuminating Spanish finance in the 16th century. Notably, the fiscal machinery underpinning imperial operations was managed mostly by a tight-knit cartel of Genoese bankers. Sovereign lending, astonishingly, allowed for a plethora of state actions in a time before instant communication. The foundations of empire rested on a relatively simple model: control certain streams of income and then borrow against them. The institutional origins of modern sovereign lending come from this tradition. Dealing with uncertainty is an inherent part of this model—now as it was then. What is of use to modern scholars is how the same problem was conceived of and partly surmounted by our institutional forebears.

October 18th, 2018

Machine Ethics, Part One: An Introduction and a Case Study

The past few years have made abundantly clear that the artificially intelligent systems that organizations increasingly rely on to make important decisions can exhibit morally problematic behavior if not properly designed. Facebook, for instance, uses artificial intelligence to screen targeted advertisements for violations of applicable laws or its community standards. While offloading the sales process to automated systems allows Facebook to cut costs dramatically, design flaws in these systems have facilitated the spread of political misinformation, malware, hate speech, and discriminatory housing and employment ads. How can the designers of artificially intelligent systems ensure that they behave in ways that are morally acceptable—ways that show appropriate respect for the rights and interests of the humans they interact with?

The nascent field of machine ethics seeks to answer this question by conducting interdisciplinary research at the intersection of ethics and artificial intelligence. This series of posts will provide a gentle introduction to this new field, beginning with an illustrative case study taken from research I conducted last year at the Center for Artificial Intelligence in Society (CAIS). CAIS is a joint effort between the Suzanne Dworak-Peck School of Social Work and the Viterbi School of Engineering at the University of Southern California, and is devoted to “conducting research in Artificial Intelligence to help solve the most difficult social problems facing our world.” This makes the center’s efforts part of a broader movement in applied artificial intelligence commonly known as “AI for Social Good,” the goal of which is to address pressing and hitherto intractable social problems through the application of cutting-edge techniques from the field of artificial intelligence.

October 10th, 2018

Who cares about stopping rules?

Can you bias a coin?

Challenge: Take a coin out of your pocket. Unless you own some exotic currency, your coin is fair: it's equally likely to land heads as tails when flipped. Your challenge is to modify the coin somehow—by sticking putty on one side, say, or bending it—so that the coin becomes biased, one way or the other. Try it!

How should you check whether you managed to bias your coin? Well, it will surely involve flipping it repeatedly and observing the outcome, a sequence of h's and t's. That much is obvious. But what's not obvious is where to go from there. For one thing, any outcome whatsoever is consistent both with the coin's being fair and with its being biased. (After all, it's possible, even if not probable, for a fair coin to land heads every time you flip it, or for a biased coin to land heads just as often as tails.) So no outcome is decisive. Worse than that, on the assumption that the coin is fair, any two sequences of h's and t's (of the same length) are equally likely. So how could one sequence tell against the coin's being fair and another not?

We face problems like these whenever we need to evaluate a probabilistic hypothesis. Since probabilistic hypotheses come up everywhere—from polling to genetics, from climate change to drug testing, from sports analytics to statistical mechanics—the problems are pressing.

Enter significance testing, an extremely popular method of evaluating probabilistic hypotheses. Scientific journals are littered with reports of significance tests; almost any introductory statistics course will teach the method. It's so popular that the jargon of significance testing—null hypothesis, $p$-value, statistical significance—has entered common parlance.
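To make the method concrete, here is a minimal sketch of a significance test for the coin challenge above. The flip and head counts are invented for illustration: under the null hypothesis that the coin is fair, we compute a p-value as the total probability of any head count at least as improbable as the one observed.

```python
from math import comb

def binomial_p_value(heads, flips):
    """Two-sided p-value for the null hypothesis of a fair coin."""
    # Probability of exactly k heads in n flips of a fair coin.
    def prob(k, n):
        return comb(n, k) * 0.5 ** n

    observed = prob(heads, flips)
    # Sum the probabilities of all outcomes no more likely than
    # the observed one (a small tolerance guards against rounding).
    return sum(prob(k, flips) for k in range(flips + 1)
               if prob(k, flips) <= observed + 1e-12)

# 60 heads in 100 flips: p ≈ 0.057, so the result narrowly fails
# to reach statistical significance at the conventional 5% level.
print(round(binomial_p_value(60, 100), 3))
```

Note that this "sum all outcomes at least as extreme" construction is one common way of defining a two-sided binomial p-value; the teaser's larger point is precisely that what such a number licenses us to conclude is less obvious than it looks.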
