In this EconTalk episode, host Russ Roberts welcomes back Cathy O’Neil, author of the fantastically titled new book, Weapons of Math Destruction. The “weapons” O’Neil is concerned with are problematic algorithms: widespread, in some sense secret or proprietary, and in some way destructive. While Roberts and O’Neil agree about the dangers inherent in certain algorithmic applications, they disagree on many points as well.
1. How does the use of recidivism risk scores “create its own reality” in criminal sentencing? According to O’Neil, how does this practice confuse accuracy with causality?
2. Who loses more as a result of school districts employing the teacher value-added model, teachers or students? Why?
3. Both O’Neil and Roberts bemoan the transformation of many college campuses into resort-like settings. Why does this bother them so much, and to what extent does this bother you? What accounts for this transformation: competing for data-driven rankings, or something else?
4. O’Neil, a data scientist, doesn’t object to all algorithms, only those that are destructive. What constitutes a destructive algorithm, in your opinion? How does your evaluation compare to O’Neil’s?
5. What sort of ethical standards for the use of big data does O’Neil support? To what extent do you think these are warranted? Potentially effective? Do you think having an ethical data advisor is a good idea?
READER COMMENTS
SaveyourSelf
Oct 7 2016 at 9:42am
In this episode—like so many Econtalk episodes in the past—Russ Roberts bemoaned the fact that jail time, when viewed as a rehabilitation tool, seems to have a neutral or even negative effect: it’s not helpful, or it even makes crime worse, never mind that it is terribly expensive. I’ve been chewing on that same problem for many years without much luck. I think if we could somehow structure incarceration so as to improve the lives of the convicts after release, then all of society would benefit, because societal welfare is a sum of individual welfare.
My original thought was to accomplish this lofty goal through compulsory schooling for convicts. But the education system, as it is currently structured, has not been proven to causally improve outcomes for regular people, so I have little confidence that compulsory schooling will improve outcomes for convicts either. In addition, there is the question of motivation. Why would a convict want to go to school when everything is provided without him having to lift a finger? Perhaps higher grades in the jail-school could result in reductions in sentence time, or better food, or better lodging, or something of that sort. So maybe motivation wouldn’t be as big a problem as I thought, but, even with perfect motivation, there’s still a lack of causal evidence that school, even when done well, can help.
More recently, based on my own experiences parenting and the little I’ve read about behavioral therapy, I have been thinking behavioral therapy techniques could work where incarceration or cognitive therapy [school] would fail. “Behavioral therapy” is a scientific discipline built around the simple observation that future repetitions of a behavior become more likely when the behavior is followed within 3 seconds by a reward, and less likely when it is followed within 3 seconds by a punishment. Studies already exist showing that behavioral therapy causally changes future behavior…on average. Rewards or punishments outside of the 3-second window have no influence whatsoever on future performance. Unfortunately, that means incarceration—regardless of its length—is unlikely to have any influence on recidivism following release, because incarceration is experienced hours to months after a criminal behavior—well outside the 3-second window of opportunity for learning.
On another recent Econtalk episode, Leo Katz mentioned the concept of mutually beneficial exchanges between prisoners and society. He said that we don’t allow them, which is unfortunate when [if] they could benefit both society and the convict. But I’ve been thinking that a mutually beneficial exchange might be permissible if done as a voluntary, scientific study.
So I’ve designed a five-arm study comparing behavioral therapy with cognitive therapy and usual incarceration, described below. I’m not a researcher or scientist or a behavioral therapist, and I’m not involved in the justice system, so any improvements to this design would be appreciated.
Hypothesis: Behavioral therapy using rewards and punishments is the only intervention that will reduce the likelihood of future recidivism of a specific type of behavior.
Null Hypothesis: All treatment efforts to reduce recidivism will produce the same outcomes.
Design: prospective, randomized, single-blind [the evaluator is blind; the convict is aware of the treatment he is receiving], usual-treatment and placebo [sham] controlled study. Each treatment arm would occur in a separate prison. The control arm would include prisoners at all prisons. Ideally, the schooling arm and the behavioral therapy arm would be administered by the same teachers/therapists at the different prisons.
Inclusion criteria: Initially accept only convicts who have committed the same type of crime. It does not matter which crime is chosen initially; the experiment should work equally well for all types.
Follow up: I think one year of follow up should be adequate to determine the efficacy of the interventions.
Primary treatment outcomes would be: recidivism rates for the specific crime.
Secondary treatment outcomes would be: recidivism rates for all types of crime, convict employment after release.
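As a rough illustration of the design above, here is a minimal sketch of how balanced arm assignment and the primary outcome could be tabulated. The arm names and helper functions are hypothetical, invented for this sketch; a real trial would of course use proper trial software and statistical analysis.

```python
import random

# Hypothetical labels for the five arms sketched above (not from the comment).
ARMS = [
    "usual_incarceration",   # control arm, drawn from every prison
    "cognitive_therapy",     # the schooling arm
    "behavioral_therapy",    # reward/punishment within the 3-second window
    "sham_cognitive",        # placebo: classroom contact time, no curriculum
    "sham_behavioral",       # placebo: therapist contact, no contingent feedback
]

def randomize(convict_ids, seed=0):
    """Shuffle eligible convicts and assign them evenly across the arms."""
    rng = random.Random(seed)
    ids = list(convict_ids)
    rng.shuffle(ids)
    return {cid: ARMS[i % len(ARMS)] for i, cid in enumerate(ids)}

def recidivism_rate(outcomes, assignment, arm):
    """Primary outcome: share of an arm re-convicted of the index crime
    within the one-year follow-up. `outcomes` maps id -> True/False."""
    in_arm = [cid for cid, a in assignment.items() if a == arm]
    if not in_arm:
        return None
    return sum(outcomes[cid] for cid in in_arm) / len(in_arm)
```

With 100 eligible convicts, `randomize(range(100))` yields exactly 20 per arm; the hypothesis would then be tested by comparing `recidivism_rate` across arms at one year.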
Chris
Oct 7 2016 at 8:13pm
This could have been such an interesting discussion if the credibility of the author was not tainted by her ideological viewpoints. It seemed that she backed up her criticism primarily with anecdotes – perhaps the book is different.
I am sympathetic to the commonsense argument that the predictions that fall from these algorithms should not be interpreted to be without error. However, I’m not convinced that using these algorithms is a net negative.
Are value added scores for teachers over time just noise or is there useful information?
Do urban, poor, black young men with prior records and/or minor criminal violations not have a greater chance of recidivism?
Do college rankings actually have any impact on the cost of college education?
Are targeted marketing algorithms actually harmful to poor people?
If I were to guess, you could eliminate the algorithms mentioned and have no positive impact on the end result and possibly a negative impact.
Remove value-added teacher measures, and teacher turnover will be entirely subjective and, more importantly, we may lose data that actually has some relevance over a longer period of time.
Remove judicial sentencing algorithms and you will still have a disproportionate number of minorities in jail and variability in sentencing based on the judge – who may actually be racist.
Remove the computer rankings of colleges, and schools will compete primarily on brand and perks alone. Do you really think the school that was ranked 100th has a rock climbing wall and indoor water park because of the computer rankings? Do for-profit colleges charge high tuition because they are not ranked? It is baffling how anyone could attribute the increase in college costs to a magazine ranking system. If the ranking were that important, wouldn’t lower-ranked schools compete on cost?
Lastly, remove targeted marketing and poor people will not give money away to predators? This argument is disrespectful to poor people since it implies that they are too stupid to make an informed decision as a consumer. I would not be surprised if targeted marketing of payday loans actually lowers costs to borrow for people with bad credit that want to borrow a small amount of money.
Very frustrating episode for someone who is actually interested in the pitfalls of using algorithms to make decisions! Is this how academia works now? Start with an ideological conclusion and build a case to support it?
Steve
Oct 10 2016 at 11:26am
Quoting Chris:
This could have been such an interesting discussion if the credibility of the author was not tainted by her ideological viewpoints. It seemed that she backed up her criticism primarily with anecdotes – perhaps the book is different.
Agreed. I think she needs to get out to the rest of the country where urinating on the sidewalk is regarded as a crime rather than a nuisance.
Anna Svensson
Oct 29 2016 at 2:15pm
Thank you for a great series of high quality interviews on big data and causality. Working as a data scientist with a background in causal inference, I too often encounter the opinion that data in itself will solve all our problems, whereas I believe that intelligent interpretation of data, and of how it should be used, will become ever more important in the future. Big data might be sexy, but I think it’s time to put causal inference higher up on the agenda.
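Anna’s point about causal inference can be made concrete with a toy simulation (all numbers are synthetic and invented for this sketch, not from the episode): when a confounder drives both the “treatment” and the outcome, the naive comparison in the raw data shows a large gap that vanishes once we stratify on the confounder.

```python
import random

rng = random.Random(42)

# Synthetic data: z is a confounder (say, prior risk); it raises the chance
# of both the treatment x and the outcome y. Crucially, x has NO effect on y.
rows = []
for _ in range(10000):
    z = rng.random() < 0.5
    x = rng.random() < (0.8 if z else 0.2)   # treatment driven by z
    y = rng.random() < (0.7 if z else 0.3)   # outcome driven by z, not by x
    rows.append((z, x, y))

def rate(rows, pred):
    """Outcome rate among rows selected by pred(z, x)."""
    sel = [y for (z, x, y) in rows if pred(z, x)]
    return sum(sel) / len(sel)

# Naive comparison, ignoring z: looks like a big treatment effect (~0.24).
naive_gap = rate(rows, lambda z, x: x) - rate(rows, lambda z, x: not x)

# Stratified comparison: within each level of z, the gap is roughly zero.
gap_z0 = rate(rows, lambda z, x: not z and x) - rate(rows, lambda z, x: not z and not x)
gap_z1 = rate(rows, lambda z, x: z and x) - rate(rows, lambda z, x: z and not x)
```

This is the simplest possible adjustment (stratification on one binary confounder); the point is only that the raw association and the causal effect can differ badly, which is exactly why interpretation matters more than data volume.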
George Gantz
Nov 1 2016 at 11:36am
Have we thought enough about what it implies to use algorithms that are “inscrutable”? (see http://nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable ).
Consider our interactions with other human beings. Admittedly, human beings are often inscrutable. For example, we are intuitive, having a capacity for knowing or predicting without being able to explain where the knowledge comes from. Many psychologists attribute intuition to the hidden workings of our subconscious minds and innate instincts, as well as to tacit knowledge – perception and cognition skills learned through experience and practice. Yet our “intuition” can also lead us astray. Sometimes we make very bad decisions on the basis of a “hunch”, and we are often unable to tell the difference between a valid realization and a post-hoc rationalization. Moreover, intuition may serve as a mask for bias and prejudice, and there is always a risk in human interactions that a person is deceitful or manipulative, based on intentions and motivations that may be hidden. In the context of power relationships between humans, these behaviors can lead to very bad results.
The thought that we will rely increasingly on complex self-learning algorithms that give us results we cannot interpret may not seem particularly chilling. Yet any such human / algorithm relationship will be subject to one particular asymmetry that we should be concerned about – the asymmetry of information. To the extent this asymmetry leads humans to cede authority to inscrutable algorithms (as Cathy O’Neil documents), a power dynamic will come into play and the risks of catastrophe rise dramatically. Just consider the incentives for hackers, programmers, corporate interests or governments to seek to manipulate such inscrutability to their own ends, not to mention the risk of inadvertent or undiscoverable errors.
There is a reason why many regard AI as raising potential existential risks. (see http://swedenborgcenterconcord.org/chapter-iii-the-technology-race-to-super-intelligence/ )