Title: Noise: A Flaw in Human Judgment
Authors: Daniel Kahneman, Olivier Sibony, Cass R. Sunstein
ISBN: 9780316451406
Format: Hardcover, 464 pages
Genres: Psychology, Nonfiction, Science, Business, Economics, Leadership, Sociology, Philosophy, Self Help, Personal Development
Rating: really liked it
The sheer variety of ways judgment can be clouded is mind-boggling. The more closely we examine judgments, the more noise turns up as a factor. In Noise, an A-list team of celebrity psych stars (Daniel Kahneman, Olivier Sibony and Cass Sunstein) pulls together their confrères and evidence from the usual innumerable studies to delineate how bad it really is.
Noise, at least in psychology, is “unwanted variability”. In practical terms, that means even the most focused person might be swayed by unnoticed noise. Noise can be the home team losing the night before, lunch coming up in half an hour, miserable weather, a toothache – pretty much anything that has nothing to do with the issue at hand. This is all in addition to personal prejudices and the framework of bureaucratic rules that are always in play, restricting the range of possible decisions and misdirecting them toward places they should not go.
All kinds of studies show that trial judges are inconsistent when not totally wrong. The authors say two judges viewing the same evidence in the same case will come to two completely different decisions. So will the same judge given the same case on two different occasions. Sentencing is all over the place, which has led to enforced sentencing guidelines that often make things worse. It has also led to judge-shopping, as the decision patterns of judges build up over the years. This is based not on evidence or argument, but on the ways a judge’s decisions tend to be skewed. Think political parties, religion, and stubborn pig-headedness.
The same goes for mere mortals, like supervisors. They all believe they do a creditable job, but the stats show the direct opposite. Even simple linear models do a far better job in every case. Not just sometimes – every time, according to Noise. Even randomly generated models do a far more accurate job of judging people correctly than people do. Artificial intelligence algorithms can also add a little more accuracy, though surprisingly, not significantly so. But people on their own perform miserably.
Still, no one, but no one, would trust a simple model to make a decision on their future; they feel better having personally tried with another human, regardless of the facts. It immediately reminded me of Lake Wobegon, where all the kids are above average. Doesn’t work like that. In the authors’ words, “Models of reality, of a judge or randomly generated models all perform better than nuanced, intuitive, insightful and experienced humans.” To which I would add: anyone who claims they can accurately size up a person on meeting them, can’t.
Errors occur far more frequently than people realize, because everyone trusts their own judgment foremost, and far too often, the judgment of others (their lawyers, doctors and managers, for example).
The worst example of this occurs in job interviews and performance appraisals. Everyone knows the single worst way to make a hire is through a personal, unstructured interview. Yet managers still insist on interviews, and so do candidates, thinking they can master the battle and win the job if they can simply deal with someone in person. Both are totally wrong, yet both persist nonetheless. Job interviews have become a nightmare for candidates, who go back multiple times for essentially no good reason; the more people who interview them, the more inaccurate the ultimate decision will be.
As for quarterly, semi-annual and annual performance appraisals, those who have to work with the results know they are usually totally worthless. Managers burdened with multiple reports grind them out against a deadline, having little or nothing to do with an individual’s performance. Most everyone is “satisfactory”, especially when managers are required to rate them on a scale. No decisions can validly be taken from these exercises in frustration, but they are taken anyway. And while essentially no one in any organization likes or ever looks forward to the whole process, the noise persists, clouding futures.
Scales themselves are useless, as the authors show in examples such as for astronauts. A bell-curve distribution would show one or two excellent performers, one or two total failures, and most in the middle. But there are no total failures among astronauts. The yearslong training requires and ensures it. So grading on a scale against a bell-curve can be just more noise.
For the open-minded, Noise provides details, tips and tricks to leverage. For example, deliberation, the vaunted value of teams, actually increases the noise around a decision. The mere fact that team members discuss their reasoning before they make a decision increases the noise for everyone participating. The key to making teams work, ironically, is for everyone to do their own research in isolation, and once they have all come to a decision, they can then compare with others on the team.
They call this independent work “decision hygiene”. It cuts down noise in general, but no one can know specifically which noise, or by how much. The authors liken it to handwashing – no one knows which germs were there to kill. All they know is that handwashing kills germs, and that you can never get rid of all of them.
The authors show that noise occurs in almost any shape or form. The quality of the paper used for a business plan, and the font it is presented in, can tip the success or failure of a proposal in the hands of potential investors.
Another interesting noise source is called white coat syndrome. This is noise some people generate when going to see a doctor, nurse or lab technician. Their blood pressure rises in anticipation, sometimes causing an erroneous diagnosis.
Things like prejudice are not so much noise as bias. When assessing decisions that go wrong, noise is the standard deviation of the errors, while bias is their mean. The book is a thorough attempt to make a science of noise and errors in judgment.
Bias is a likely driver of noise. But the book is all about separating the two. It shows that biases, such as “planning fallacy, loss aversion, overconfidence, the endowment effect, status quo bias, excessive discounting of the future, and various biases against various categories of people” are all factors in erroneous decisions. But despite all this, sheer noise outweighs bias heavily.
They use mean squared error to demonstrate the effect of both bias and noise, with noise the clear winner, and dramatically so. Squaring the errors makes them visually arresting, but they still need to be stopped – somehow.
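The bias–noise arithmetic behind this can be sketched numerically. A minimal illustrative simulation (numbers invented, not from the book): judgments of a known true value are modeled as the truth plus a fixed bias plus Gaussian noise, and mean squared error then splits into bias squared plus the variance of the errors.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0
BIAS = 2.0        # systematic shift shared by every judgment
NOISE_SD = 6.0    # unwanted variability around the shifted mean

# Simulate many judgments of the same underlying quantity.
judgments = [TRUE_VALUE + BIAS + random.gauss(0, NOISE_SD) for _ in range(100_000)]
errors = [j - TRUE_VALUE for j in judgments]

mse = statistics.fmean(e * e for e in errors)
bias_term = statistics.fmean(errors) ** 2   # (mean error)^2 ≈ 4
noise_term = statistics.pvariance(errors)   # variance of errors ≈ 36

# MSE decomposes exactly into bias^2 + noise^2.
print(f"MSE     = {mse:.1f}")
print(f"bias^2  = {bias_term:.1f}")
print(f"noise^2 = {noise_term:.1f}")
```

With these made-up settings, a modest bias of 2 contributes only 4 to the MSE while a noise standard deviation of 6 contributes 36 – the kind of asymmetry the authors use to argue that noise dominates.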
It transpires that errors do not cancel each other out, either. Instead, they add up, taking decisionmakers farther away from the right decision. And with the book piling on a seemingly infinite selection of noise factors and sources, it’s a wonder Man has made it even this far.
Speaking of erroneous judgments, it is difficult to decide what kind of book Noise is. It is steeped in psychology, but it is not a groundbreaking new discipline. People and firms have been actively trying to filter out noise since forever (the better ones, anyway). Nor is it a psych textbook, really, though there are exercises the reader can use right while poring over it. I think it is closer to a handbook of what to be aware of: forewarned is forearmed sort of thing. Though clearly, mere knowledge of the situation is far from enough to counteract it. The book includes how-tos like implementing an audit to identify and isolate noise, so the book definitely has practical applications. Handbook it is, then.
This noise thing is ego-deflating for all humans, who run their lives continually making decisions, not only on facts, but predictive judgments as well (Predictions provide an “illusion of validity”). That we are not equipped to pull this off successfully – at all – should cause a total rethink of where we go from here. Noise is pernicious. Trusting models looms heavily over us all.
Rating: liked it
I’ve only ever come across the idea of noise in the context of information theory – something I thought this book would have made more mention of, but it didn’t, really. The idea being that the transmission of any signal is likely to involve noise (entropy being the one truly inevitable law of the universe – more than taxes, on par with death) and so figuring out ways to reduce noise ultimately depends on how important the signal is. At the start of the Life of Brian there is a perfect example. Jesus is giving his sermon on the mount, and he says, “Blessed are the peacemakers” – but some people further back hear “blessed are the cheese makers.” Unsurprisingly, this causes an argument over why cheese makers might be singled out for a special blessing. Perhaps one of the reasons information theory isn’t mentioned here is that a major means of controlling noise in information theory is by redundancy, you put some form of redundancy into the signal and it lets you know if you are getting signal, or noise. I don’t think redundancy is something the authors of this book are interested in increasing, perhaps even the opposite.
I’m not sure what to make of this book – so, I’m going to give you my view of what it is about and then some concerns I have about what it is about. One of the things I like about this book is that if you don’t have time to read the whole thing, you can flick to the last chapter and get all the major ideas of the book fired at you in quick succession. All signal, no noise, and very little redundancy, if you like...
This is, at least in part, a treatise against judgement. You only need to make a judgement when you do not know for certain – no one says, “I judge the fire to be hot” or “to the best of my judgement, the sea is salty.” Judgement implies a kind of weighing of variables, and so black and white ideas don't require judgement. All the same, we tend to be far too confident in our ‘judgement calls’, and if the methods discussed in this book have one thing in common, it is to make us pause before we pass judgement. A range of methods are discussed to achieve this, but almost all of them involve delaying our reaching a judgement: ways of ensuring you look before you leap, kicking people off your team who you know are going to prejudge, or organising the inputs to your judgements so that you create lots of diversity.
I think the last point is the one that I will be most likely to take away from this book. This idea is designed to correct the problems with the ‘wisdom of crowds’. That is, that if you get lots of people making a judgement, and you average their judgements, you are likely to get closer to the real value than most, if not all, of the individual judgements themselves. So, if you have to decide how many jellybeans there are in a jar, and you can choose the average of all guesses as your guess, then always do that. In real life you probably don’t get to do too many ‘how many jellybeans are there?’ type quizzes. But just about any judgement call is improved by having an increased diversity of opinions added to the mix.
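The jellybean logic can be sketched in a few lines. This is an illustrative simulation with invented numbers: independent noisy guesses are averaged, and the average typically lands closer to the truth than most individual guesses do.

```python
import random
import statistics

random.seed(1)

TRUE_COUNT = 850  # actual jellybeans in the jar (made up for the sketch)

# Each guesser is individually noisy (and here, slightly biased low).
guesses = [random.gauss(TRUE_COUNT * 0.95, 150) for _ in range(200)]

crowd_guess = statistics.fmean(guesses)
crowd_error = abs(crowd_guess - TRUE_COUNT)
individual_errors = [abs(g - TRUE_COUNT) for g in guesses]

# Independent errors partly cancel in the mean, so the averaged guess
# beats the large majority of the individual guessers.
beaten = sum(e > crowd_error for e in individual_errors)
print(f"crowd error: {crowd_error:.0f}")
print(f"crowd beats {beaten} of {len(guesses)} individual guessers")
```

Note that averaging cancels independent noise but not shared bias – if every guesser skews low, so does the crowd – which is why independence and diversity of the inputs matter so much.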
The only problem is that it is remarkably easy to mess this up. When my second child was born she was breech and so everything was panic in the delivery room – that is, right up until the obstetrician walked into the room. I really have never known a man to have such a presence. Everyone deferred to him. The point made here is that such a person cancels the wisdom of crowds, because people are much less likely to ‘put their own two cents worth in’ if they think they will contradict the wise one. As the authors here say, finding ways to ensure people provide their own judgements independent of everyone else is a major step towards making better judgements. Which is part of the reason why I write these reviews without reading other people's reviews.
And this also goes for choosing inputs that you will use to make your judgements. You should make sure these are diverse too. So that if you are thinking of employing someone, one of your inputs might be their intelligence. But that might mean that your next input shouldn’t really be which university they went to, because those two inputs are probably quite strongly correlated. You want to avoid asking the same question in five different ways and then thinking you have covered all the bases.
Okay – so, what has any of this got to do with noise? Well, their argument is that the nature of judgements is that they are always noisy. And whenever this is tested, judgements prove to be much more noisy than we would guess. This means that one judge might give a drug addict a suspended sentence while another might give someone else under the exact same charge 20 years. Now, normally when we read that that has happened the next thing said is, ‘can you guess which of them was black?’ But this book isn’t about bias, it is about noise. Bias shifts all results in a predictable direction – noise is unpredictable. We can think that noise is fairer than bias – except, you probably wouldn’t think that if you were sitting next to the guy that got the suspended sentence.
The takeaway here is ‘wherever there is judgement there is noise, and more of it than you imagine.’ There are some lovely metaphors too – like the idea of treating reducing noise as a hygiene task – since you are unlikely to actually know the consequence of decisions you didn’t make, but reducing noisy decisions is likely to make for better judgements anyway. And so, like washing your hands, you can never know the infection you might have ended up with if you hadn’t washed them – but that is, after all, the point of washing your hands in the first place.
The problem I have with this book is that it says that one way to reduce noise is to reduce the situations where judgements are necessary. And we can do this by building algorithms. Anyone who has read 'Weapons of Math Destruction' will be feeling a little uncomfortable right about now. And they even mention that book here. Their point is that the identified problems with algorithms are more likely to be them being impacted by bias, rather than their ability to reduce noise. And even if the bias is unconscious, well, we need to find ways to tackle that bias. Getting rid of the algorithms isn’t the solution, the solution is in getting rid of the bias.
Which all sounds well and good. But the problem seems to me to be much more fundamental than that. Signal and noise are interesting because we can assume they are obvious and absolute. There is a ‘true’ signal and noise gets in the way of that signal. But it isn’t clear that a lot of things in life do fit the definition of a true signal. Or better (and perhaps clearer) is the example they give at the start of the book: throwing darts at a dartboard. We all want all of our darts to go into the bullseye. If they did, universal happiness would be achieved and the kingdom of heaven would reign for a thousand years – something like that, anyway. Except, who decides where the bullseye is? Are we all really aiming at identical bullseyes? It may well be true that algorithms can reduce noise in our decision making, but I struggled to get over one of the examples they gave – mandatory sentencing. Sure, there is no noise, but I certainly don’t feel comfortable with that particular bullseye. The authors don’t either, by the way, but I don’t know that I was convinced by their brushing this aside.
I guess I am worried that this book is written by social psychologists, and I’m not sure all of the questions to be answered here fit all that neatly into their field. In fact, a lot of what was said here, particularly in the bits on algorithms, made me think of sociology and the need to take intersectional perspectives into account – and to question if judgements are no less unfair just because they reinforce and reproduce the accepted prejudices of our society.
I know this sounds like I’m arguing about bias, rather than noise – but it wasn’t clear to me how one can reduce the scatter of noise without some sort of an organising principle – or how that organising principle could be something other than some fundamental form of prejudice.
I might sound much more certain about all this than I feel. Clearly, noisy decisions are anything but good decisions. I guess one of my problems here is that I want to agree and disagree at the same time. They give this lovely example where the algorithm says there is a 70% chance Jane will go to the movies tonight, but you know Jane has a broken leg – and so, you assume there is actually no chance she will go to the movies. The problem is that while this is perhaps true, given she has a broken leg – we tend to exaggerate other exceptions to the rule as if they were the same as a broken leg. I do this all the time, by the way. I can’t believe anyone could vote for the Coalition Parties here in Australia – them having been for as long as I can remember a mixture of corruption, unspeakable nastiness and incompetence – but they are re-elected time and again. The thing is that what I take to be broken legs, others don’t even notice as problems.
The authors give an example of where reducing noise might prove too costly – that is, in teachers marking essays – it being obviously too expensive to mark every essay twice. Except, that isn’t how teachers go about reducing noise in their marking. They create rubrics and they cross-mark selections of essays and then they compare results. And they add comments to essays and give the students the right to appeal if they think the mark was too low and they can justify that on the basis of the rubric.
And that is where I’m going to leave this. I do want to see judgements, because I don’t want to be ruled by algorithms that I can’t see or understand or follow the internal decisions of. But I want those decisions and judgements that are made to be based on standards that are clearly documented, with reasons given for those standards, that I can challenge or vote against or shout about when I get seriously pissed off with them. Yeah, judgements are messy and noisy, they are trying to solve complex problems, and so noise really is inevitable. It would be better to make them less noisy, and I can agree with that – but while a lot of the suggestions in this book go a long way to reducing noise, some of the suggestions make me feel very, very uncomfortable.
Rating: did not like it
You know what the real lesson here is: don’t pre-order books based on the authors’ reputation alone. In a world filled with noise, these authors contribute to it with their generally inadequate book.
I really wanted to like this. I liked Nudge, which Cass co-authored, I generally liked Thinking, Fast and Slow, and I want someone who’s not Nate Silver to explain signal-to-noise ratios to help me curate better information in my life. But this book isn’t it. This book is literally noise. Worthless noise in an already noisy world.
Someone like Kahneman, a founder of behavioral economics, you would think would have interesting new research and considered takes on how to cut through the amount of chatter out there in the world. It’s an important problem. But it seems like behavioral economics has stalled out into finding goofy and minor errors in our cognitive biases. Hey look! Two people came to different answers when asked to mentally calculate an abstract concept. Look at how I can create methodologically dubious and unreplicable studies that confuse people into making decisions against their best interests. Am I a behavioral economist yet?
I’m so sick of people writing shitty books to promote themselves as “thought leaders” and charge more for their consulting. I expected a better book out of these authors but found myself extremely disappointed in the shallowness of the ideas and writing. It’s a bad regurgitation of ideas that has been done better in other places.
If you like feeling cocktail-party smart without actually having to put in the effort to be smart, you will probably like this. It’s full of pithy blurbs. (Judges are impacted by whether or not their favorite football team won the night before.) Memorize a few and you’ll impress your wife’s boss’s cousin in no time. Freakonomics did it better. But the fundamental problem is that this book doesn’t say anything that hasn’t been beaten to death before.
Essentially decisions come down to judgments and judgments can be skewed through bias and noise. Noise = randomness except it’s a lot harder to charge six figure consulting fees when you say “oh jeez, there’s just a lot of randomness all up in here.” Much sexier to call it a “noise audit” and point to your crappy book as a guide. People may not be great predictors but we sure are predictably gullible.
Then this book plays a bad game of telephone, with the authors summarizing research they did not do – at times it seems like it might’ve been sourced from a Reddit comment section – in an effort to make their publishers and publicist happy by hitting a page count.
Read Philip Tetlock’s “Expert Political Judgment” and “Superforecasting” for better and more in-depth research on the core topics covered here. Invisible Women does a good job with some of this. Honestly this book felt like a psych sophomore, five solo cups of thunder punch deep, trying to explain their thoughts on cognitive bias. Don’t waste your time.
Rating: really liked it
Noise is bad no matter where in life we find it. In their new book Daniel Kahneman, Olivier Sibony, and Cass Sunstein say there is too much of it in our judgments and explain how noise arises and what might be done about it.
“Judgment” is not “thinking”. The book defines “judgment” as “a form of measurement in which the instrument is a human mind.” Judgments may be less than optimal due to bias, which is systematic deviation from optimal (e.g. the group’s predictions are ALWAYS overly optimistic), or noise, which is a more random scatter. The main topic of the book is “system noise”, which is “unwanted variability in judgments that should ideally be identical.” (I should get the same jail sentence no matter which judge hears my case.) System noise has two main components: level noise (a particular judge is lenient in granting bail) and pattern noise. Pattern noise also has two components: stable pattern noise (such as a tendency to give women lighter jail terms) and occasion noise (I just had a run-in with my boss).
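The decomposition above can be illustrated with a toy judges-by-cases table (names and sentence lengths invented; this is not the book's data). Level noise shows up as variance in the judges' average severity; pattern noise is what remains once judge and case averages are removed. With only one occasion per case, this sketch cannot separate stable pattern noise from occasion noise.

```python
import statistics

# Hypothetical sentences (months) from 3 judges over the same 4 cases.
sentences = {
    "Judge A": [12, 24, 6, 18],
    "Judge B": [20, 30, 14, 26],  # systematically harsher: level noise
    "Judge C": [10, 34, 4, 16],   # unusually harsh on case 2: pattern noise
}

cases = range(4)
grand_mean = statistics.fmean(s for row in sentences.values() for s in row)
judge_means = {j: statistics.fmean(row) for j, row in sentences.items()}
case_means = [statistics.fmean(sentences[j][c] for j in sentences) for c in cases]

# Level noise: variance of the judges' average severity.
level_noise = statistics.pvariance(judge_means.values())

# Pattern noise: variance left after removing judge and case effects.
residuals = [
    sentences[j][c] - judge_means[j] - case_means[c] + grand_mean
    for j in sentences for c in cases
]
pattern_noise = statistics.pvariance(residuals)

print(f"level noise variance:   {level_noise:.1f}")
print(f"pattern noise variance: {pattern_noise:.1f}")
```

The point of the exercise is only that "the same case, different judges" variability has separable components, which is exactly the book's level/pattern split.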
The book discusses each of these types of noise and their psychological aspects, drawing on earlier work such as Sunstein’s “nudge” and Kahneman’s “System 1 and 2” thinking. Readers who are not somewhat familiar with this work might find a quick google search helpful. There is also some discussion of the statistics involved that I suspect will be cryptic to most people who do not already know a bit about statistics. If so, you can certainly ignore the math.
So once you know sources of noise in judgment, what do you do about it? The authors describe some remedies, such as a “noise audit” or a “decision observer” to help remove bias from judgments in groups or a judicious use of rules or standards.
There is a lot of good and thought-provoking insight in Noise, principles that everyone will recognize once they are pointed out but that interfere with good judgment unless we identify and address them. The authors show how to do this with extensive descriptions of judgments in a number of fields, like selecting new hires, setting bail or sentences in criminal cases, and medical decisions. As a result, this is rather a long book, and these descriptions can be skimmed if you are very focused on task, but they are interesting.
The applications described in this book are primarily decisions made by multiple people, whether they be judges setting bail or group recommendations on whether a company should acquire another company. It does not focus much on decisions people might make in their personal lives, but the principles certainly seem applicable there as well. I am sure the authors would recommend that I not review this book just before lunch and after an argument with my spouse!
Insightful analysis of why we make bad judgments
Rating: it was amazing
I have been very interested in the work of the psychologist and economist Daniel Kahneman since around 2000, when I came across some of the ideas around over-confidence bias on an Executive MBA at Insead, and this was only cemented by his 2002 Nobel Prize win for the work he did with Amos Tversky.
I spent a lot of time over the years researching their work, including their 2000 publication “Choices, Values and Frames”, and applying the ideas (both Prospect Theory and the various heuristics and biases they identified in the field of Behavioural Economics) in some of my own professional work (as well as speaking on it to fellow actuaries and to others in insurance).
Kahneman of course came to much wider prominence in 2011 with his publication “Thinking, Fast and Slow” which made it easier to talk about his ideas and their applications to professional work in general and to my own fields of insurance and actuarial work more specifically as more people had familiarity with them.
See here, for example, for an article I co-authored which discussed both that book and Taleb’s “Antifragile” (https://web.actuaries.ie/sites/defaul...)
So of course I was immediately going to read any new book co-authored by Kahneman – and I was interested to see that one of his two co-authors is also the co-writer of “Nudge” (a book I have not read but whose ideas I am familiar with, not least for the way that the UK government established a “Nudge Unit” in 2010 to apply some of its ideas).
What were my first impressions of this book:
My first – and negative – reaction was that it was a lot simpler than I was used to from this author, and not in a good way.
“Choices, Values and Frames” was effectively a compilation of academic papers (of course academic papers from the “Dismal Science” of Economics, where it seems possible to logically argue both X and not-X from the same data, and where empirical evidence is gathered from experiments which are both artificial and with ridiculously small sample sizes – normally a group of 20 graduate students earning $10 to take part constitutes major experimental evidence). And while “Thinking, Fast and Slow” attempted to be for mass consumption, it was still dense with ideas. This book by contrast seems to be light on ideas (particularly early on) – explaining what seemed to me sometimes very simple ideas in rather excruciating detail. It felt like the first 80 pages in particular would almost have been covered by a page of initial definitions in the 2000 work.
The second – and by contrast positive – reaction was that the book was much more addressed to my own field. In fact the very first example given in the book is from underwriting premium judgements and claims case assessment in an unnamed insurance company – which, given that I run a global team of mathematicians whose key functions are assisting underwriters with tools for setting premiums and carrying out calculations to complement case assessment, seemed rather relevant.
Whereas much of the earlier work drew on social-science-type examples and often on the aforementioned artificial experiments, this book draws heavily, both in its empirical data and its recommendations, on areas of professional judgement. In fact in this interview – which serves as a good introduction to the book: https://www.mckinsey.com/business-fun...# – one of the authors describes Noise as “the unwanted variability in professional judgments”. Most of the repeated examples – the insurance example is one of a number of one-offs – are drawn from judicial work (particularly sentencing), forensic science, medical work, and HR areas (both recruitment and performance assessment) – the first much more mappable to my own work and the last of course relevant to almost all workers.
So what is the book about? Well I am sure there will be copious articles over time on the book, but to use the McKinsey article again and Kahneman’s explanation – it is all about distinguishing between bias and noise. “bias is the average error in judgments. If you look at many judgments, and errors in those judgments all follow in the same direction, that is bias. By contrast, noise is the variability of error. If you look at many judgments, and the errors in those judgments follow in many different directions, that is noise”.
A key assertion of the book is that noise has been largely overlooked – particularly in professional areas, as professionals are not prepared to admit quite how noisy their views actually are. They claim, and aim to show from data, that in terms of accuracy in post-fact verifiable judgements, noise is a much greater source of error than bias; they also make the point that with non-verifiable judgements, bias is not a concept that can be easily investigated anyway. Note on the latter, though, that they perhaps miss the point that whereas individual judgements may not be verifiable, aggregate ones perhaps are (insurance premiums – which they correctly say cannot be verified on an individual basis – being a case of an area that can be verified on an aggregate basis).
In terms of noise they later split noise into level noise (taking the example of judge sentencing – the difference in average sentences between lenient and draconian judges) and pattern noise (variability of judges’ responses to particular cases). They later split pattern noise into stable pattern noise (this could be seen as, for example, an otherwise lenient judge who is systematically harsh on knife crime, or a harsh judge who is sympathetic to young offenders) and occasion noise.
And this goes some way towards explaining why they define noise as so critical as I think many people would more naturally group both level noise and even pattern noise with bias.
Some interesting areas:
- Although later showing that stable pattern noise is perhaps one of the biggest contributors to error, an earlier chapter gives lots of examples of how occasion noise is perhaps the most embarrassing part for professionals to admit: caused either (hence its name) by judgements being changed by extraneous circumstances (weather, time of day, and results of local sports teams have all been shown to influence judicial sentences) or by internal inconsistency (forensic scientists – including fingerprint experts – will commonly reach a different conclusion if given the same case months later, as will clinicians).
- They spend a lot of time arguing for the greater use of models – and often simple models – in professional judgement. One example is that “The Model of You beats You” – for many professional judgements, a model simply built off a weighted average of past judgements by an individual outperforms the future judgements of that same individual.
- It is clear that in both judicial areas and medical areas there is an almost complete resistance to the use of models (despite examples like the Apgar score which have been used successfully for years) as being de-humanising, over-simplified, arbitrary etc. From my own professional viewpoint this can seem odd – underwriting professionals are more than happy to have models to complement and ground their assessments.
- There is an interesting discussion on model sophistication which effectively argues for one of two ends of a continuum. Either simple equal weights (or frugal simply weighted) models which aggregate a number of assessments known to be partly predictive OR a complex machine learning model (when large data sets are available including factors not traditionally assessed). In the absence of the latter, they find the former comes very close to (or sometimes outperforms) a regression/GLM type model (partly due to data-mining/over-fitting and partly due to often faulty professional judgement used to interpret the findings of the modelling).
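A minimal sketch of the “frugal” equal-weights end of that continuum, with invented candidate data (the cue names and scores are hypothetical, not from the book): standardize each partly predictive cue across candidates, then add the z-scores with equal weights, fitting no regression coefficients at all.

```python
import statistics

# Hypothetical candidate scores on three cues known to be partly predictive.
candidates = {
    "Ana":   {"structured_interview": 7, "work_sample": 9, "cognitive_test": 6},
    "Ben":   {"structured_interview": 8, "work_sample": 5, "cognitive_test": 7},
    "Chloe": {"structured_interview": 6, "work_sample": 8, "cognitive_test": 9},
}

cues = ["structured_interview", "work_sample", "cognitive_test"]

def zscores(values):
    """Standardize a cue so all cues live on a common scale."""
    mu, sd = statistics.fmean(values), statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

# Equal-weights ("improper") model: sum the standardized cues, no fitting.
standardized = {cue: zscores([candidates[c][cue] for c in candidates]) for cue in cues}
composite = {
    name: sum(standardized[cue][i] for cue in cues)
    for i, name in enumerate(candidates)
}

for name, score in sorted(composite.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:+.2f}")
```

The design point, consistent with the over-fitting caveat above, is that fitted regression weights chase noise in small samples, so a model with no fitted weights at all can be surprisingly competitive.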
- Although not really in scope: while acknowledging the risk of models having inbuilt biases (either implicit, hidden ones arising from proxy variables, or ingrained ones from data sets that reinforce past biases), they also make the point that these biases are typically much more pernicious in individual judgements.
- Some of their recommendations include Noise Audits, Decision Observers and Decision Hygiene, which includes such things as: using judgements rooted in statistical and external evidence (how many M&A deals of this type actually succeed?) rather than causal/narrative judgement (constructing a story to fit the case); breaking judgements into several distinct steps which are ideally carried out independently, so that the halo effect of the first judgement does not outweigh the rest (for interviewing, for example, this could mean an evidence-based competency assessment rather than an informal chat, which is likely biased by early impressions); ideally using multiple professionals and aggregating their judgements; and trying to use relative judgements – for example, for performance reviews, using pair-wise comparisons and scales which set out explicitly what is required for each level of achievement.
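The recommendation to aggregate the judgements of multiple professionals rests on a simple statistical fact: averaging k independent noisy judgements shrinks the noise by roughly a factor of sqrt(k). A small simulation (my own illustration; the true value and noise level are assumed, not from the book) makes this visible:

```python
# Why aggregation is a decision-hygiene tool: the average of a panel
# of independent judges is much less noisy than any single judge.
import random
from statistics import mean, stdev

random.seed(0)

TRUE_VALUE = 100.0  # the quantity being judged (assumed)
NOISE_SD = 20.0     # spread of individual judgements (assumed)

def judgement():
    """One judge's noisy, unbiased estimate of the true value."""
    return random.gauss(TRUE_VALUE, NOISE_SD)

def panel_estimate(k):
    """Average the independent judgements of a panel of k judges."""
    return mean(judgement() for _ in range(k))

# Compare the spread of lone judges vs 10-judge panels
singles = [panel_estimate(1) for _ in range(2000)]
panels  = [panel_estimate(10) for _ in range(2000)]
# stdev(panels) should be roughly stdev(singles) / sqrt(10)
```

Note the caveat the authors themselves stress: averaging only reduces noise, not bias – if every judge shares the same bias, the panel average inherits it intact.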
I am tempted to finalise my review by adding a mark based on:
- comparing it to a number of other books
- evaluating it against an objective rating scale
- aggregating the views of other readers
- running it through my book rating algorithm
Instead, as it is a thought-provoking book and one I am already starting to apply to my day-to-day work, I will settle for 5* – but I would urge persisting through the first few chapters.
Rating: it was ok
Although interesting, the authors clearly show their bias in “Noise”. It was a disappointing book after the incredibly interesting and applicable “Thinking, Fast and Slow”. My main concern is that they imply causation where statisticians would claim no more than correlation. Implying causation is sloppy and bad statistical practice.
They are greatly concerned with the randomness of the impacts on individuals from judgments, insurance companies, and job interviews. Although they state that the impacts on individuals are fair on average (i.e. unbiased), the impact on each individual may have a large variance (be wildly different from the average), which is caused by systemic noise. They assume this is inherently bad and that we should systematically reduce this noise by implementing more algorithms and rules in all sorts of private and public institutions. My concern is: who determines the rules for those algorithms? (Unbiased statisticians or policy makers?)
Essentially, this is a long discussion about statistical models that have larger variances than the authors would like (and larger variances than the general population would expect). They use the variability in human judgement to illustrate that humans are flawed. Their solution is to use more models, but they also point out that models can be flawed in similar ways. It’s a conflicting book of “don’t trust anyone’s judgement” and “don’t trust models”, but “do trust that individuals are likely having unfair things happen to them, even if there isn’t any bias in the system.” It was unfortunate that they didn’t include a discussion of what individuals can do to improve themselves (reducing their own biases and noise) rather than waiting for big institutions to reduce that noise for them.
Rating: liked it
Doesn't add enough to "Thinking, Fast and Slow" to warrant another book. Feels like one of those books where the author gets paid every time they use a specific word (in this case, "noise") and has said it to themselves so much it has become a cult-like world view. In this instance, noise refers to the variation in human decision-making, which Kahneman attributes to a mixture of situational and systemic cognitive biases that covers old territory in the behavioural-psychology world. He makes a case for a utopian rules-based slash AI system to guide decision-making in spheres including law, medicine and HR, which can work to a degree to eliminate noise and bias but can also mute gestalt and out-of-the-box thinking. Aside from the odd forcefully inserted and admittedly interesting behavioural-psychology study, the five-page conclusion at the end is all that's worth your time here.
Rating: really liked it
Interesting look at noise – anything and everything from time of day, to weather, to unconscious preconceptions – that causes inconsistencies in judgement. The authors go through several studies and cases, including the judicial branch, actuarial science, and medicine, and take a look at examples of noise in the decision-making processes. They call for a hygiene makeover for the way that judgments and decisions are handed down. They maintain that too much noise has permeated our society and it is a major contributor to societal injustices. An interesting, quick, nonfiction read.
Rating: did not like it
This book was a disappointment ... I thought it was going to be a scientific book, but it seems to have been written by Malcolm Gladwell ... It's a bunch of stories, nothing more.
Rating: liked it
This book might be interesting if you're new to the topic, but overall, there's much less food for my brain than I would expect based on the previous "Thinking, Fast and Slow".
Half of the book describes multiple experiments proving that people are biased and don't act rationally or make the right judgements all the time – like, happy and well-fed judges hand down more lenient sentences, and so on.
The rest discusses how mood, weather and other factors create noise and affect our judgements. And that's pretty much it. Even the practical part is too generic to add anything.
There's an obvious difference between bias and noise, but the latter could have fit nicely into a format other than a book. It took some effort to finish it.
Rating: did not like it
A boring, in many cases misleading, simplification of concepts that decision scientists, machine learning engineers, and statisticians have known and systematically studied for decades, in far more detail than these authors do. The authors are out of their depth here and contribute nothing new to the conversation. Their folksy, popular-press series of books has grown tired and at this point seems mostly like a money-making machine for them, in which they restate the obvious and botch the nuances and the state of the art.
Rating: really liked it
Would we all be better off if we got rid of human judges and used algorithms to make decisions? Most of us would say “No,” but this book might make some of us change our minds. There are many examples in the book, but let’s look at doctors. There is some evidence that entering symptoms, medical history, etc. into an algorithm would give more consistently good diagnoses than human doctors provide. Why? The book gives many reasons, but a couple are that doctors are more likely to order follow-up diagnostic tests in the morning than in the afternoon, and that some doctors make an initial diagnosis within a few seconds and then are very reluctant to change that diagnosis, regardless of subsequent evidence. Algorithms don’t do either of these things, which are examples of “noise.”
This is one small facet of this eye-opening book. I would definitely recommend it. My one complaint is that the last half of the book was very heavy on business examples. “Thinking, Fast and Slow” seemed more generally social-science-y and I did enjoy it more. (ARC received for review)
Rating: it was ok
This book was a long slog. The topic of noise (variability error – not to be confused with bias error) is important and has serious consequences for human judgements. Unfortunately, the novel insights in this book are buried within many pages of uninteresting, poorly edited text. The bottom line is that people make noisy decisions most of the time. Most of us tend to believe we make rational decisions: we tend to accept the idea that the decisions of others are often noisy but don’t believe our own decisions are noisy. In some cases (such as the judicial and medical systems) noisy decisions can have undesired and even tragic outcomes. The reasons for this noise are categorized in detail in the book, but they all come down to the fact that people are emotional and easily influenced, sometimes by non-obvious tangential things like the weather or whether their home team won the game the night before. Our decisions are much, much more emotional and noisy than we realize, so we should first be aware of this, and second take steps to reduce noise in judgements and decisions that impact our lives and the lives of others. The book gives some examples of how to do this, but many seemed impractical, involved, and not easy to implement (e.g. doing a noise audit, seeking out baseline statistics, breaking decisions into smaller well-defined tasks, practicing observing your judgments through an ‘outsider’ lens, establishing guidelines or checklists, and using machine-learning algorithms or simple linear decision tools – as long as they aren’t biased). Many of the solutions presented come with their own downsides, such as reducing noise errors but increasing bias errors. An important thing to keep in mind is that noise errors are additive and do not ‘cancel each other out’ as many assume, so their impact is large and important. The last chapter of the book provides an excellent summary for those wanting the CliffsNotes version.
For a much more accessible and practical understanding of human judgements and how to address our vulnerabilities to make better decisions, I recommend the books “Influence” and “Pre-Suasion” by Robert Cialdini. I think both of these books on influence could help reduce noisy decision making.
Rating: really liked it
A good book on better judgment, reducing noise and reducing bias. Though the ideas presented in the book are not new, the descriptions of some concepts, and of why these traps happen, are indeed good.
The book mainly covers the types of noise present – pattern noise, level noise, and occasion noise.
It covers the various biases and noise involved when you are in a group, and effective ways to counter them.
I loved the points on information cascades, social pressure and group polarization.
It also talks about the biases involved in predicting and forecasting, such as the illusion of validity, objective ignorance, overconfidence, premature conclusions and excessive coherence.
The points on improving judgment – decision hygiene, sequencing information, structuring complex judgments – are very engaging.
But there are many instances where the book gets very dull and repetitive, with some professional references which might not appeal to general readers.
For a detailed review: https://dddebjeet.wordpress.com/
Rating: liked it
Once again, I let myself be dazzled by the name of the author, and somehow, apart from a few novelties, I found myself scrolling through pages that repeated the same content several times, without it being new in itself. I must also admit that, not for the first time, I think the author goes on unnecessarily long when the same concept is already clear after a hundred pages (if not before). So this is a book that I would recommend only to those who are totally unfamiliar with the subject.