Why those wine ratings keep getting higher.


When Robert Parker introduced his 100-point rating system for wine decades ago, the highest score he gave that year was 91 points. Now many wines each year receive perfect scores from his publication, the Wine Advocate. Similarly, in 2000 just 15% of wines rated by Wine Spectator received a score above 90. By 2015 the frequency of those scores had more than doubled: nearly a third of all wines reviewed received a score above 90.

It was after noting this inflation of wine scores that academics Kieran O'Connor and Amar Cheema set out to research why ratings on everything from wine to Amazon products improve over time. A summary of their initial conclusions was published in Harvard Business Review.
It appears that the more ratings a person makes, the higher the ratings they give. Why would ratings rise in this way?
The findings suggest that biased evaluations are the result of a misattribution process: if something feels easier to evaluate, people believe that it must actually be better. In other words, they misattribute their own feelings about the act of evaluation (it feels easier to make an evaluation) onto their assessment of the actual merits (this thing must deserve a higher rating). In one experiment, participants rated a series of short stories, and ratings rose with experience even though each person's sequence of stories was randomized, so the effect cannot be explained by later stories simply being better.
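To see how this kind of rater-experience effect can inflate aggregate scores even when quality is flat, here is a minimal sketch. It is not the authors' model; it simply assumes, for illustration, that each prior rating nudges a rater's next score upward by a small fixed drift:

```python
import random

random.seed(0)

def rating(true_quality, experience, drift=0.05):
    # Illustrative assumption: each prior rating a person has made
    # nudges their next score upward by `drift`, capped at 100.
    noise = random.gauss(0, 1)
    return min(100.0, true_quality + drift * experience + noise)

# Products of identical true quality, rated by novice vs. seasoned raters.
quality = 88.0
early = [rating(quality, e) for e in range(0, 20)]      # little experience
late = [rating(quality, e) for e in range(100, 120)]    # much experience

print(sum(early) / len(early))  # scores near the true quality
print(sum(late) / len(late))    # visibly higher, despite identical quality
```

Even with quality held constant, the experienced raters' average comes out several points higher, which is the pattern the wine-score data shows.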
So there you have it. As O'Connor and Cheema note, when you depend on others' numerical evaluations, keep in mind that a rating reflects not only the product's inherent quality but may also be inflated by more-experienced raters. Indeed, it may be worthwhile, they suggest, to buy that older, lower-scored bottle of wine.
