Wine Ratings Are Made Up and the Points Don't Matter (But Also They Completely Matter)
Let me tell you about wine ratings: a system where we pretend that the subjective experience of drinking fermented grape juice can be quantified on a 100-point scale that actually starts at 50, where different publications use completely different criteria to assign the same numbers, where a single point can mean the difference between a $30 bottle and a $300 bottle, and where approximately 75% of wine never gets rated at all.
It's a fascinating system. It's also deeply flawed. And it completely runs the industry.
How We Got Here
The modern wine rating system was popularized in the 1980s by Robert Parker, who at the time was a lawyer writing a wine newsletter on the side. Parker wanted to create a system where wines would be judged on their actual merits rather than just their pedigree and reputation. This was genuinely noble—the European wine industry was deeply invested in the idea that a wine's value came from whose dirt it grew in rather than how it actually tasted.
Parker's innovation was to apply a 100-point scale to wine, borrowed from the American school grading system. He famously went all-in on the 1982 Bordeaux vintage when other critics were dismissing it, saying these wines were too rich, too ripe, too opulent to be "serious" Bordeaux. He was right. They were wrong. His newsletter became Wine Advocate, and suddenly one person's palate was moving markets.
Note: The part-time wine critic who becomes influential enough to reshape an entire industry is a recurring pattern in wine. Often the outsider perspective—someone who isn't embedded in the traditional wine establishment—brings valuable clarity. Though it can also bring its own biases.
Which brings us to today, where we have Wine Spectator (WS), Wine Advocate/Robert Parker (RP/WA), Wine Enthusiast (WE), Decanter (D), James Suckling (JS), Vinous (V), Jeb Dunnuck (JD), and several others, all rating wines on scales that look the same but aren't, using criteria that overlap but diverge in important ways, tasted under conditions that vary significantly, and published with descriptors that mean different things depending on who's using them.
The 100-Point Scale (That Doesn't Actually Use 100 Points)
Here's the first peculiarity: the 100-point scale doesn't actually use 100 points. Most systems start at 50. Some critics never publish anything below 80. So you're really working with a 50-point scale, or a 20-point scale, that we've labeled 50-100 or 80-100 for reasons that aren't entirely clear. Presumably "100-point scale" sounds more authoritative than "50-point scale."
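If you want to see how much the label overstates the granularity, here's a minimal sketch of the arithmetic. The `effective_position` helper and the floor values are illustrative assumptions for this article, not any publication's official policy:

```python
# A minimal sketch of the "effective range" arithmetic. The floors
# below are illustrative assumptions, not official publication policy.

def effective_position(score: float, floor: float) -> float:
    """Where a score sits within the critic's usable range, from 0.0 to 1.0."""
    return (score - floor) / (100.0 - floor)

# A published "90" reads very differently depending on the real floor:
print(effective_position(90, floor=50))  # 0.8 -- near the top of a 50-point scale
print(effective_position(90, floor=80))  # 0.5 -- dead middle of a 20-point scale
```

Same number, very different position on the scale that's actually in use.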
Here's how the major scales break down:
Wine Advocate / Robert Parker:
- 96-100: Extraordinary
- 90-95: Outstanding
- 80-89: Barely above average to very good
- 70-79: Average
Wine Spectator:
- 95-100: Classic: great wine
- 90-94: Outstanding
- 85-89: Very good
- 80-84: Good
- 75-79: Mediocre
Wine Enthusiast:
- 98-100: Classic
- 94-97: Superb
- 90-93: Excellent
- 87-89: Very good
- 83-86: Good
Notice the problem? Wine Spectator's 85-89 is "very good" while Wine Advocate's 80-89 range spans from "barely above average to very good." An 88-point wine from Wine Spectator is definitively "very good." An 88-point wine from Wine Advocate might be very good, or might be just above average. The number is identical but means different things.
Wine Enthusiast considers 87-89 "very good" while 83-86 is just "good," which means an 86 from them is categorically different from an 87, but an 86 from Wine Spectator falls comfortably in the "very good" range. Meanwhile, Decanter uses medals—bronze, silver, gold, platinum—mapped to point ranges, creating yet another conversion problem.
James Suckling uses a 100-point scale where 95-100 is "A+ Outstanding (and a must buy)" and anything below 88 "might still be worth buying but proceed with caution." In Suckling's system, "B" territory starts at 88, which is fairly aggressive grade inflation.
These organizations are all tasting the same wines and using versions of the same scale, but an 88 means completely different things depending on who assigned it. There's no standardization, no conversion chart, no easy way for consumers to compare across publications.
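To make the divergence concrete, here's a small sketch of what any "conversion chart" would be up against. The bands are transcribed from the scales listed above; the `BANDS` structure and `describe` lookup are my own illustration, and no official tool like this exists:

```python
# A sketch of the cross-publication comparison problem. The bands are
# the ones listed above; the data structure and lookup are illustrative
# only -- there is no official conversion chart.

BANDS = {
    "Wine Spectator": [(95, "Classic"), (90, "Outstanding"), (85, "Very good"),
                       (80, "Good"), (75, "Mediocre")],
    "Wine Advocate":  [(96, "Extraordinary"), (90, "Outstanding"),
                       (80, "Barely above average to very good"), (70, "Average")],
    "Wine Enthusiast": [(98, "Classic"), (94, "Superb"), (90, "Excellent"),
                        (87, "Very good"), (83, "Good")],
}

def describe(publication: str, score: int) -> str:
    """Return the publication's own descriptor for a score."""
    for floor, label in BANDS[publication]:  # bands are sorted high to low
        if score >= floor:
            return label
    return "below the published bands"

# The same 88 carries three different meanings:
for pub in BANDS:
    print(f"{pub}: 88 = {describe(pub, 88)}")
# Wine Spectator: 88 = Very good
# Wine Advocate: 88 = Barely above average to very good
# Wine Enthusiast: 88 = Very good
```

The lookup is trivial; the problem is that the labels it returns aren't comparable, which is exactly the point.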
How Wine Ratings Actually Work
Let's talk about methodology, because this is where you discover that wine ratings aren't just subjective—they're derived under wildly different conditions.
Most major publications conduct "blind tastings," though "blind" means different things to different organizations. Wine Spectator does proper blind tastings where judges know the varietal, region, and vintage but not the producer or price. They taste 60-100 wines in a sitting and assign scores based on "typicity"—whether the wine expresses what you'd expect from that grape and region—plus structural elements like balance, tannins, acidity, and aromatics.
Wine Advocate, however, does NOT taste blind. Their reviewers know exactly what they're tasting: producer, vineyard, vintage, everything. There's an argument for this—context matters, and knowing a wine comes from a legendary producer helps you evaluate whether they're meeting their own standards. But it's fundamentally different from blind tasting and introduces different considerations into the scoring.
Decanter does blind tastings in peer groups with some context. Vinous doesn't conduct blind tastings—they taste at wineries and private tastings where everything is known. James Suckling does "mostly" blind tastings, which is an interesting qualifier that suggests some flexibility in the approach.
None of these approaches is necessarily wrong. Wine Spectator is essentially saying "strip away the marketing and let's see if this wine is objectively good at being what it claims to be." Wine Advocate is saying "context matters and we should evaluate wines with full knowledge of their provenance." These are both defensible philosophies. They're also philosophies that can lead to different conclusions about the same wine.
The Structural Problems
Problem #1: The ratings shape the wines
Here's what happens: a winemaker makes a wine. Robert Parker gives it 100 points. That wine becomes nearly impossible to find and the price quintuples. Other winemakers in the region look at what succeeded—probably a bold, fruit-forward style with significant oak—and adjust their winemaking accordingly. Over time, the region starts producing wines that chase the same high-scoring profile.
This is excellent if you love that style. It's less ideal if you value diversity or experimentation. The system creates a feedback loop where critics rate wines that were made to score well with critics, which becomes somewhat self-referential over time.
Problem #2: Most wine isn't rated
Wine Spectator rates about 15,000 wines per year. Wine Advocate rates about 30,000. James Suckling's site does about 18,000. These are substantial numbers—these publications taste dozens of wines daily.
Yet 75% of wine globally is unrated. Tens of thousands of wineries produce hundreds of thousands of different wines every year, and most never get reviewed by major publications. They're too small, too far from established wine regions that critics focus on, or they didn't submit samples.
This means the rating system performs price discovery and quality signaling for maybe 25% of the market, while the other 75% exists unquantified. Some of that 75% is genuinely poor wine. Some is probably spectacular and interesting in ways that might never score well because it's not "typical" enough.
Problem #3: Low ratings don't get published
When did you last see a wine bottle with a shelf-talker proclaiming "79 points!"? Low ratings exist but never get advertised. Wine Advocate rates wines from 50-100, but you'll only see 90+ scores displayed in stores. This creates selection bias where ratings become purely a marketing tool rather than a complete information system.
If you see a wine with no displayed rating, it doesn't necessarily mean it's bad. It might have received an 85, which is actually quite good. Or it was never submitted for review. Or it got a 73 and the winery chose not to publicize that score. There's no way to know.
Problem #4: Critics have different palates
Even experienced critics who agree that a wine is technically well-made will diverge dramatically in the 90+ range. Some prefer bold, powerful wines with concentration and intensity. Others prefer subtle, elegant wines with restraint and finesse. These are stylistic preferences, and they result in the same wine getting notably different scores.
A wine might receive 95 points from James Suckling and 89 points from Wine Spectator. Both scores accurately reflect what those critics thought. But they create confusion for consumers who assume points are objective quality measures rather than subjective taste expressions.
Why This System Exists (And Why It Persists)
For all its flaws, the wine rating system serves genuinely useful functions.
First, it provides a quality signal in a complex market. If you're in a wine shop looking at 500 bottles and don't know much about wine, a 92-point rating gives you a useful heuristic. It indicates that someone knowledgeable tasted this and thought it was very good.
Second, it creates accountability. Before Parker, the wine world ran largely on reputation and pedigree. Famous châteaux could rely on their name to sell wine regardless of actual quality. Parker's approach forced producers to maintain standards or risk exposure. That benefits consumers.
Third, it helps retailers and restaurants make decisions. When a wine shop decides which Napa Cabernets to stock, or a restaurant builds its wine list, ratings provide a practical shortcut for evaluation.
Fourth, it gives producers something to strive for. A 100-point wine can transform a winery's reputation and financial trajectory. That's powerful. It incentivizes quality and excellence, even if it sometimes incentivizes a specific type of quality rather than innovation or distinctiveness.
The system does all these things while being inconsistent, poorly standardized, and prone to creating feedback loops that reduce diversity. It's useful but flawed. It's necessary but imperfect.
What You Should Actually Do
If you're buying wine and want to use ratings intelligently:
- Pick a critic and stick with them. If you consistently enjoy wines that Jeb Dunnuck rates highly, use Dunnuck's scores as your guide. Don't try to compare across different critics—an 88 from one source isn't equivalent to an 88 from another.
- Learn what the numbers mean for your preferred source. If you're using Wine Spectator, understand that 85-89 is "very good" and 90+ is special. If you're using Wine Advocate, know their scale runs slightly differently. Read the descriptions, not just the numbers.
- Don't ignore unrated wines. Some excellent values are bottles that never got reviewed because the producer is too small or too obscure. If you see something interesting from a region you like, try it regardless of ratings.
- Remember that ratings measure typicity and technical quality, not personal enjoyment. A 95-point Burgundy and a 95-point Napa Cabernet are both excellent wines, but they taste completely different. Ratings can't tell you which one you'll prefer.
- Use ratings as one input among many. Look at the score, certainly, but also consider the price, region, vintage, varietal, and the actual tasting notes. Critics write descriptions for good reason—they contain more useful information than the number alone.
Most importantly:
Develop your own palate. The entire ratings system assumes that experienced tasters can identify quality. You can develop this skill too. Taste widely. Notice what you enjoy. Figure out why. Over time, you'll build your own internal rating system calibrated to your preferences, which will always be more useful than what any critic can tell you.
The Conclusion
Wine ratings are an imperfect system that we've collectively decided to treat as authoritative while simultaneously acknowledging their subjectivity and inconsistency. They provide useful information and misleading information in equal measure. They help consumers navigate a complex market and also constrain that market in limiting ways.
A 100-point wine isn't necessarily better than a 92-point wine in any absolute sense—it's more typical, more technically perfect, more aligned with what critics expect from great wine. Sometimes that's what you want. Sometimes you want something unusual and distinctive that might never score above 88 because it's too idiosyncratic.
The points matter enormously to the industry. They also can't tell you what you'll actually enjoy drinking. Both things are true simultaneously.
Use ratings as a tool, not a gospel. They're helpful guideposts in a vast and sometimes overwhelming wine landscape. But they're just one piece of information among many, and your own developing taste and knowledge will always be more valuable than any number on a shelf-talker.
After all, the wine that scores 95 points is only the best wine if you actually like drinking it.