If you’re participating in the Merit-based Incentive Payment System (MIPS) or MIPS Value Pathways (MVPs), you’ve likely encountered a familiar scenario: you review your estimated MIPS score in your dashboard, only to find that months later, when CMS releases the final score, the numbers don’t quite add up. For many providers, this discrepancy can lead to confusion, frustration, and even financial uncertainty.
So, why does this happen? Let’s break down the key factors that contribute to differences between the estimated and final MIPS scores, and what you can do if your score isn’t what you expected.
The Timing of Final Scores
First, it’s important to understand the timing. The final MIPS score is released by CMS several months after the data submission period ends. While many practices rely on vendor dashboards, such as those from MRO, to estimate their score in advance, the final score often looks different. This isn’t because the dashboard was wrong, but because there are factors that these platforms cannot predict or calculate during the performance year.
Key Factors That Affect Your Final MIPS Score
There are several moving parts in MIPS scoring, and understanding how each works can help clarify where the differences come from.
1. The Elusive Cost Score
One of the biggest contributors to a score difference is the Cost category, which accounts for 30% of the total MIPS score. Here's the challenge: Cost is calculated entirely by CMS after the performance period ends, using Medicare claims data, so there is no way for providers or vendors to track or estimate it during the performance year. This lack of visibility means your final score in this category can feel unpredictable, and for practices with high patient costs, it can significantly impact the overall score.
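To illustrate why the missing Cost category makes estimates imprecise, here is a minimal sketch of how a weighted composite score rolls up. It assumes the standard 30/30/25/15 category weights with no reweighting; the category scores shown are hypothetical, and the real CMS calculation involves additional rules.

```python
# Simplified sketch of how a MIPS composite score rolls up from category scores.
# The weights below reflect the standard 30/30/25/15 split for a recent
# performance year; actual weights can change with CMS reweighting policies.

CATEGORY_WEIGHTS = {
    "quality": 0.30,
    "cost": 0.30,
    "promoting_interoperability": 0.25,
    "improvement_activities": 0.15,
}

def composite_score(category_scores: dict[str, float]) -> float:
    """Weighted sum of category scores (each on a 0-100 scale)."""
    return sum(CATEGORY_WEIGHTS[c] * s for c, s in category_scores.items())

# Hypothetical example: everything except Cost is known during the year.
known = {"quality": 85.0, "promoting_interoperability": 95.0,
         "improvement_activities": 100.0}

# Because Cost is unknown until CMS calculates it from claims data,
# the true final score can only be bracketed, not pinned down.
low = composite_score({**known, "cost": 0.0})
high = composite_score({**known, "cost": 100.0})
print(f"Final score could land anywhere from {low:.1f} to {high:.1f}")
```

Even with every other category known exactly, the 30% Cost weight alone leaves a wide band of possible final scores.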
2. Quality Category: More Than Meets the Eye
The Quality category is another area where discrepancies often arise, especially for practices relying on estimates from dashboards. Several elements within the quality score can shift between estimation and final scoring:
- Improvement Scoring: This is one area where vendors cannot provide an accurate estimate. The improvement score reflects how much your performance has improved compared to previous years, excluding bonus points. Since this comparison requires CMS to have precise data from both the current and prior performance years, vendors cannot predict it in advance.
- Performance Year Benchmarks: Some quality measures do not have established benchmarks at the start of the performance year. CMS derives these benchmarks after submission, based on national performance data. If a measure lacks a benchmark when your dashboard calculates your estimated score, your measure score can shift, positively or negatively, once CMS establishes the performance year benchmark (see the sketch after this list).
- Complex Patient Bonus: For practices serving complex patient populations, the Complex Patient Bonus offers a potential score boost. However, this bonus is not typically calculated by MRO or other vendor systems, because CMS determines it after analyzing risk scores and dual-eligibility data. As a result, you might be pleasantly surprised (or frustrated) when this bonus impacts your final score.
- Administrative Claims Measures: Another wild card in the Quality score is the administrative claims measures. These measures are often more relevant to larger practices because of the higher volume of claims data required for accurate calculation. Smaller practices may not have enough patient encounters reflected in Medicare claims to meet the statistical thresholds for inclusion. Since practices can't control or predict whether they'll qualify for these measures, they can't be included in your estimated score, and they can end up being a positive or negative surprise when CMS finalizes your score.
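To make the benchmark effect concrete, here is a simplified sketch of decile-based measure scoring. The decile cut-points, the performance rate, and the 3-to-10-point scale used here are illustrative assumptions; actual CMS scoring also applies case-minimum, data-completeness, and other special rules.

```python
from bisect import bisect_right

def measure_points(performance_rate: float, decile_cutoffs: list[float]) -> int:
    """Return achievement points based on which benchmark decile the rate falls in.
    decile_cutoffs are hypothetical lower bounds for deciles 4 through 10;
    scoring is simplified to a 3-10 point scale, ignoring special-case rules."""
    # bisect_right counts how many cutoffs the rate meets or exceeds
    return 3 + bisect_right(decile_cutoffs, performance_rate)

rate = 0.82  # hypothetical performance rate on one quality measure

# During the year, a dashboard might assume a placeholder benchmark...
assumed_cutoffs = [0.40, 0.50, 0.60, 0.70, 0.78, 0.85, 0.92]
# ...but the performance-year benchmark CMS derives after submission may differ.
final_cutoffs = [0.55, 0.65, 0.74, 0.81, 0.88, 0.93, 0.97]

print("Estimated points:", measure_points(rate, assumed_cutoffs))  # 8
print("Final points:    ", measure_points(rate, final_cutoffs))    # 7
```

The performance rate never changes, yet the measure's points move once the performance-year benchmark replaces the placeholder used for the estimate.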
What Happens When the Score Isn’t What You Expected?
If your final MIPS score is lower than expected, it's natural to feel concerned, especially when Medicare payment adjustments are on the line. The good news is that you're not stuck with that score if you believe there was an error.
Targeted Review: Your Safety Net
CMS offers a Targeted Review process for practices that feel their final score doesn't accurately reflect their performance. It allows you to request a review if you believe there was a calculation error, data wasn't submitted correctly, or other issues negatively impacted your final score.
Common reasons for requesting a Targeted Review include:
- Errors in the calculation of your Cost, Quality, or Improvement Activities scores
- Complex Patient Bonus not applied correctly
- Data submission errors that impacted your score
If you do choose to apply for a Targeted Review, it’s important to gather all relevant documentation that supports your claim. CMS reviews these requests carefully, and having solid evidence can make the difference in whether your score is adjusted.
How to Be Prepared for Next Year’s MIPS Score
While some discrepancies in your MIPS score may be inevitable, there are ways to reduce surprises in the future. Here are a few strategies that can help:
1. Review Performance Feedback Early: CMS provides feedback on your performance during the year. Make it a habit to review this data early and often. Although it won’t include everything (such as Cost data), it can still give you insights into areas for improvement.
2. Consult with Your Vendor: If you use a third-party vendor for your MIPS submissions, like MRO, regularly check in with them to ensure your data is accurate and timely. They can help you identify gaps in data submission that might affect your score.
3. Consider Complex Patient Populations: If your practice treats a large volume of high-risk patients, factor in the potential for the Complex Patient Bonus when estimating your score. While it’s difficult to predict exactly how this bonus will affect your final score, knowing that it exists can help you set more realistic expectations.
4. Understand the Role of Administrative Claims: Keep in mind that some measures are calculated after the fact by CMS using administrative claims data. While you can’t influence these measures directly, understanding which ones might apply to your practice can help you prepare for potential score fluctuations.
Final Thoughts: Understanding MIPS Scores
The gap between the estimated and final MIPS score often comes down to factors beyond your control, including CMS-only calculations, post-submission benchmarks, and claims-based measures. But by understanding how each category of the MIPS score is calculated and preparing for potential discrepancies, you can navigate the program more confidently.
And remember, if you’re ever in doubt about your final score, the Targeted Review process is there to ensure fairness and accuracy.
How MRO Can Help You Navigate MIPS Scoring
At MRO, we understand how complex MIPS scoring can be and how crucial it is to get it right. Our solutions are designed to streamline the data submission process, ensure your data is accurate, and help you understand the nuances of MIPS categories, from Quality to Cost.
Our team of experts works closely with practices like yours to reduce the uncertainty around MIPS scoring and give you better insights into your performance. If you’re looking for a trusted partner to help you navigate the complexities of MIPS, including final score discrepancies, reach out to us today to learn how our services can support your practice in achieving better outcomes and maximizing your reimbursement potential.
Ranu Ray, CMS Research Business Analyst at MRO, contributed to this blog post.