CMS has gotten a lot of heat this year from insurers on its use of less-than-perfect Medicare Advantage encounter data in determining risk scores — resulting in the agency slowing down its planned transition to 100% use of encounter data by 2020 (MAN 4/13/17, p. 1). But the abrupt cancellation last month of a conference session analyzing 2014 MA encounter data put the agency in hot water with the research community, and highlighted the ongoing quality concerns associated with the use of encounter data for payment and other purposes.
The session of the AcademyHealth Annual Research Meeting, held June 25-27 in New Orleans, would have marked the first time CMS made such data publicly available. But conference attendees learned two days prior to the meeting via Twitter that the planned “Analysis of 2014 Medicare Advantage Encounter Data” had been cancelled. Niall Brennan, who was until January the chief data officer at CMS, tweeted that the sudden cancellation of the session was “[h]ugely disappointing” and that he hoped it didn’t mean CMS was “backsliding on #opendata,” which sparked a string of responses speculating on why CMS was keeping the data under wraps. In a response to a question from another Twitter user, he added, “Like any new data source [MA] data had some quirks to be sure but if it was used for payment why can’t it be used for research?”
CMS Says Data Weren’t Adequate for Research
A CMS spokesperson says the data simply weren’t complete. “The 2014 encounter data were collected from MA plans as additional sources of diagnoses that are also collected for risk adjustment via the RAPS [Risk Adjustment Processing System],” he tells AIS Health. “After evaluating the quality of the 2014 encounter data, CMS ultimately determined that it cannot be used to independently establish either the range of diagnoses for Medicare Advantage enrollees or to determine the services provided by Medicare Advantage organizations. After further analyzing the 2014 encounter data, CMS has determined that the data are not complete enough to support research use.”
CMS in 2012 began collecting data from MA organizations through the encounter data system and in 2016 started phasing in EDS-based payments, beginning with 10% of the payment based on EDS scoring, while the other 90% came from the RAPS. CMS’s intent was to advance to a 50/50 mix for 2017, but it revised the blend to 25% EDS/75% RAPS after hearing the concerns of stakeholders about plan readiness and the quality of the data coming from providers (MAN 4/3/16, p. 1). For 2018, CMS in February proposed keeping that same blend but in the final 2018 payment notice released in early April walked it back even further to 15% EDS/85% RAPS.
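The phase-in described above amounts to a weighted average of the two risk scores for each enrollee. The sketch below is purely illustrative: the enrollee scores are hypothetical, and actual CMS payment calculations involve additional factors (normalization, coding-pattern adjustments) beyond this blend.

```python
def blended_risk_score(raps_score: float, eds_score: float, eds_weight: float) -> float:
    """Weighted average of RAPS-based and EDS-based risk scores.

    eds_weight is the fraction of the blend drawn from encounter data:
    0.10 for 2016, 0.25 for 2017, and 0.15 in the final 2018 notice.
    """
    if not 0.0 <= eds_weight <= 1.0:
        raise ValueError("eds_weight must be between 0 and 1")
    return eds_weight * eds_score + (1.0 - eds_weight) * raps_score

# Hypothetical enrollee whose EDS score runs lower than the RAPS score,
# directionally consistent with the Avalere and Milliman findings.
raps, eds = 1.00, 0.84
for year, weight in [("2016", 0.10), ("2017", 0.25), ("2018", 0.15)]:
    print(year, round(blended_risk_score(raps, eds, weight), 4))
```

Because the EDS score in this example is lower, a heavier EDS weight pulls the blended score — and thus the plan’s risk-adjusted payment — further down, which is why the blend percentages were contested.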
Meanwhile, separate studies from Avalere Health and Milliman have uncovered significant differences between RAPS and EDS scoring based on a sampling of clients’ MAO-004 reports, which inform plans of risk-adjustment-eligible diagnoses submitted to the EDS. Avalere, for example, found that the average EDS risk score was 26% lower than the RAPS score in the 2015 payment year, and 16% lower in the 2016 payment year (MAN 3/2/17, p. 1). While the size of the gap ranged from 2% to 28% across plans, every plan was affected, Avalere noted.
Likewise, Milliman found a wide range of risk score differences among clients when it analyzed 2015 data, with the midpoint plans having risk scores based on encounter data for payment year 2016 that were 4% lower than those based on the RAPS (MAN 2/9/17, p. 1). But because the EDS process introduces new filtering logic in addition to a new process for submitting diagnosis data, it is difficult to tell which risk score reductions are due to submission process issues vs. filtering issues, observed the firm.
Charlie Mills, principal and consulting actuary in the Seattle office of Milliman, says it makes sense for CMS to hold back the data because of reported quality issues. “Plans weren’t really focusing on 2014 EDS submissions; it was more of a testing ground,” he recalls. “It wasn’t until 2015 and 2016 that plans really made it a big priority to get complete data submitted…. My sense is that the 2014 data wasn’t very good, which is consistent with what CMS has said.”
Moreover, questions surrounding the data reflect the Government Accountability Office’s assertion earlier this year that CMS has made “limited progress” in validating encounter data and that it should implement GAO’s earlier recommendation that CMS fully assess data quality before use (MAN 1/26/17, p. 8). GAO also observed that the agency’s plans and timeframes for using the data for purposes other than risk-adjusted payment remained “undeveloped.” For example, when CMS in a 2014 rule finalized the use of encounter data for payment purposes, the agency clarified that it was potentially going to use the data in eight additional ways, including to support public health initiatives and health care-related research.
Stakeholders in interviews with GAO stressed the importance of privacy protections when releasing MA encounter data to researchers and other interested parties, the January report noted. GAO relayed several of their recommendations for protecting proprietary information and patient privacy, such as aggregating the data, denying or delaying the release of certain data (e.g., payment or utilization data) and limiting data access.
Mills suggests that the 2015 data may be more meaningful if released, since plans have made “a big push” to turn in accurate and complete submissions because of the increasing weight of EDS in their risk-adjusted payments. CMS says it is still evaluating the quality of the 2015 encounter data and “has not made a determination about whether this data will be released to researchers.”
View the GAO report at www.gao.gov/assets/690/682145.pdf.