Wednesday, December 26, 2007

Physician code creep: evidence in Medicaid and State Employee Health Insurance billing

INTRODUCTION


Many studies refer to code creep as a contributing factor to improper billing, but policymakers have few estimates of its magnitude to use for guidance. Despite the need for studies estimating code creep and improper billing, the 2005 Deficit Reduction Act progressively increases funding for the Medicaid Integrity Program, reaching its maximum of $75 million in 2009. With few studies to guide policy, Medicaid agencies have little guidance on whether code creep is a problem they should target with the assistance of the 2005 Deficit Reduction Act. This article estimates an upper bound for code creep in physician office billing for the State Medicaid Program in South Carolina.


A formal definition of code creep is elusive, but Steinwald and Dummit (1989) summarized code creep as "... changes in hospital record keeping practices to increase case mix and reimbursement." Code creep is also often referred to as upcoding and, in hospital billing, as diagnosis-related group (DRG) creep. Finally, not all temporal change in coding falls under code creep. Changes over time in billing can also be attributable to true change in case mix (sicker patients), improvements in coding (both in provider education and in the level of detail in codes or their definitions), and changes instituted by the payer (program reforms) (Carter, Newhouse, and Relles, 1990).


The code creep literature has focused primarily on hospital billing of DRGs, especially following Medicare's switch to the prospective payment system (PPS) in the 1980s. Results from these early studies proved mixed. Multiple studies did find evidence for DRG creep during the implementation of PPS (Steinwald and Dummit, 1989; Chulis, 1991; Hsia et al., 1988), with the estimates falling below 3 percent. Subsequent studies found no evidence of code creep that could not be attributed to true case mix change and improved coding practices (Hsia et al., 1992; Carter, Newhouse, and Relles, 1990).


After the articles assessing the billing impact of the switch to PPS, academic interest in code creep became sporadic. Unlike the mixed results examining PPS, later studies produced repeated evidence indicating that code creep exists. Survey data have indicated that 44 percent of health care managers have received pressure from their senior managers to promote coding optimization, and 33 percent reported that their coding behavior varies depending on the payer (Lorence and Richards, 2002; Lorence and Spink, 2002). Other authors have examined specific diagnoses that provide strong incentives to code a higher complexity within that diagnosis family. Silverman and Skinner (2004) found extensive code creep for pneumonia across all hospitals, but the largest increases appeared in for-profit hospitals, hospitals converting to for-profit status, and hospitals where physicians have an equity stake. Similarly, Psaty et al. (1999) examined charts for patients diagnosed with heart failure and could find no documentation in 38 percent of the charts to support the higher reimbursement diagnoses. Lastly, code creep has not been limited to U.S. hospitals, with German studies attributing 1 percent of all inpatient payments to code creep (Lungen and Lauterbach, 2000).


Few studies examine physician billing for office visits in the U.S. Two studies in Canada have found that code creep is not limited to hospitals and also occurs in Canadian physician offices (Nassiri and Rochaix, 2006; Chan, Anderson, and Theriault, 1998). Evidence of code creep for physician office billing in the U.S. remains indirect. Wynia et al. (2000) surveyed physicians and found that 39 percent of physicians reported manipulating reimbursement rules, with 54 percent indicating that they were manipulating their billing more frequently in 1998 than they did in 1993. Interestingly, fear of prosecution did not affect the billing decisions of physicians admitting to manipulating reimbursement rules. Lastly, Cromwell et al. (2006) cited code creep as one possible explanation why the physicians in their study dedicated up to 32 percent less time to patient visits than the visit times associated with the Medicare fee schedule.


This study expands on previous work in three ways. First, this study will be the first to examine code creep in Medicaid. Excluding those using survey methods, all code creep studies in the U.S. have examined Medicare data. Second, it will be the first to examine billing for the same providers across two payers by comparing physicians' Medicaid billing to their own billing in the South Carolina State Employees Health Plan. Lastly, this study will be the first to estimate the magnitude of code creep for physician office visit billing. Specifically, this study tests (1) whether physicians bill office visits at equal levels of complexity across the two State programs, (2) whether the billing behavior displays unexplained change over time, and (3) estimates the rate of increase for physician billing.


METHODOLOGY


In State fee-for-service programs, physician prices are routinely set by a fixed price schedule or through negotiation with the payer. Although prices are fixed, physicians still have the power to choose the level of complexity, or billing code, for the visit. If the probability of detection is low, profit-maximizing physicians can be expected to choose higher reimbursement codes, or upcode, on the margin.


Tables 1 and 2 present an overview of the physician's choice set when assigning a code to an office visit. When billing the visit for an established patient (a patient seen previously), the provider must choose from one of the five billing codes listed in Table 1. The American Medical Association (2004) establishes definitions for the codes, and an extensive literature explains and analyzes each in detail (Hill, 2001; King, Sharp, and Lipsky, 2001). For visits dominated by counseling or coordination of care, a physician may use the length of the visit to assign the billing code. Otherwise, the provider bases the code assignment on the complexity of the visit (documenting the problem's history, examination, and the complexity of the medical decisionmaking). Established visits are most frequently billed by complexity, and in these cases, the visit must meet or exceed the criteria for two of the three complexity categories (history, examination, and medical complexity) listed in Table 1. Lastly, payers reimburse providers for each visit based on the reported complexity and the administratively set rates attached to that billing code.
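The "two of three" criterion above can be sketched in code. One common reading of the rule is that the supportable code level is the second-highest of the three documented component levels; the numeric component levels here are an illustrative simplification, not the AMA's full definitions:

```python
def supported_level(history: int, exam: int, decision: int) -> int:
    """Return the highest complexity level (1-5) that at least two of the
    three documented components meet or exceed.

    With three component levels, the level supported by "two of three" is
    the second-highest (i.e., the middle) of the three values.
    """
    for level in (history, exam, decision):
        if level not in (1, 2, 3, 4, 5):
            raise ValueError(f"component level out of range: {level}")
    return sorted([history, exam, decision])[1]  # middle of three values
```

For example, a visit documented with a Level 4 history, Level 3 examination, and Level 2 decisionmaking supports only a Level 3 code, since Level 4 is met by just one component.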


Table 2 lists the median reimbursements paid for office visits in the South Carolina Medicaid and State Employees Health Plan programs. These median reimbursements are calculated from the full population of all paid office visit claims and reflect payment adjustments for provider type (nurse practitioner, specialist, etc.). Over the 3 years in the study, reimbursement rates for the established patient visits remained flat for both plans. Reimbursement for the most common Medicaid code, 99213, was $36 in 2001 and declined to $35 in 2003 ($44 and $47 for the State Employees Health Plan). For the less common new patient and consultation codes, reimbursement rates remained flat in the State Employees Health Plan and declined for Medicaid. In 2001 and 2002, Medicaid utilized a separate rate schedule for specialists and paid nurse practitioners at a discount to the general practitioner rate.


This article examines whether flat reimbursement rates influence providers' coding of the complexity of office visits. Reimbursement rates influencing providers' coding choices contrasts with the accepted judgment that prices are exogenous for physicians (that providers accept prices as given). If a physician considered the probability of detection low, a substantial incentive exists for the provider to upcode, or report visits of higher complexity. Although payment rates for individual codes changed little over the study period, a provider could gain a 50- to 60-percent increase in their reimbursement for a visit by assigning a code one level higher than the true code for the visit.
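The size of that incentive is simple arithmetic. The article reports only the 99213 rates ($36/$35 for Medicaid, $44/$47 for the State Employees Health Plan), so the 99214 rate below is a hypothetical figure chosen to illustrate the stated 50- to 60-percent range:

```python
# Hypothetical per-visit reimbursement rates for two adjacent established
# patient codes; the 99214 figure is an illustrative assumption.
rates = {"99213": 36.00, "99214": 55.00}

# Relative gain from billing one level above the true code:
gain = rates["99214"] / rates["99213"] - 1
print(f"One-level upcode raises reimbursement by {gain:.0%}")  # prints 53%
```

Because the gain accrues on every visit while the per-visit rate stays fixed, even a small shift in the coding distribution compounds into a meaningful revenue increase.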


DATA


This study utilizes 2001-2003 health care claims from South Carolina Medicaid and the State Employees Health Plan to estimate a fixed effects ordered logit model of physician office visit billing. The initial data set began as the full population of all paid Medicaid and State Employees Health Plan physician office visits. The analysis excludes claims at locations other than the provider's office. Limited information on providers outside of South Carolina required the elimination of claims from any provider with an address outside of the State. Physicians providing fewer than 150 total fee-for-service visits to Medicaid and State Employees Health Plan patients over the 3-year period were dropped from the data. Due to the very large number of remaining claims, a random sample was drawn of 500 providers and, for each provider, 800 Medicaid visits and 800 State Employees Health Plan visits (1,600 visits total). The sample retained all Medicaid or State Employees Health Plan claims for physicians that provided fewer than 800 visits in that program. This sampling procedure produced a final dataset of 204,945 office visits for the 500 providers.


Provider identification proved difficult in some cases. Although every physician is assigned a unique provider identification number, many group practices file all claims under a single group identification number. Since groups share billing resources and behaviors, the model analyzes billing behavior at the group level. The Federal tax identification number (FTIN) filed with each claim allowed the linking of providers (or groups for multiphysician practices) across programs. Not all providers participated in both programs, and some physicians filed claims under separate FTINs for each program. The analysis controls for these physicians who do not participate in both programs or who could not be linked across programs.


Model Specification


The model combines three classes of office visits into a single visit complexity variable. Routine office visits fall under new patient visits (codes 99201-99205), established patient visits (codes 99211-99215), or consultations (codes 99241-99245), with each group broken into five codes representing lowest through highest complexity. The study considers five potential outcomes:


[Y.sub.1] = Codes 99201, 99211, or 99241
[Y.sub.2] = Codes 99202, 99212, or 99242
[Y.sub.3] = Codes 99203, 99213, or 99243
[Y.sub.4] = Codes 99204, 99214, or 99244
[Y.sub.5] = Codes 99205, 99215, or 99245
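Because the final digit within each code family indexes complexity, the mapping from CPT code to the five-level outcome reduces to reading that digit; a minimal sketch:

```python
def complexity_level(cpt_code: str) -> int:
    """Map a routine office visit CPT code to its 1-5 complexity outcome.

    Within each family (99201-99205 new patient, 99211-99215 established
    patient, 99241-99245 consultation), the final digit of the code gives
    the complexity level, so [Y.sub.1] through [Y.sub.5] share a rule.
    """
    families = {"9920", "9921", "9924"}
    if cpt_code[:4] not in families or cpt_code[4:] not in set("12345"):
        raise ValueError(f"not a routine office visit code: {cpt_code}")
    return int(cpt_code[4])
```

This collapses the fifteen codes into the five ordered outcomes used as the dependent variable.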

Given the ordinal nature of the dependent variable, an ordered logit can estimate the probability of choosing outcome [Y.sub.j],


Pr([Y.sub.j]) = Pr([k.sub.j-1] < [X.sub.j][beta] + [epsilon] [less than or equal to] [k.sub.j])   (1)


where Pr([Y.sub.j]) is the probability that the estimated linear function of the independent variables plus a logistically distributed random error lies between the estimated cut-points [k.sub.j-1] and [k.sub.j] (Zavoina and McKelvey, 1975; Greene, 2003). Stata[R] Version 8 (StataCorp LP, 2003) provided a convenient estimator for the ordered logit models.
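The mechanics of equation (1) can be sketched numerically: assuming logistic errors, each of the five outcome probabilities is the difference of the logistic CDF evaluated at adjacent cut-points. The linear index and cut-point values below are made-up placeholders, not estimates from the study:

```python
import math

def logistic_cdf(z: float) -> float:
    """CDF of the standard logistic distribution."""
    return 1.0 / (1.0 + math.exp(-z))

def ordered_logit_probs(xb: float, cutpoints: list[float]) -> list[float]:
    """Pr(Y = j) = F(k_j - xb) - F(k_(j-1) - xb), where F is the logistic
    CDF, xb is the linear index X*beta, and the outermost cut-points are
    implicitly -inf and +inf (CDF values 0 and 1)."""
    cdf = [0.0] + [logistic_cdf(k - xb) for k in sorted(cutpoints)] + [1.0]
    return [hi - lo for lo, hi in zip(cdf, cdf[1:])]

# Four cut-points yield the five outcome probabilities of the model:
probs = ordered_logit_probs(xb=0.5, cutpoints=[-2.0, -0.5, 1.5, 3.0])
```

A larger linear index shifts probability mass toward the higher complexity outcomes, which is exactly how positive year-dummy coefficients translate into code creep in the simulations.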


In equation (1), [X.sub.j] represents a matrix of independent variables indicating patient demographics and provider characteristics. Although claims data provide a rich source of information on provider behavior, potential independent variables are limited to the fields common to the claims forms for both programs. Given this restriction, the model includes age, sex, marital status, and urban residence to control for patient demographics. A dummy variable identifies providers who can be matched on both lists of participating physicians to control for providers not participating in both programs and those that use separate FTINs when billing Medicaid and the State Employees Health Plan.


Because sicker patients will also produce higher billing codes, the model includes controls for the 15 most expensive conditions and the patient's number of diagnoses that year. The claims data use International Classification of Diseases, Ninth Revision, Clinical Modification (Centers for Disease Control and Prevention, 2007) codes to classify diagnoses, so the Clinical Classifications Software developed by the Agency for Healthcare Research and Quality (2007) was used to collapse the more than 12,000 potential diagnosis codes into 260 clinically meaningful categories (Elixhauser, Steiner, and Palmer, 2006). From these 260 categories, the model includes dummy variables for the 15 most expensive medical conditions: heart disease, pulmonary conditions, mental disorders, cancer, hypertension, trauma, cerebrovascular disease, arthritis, diabetes, back problems, skin disorders, pneumonia, infectious disease, endocrine, and kidney (Druss et al., 2002; Thorpe, Florence, and Joski, 2004). Lastly, the model includes dummy variables indicating the number of separate conditions, out of the 260 clinical conditions software categories, reported for that patient in the year of the claim.


An array of program and year dummy variables tests the code creep and differential billing hypotheses. A Medicaid dummy flags all claims to Medicaid and tests whether physicians as a whole bill Medicaid differently than the State Employees Health Plan. Interactions between the Medicaid dummy and the two year dummies test whether the Medicaid versus State Employees Health Plan relationship changed over time. Finally, dummies for 2002 and 2003 test whether physicians are billing increasingly higher codes every year. Table 3 presents the variable means and distribution of the dependent variable.


RESULTS


The summary statistics in Table 3 indicate that physicians bill both Medicaid and the State Employees Health Plan in a similar manner, despite serving very different demographic groups. In both programs, physicians code one-half of their visits (49 percent for Medicaid and 50 percent for the State Employees Health Plan) at complexity Level 3. The lowest and highest complexities are both uncommon, with only 6 percent billed at Level 1 and 3 percent at Level 5. The remaining visits fall almost equally across the remaining two categories, with 24 percent billed at Level 2 and 18 percent at Level 4. Between the two programs, lower complexity visits were marginally more common in Medicaid, while the State Employees Health Plan displayed more Level 4 and Level 5 visits.


For the independent variables, Medicaid patients tend to be younger and less likely to be married, but females make up two-thirds of the visits in both programs, and another two-thirds of visits are made by individuals living in urban areas. Providers that cannot be matched across both datasets are more likely to appear in the State Employees Health Plan, with 97 percent of visits in Medicaid being made to physicians on both lists compared with 85 percent in the State Employees Health Plan. Finally, the case-mix controls varied widely by the sample drawn and should not be used to infer prevalence of these conditions in the Medicaid and State Employees Health Plan populations.


Table 4 presents two sets of estimates for the ordered logit model with provider fixed effects. Comparing the estimates from the two models (excluding case-mix variables and including case-mix variables) reveals the contribution of a sicker population to the billing of higher complexity visits. The provider fixed effects control for time-invariant physician characteristics, including specialty and physician practice patterns.


In both models, visit complexity increased with time (p=0.000). Including the case-mix controls produced only modest reductions in the coefficients for the year dummies. Medicaid visits were billed at lower complexities in both models (p=0.000). The positive coefficients for the Medicaid*Year dummies indicate that the difference between Medicaid and State Employees Health Plan billing decreased over the 3-year period, but the decline was not statistically significant. For the sample used in Table 4, providers participating in both programs billed higher complexity visits, but this result proved sample dependent. All other estimates were robust across repeated samples.


The ordered logit estimates (Table 4) indicate that office visit complexities billed to Medicaid and the State Employees Health Plan did increase over the 3-year period after controlling for case mix and time-invariant physician characteristics, but they reveal little about the magnitude of the increase. Table 5 presents the average predicted probabilities for each complexity, illustrating the effect of code creep on physician billing. For each visit in the data, the model predicts the probability of the physician assigning each complexity level. The simulated values represent the averages of these probabilities for each complexity level (Table 5). Only the values for the simulated variable change, with all other variables in the model retaining their original values.


Table 5 simulates two scenarios. In the first scenario, all visits are billed under the prevailing billing patterns in 2001, 2002, and 2003. In the second scenario, all visits are billed under the billing patterns representative of the State Employees Health Plan and then under Medicaid billing patterns. Again, all other variables in the model retain their original values. The table shows each scenario, first, based on the model excluding the case-mix control variables, and second, with the case-mix variables.


The scenarios show the complexity of the average visit gradually increasing over the study period. Over the 3 years, Level 1 and Level 2 visits become progressively less frequent, with Level 1 declining from 5.9 percent to 4.6 percent and Level 2 decreasing from 25.7 to 22.1 percent. In contrast, visits coded at Levels 3-5 each become more common, with Level 3 increasing from 49.7 to 50.5 percent, Level 4 increasing from 15.9 to 19.2 percent, and Level 5 increasing from 2.7 to 3.5 percent. Including the case-mix variables produces no appreciable difference in the predicted complexity, with the frequencies changing no more than one-tenth of 1 percent.


The significant difference between Medicaid and State Employees Health Plan billing manifests in the simulations, but the case-mix variables can account for some of the observed differences between the two programs. With the billing patterns typical of the State Employees Health Plan, 50.4 percent of all visits are coded at Level 3, compared with 49.7 percent in Medicaid. Adding in the case-mix controls narrows these differences across all coding options, with Level 3 State Employees Health Plan visits slipping to 50.3 percent and Medicaid increasing to 49.9 percent. Similarly, Level 4 visits start higher in the State Employees Health Plan, at 18.4 percent versus 16.3 percent in Medicaid, but this difference declines to 17.9 and 16.9 percent after including the case-mix controls.


After controlling for case mix and physician characteristics, code creep increased the cost of the average visit by 2.2 percent annually over the study period (Table 5). The average costs collapse the billing distributions into a single number and are calculated by multiplying the percent of visits at each complexity by the 2003 Medicaid reimbursement rate for established patient office visits from Table 2. These Medicaid rates were used for both Medicaid and State Employees Health Plan visits, and for new patient, established patient, and consultation visits. With this conversion, the average visit in 2001 cost $35.87, increasing by 1.9 percent to $36.54 in 2002, and in 2003 increasing 2.7 percent, making the average visit cost $37.53. Including the case-mix controls changed the cost of the average visit by no more than $0.04. Based on these average visits, the case-mix controls diminish the code creep estimate from 2.3 percent annually to 2.2 percent per year. Finally, comparing all Medicaid to all State Employees Health Plan visits reveals that physician claims for Medicaid visits averaged $0.48, or 1.3 percent, lower than the average State Employees Health Plan visit. This difference is less than one-half of the $1.14 spread between the average Medicaid and State Employees Health Plan visits when the case-mix controls are excluded.
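The annual growth figures can be recovered directly from the reported average costs; a quick check, taking the overall annual rate as the geometric mean over the two year-to-year changes:

```python
# Average visit costs reported in Table 5 (model without case-mix controls):
avg_cost = {2001: 35.87, 2002: 36.54, 2003: 37.53}

growth_2002 = avg_cost[2002] / avg_cost[2001] - 1           # ~1.9 percent
growth_2003 = avg_cost[2003] / avg_cost[2002] - 1           # ~2.7 percent

# Geometric mean annual growth over the two years: ~2.3 percent,
# matching the pre-case-mix estimate quoted in the text.
annual = (avg_cost[2003] / avg_cost[2001]) ** 0.5 - 1
```

The same calculation on the case-mix-adjusted costs yields the 2.2 percent figure carried through the rest of the article.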


DISCUSSION


The ordered logit estimates and their associated simulations indicate that code creep increased the payments for physician visits by 2.2 percent annually over the study period. Although the existence of code creep should be a concern for Medicaid agencies, only an estimate of the total cost of the issue can indicate whether code creep would prove a worthwhile program integrity target. In 2003, South Carolina's Department of Health and Human Services (2004) spent $73 million on physician office visits out of a total $244 million on all physician services. Excluding increases in utilization, Medicaid can expect code creep to inflate physician office expenditures by 2.2 percent per year, or $1.6 million in 2004 and a total of $8.4 million over 2004-2008. It should be noted that these figures only consider physician office visits and exclude hospital-based expenditures. Additional research will be necessary to determine if billing by South Carolina physicians is representative of other States and to determine how code creep in physician office visits compares to other physician and hospital billing.
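The budget arithmetic can be reproduced from the two reported inputs. Reading the "$8.4 million over 2004-2008" figure as the compounded growth in the $73 million baseline by 2008 (an assumption about how the authors calculated it) recovers both numbers:

```python
base = 73.0e6    # 2003 spending on physician office visits, dollars
creep = 0.022    # estimated annual code creep

# Excess spending over the 2003 baseline, compounding 2.2 percent per year:
excess_2004 = base * ((1 + creep) ** 1 - 1)   # first year: ~$1.6 million
excess_2008 = base * ((1 + creep) ** 5 - 1)   # by 2008:    ~$8.4 million
```

Under this reading, the $8.4 million is the gap between 2008 spending and the flat 2003 baseline, not a sum of the five annual increments.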


The key limitation to this study also highlights a difficulty program integrity offices face in addressing code creep. As Carter and colleagues (1990) highlighted, changes in billing can be attributed to true changes in case mix (sicker patients), improvements in coding (provider education), changes instituted by the payer (program reforms), and code creep. South Carolina Medicaid did not implement any program reforms during the study period, and the model includes case-mix variables to control for sicker patients. However, distinguishing code creep from legitimate improvements in coding attributable to provider education would require documentation audits of medical charts. Therefore, the 2.2 percent annual increase attributed to code creep in this article should be considered an upper bound because it was not possible to distinguish code creep from legitimate improvements in coding. Code creep's diffuse nature makes it a difficult problem to address. Expensive and unpopular chart reviews are unlikely to produce sufficient recoveries from audited physicians, but well publicized audits may hold sufficient deterrent value to make enforcement cost effective.


CONCLUSIONS


This study found significant code creep in both South Carolina Medicaid and the South Carolina State Employees Health Plan. No difference in code creep was observed across the two programs, with code creep increasing expenditures on physician office visits at a rate of 2.2 percent annually. The models also indicate that physician billing patterns differ between the two, with the Medicaid claims averaging 1.3 percent less expensive per visit than comparable State Employees Health Plan claims. Controlling for case mix produced little change in the code creep estimates, but did account for one-third of the difference between the two programs.

