The VBA Consistency Project has collected information to help study potential variation in disability decisions across regions. This report provides an analysis of review consistency, regional variation, and factors related to agreement or disagreement with the original determination.
The purpose of this consultation is to aid in the analysis of the reviewer checklist data and to interpret the results.
The VBA initiated a review of the consistency of rating decisions for PTSD, hearing loss, and knee conditions. A random sample comprised 723 hearing loss cases, 727 knee condition cases, and 357 PTSD cases. The sample was stratified across four geographic regions (Eastern, Southern, Central, and Western). The “rating variation checklist” was designed and pilot tested to assess agreement with the grant or denial of the service condition and to record reasons for disagreement across the attributes of incurrence, diagnosis, and nexus, and the dimensions of law/guidance and evidence. Hearing loss and knee conditions were reviewed further for agreement with the assigned disability “evaluation” percentage in accordance with law/guidance and evidence.
The "reviewers" for this project were three STAR rating consultants and seven experienced field rating veteran service representatives, a total of ten reviewers. The claim files were gathered in boxes and shipped to the review station (Nashville, Tennessee). For case selection, a reviewer placed his/her initials on the outside of one of the boxes and then proceeded to audit all the cases within that box. The hearing cases were audited first and sent back upon completion. The knee and PTSD cases were then audited and project timelines allowed for a second audit. Thus, the hearing cases underwent a single review and the knee and PTSD cases underwent a double review.
In the typical audit process, the reviewers read the file from receipt of the claim through the decision determination. The reviewer thus processed the information in the file to determine whether the case was ready to rate. This is similar to the STAR methodology, which assesses adequacy of development as part of the accuracy review of the rating decision. In this study, however, the reviewers were directed to reach their own grant or denial decision regardless of readiness and to indicate whether their decision was the same as or different from the original rater's decision in the file. The reviewers were not blind to the original rater's decision. If the reviewer disagreed with the original rating decision, the reviewer was directed to complete the remaining items of the checklist.
Disagreement with the original rater's decision ranged from 4% to 8%.
Reviewers had very little overlap in their disagreements, resulting in low inter-reviewer reliability.
When cases were ready to rate, the disagreement rate was very low, about 2%.
When cases were not ready to rate, the disagreement rate was high, ranging from 25% to 70%.
Single versus second signature was not a factor in inconsistency.
One versus both knees/ears was not a factor in percent disability evaluation inconsistency.
No conclusions could be reached regarding regional variation because of the study design.
Incurrence, nexus, and current disability were the most common reasons for inconsistency.
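The low inter-reviewer reliability noted above can be made concrete with a standard agreement statistic. The sketch below computes Cohen's kappa for a pair of reviewers on the double-reviewed cases; the paired labels are invented for illustration only (not taken from the study data) and show how high raw agreement can still yield a low kappa when the reviewers' disagreements rarely fall on the same cases.

```python
# Hypothetical illustration: Cohen's kappa for two reviewers on double-reviewed cases.
# Each label records whether a reviewer agreed with the original rating
# (1 = agree, 0 = disagree). All counts below are invented for demonstration.

def cohens_kappa(pairs):
    """Cohen's kappa for two raters over paired binary labels."""
    n = len(pairs)
    observed = sum(1 for a, b in pairs if a == b) / n
    # Marginal probability of each rater assigning the "agree" label
    p_a = sum(a for a, _ in pairs) / n
    p_b = sum(b for _, b in pairs) / n
    # Chance-expected agreement from the marginals
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# 100 paired reviews: both reviewers agree with the original rater on most cases,
# but their disagreements almost never overlap on the same case.
pairs = [(1, 1)] * 90 + [(0, 1)] * 5 + [(1, 0)] * 4 + [(0, 0)] * 1
print(round(cohens_kappa(pairs), 3))  # → 0.135
```

Despite 91% raw agreement between the two reviewers, the kappa of roughly 0.13 indicates agreement barely above chance, which is the pattern the findings describe: disagreements with the original rater were scattered across different cases rather than concentrated on the same ones.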
A debriefing conference held with senior VBA leaders resulted in a final report for the VA Under Secretary for Benefits and strategic plans for further assessment of consistency.
None at this time.