A recent research paper, “Finding Blind Spots in Evaluator LLMs with Interpretable Checklists”, was released by the Indian Institute of Technology Madras (IIT Madras) and AI4Bharat, an initiative spearheading AI research in India. The paper reveals significant flaws in the current methods of using LLMs to evaluate text generation tasks.
Authored by researchers Sumanth Doddapaneni, Mohammed Safi Ur Rahman Khan, Sshubam Verma, and Mitesh M Khapra, the paper introduces FBI, a novel framework designed to assess how well Evaluator LLMs can gauge four critical abilities in other LLMs: factual accuracy, adherence to instructions, coherence in long-form writing, and reasoning proficiency.
The study introduced targeted perturbations into answers generated by LLMs, each affecting one of these key capabilities, to determine whether Evaluator LLMs could detect the resulting drops in quality. In total, 2,400 perturbed answers spanning 22 perturbation categories were created for the study, and different evaluation strategies were applied to five prominent Evaluator LLMs frequently referenced in the literature.
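To make the perturb-then-check idea concrete, here is a minimal Python sketch of how such a test could be framed. This is not the authors' FBI code: the `evaluate_answer` callable, the toy evaluator, and the example case are all hypothetical stand-ins for an Evaluator LLM and the paper's perturbation data.

```python
# Illustrative sketch only, not the FBI implementation. Assumes a hypothetical
# evaluate_answer(question, answer) helper that wraps an Evaluator LLM and
# returns a numeric quality score.
from typing import Callable, List, Tuple

def detects_quality_drop(
    question: str,
    original_answer: str,
    perturbed_answer: str,
    evaluate_answer: Callable[[str, str], float],
) -> bool:
    """True if the evaluator scores the perturbed answer lower than the original."""
    return evaluate_answer(question, perturbed_answer) < evaluate_answer(question, original_answer)

def detection_rate(
    cases: List[Tuple[str, str, str]],
    evaluate_answer: Callable[[str, str], float],
) -> float:
    """Fraction of perturbed answers whose quality drop the evaluator catches."""
    hits = sum(detects_quality_drop(q, orig, pert, evaluate_answer) for q, orig, pert in cases)
    return hits / len(cases)

if __name__ == "__main__":
    # Toy stand-in evaluator; a real run would query an Evaluator LLM instead.
    def toy_evaluator(question: str, answer: str) -> float:
        return float("Paris" in answer)

    cases = [
        ("What is the capital of France?",
         "The capital of France is Paris.",
         "The capital of France is Pariss."),  # spelling perturbation
    ]
    print(f"Detection rate: {detection_rate(cases, toy_evaluator):.0%}")
```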
The findings revealed significant deficiencies in current Evaluator LLMs, which failed to identify declines in quality in over 50% of cases on average. Single-answer and pairwise evaluations showed notable limitations, while reference-based evaluations performed relatively better.
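For readers unfamiliar with these three setups, the sketch below shows what the prompt skeletons typically look like. These templates are paraphrased for illustration and are not the exact prompts used in the paper; all field names are assumptions.

```python
# Illustrative prompt skeletons for the three evaluation setups mentioned above.
SINGLE_ANSWER = (
    "Rate the following answer on a scale of 1-10.\n"
    "Question: {question}\nAnswer: {answer}\nScore:"
)

PAIRWISE = (
    "Which answer to the question is better, A or B?\n"
    "Question: {question}\nAnswer A: {answer_a}\nAnswer B: {answer_b}\nVerdict:"
)

REFERENCE_BASED = (
    "Given the reference answer, rate the candidate answer on a scale of 1-10.\n"
    "Question: {question}\nReference: {reference}\nCandidate: {answer}\nScore:"
)

def build_prompt(mode: str, **fields: str) -> str:
    """Fill the chosen evaluation template with question/answer fields."""
    templates = {"single": SINGLE_ANSWER, "pairwise": PAIRWISE, "reference": REFERENCE_BASED}
    return templates[mode].format(**fields)

if __name__ == "__main__":
    print(build_prompt(
        "reference",
        question="What is 2 + 2?",
        reference="4",
        answer="2 + 2 equals 5.",
    ))
```

In a reference-based setup like the last template, the evaluator has a gold answer to compare against, which is consistent with the finding that this strategy catches quality drops more reliably than single-answer or pairwise judging.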
The study underscores the unreliability of current Evaluator LLMs and emphasises the need for caution when deploying them to evaluate text generation capabilities. Notably, Evaluator LLMs consistently missed even basic errors, such as spelling and grammar mistakes.
Way Forward
For systems that support high-stakes decision-making, the reliability of their evaluations must be scrutinised. The study underscores the need for improved evaluation strategies and highlights the risks of over-reliance on current LLM evaluators.
The FBI framework offers a path forward by providing a more interpretable and comprehensive method for testing evaluator capabilities. By revealing the prevalent failure modes and blind spots of existing models, this framework can guide the development of more robust and reliable AI evaluators.