Fact-check AI accuracy with Cleanlab Trustworthy Language Model

How to use the TLM tool to detect LLM hallucinations and get confidence scores for RAG responses.

Key Insights

Access Without Registration

You can try the Cleanlab TLM demo directly on the Cleanlab website without signing up for an account first.

The Importance of the Trustworthiness Score

This score is critical for business AI applications because it tells you whether an AI response is factually reliable or merely a hallucination.
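
Beyond the web demo, the same score can be retrieved programmatically. The sketch below assumes Cleanlab's `cleanlab-tlm` Python package, a `TLM.prompt()` method, and a `trustworthiness_score` field in its output; treat these names as assumptions and confirm them against the current Cleanlab documentation.

```python
# Minimal sketch of fetching an answer together with its trustworthiness score.
# Assumes `pip install cleanlab-tlm` and an API key exposed via the
# CLEANLAB_TLM_API_KEY environment variable; names may differ in the current release.
from cleanlab_tlm import TLM

tlm = TLM()  # the client picks up the API key from the environment

out = tlm.prompt("In what year was the Eiffel Tower completed?")
print(out["response"])               # the model's answer
print(out["trustworthiness_score"])  # confidence (0-1) that the answer is factual
```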

RAG Integration

TLM is most effective when combined with RAG, where its answers are cross-checked against the reference documents you provide.
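
For a RAG pipeline that has already retrieved context and generated an answer, this cross-referencing step can be scored directly. The hedged sketch below assumes the same `cleanlab-tlm` client and a `get_trustworthiness_score(prompt, response)` method; the context, question, and answer strings are made-up examples.

```python
# Hedged sketch: score an answer that your RAG pipeline already produced
# against the retrieved context. The `get_trustworthiness_score` method and
# its return shape are assumptions; check the Cleanlab docs for your version.
from cleanlab_tlm import TLM

tlm = TLM()

retrieved_context = "Q3 revenue was $4.2M, up 12% year over year."  # from your retriever
question = "How much did revenue grow in Q3?"
rag_answer = "Revenue grew 12% year over year in Q3."               # from your RAG LLM

prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{retrieved_context}\n\n"
    f"Question: {question}"
)

result = tlm.get_trustworthiness_score(prompt, rag_answer)
print(result)  # contains the trustworthiness score; a low value suggests the answer is not grounded in the context
```
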
Prompts

Identifying Document Sources

Target: Cleanlab TLM
Identify the source of the following data document
Step by Step

How to Detect AI Hallucinations Using Cleanlab TLM

  1. Visit the Cleanlab Trustworthy Language Model (TLM) website.
  2. Prepare the external documents or files you want to use as a RAG (Retrieval-Augmented Generation) reference.
  3. Click on the 'Presets' dropdown menu to select a predefined test scenario.
  4. Select a sample scenario such as 'Identify the source of the following data document'.
  5. Enter a question or prompt related to the data in the attached document.
  6. Click the generate button to get a response from the LLM.
  7. Look at the 'Trustworthiness Score' section displayed alongside the answer to evaluate factual accuracy.
  8. Interpret the score: a high score indicates the answer aligns with the RAG data, while a low score signals a potential hallucination (see the sketch after these steps for automating this check).
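
Step 8 is straightforward to automate once the score is available as a number: pick a threshold and route low-scoring answers to human review. The threshold and helper function below are illustrative examples, not Cleanlab defaults.

```python
# Illustrative routing logic for step 8: treat low-scoring answers as
# potential hallucinations. The 0.8 threshold is an example value, not a
# Cleanlab recommendation; tune it on your own data.
TRUST_THRESHOLD = 0.8

def route_answer(answer: str, trustworthiness_score: float) -> str:
    """Pass a reliable answer through; flag a low-confidence one for review."""
    if trustworthiness_score >= TRUST_THRESHOLD:
        return answer
    return f"[NEEDS REVIEW, score={trustworthiness_score:.2f}] {answer}"

print(route_answer("The contract was signed in 2021.", 0.93))  # passes through
print(route_answer("The contract was signed in 2019.", 0.41))  # flagged for review
```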
