
RAFT - RAG-Based Fine-Tuning

· 4 min read
Damon Lee
Fission Team

Why We Need RAFT: Adapting Language Models to Domain-Specific RAG

The evolution of Retrieval-Augmented Generation (RAG) has unlocked unprecedented possibilities in AI, enabling generative models to retrieve and incorporate external data dynamically. However, as AI frameworks increasingly interface with domain-specific contexts like Web3, there is a growing need for a specialized adaptation mechanism—RAFT (Retrieval-Augmented Fine-Tuning). This blog explores why RAFT is essential for adapting language models to domain-specific RAG, enhancing real-time interactions with the Web3 community and its users.

The Challenge: Domain-Specificity in RAG

Web3 ecosystems are inherently dynamic and domain-specific, characterized by:

  1. Unique Jargon and Concepts: Terms like "staking," "DAO," "NFT minting," and "gas fees" are ubiquitous in Web3 but rarely encountered in general-purpose datasets.
  2. Rapidly Evolving Information: Web3 platforms are continuously updated with new protocols, smart contracts, and token standards.
  3. Decentralized Data Sources: Information is dispersed across blockchains, decentralized file systems, and community-managed repositories.

While RAG frameworks excel at retrieving relevant data, they often struggle to adapt generative outputs to these domain-specific requirements. Without fine-tuning, language models risk producing generic or irrelevant responses that fail to meet the expectations of Web3 users.
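To make the fine-tuning idea concrete, here is a minimal sketch of how a RAFT-style training example might be assembled. The core trick from the RAFT approach is to train on a question paired with a mixed context: distractor documents plus, only some of the time, the "oracle" document that actually contains the answer, so the model learns to reason over retrieved context rather than blindly copy it. All function and field names below (`build_raft_example`, `prompt`, `completion`) and the Web3 snippets are illustrative assumptions, not an actual Fission API.

```python
import random

def build_raft_example(question, oracle_doc, distractor_docs,
                       answer, k=3, p_oracle=0.8):
    """Assemble one RAFT-style fine-tuning example (illustrative sketch).

    With probability p_oracle the k-document context includes the oracle
    (golden) document alongside distractors; otherwise it holds only
    distractors, pushing the model to rely on learned domain knowledge
    instead of copying whatever is retrieved."""
    context = random.sample(distractor_docs, k)
    if random.random() < p_oracle:
        # Swap one distractor for the oracle document.
        context[random.randrange(k)] = oracle_doc
    random.shuffle(context)
    prompt = "\n\n".join(f"[doc] {d}" for d in context) + f"\n\nQ: {question}"
    return {"prompt": prompt, "completion": answer}

# Hypothetical Web3 example.
example = build_raft_example(
    question="What is a gas fee?",
    oracle_doc="Gas fees compensate validators for the compute required "
               "to execute transactions on-chain.",
    distractor_docs=[
        "A DAO is a decentralized autonomous organization.",
        "NFT minting records a unique token on-chain.",
        "Staking locks tokens to help secure a proof-of-stake network.",
        "A smart contract is code deployed on a blockchain.",
    ],
    answer="A gas fee is the payment that compensates validators for "
           "executing a transaction.",
)
```

In a real pipeline the completion would typically also include a chain-of-thought explanation citing the oracle document, and thousands of such examples would be fed to a standard supervised fine-tuning loop.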