Rasa Integration
Overview
Rasa is a comprehensive open-source framework for building contextual AI assistants and chatbots. It provides the infrastructure for developing sophisticated conversational experiences that can be deployed across multiple channels.
Key Features
- Contextual Understanding: Maintains conversation state and user context
- Intent Classification: Accurately identifies user intents
- Entity Extraction: Recognizes and extracts key information
- Custom Actions: Execute backend operations based on user inputs
- Multi-Channel Support: Deploy across websites, messaging apps, and voice interfaces
Use Cases
- Customer service automation with contextual understanding
- Internal enterprise assistants with secure data access
- Multi-turn conversational interfaces for complex tasks
- Voice-enabled assistants for hands-free operation
Note: Custom LLM integration is available only in Rasa Pro.
Setup Instructions

1. Install Rasa and Create Project

   ```shell
   # Install Rasa
   pip install rasa

   # Create a new Rasa project
   rasa init
   ```

   For more detailed setup instructions, refer to the Rasa documentation.
2. Install Rasa Pro (required for custom LLM integration)

   Visit the Rasa Pro page to obtain a license. You will be prompted to fill out a form and will receive license information and installation instructions via email.

   Add the Pro repository to your pip configuration, then install:

   ```shell
   pip install rasa-pro
   ```
3. Configure config.yml

   Navigate to your project directory and locate the config.yml file (for more details, refer to the Rasa configuration documentation). Edit config.yml to include the LLM configuration:

   ```yaml
   recipe: default.v1
   language: en
   pipeline:
     - name: WhitespaceTokenizer
     - name: LexicalSyntacticFeaturizer
     - name: CountVectorsFeaturizer
     - name: CountVectorsFeaturizer
       analyzer: char_wb
       min_ngram: 1
       max_ngram: 4
     - name: RasaLLM
       model_name: "Deepseek" # Custom relaxAI model name
       base_url: "https://api.relax.ai/v1/"
       llm_parameters:
         model: "DeepSeek-R1-0528"
       # Add your API key in credentials.yml, not here
   policies:
     - name: MemoizationPolicy
     - name: RulePolicy
     - name: TEDPolicy
       max_history: 5
       epochs: 100
   ```
4. Configure credentials.yml

   In the same project directory, locate or create the credentials.yml file, then add your relaxAI API key:

   ```yaml
   custom_llm:
     url: "https://api.relax.ai/v1/chat/completions"
     api_key: "RELAX_API_KEY"
   ```
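The `api_key` value above reads like a placeholder. A common pattern (an assumption here, not something Rasa mandates) is to keep the real key in an environment variable named `RELAX_API_KEY` and resolve it at startup rather than committing it to the repository. A minimal sketch:

```python
import os


def resolve_api_key(var_name="RELAX_API_KEY"):
    """Fetch the real relaxAI key from the environment so it is
    never committed alongside credentials.yml. The variable name
    mirrors the placeholder used above (an assumption)."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set the {var_name} environment variable first")
    return key
```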
5. Create Custom Action Handler (optional)

   Navigate to the actions directory in your project and edit the actions.py file. Add a custom action class that uses the relaxAI API for specific tasks:

   ```python
   from rasa_sdk import Action
   from rasa_sdk.events import SlotSet
   import requests


   class ActionQueryrelaxAI(Action):
       def name(self):
           return "action_query_relaxai"

       def run(self, dispatcher, tracker, domain):
           # Custom code to call relaxAI API
           # ...
           return []
   ```
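To fill in the body of `run`, the custom action needs to call the relaxAI endpoint from credentials.yml. The sketch below assumes relaxAI exposes an OpenAI-compatible chat completions API (the `/v1/chat/completions` path suggests this, but check relaxAI's API reference); the helper names `build_payload` and `query_relaxai` are illustrative, not part of Rasa.

```python
import json
import urllib.request

# Endpoint from credentials.yml; the payload shape assumes an
# OpenAI-compatible chat completions API (an assumption, verify
# against relaxAI's API documentation).
RELAXAI_URL = "https://api.relax.ai/v1/chat/completions"


def build_payload(user_message, model="DeepSeek-R1-0528"):
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }


def query_relaxai(api_key, user_message, model="DeepSeek-R1-0528"):
    """POST the chat request and return the first choice's text."""
    req = urllib.request.Request(
        RELAXAI_URL,
        data=json.dumps(build_payload(user_message, model)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Inside the action's `run` method you would call `query_relaxai(...)` and pass the result to `dispatcher.utter_message(...)`.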
6. Train Your Model

   Return to your terminal in the project directory and run the training command:

   ```shell
   rasa train
   ```

   Training progress will be displayed in the terminal.
7. Start Action Server (if using custom actions)

   Open a new terminal window, navigate to your project directory, and run the action server:

   ```shell
   rasa run actions
   ```
8. Test Your Setup

   In your original terminal window, run:

   ```shell
   rasa shell
   ```

   Test your bot with sample conversations.
You can switch between available relaxAI models by changing the model parameter in config.yml:

- DeepSeek-R1-0528
- Llama-4-Maverick-17B-128E
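Editing the model value by hand is easy to get wrong. Purely as an illustration (this script is not part of Rasa or relaxAI), a small helper can rewrite the `model:` value under `llm_parameters` in the config text:

```python
import re
from pathlib import Path


def set_relaxai_model(config_text, new_model):
    """Replace the value of the first `model:` key in the config
    text (the one under llm_parameters) with `new_model`.
    Leaves `model_name:` untouched, since the pattern anchors on
    `model:` immediately after leading whitespace."""
    return re.sub(
        r'(?m)^(\s*model:\s*")[^"]*(")',
        lambda m: m.group(1) + new_model + m.group(2),
        config_text,
        count=1,
    )


if __name__ == "__main__":
    # Example usage: rewrite config.yml in place.
    path = Path("config.yml")
    path.write_text(set_relaxai_model(path.read_text(),
                                      "Llama-4-Maverick-17B-128E"))
```

Remember to retrain with `rasa train` after changing the model.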