SEAS Search: KG-Based Course QA

Chat Interface

Test the fine-tuned models locally using Hugging Face

Inference Resources Currently Unavailable

We currently don't have the resources to host an inference setup for these models. However, you can test them locally using Hugging Face: both models are available on the Hugging Face Hub and can be loaded directly in your environment.

Click on the model cards below to access the repositories and follow the instructions to run inference locally.

KG-QA System Model

A Knowledge Graph-based Question Answering system with multi-hop reasoning capabilities. This model can answer complex queries that require prerequisite chain traversal and relationship understanding, for example which sequence of courses must be completed before an advanced one.

Features:
Multi-hop reasoning, prerequisite chain traversal, graph-based retrieval, RAG training format

Optimized Fine-Tuned Model

Optimized fine-tuned model for simple Q&A tasks. Best for straightforward questions about individual courses, instructors, and schedules.

Features:
Simple Q&A, course information lookup, instructor queries, schedule information

How to Use the Models Locally

Step 1: Install Dependencies

pip install transformers torch unsloth

Step 2: Load the Model

from transformers import AutoModelForCausalLM, AutoTokenizer

# Choose one of the two repositories on the Hugging Face Hub
model_name = "itsmepraks/gwcourses_RAG"  # KG-QA model; use "itsmepraks/gwcoursesfinetuned" for the optimized model

# The first call downloads the tokenizer and weights, then caches them locally
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
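
On a machine with a GPU, you can optionally load the weights in half precision to reduce memory use. This is a standard transformers option rather than anything specific to these models:

# Optional: half-precision weights, placed on the available device(s)
# (device_map="auto" requires the accelerate package)
import torch

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)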

Step 3: Run Inference

Use the model to answer questions about GWU courses. For the KG-QA model, you'll need to load the knowledge graph files (`kg_graph.pkl` and `graph_retriever.pkl`) from the repository as well.
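
A minimal generation sketch, assuming the tokenizer and model from Step 2 are already loaded. The question and generation settings here are illustrative; check each model card for the exact prompt template the model was trained with:

import torch

# Tokenize a question and move it to the model's device
question = "Who teaches the introductory machine learning course?"
inputs = tokenizer(question, return_tensors="pt").to(model.device)

# Generate an answer without tracking gradients
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the echoed prompt
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))

The knowledge graph files are standard pickles and can be loaded as sketched below; the structure of the unpickled objects is defined in the repository, so treat the variable roles here as assumptions:

import pickle

with open("kg_graph.pkl", "rb") as f:
    kg_graph = pickle.load(f)  # the course knowledge graph
with open("graph_retriever.pkl", "rb") as f:
    graph_retriever = pickle.load(f)  # retriever over the graph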