Test the fine-tuned models locally with Hugging Face
We currently don't have the resources to host a live inference endpoint for these models. However, you can test them locally with Hugging Face: both models are available on the Hugging Face Hub and can be loaded directly in your environment.
Click on the model cards below to access the repositories and follow the instructions to run inference locally.
A Knowledge Graph-based Question Answering (KG-QA) model with multi-hop reasoning. It answers complex queries that require traversing prerequisite chains and understanding relationships between courses.
A fine-tuned model optimized for simple Q&A tasks: straightforward questions about individual courses, instructors, and schedules.
Install the dependencies, then load the model of your choice:

```bash
pip install transformers torch unsloth
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "itsmepraks/gwcourses_RAG"  # or "itsmepraks/gwcoursesfinetuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```

Use the model to answer questions about GWU courses. For the KG-QA model, you'll also need to load the knowledge graph files (`kg_graph.pkl` and `graph_retriever.pkl`) from the repository, as shown in the second sketch below.
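Once a model is loaded, inference follows the standard `transformers` generate-and-decode pattern. Here is a minimal sketch; the example question and generation parameters are illustrative, not taken from the repositories:

```python
# Ask a question and decode the model's answer.
question = "Which courses are prerequisites for CSCI 6364?"  # hypothetical example query
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For the KG-QA model, the graph files can be fetched from the Hub and unpickled. This sketch assumes the two `.pkl` files sit at the root of the `itsmepraks/gwcourses_RAG` repository and are standard pickles; check the model card for the retriever's actual interface:

```python
import pickle
from huggingface_hub import hf_hub_download

# Download the graph artifacts from the model repository
# (assumed location: repo root of itsmepraks/gwcourses_RAG).
kg_path = hf_hub_download(repo_id="itsmepraks/gwcourses_RAG", filename="kg_graph.pkl")
retriever_path = hf_hub_download(repo_id="itsmepraks/gwcourses_RAG", filename="graph_retriever.pkl")

# Unpickle the knowledge graph and the graph retriever.
with open(kg_path, "rb") as f:
    kg_graph = pickle.load(f)
with open(retriever_path, "rb") as f:
    graph_retriever = pickle.load(f)
```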
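How the loaded graph and retriever are wired into generation depends on the repository's own instructions; the object names above (`kg_graph`, `graph_retriever`) simply mirror the file names and are not a documented API.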