Demo 1: GPTLocalhost & AnythingLLM
Demo 2: GPTLocalhost & LiteLLM (e.g., gemini-1.5-flash, for cases where cloud-based models are still preferred)
Demo 3: GPTLocalhost & LM Studio (local model: Llama 3.2)
Demo 4: GPTLocalhost & Ollama (local model: Llama 3.2)
Demo 5: GPTLocalhost & llama.cpp (local model: gemma-2b)
Demo 6: GPTLocalhost & LocalAI (local model: Llama 3.2)
Demo 7: GPTLocalhost & KoboldCpp (local model: Mistral 2.2)
Demo 8: GPTLocalhost & Xinference (local model: Llama 2)
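Several of the backends listed above (Ollama, LM Studio, LocalAI, llama.cpp's server, Xinference) expose an OpenAI-compatible chat-completions endpoint, which is the kind of API a Word client such as GPTLocalhost typically connects to. The sketch below is a minimal, hedged illustration of such a request; the base URL (Ollama's default `localhost:11434`) and the model tag `llama3.2` are assumptions, not settings taken from the demos themselves:

```python
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible POST request to /v1/chat/completions."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Assumed defaults: Ollama serving on localhost:11434 with a pulled "llama3.2" model.
req = build_chat_request("http://localhost:11434", "llama3.2", "Summarize this paragraph.")

# Uncomment once a local server is actually running:
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())["choices"][0]["message"]["content"]
#     print(reply)
```

Swapping in a different backend from the list usually only means changing the base URL and model name, since the request shape stays the same across OpenAI-compatible servers.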