Demo 1: GPTLocalhost & AnythingLLM

Demo 2: GPTLocalhost & LiteLLM (e.g., gemini-1.5-flash, for cases where cloud-based models are still preferred)
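For the LiteLLM pairing, a minimal proxy config might look like the sketch below. The `model_list`/`litellm_params` keys follow LiteLLM's documented config format; the `gemini-flash` alias and the environment-variable name are illustrative assumptions, not values from the demo.

```yaml
model_list:
  - model_name: gemini-flash            # alias a client such as the Word add-in would request (illustrative)
    litellm_params:
      model: gemini/gemini-1.5-flash    # provider/model route in LiteLLM
      api_key: os.environ/GEMINI_API_KEY
```

Starting the proxy with `litellm --config config.yaml` then typically exposes the model behind a local OpenAI-compatible endpoint that the add-in can target.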

Demo 3: GPTLocalhost & LM Studio (local model: Llama 3.2)

Demo 4: GPTLocalhost & Ollama (local model: Llama 3.2)
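Backends like Ollama expose an OpenAI-compatible chat endpoint on localhost, which is how a Word add-in style client can talk to a local model. A minimal sketch of the request such a client would POST (the helper function name is hypothetical; the default port and `/v1/chat/completions` route are Ollama's documented ones):

```python
import json

# Ollama's default local endpoint; the OpenAI-compatible route is /v1/chat/completions.
OLLAMA_BASE = "http://localhost:11434"

def build_chat_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Return the URL and JSON body a local client would POST (hypothetical helper)."""
    url = f"{OLLAMA_BASE}/v1/chat/completions"
    body = json.dumps({
        "model": model,  # e.g. the Llama 3.2 tag pulled via `ollama pull llama3.2`
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode("utf-8")
    return url, body

url, body = build_chat_request("llama3.2", "Summarize this paragraph.")
print(url)
```

Sending `body` to `url` with any HTTP client (while `ollama serve` is running) returns a standard chat-completion response; the same request shape works against most of the other servers listed here.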

Demo 5: GPTLocalhost & llama.cpp (local model: gemma-2b)

Demo 6: GPTLocalhost & LocalAI (local model: Llama 3.2)

Demo 7: GPTLocalhost & KoboldCpp (local model: Mistral 2.2)

Demo 8: GPTLocalhost & Xinference (local model: Llama 2)

Demo 9: GPTLocalhost & OpenLLM (local model: llama3.2:1b)

Demo 10: Using Microsoft Phi-4 in Microsoft Word

Demo 11: Empowering Your Team with Phi-4 in Microsoft Word within Your Intranet