  1. ollama - Reddit

    r/ollama: How good is Ollama on Windows? I have a 4070 Ti 16GB card, a Ryzen 5 5600X, and 32GB RAM. I want to run Stable Diffusion (already installed and working), Ollama with some 7B models, maybe a …

  2. How to make Ollama faster with an integrated GPU? : r/ollama - Reddit

    Mar 8, 2024 · How to make Ollama faster with an integrated GPU? I decided to try out Ollama after watching a YouTube video. The ability to run LLMs locally that could give output faster amused …

  3. Local Ollama Text to Speech? : r/robotics - Reddit

    Apr 8, 2024 · Yes, I was able to run it on an RPi. Ollama works great. Mistral and some of the smaller models work. LLaVA takes a bit of time, but works. For text to speech, you’ll have to run an API from …
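
    The answer's TTS pointer is cut off by the snippet, but the overall pattern is simple: ask the local Ollama server for text over its HTTP API, then hand the reply to a speech engine. Below is a minimal sketch in Python, with pyttsx3 as a stand-in offline TTS engine (the thread's actual choice is truncated, so that part is an assumption).

    ```python
    # Sketch: generate a reply with a local Ollama model, then speak it with
    # an offline TTS engine. Assumes Ollama is serving on its default port
    # (11434) and that pyttsx3 is installed; pyttsx3 is an illustrative
    # stand-in, not necessarily what the thread recommends.
    import requests
    import pyttsx3

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": "Say hello in one sentence.", "stream": False},
    )
    text = resp.json()["response"]

    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
    ```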

  4. How to add web search to ollama model : r/ollama - Reddit

    Hello guys, does anyone know how to add an internet search option to Ollama? I was thinking of using LangChain with a search tool like DuckDuckGo; what do …
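
    The approach the post floats (LangChain plus a DuckDuckGo search tool) is workable. Here is a rough sketch of that wiring, assuming the `langchain-community` and `duckduckgo-search` packages and a locally pulled `mistral` model; treat it as an outline, not the thread's settled answer.

    ```python
    # Sketch: retrieve web results with DuckDuckGo via LangChain, then let a
    # local Ollama model answer from them. Package and model choices here
    # are assumptions, not from the thread.
    from langchain_community.tools import DuckDuckGoSearchRun
    from langchain_community.llms import Ollama

    search = DuckDuckGoSearchRun()
    llm = Ollama(model="mistral")

    question = "What is new in the latest Ollama release?"
    results = search.run(question)  # plain-text snippets from DuckDuckGo

    prompt = (
        "Answer the question using only these search results:\n"
        f"{results}\n\nQuestion: {question}"
    )
    print(llm.invoke(prompt))
    ```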

  5. Request for Stop command for Ollama Server : r/ollama - Reddit

    Feb 15, 2024 · OK, so Ollama doesn't have a stop or exit command; we have to kill the process manually. And that is not very useful, especially because the server respawns immediately. So there …
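
    For context on the respawning: the official Linux install script registers Ollama as a systemd service, so killing the process just lets systemd restart it; stopping the unit is what actually sticks. A small sketch of that workaround, assuming a systemd-based Linux install (Windows and Homebrew setups differ):

    ```python
    # Sketch: stop the Ollama systemd service instead of killing the process,
    # so systemd does not respawn it. Assumes the official Linux install.
    import subprocess

    subprocess.run(["sudo", "systemctl", "stop", "ollama"], check=True)
    # Optionally keep it from starting again at boot:
    subprocess.run(["sudo", "systemctl", "disable", "ollama"], check=True)
    ```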

  6. r/ollama on Reddit: Does anyone know how to change where your …

    Apr 15, 2024 · I recently got Ollama up and running; the only thing is, I want to change where my models are located, as I have 2 SSDs and they're currently stored on the smaller one running the OS (currently …
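
    Ollama's documented answer to this is the OLLAMA_MODELS environment variable, which points the server at a different model directory. A minimal sketch, with a made-up path standing in for the poster's second SSD (on a systemd install you would set the variable in the service unit instead):

    ```python
    # Sketch: launch the Ollama server with its model directory moved to a
    # larger drive via OLLAMA_MODELS. The path is a hypothetical example.
    import os
    import subprocess

    env = os.environ.copy()
    env["OLLAMA_MODELS"] = "/mnt/big-ssd/ollama/models"  # hypothetical mount

    subprocess.run(["ollama", "serve"], env=env)
    ```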

  7. Ollama Server Setup Guide : r/LocalLLaMA - Reddit

    Mar 26, 2024 · I recently set up a language model server with Ollama on a box running Debian, a process that consisted of a pretty thorough crawl through many documentation sites and wiki forums.

  8. Training a model with my own data : r/LocalLLaMA - Reddit

    Dec 20, 2023 · I'm using Ollama to run my models. I want to use the Mistral model, but create a LoRA to act as an assistant that primarily references data I've supplied during training. This data will include …

  9. How does Ollama handle not having enough VRAM? : r/ollama - Reddit

    How does Ollama handle not having enough VRAM? I have been running phi3:3.8b on my GTX 1650 4GB and it's been great. I was just wondering, if I were to use a more complex model, let's say …
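
    Short answer, from how Ollama behaves: when a model does not fit in VRAM, it offloads some layers to system RAM and runs those on the CPU, so generation slows down rather than failing outright. A sketch that inspects the split, assuming a recent Ollama build that exposes the /api/ps endpoint with size/size_vram fields:

    ```python
    # Sketch: report how much of each loaded model actually sits in VRAM.
    # Endpoint and field names assume a recent Ollama release.
    import requests

    ps = requests.get("http://localhost:11434/api/ps").json()
    for m in ps.get("models", []):
        frac = m["size_vram"] / m["size"] if m["size"] else 0.0
        print(f"{m['name']}: {frac:.0%} of the model is in VRAM")
    ```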

  10. What is the best small (4b-14b) uncensored model you know and use?

    Hey guys, I mainly run my models with Ollama and I'm looking for suggestions for uncensored models I can use with it. Since there are a lot already, I feel a bit overwhelmed. For …