Using all GPUs when running Ollama and OpenWebUI in Docker
Case: Run both Ollama and OpenWebUI in a single Docker container and use all GPUs available on the host machine.

TL;DR:

```shell
# Installing nvidia-container-toolkit
curl -fsSL https://nvidia.github....
```
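With the NVIDIA Container Toolkit installed and Docker's runtime configured, the single-container setup can be sketched as below. This is a sketch, not the article's exact commands: it assumes the official `ghcr.io/open-webui/open-webui:ollama` image (which bundles Ollama and OpenWebUI together), and the volume names, container name, and host port are illustrative choices.

```shell
# Sanity check: Docker should be able to see every host GPU.
# (Assumes a CUDA base image can be pulled; the tag is an example.)
docker run --rm --gpus=all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# Run OpenWebUI with bundled Ollama in one container.
# --gpus=all hands all host GPUs to the container; the two volumes
# persist downloaded models and OpenWebUI's state across restarts.
docker run -d \
  --gpus=all \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  -p 3000:8080 \
  --name open-webui \
  ghcr.io/open-webui/open-webui:ollama
```

After the container starts, the UI should be reachable at http://localhost:3000, and `docker exec -it open-webui ollama list` (or `nvidia-smi` inside the container) can confirm that the bundled Ollama sees the GPUs.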