AI \ LLM
- With all the hype around DeepSeek, I wanted to give it a try, but I wasn't interested in signing in with my Apple or Google account, nor in providing my phone number... so I just looked into running the model locally instead. I had run Ollama in the past but never did much with it, so I had to get reacquainted with it (there's a quick sketch of what that looks like at the end of this section).
- One reason I was trying to run an AI model locally was to get something where I could put all of my personal build notes & be able to ask questions against them if I came across similar problems in the future. So that will likely be the next project I start on (a rough sketch of the approach is at the end of this section).
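- To capture what "running it locally" actually looks like, here's a minimal sketch using the Ollama Python client. The model tag and the prompt are placeholders for whatever I end up pulling, not my exact setup.
  ```python
  # Minimal sketch: chat with a locally pulled DeepSeek model through Ollama.
  # Assumes the Ollama server is running locally and the `ollama` Python
  # package is installed (pip install ollama). The model tag below is an
  # example; swap in whichever DeepSeek variant you actually pull.
  import ollama

  MODEL = "deepseek-r1:7b"  # assumed model tag

  # Pull the model if it isn't already available locally.
  ollama.pull(MODEL)

  # Ask a single question and print the reply.
  response = ollama.chat(
      model=MODEL,
      messages=[{"role": "user", "content": "Explain what a reverse proxy does in two sentences."}],
  )
  print(response["message"]["content"])
  ```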
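- And for the notes idea, this is the rough shape I have in mind: embed the notes with a local embedding model, pull the closest ones into the prompt, and let the chat model answer from that. The notes folder, embedding model, chunking, and example question are all assumptions here; it's a sketch of the approach, not a finished tool.
  ```python
  # Rough sketch of "ask questions against my build notes" with local models only.
  # Assumptions: notes live as .md files in ./notes, `nomic-embed-text` is pulled
  # for embeddings, and a DeepSeek model is pulled for answering. Whole files are
  # used as chunks to keep the sketch short; real notes would want smarter chunking.
  from pathlib import Path
  import math

  import ollama

  EMBED_MODEL = "nomic-embed-text"  # assumed embedding model
  CHAT_MODEL = "deepseek-r1:7b"     # assumed chat model
  NOTES_DIR = Path("notes")         # assumed notes location


  def embed(text: str) -> list[float]:
      return ollama.embeddings(model=EMBED_MODEL, prompt=text)["embedding"]


  def cosine(a: list[float], b: list[float]) -> float:
      dot = sum(x * y for x, y in zip(a, b))
      return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


  # Embed every note once; fine for a small personal collection.
  index = [(p.name, p.read_text(), embed(p.read_text())) for p in NOTES_DIR.glob("*.md")]

  question = "How did I set up certbot on the Docker VM?"  # example question
  q_vec = embed(question)

  # Hand the three most similar notes to the model as context.
  top = sorted(index, key=lambda item: cosine(q_vec, item[2]), reverse=True)[:3]
  context = "\n\n".join(f"# {name}\n{text}" for name, text, _ in top)

  answer = ollama.chat(
      model=CHAT_MODEL,
      messages=[
          {"role": "system", "content": "Answer using only the provided notes."},
          {"role": "user", "content": f"Notes:\n{context}\n\nQuestion: {question}"},
      ],
  )
  print(answer["message"]["content"])
  ```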
Docker
- Since I set up my Proxmox install with a VM meant for Docker, I tried to find something simple to run on it as a test. I went with Watcharr, and things seem to have gone well. The VM took a little bit of setup because I wanted to get my nginx configs, certbot, etc. running, but now that it's all in place, it was worth the work. There's a rough sketch of the container setup at the end of this section.
- After getting all my Docker instances into Homepage, I wanted to start looking into securing the Docker API instead of leaving it open... I'm going to have to revisit this because it seems like it's more complicated than it should be, but here I am. Until I can get that secured, I'm using the Portainer Agent in places where that's possible (there's a sketch of what the secured connection would look like below).
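- For reference, this is roughly what spinning Watcharr up looks like if you drive Docker from Python instead of compose. The image tag, port, and volume path are what I believe Watcharr expects, so treat them as assumptions and check its docs.
  ```python
  # Sketch: starting Watcharr with the Docker SDK for Python (pip install docker).
  # The image name, published port, and data path below are assumptions to verify
  # against the Watcharr documentation before using.
  import docker

  client = docker.from_env()  # talks to the local Docker socket

  container = client.containers.run(
      "ghcr.io/sbondco/watcharr:latest",  # assumed image name
      name="watcharr",
      detach=True,
      restart_policy={"Name": "unless-stopped"},
      ports={"3080/tcp": 3080},  # assumed default port
      volumes={"/opt/watcharr/data": {"bind": "/data", "mode": "rw"}},  # assumed data path
  )
  print(container.name, container.status)
  ```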
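- On the securing-the-API piece, the part that makes it feel more complicated than it should be is the TLS setup, since exposing the Docker API over TCP safely means client certificates. This sketch shows the client side once certs exist (the hostname and cert paths are placeholders), which is roughly the kind of connection Homepage or anything else would need; the Portainer Agent sidesteps all of that, which is why I'm leaning on it for now.
  ```python
  # Sketch: connecting to a TLS-protected Docker API instead of an open
  # tcp://host:2375 socket. The hostname and certificate paths are placeholders;
  # generating the CA, server, and client certs is the part I still need to work through.
  import docker
  from docker.tls import TLSConfig

  tls = TLSConfig(
      client_cert=("/etc/docker/certs/client-cert.pem", "/etc/docker/certs/client-key.pem"),
      ca_cert="/etc/docker/certs/ca.pem",
      verify=True,  # reject the connection if the server certificate doesn't check out
  )

  client = docker.DockerClient(base_url="tcp://docker-vm.lan:2376", tls=tls)

  # Same kind of container info Homepage shows in its Docker widgets.
  for c in client.containers.list():
      print(c.name, c.status)
  ```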