Week in g33k: February 01, 2025
Sean P. McAdam
AI \ LLM
- With all the hype around DeepSeek, I wanted to give it a try, but I wasn’t interested in signing in with my Apple or Google accounts or providing my phone number… so I looked into running the model locally instead (there’s a quick sketch of that after this list). I had run Ollama in the past but didn’t do much with it, so I had to read up on it again:
- It’s FOSS: I Ran Deepseek R1 on Raspberry Pi 5 and No, it Wasn’t 200 tokens/s: I believe I ordered the Raspberry Pi AI HAT because why not… but I don’t have it yet to test.
- It’s FOSS: Run LLMs Locally on Raspberry Pi Using Ollama AI
- One reason I wanted to run an AI model locally was to get something where I could put all of my personal build notes and ask questions against them if I ran into similar problems in the future. That will likely be the next project I start on…
- It’s FOSS: Setting Up PrivateGPT to Use AI Chat With Your Documents
- PrivateGPT: Quickstart
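Since the whole point was to avoid sign-ups, here is a minimal sketch of what running DeepSeek locally looks like from Python once Ollama is installed. It assumes the Ollama service is running on its default port and that the ollama Python client is installed; the model tag below is just an example and depends on what your hardware can handle.

```python
# Minimal sketch: chat with a locally pulled DeepSeek R1 model through Ollama.
# Assumes the Ollama service is running (default: localhost:11434) and the
# `ollama` Python client is installed (pip install ollama). The model tag is
# an example; use whichever tag you actually pulled.
import ollama

MODEL = "deepseek-r1:1.5b"  # example tag; pick the size your hardware can handle

# Pull the model if it isn't already local (no-op if it is).
ollama.pull(MODEL)

# Ask a question, same as the web UI, but with no account or phone number required.
response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "Summarize what Ollama does in two sentences."}],
)
print(response["message"]["content"])
```

Pointing something like this at my own build notes is basically what the PrivateGPT links above are about, so that’s where the next project will probably pick up.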
Docker
- Since I set up my Proxmox install with a VM meant for Docker, I tried to find something simple to run on it as a test. I went with Watcharr, and it seems to have gone well. The VM took a little bit of setup because I wanted to get my nginx configs, certbot, etc. running, but now that it’s all in place, it was worth the work.
- After getting all my Docker instances into Homepage, I wanted to start looking into securing the API instead of leaving it open… I’m going to have to revisit this because it seems like it’s more complicated than it should be, but here I am. Until I can get that secured, I’m using the Portainer Agent where that’s possible.
- Linux Handbook: How to Set Up Remote Access to Docker Daemon [Detailed Guide]: The problem with this guide is that it only seems to cover setting up a single remote host. What if I have multiple? (There’s a rough sketch of one approach after this list.)
- GitHub: portainer / portainer: Support connecting to endpoint via integrated SSH client #431
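Until the API question is sorted out, SSH is one way to reach several Docker hosts without exposing the TCP socket at all. This is only a rough sketch, not what I’m actually running yet: it assumes the Python docker SDK with its SSH extra is installed and that key-based SSH access to each VM already works. The hostnames are placeholders.

```python
# Rough sketch: query several Docker hosts over SSH instead of exposing the
# Docker TCP API. Assumes the `docker` Python SDK plus paramiko are installed
# (pip install "docker[ssh]") and that key-based SSH access to each host works.
# The hostnames below are placeholders, not my real VMs.
import docker

HOSTS = [
    "ssh://docker@docker-vm-01.lan",  # hypothetical hostnames
    "ssh://docker@docker-vm-02.lan",
]

for url in HOSTS:
    client = docker.DockerClient(base_url=url)
    try:
        print(f"--- {url} ---")
        for container in client.containers.list():
            print(f"{container.name}: {container.status}")
    finally:
        client.close()
```

The nice part of the SSH route is that it piggybacks on keys I already manage, which is roughly what the Portainer issue above is asking for as a built-in feature.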