I know for many of us every day is selfhosting day, but I liked the alliteration. Or do you have fixed dates for maintenance and tinkering?

Let us know what you set up lately, what kind of problems you currently think about or are running into, what new device you added to your homelab or what interesting service or article you found.

This post is proudly sent from my very own Lemmy instance, which has been running on my home server for about ten days now. So far, it’s been a very nice endeavor.

  • SmokeyDope@lemmy.world · 2 months ago

    I just spent a good few hours optimizing my LLM rig: disabling the graphical interface to squeeze 150 MB of VRAM back from Xorg, setting my programs’ CPU niceness to the highest priority, and tweaking settings to find memory limits.

    I was able to increase the token speed by half a second while doubling context size. I don’t have the budget for any big VRAM upgrade, so I’m trying to make the most of what I’ve got.
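
    The kind of tweaks described above can be sketched roughly like this, assuming a systemd-based distro and llama.cpp’s llama-server as a stand-in for the inference process (process and target names are illustrative, and all of this needs root on the box):

    ```shell
    # Drop to a text-only target so Xorg releases the VRAM it holds
    sudo systemctl isolate multi-user.target

    # Give the inference server the highest scheduling priority
    # (llama-server is an example process name)
    sudo renice -n -20 -p "$(pgrep -f llama-server)"

    # Bring the desktop back when done
    sudo systemctl isolate graphical.target
    ```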

    I have two desktop computers. One has better RAM + CPU + overclocking but a worse GPU. The other has a better GPU but worse RAM and CPU, and no overclocking. I’m contemplating whether it’s worth swapping GPUs to really make the most of the available hardware. It’s been years since I took apart a PC and I’m scared of doing something wrong and damaging everything. I dunno if it’s worth the time, effort, and risk for the squeeze.

    Otherwise I’m loving my self-hosted LLM hobby. I’ve been very into learning computers and ML for the past year. Crazy advancements, exciting stuff.

  • metaStatic@kbin.earth · 2 months ago

    what’s maintenance? is that when an auto-update breaks everything and you spend an entire weeknight looking up tutorials because you forgot what you did to get this mess working in the first place?

    • daddycool@lemmy.world · 2 months ago

      I know you’re half joking. But nevertheless, I’m not missing this opportunity to share a little selfhosting wisdom.

      Never use auto-update. Always schedule updates and do them manually.

      Virtualize as many services as possible and take a snapshot or backup before updating.

      And last, documentation, documentation, documentation!

      Happy selfhosting Sunday.
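
      The snapshot-before-update routine might look something like this with ZFS underneath and Docker Compose on top (the dataset name is hypothetical):

      ```shell
      # Snapshot first, so a broken update is a one-line rollback
      sudo zfs snapshot tank/services@pre-update-$(date +%F)

      # Then update manually, on your own schedule
      docker compose pull && docker compose up -d

      # If it breaks: sudo zfs rollback tank/services@pre-update-<date>
      ```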

      • tofu@lemmy.nocturnal.garden (OP) · 2 months ago

        I think auto update is perfectly fine, just check out what kind of versioning the devs are using and pin the part of the version that will introduce breaking changes.
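
        Concretely, with Docker Compose that means pinning the tag down to the part that only changes on non-breaking releases, assuming the project follows semver-style tags (the image shown is just an example):

        ```yaml
        services:
          lemmy:
            # "0.19" floats across patch releases but never jumps a
            # breaking version; ":latest" would happily pull one
            image: dessalines/lemmy:0.19
        ```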

        • daddycool@lemmy.world · 2 months ago

          I just like it when things break during scheduled maintenance and I have time to fix them, or the possibility to roll back with minimal data loss, instead of an auto-update forcing me to spend a weeknight fixing it or running a broken system till I have the time.

          • tofu@lemmy.nocturnal.garden (OP) · 2 months ago

            You can have the best of both worlds: scheduled auto-updates at a time that usually works for you.

            With growing complexity, there are so many components to update, it’s too easy to miss some in my experience. I don’t have everything automated yet (in fact, most updates aren’t) but I definitely strive towards it.

            • daddycool@lemmy.world · 2 months ago

              In my experience, the more complex a system is, the more auto updates can mess things up and make troubleshooting a nightmare. I’m not saying auto updates can’t be a good solution in some cases, but in general I think it’s a liability. Maybe I’m just at the point where I want my setup to work without the risk of it breaking unexpectedly and having to tinker with it when I’m not in the mood. :)

    • IronKrill@lemmy.ca · 2 months ago

      I’ve had this happen twice in two weeks since installing Watchtower and have since scheduled it to only run on Friday evening…
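
      For reference, Watchtower takes a cron expression (with a seconds field) for exactly this, so runs can be limited to a Friday-evening window:

      ```yaml
      services:
        watchtower:
          image: containrrr/watchtower
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock
          environment:
            # 6-field cron: at 20:00:00 every Friday
            - WATCHTOWER_SCHEDULE=0 0 20 * * 5
      ```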

  • madeofpendletonwool@lemmy.world · 2 months ago

    Pinepods 0.7.4 is out! So, as the dev, I’m going through new issues and knocking them out. Smart playlists, OIDC logins, and notifications on release are all a thing now on the self-hosted podcast platform! We’re nearing a v1 release with features on par with some of the big-time podcast apps.

  • BruisedMoose@piefed.social · 2 months ago

    After just about a month of hosting some things on a Raspberry Pi 4, I think it’s about time to work on repurposing this mini PC that hasn’t been doing much the last few years and keep growing my services.

    To that end, can anyone point me to a good, thorough guide to getting going with Sonarr? I installed it, but then realized I needed to add a client and Prowlarr and I feel like I just started in the middle.

    • lemmyingly@lemm.ee · 2 months ago

      Search for TRaSH Guides and Servarr. Both have websites that detail how to set up all of the arr apps in whatever fashion you want. I think both have Discord servers too.
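
      A minimal starting point, sketched with the linuxserver.io images and the single shared /data mount the TRaSH Guides recommend (paths and ports are defaults; adjust to taste):

      ```yaml
      services:
        sonarr:
          image: lscr.io/linuxserver/sonarr:latest
          ports: ["8989:8989"]
          volumes:
            - ./config/sonarr:/config
            - /data:/data          # one shared mount so hardlinks work
        prowlarr:                  # indexer manager; feeds Sonarr its indexers
          image: lscr.io/linuxserver/prowlarr:latest
          ports: ["9696:9696"]
          volumes:
            - ./config/prowlarr:/config
        qbittorrent:               # the download client Sonarr hands grabs to
          image: lscr.io/linuxserver/qbittorrent:latest
          ports: ["8080:8080"]
          volumes:
            - ./config/qbittorrent:/config
            - /data:/data
      ```

      The rough wiring: Prowlarr pushes indexers into Sonarr, Sonarr sends grabs to qBittorrent, and both Sonarr and the download client see the same /data path so imports are instant hardlinks instead of copies.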

  • assaultpotato@sh.itjust.works · 2 months ago

    I need to migrate off Docker Desktop for Windows and Storage Spaces but I fear the process will be difficult due to my data volume and the stupidity of Windows. I should never have gone Windows, but I wanted to use Steam Big Picture off the media PC and didn’t want to deal with getting that functional on Linux.

    But Docker Desktop for Windows keeps crashing WSL and bricking the network devices randomly, and also continuously grows memory consumption until the machine reboots. Piece of shit.

    • ikidd@lemmy.world · 2 months ago

      Windows Docker is so bad, I don’t even know why it’s a thing.

      Some good planning might make the migration less painful. I would recommend ZFS or another CoW storage solution under the Docker host so you can do snapshot backups and not have to worry about quiescing databases, etc.
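
      The snapshot-backup idea, sketched with hypothetical dataset and host names — the snapshot is an atomic point-in-time image, which is why running databases don’t need to be stopped first:

      ```shell
      # Atomic snapshot of the dataset backing the Docker volumes
      sudo zfs snapshot tank/docker@nightly

      # Ship it to another box without stopping any containers
      sudo zfs send tank/docker@nightly | ssh backup-host "zfs recv -F backup/docker"
      ```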

      • assaultpotato@sh.itjust.works · 2 months ago

        Yeah, I’m gonna do ZFS or something when I get set up properly again. I’ve got two 16 TB HDDs and Storage Spaces won’t let me pull a drive out :v

        I think I’m gonna have to make a new Storage Space and slowly grow it while shrinking the other, shifting the extra storage budget between the two until the data is on just one of my drives without redundancy. Then I’ll pull the freed drive, dual-boot Ubuntu or something, format it, get everything prepared, and then mount, copy, and start services. After that I’ll go back, kill the old Storage Spaces, and never run Windows for anything meaningful again.

    • L_Acacia@lemmy.ml · 2 months ago

      Try Podman Desktop if you want a GUI and it’s Docker Desktop causing the crashes. You can run Docker images/containers/Kubernetes through it as well as Podman ones.

  • quelsh@programming.dev · 2 months ago

    I migrated my whole native service infrastructure to Docker services this weekend. I prepared for it over the previous weeks, basically looking up information about details I wasn’t sure about. The services were mail (Modoboa), file cloud (ownCloud), and Traccar. I moved to mailcow and Nextcloud, and replaced my Feedly account with Nextcloud News as a bonus. So far I’m pretty happy with it; I had a couple of setbacks but also learned a lot in the process. This was the first time I’ve done something productive with Docker.

  • Little8Lost@lemmy.world · 2 months ago

    Yesterday I managed to successfully host a simple HTML page safely (it’s more of a network test). The path is nginx → OpenWrt → router → internet. Now I only need to:

    • backup
    • set up domain (managing via cloudflare)
    • set up certificates
    • properly document the setup + some guides on stuff that I will repeat

    and then I can throw everything I want on it :D
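
    For the record, the nginx side of serving a single static page can be as small as this (domain and paths are placeholders):

    ```nginx
    server {
        listen 80;
        server_name test.example.org;   # swap in the Cloudflare-managed domain later
        root /var/www/html;             # directory containing index.html
        index index.html;
    }
    ```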

  • non_burglar@lemmy.world · 2 months ago

    Migrating from proxmox to incus, continued.

    • got a manually-built wireguard instance rolling and tested, it’s now “production”
    • setting up and testing backups now
    • going to export some NFS and iSCSI to host video files, to test playback over the network from Jellyfin
    • building ansible playbooks to rebuild instances
    • looking into ansible to add system monitoring, should be easy enough

    Lots of fun, actually!
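
    For anyone curious what that workflow looks like, the day-to-day incus commands are pleasantly terse (instance, snapshot, and path names are examples):

    ```shell
    # New Debian 12 container
    incus launch images:debian/12 jellyfin

    # Snapshot before touching anything
    incus snapshot create jellyfin pre-update

    # Full instance backup as a tarball
    incus export jellyfin /backups/jellyfin.tar.gz
    ```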

    • tofu@lemmy.nocturnal.garden (OP) · 2 months ago

      What’s your motivation for the switch? Second time in a short while I’ve heard about people migrating to incus.

      • non_burglar@lemmy.world · 2 months ago

        I’ve moved to all containers and I’m gradually automating everything. The metaphor for orchestration and provisioning is much clearer in incus than it was in lxd, and makes way more sense than proxmox.

        Proxmox is fine, I’ve used it for going on 8 years now; I’m still using it, in fact. But it’s geared toward a “safe” view of abstraction that makes LXC containers seem like virtual machines, and they absolutely aren’t — they are much, much more flexible and powerful than VMs.

        There are also really annoying deficiencies in proxmox that I’ve taken for granted for a long time as well:

        • horrible built-in resource usage metrics. I’m happy to run my InfluxDB/Grafana stack to monitor, but users should be able to access those metrics locally and natively, especially if they’re going to be exported by the default metrics export anyway.
        • weird hangovers from early Proxmox versions on IO delay. Proxmox is still making users chase down iostat rabbit holes to figure out why io_wait and “IO delay” are not the same metric, and why the root cause is almost always disk, yet Proxmox shows the io_wait stat as if it could be “anything”
        • integration of passthrough devices is a solved problem, even for LXC, yet the bulk of noob questions is about just that. Passthrough is solved on so many platforms; why Proxmox doesn’t offer it as a GUI option for LXC is baffling.
        • no install choices for zfs on root on single disk (why???)
        • etc

        Ultimately, I have more flexibility with a vanilla bookworm install with incus.

        • tofu@lemmy.nocturnal.garden (OP) · 2 months ago

          Thanks a lot for your response! I too was a bit misled by the way Proxmox presents LXCs, but I’m mostly on VMs and haven’t explored LXCs further so far.

          • non_burglar@lemmy.world · 2 months ago

            No worries. And don’t misunderstand: I think Proxmox is great, I’ve simply moved on to a different way of doing things.

  • voklen@programming.dev · 2 months ago

    This week I realised my Mastodon instance was severely out of date because I was using Nix flakes and didn’t auto-update, but now that’s been fixed 😄

  • bananoidandroid@feddit.nu · 2 months ago

    I’ve set up a reverse proxy to try out hosting a few APIs, but I’m curious about best practice and haven’t found any good way to do it. Anyway, I have them running on .NET 9 on Debian, hosted on HTTP ports and reverse-proxied through Apache, which serves them externally with certbot on 443 under some real hostnames. I would really like to host them on HTTPS internally as well, but is there a neat way to cert them without an internal CA service? My experience with self-signed certs is mostly that they force me to trust the server cert in my connection strings, which is also unsafe, so I just don’t bother. Is it worth working on, and what’s the best approach here?

    • rumba@lemmy.zip · 2 months ago

      Non-SSL behind your ingress proxy is professionally acceptable in most circumstances; assuming your network is properly segmented, it’s not really a big deal.

      Self-signing and adding the CA is a bit of a pain in the ass and adds another unnecessary layer of failure in a home network.

      If it really grinds your gears, you could issue yourself a real wildcard cert from Let’s Encrypt, then add DNS names under that wildcard on your local DNS server pointing at internal IPs. But to auto-renew it, you’re going to have to do some pretty decent DNS work.
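
      The wildcard route sketched out, assuming Cloudflare-hosted DNS and the certbot-dns-cloudflare plugin (the credentials file holds an API token; domain and paths are illustrative):

      ```shell
      sudo certbot certonly \
        --dns-cloudflare \
        --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
        -d '*.home.example.org'
      # Renewal is handled by certbot's timer; the DNS-01 challenge
      # needs no inbound ports open at all
      ```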

      To be honest, I’ve scrapped most of my reverse proxies for a nice Tailscale network. Fewer moving parts, encrypted end-to-end.

      • bananoidandroid@feddit.nu · 2 months ago

        Thanks! I initially considered going the wildcard route until I saw the workload involved for my DNS host! There do seem to be auto-renewal programs for the largest hosts out there, but I’m trying to support my local businesses, so it’s unfortunately out of my scope at the moment. I’ll check out your suggestion and see what Tailscale has to offer!

  • vfsh@lemmy.blahaj.zone · 2 months ago

    I spent two hours last night beating myself over the head with RAM sticks. I got an e-wasted server that had the alarm misconfigured; I figured I’d upgrade it and put in a valid configuration since it was just off my size. Slapped in some matching-size sticks and it wouldn’t boot. It took me embarrassingly long to realize that the speeds weren’t the same, and that the server really cared about the speeds matching — more than it cared about the sizes matching, incidentally.

    I work in IT; that should have been the first fuckin thing I checked, smh.

    • almost1337@lemm.ee · 2 months ago

      I remember when I worked in a data center and there was a custom server order that needed something like 64 sticks per server, and procurement didn’t bother to make sure that we had sets that were the same speed, timing, or brand. Thankfully I caught it before we wasted a ton of time troubleshooting.

  • refreeze@lemmy.world · 2 months ago

    I just set up wanderer and workout-tracker. Along with installing gadgetbridge on my phone, I now have a completely self hosted fitness/workout stack with routes, equipment tracking, heatmaps, general health metrics like HRV, heart rate, etc through my Garmin watch, without having Garmin Connect installed. Awesome!

    • bluegandalf@lemmy.ml · 2 months ago

      Wait, is that possible? I thought Gadgetbridge didn’t work with Garmin! Need to check this out. Thanks for the inspiration!

    • tofu@lemmy.nocturnal.garden (OP) · 2 months ago

      That sounds so cool! I’m not using any tracking/nav devices other than my phone, but currently my routes just stay local without any kind of management.

  • dishpanman@lemmy.ca · 2 months ago

    I started hosting audiobookshelf since Jellyfin was pretty clunky for audiobooks.

  • Mubelotix@jlai.lu · 2 months ago

    Had the intention of making a hidden Tor version of all my websites, but I’m sick.