101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2048 characters.

#rke2

Mika: I have finally caved in and dove into the rabbit hole of #Linux Containers (#LXC) on #Proxmox during my exploration of how to split a GPU across multiple servers and... I totally understand now why people's Proxmox setups are made up exclusively of LXCs rather than VMs lol - it's just so pleasant to set up and use, and, superficially at least, very efficient.

I now have a #Jellyfin and #ErsatzTV setup running on LXCs with working iGPU passthrough of my server's #AMD Ryzen 5600G APU. My #Intel #ArcA380 GPU has also arrived, but I'm prolly gonna hold off on adding that until I decide which node I should add it to and schedule the shutdown, etc. In the future, I might even consider exploring (re)building a #Kubernetes #RKE2 cluster on LXC nodes instead of VMs - and whether that's viable or perhaps even better.

Anyway, I've updated my #Homelab Wiki with guides pertaining to LXCs, including creating one, passing through a GPU to multiple unprivileged LXCs, and adding an #SMB share for the entire cluster and mounting it, again on unprivileged LXC containers.

🔗 https://github.com/irfanhakim-as/homelab-wiki/blob/master/topics/proxmox.md#linux-containers-lxc
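For context, iGPU passthrough to an unprivileged LXC usually comes down to a few lines in the container's Proxmox config plus group permissions inside the container. A rough generic sketch of one common approach (not necessarily what the linked wiki does; the container ID and mount paths are placeholders):

```sh
# On the Proxmox host, append to the container's config (CT ID 101 is a placeholder)
cat >> /etc/pve/lxc/101.conf <<'EOF'
# Allow the DRI character devices (major 226) and bind-mount /dev/dri into the CT
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
# Bind-mount an SMB share that is already mounted on the host (paths are placeholders)
mp0: /mnt/pve/media,mp=/mnt/media
EOF
```

Inside an unprivileged container the render/video group IDs generally won't line up with the host's, so the remaining work is ID mapping or loosening permissions on the render node - which is exactly the fiddly part a wiki guide helps with.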
Mika: #FediHire #GetFediHired 🥳

I'm a #Programmer/#SoftwareEngineer. I'm most fluent in #Python and have some basics in #Java and #C++, but I'm also taking up new languages like #Javascript and others in my eternal journey of getting better and minimising the impostor syndrome that befalls pretty much all programmers (I feel). I'm also very experienced in #CloudNative/#DevOps technologies, and was the one devising solutions and maintaining infrastructure in a fast-paced startup environment at my previous employment.

I'm passionate about what I do, and those who know me here or IRL would know that I'm always yapping about the things I'm learning or working on - I love discussing them, and I love helping people out - esp those in the same boat as me.

This passion has led me to write and maintain tons of #FOSS projects like Mango (https://github.com/irfanhakim-as/mango): a content distribution framework based on #Django for #Mastodon and #Bluesky that powers various bots of mine like @lowyat@mastodon.social and @waktusolat@mastodon.social; Charts (https://github.com/irfanhakim-as/charts): a #Helm chart repository for an easy and reproducible deployment strategy for all my projects and everything else I self-host on my #homelab; and Orked (https://github.com/irfanhakim-as/orked): O-tomated #RKE2 distribution, a collection of comprehensively documented scripts I wrote to enable everyone to self-host a production-grade #Kubernetes cluster for absolutely free in their homes.

I'm based in Malaysia, but I'm open to just about any on-site, hybrid, or remote job opportunity anywhere. In the meantime though, I'm actively looking for a job in countries like #Japan and #Singapore, in a bid for a desperate lifestyle change. I've linked below my Portfolio (https://gitlab.com/irfanhakim/portfolio) - which you, too, could self-host yourself! - for those who'd wish to connect or learn more about me. Thank you ❤️

🔗 https://l.irfanhak.im/resume
Mika: #Rancher/#RKE2 #Kubernetes cluster question - I don't need Rancher, but in the past, with my RKE2 clusters, I'd normally deploy Rancher on a single VM using #Docker just for the sake of having some sort of UI for my cluster(s) if need be - with this setup, I'm relying on importing the downstream (RKE2) cluster(s) into said Rancher deployment. That worked well.

This time round though, I tried deploying Rancher on the cluster itself, instead of an external VM, using #Helm. Rancher's pretty beefy and heavy to deploy even with a single replica, and from my limited testing I found that it's easier to deploy when your cluster's pretty new and doesn't have much running on it just yet.

What I'm curious about tho are these errors - my cluster's fine, and I'm not seeing anything wrong with it, but ever since deploying Rancher a few days ago, I'm constantly seeing these `Liveness/Readiness probe failed` errors on all 3 of my Master nodes (periodically most of the time, not all at once) - the same error also seems to include `etcd failed: reason withheld`. What does it mean, and how do I "address" it?
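If I understand the apiserver health endpoints correctly, the `reason withheld` part just means the failing check's details are hidden from unauthenticated callers; querying the verbose endpoints with cluster credentials shows which check is actually failing. A rough diagnostic sketch, assuming default RKE2 paths (the pod name is a placeholder):

```sh
# On one of the server (master) nodes, as root
export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

# Which health checks does the apiserver itself say are failing?
kubectl get --raw='/livez?verbose'
kubectl get --raw='/readyz?verbose'

# Look at the probe events on the control-plane static pods
kubectl -n kube-system get pods -o wide | grep -E 'etcd|kube-apiserver'
kubectl -n kube-system describe pod <etcd-or-apiserver-pod-name>

# etcd probe flaps on otherwise-healthy clusters are often slow disk fsync or CPU
# pressure from a heavy workload (like Rancher); the service logs usually hint at it
journalctl -u rke2-server --since "1 hour ago" | grep -iE 'etcd|leader|slow|timeout'
```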
Mika: I'm wondering right now what to do - I'm outside rn and so fucking eager (and extremely worried) to inspect it and see just what caused it when I'm home. I did notice the power plug (attached to the wall/extension cord) wasn't seated tightly... could that have been the cause?

It's a #B550 ITX board with a 500W Flex PSU - it doesn't have any GPU, just an #AMD Ryzen 5 5600G APU. Besides that, just 2 sticks of DDR4 RAM (64GB) and 2x 1TB NVMe SSDs. It all depends on what I find later when I inspect it tho, but I'm wondering if I should keep the node in my #homelab with a replaced PSU (I found a 600W 80 Plus Platinum rated Flex PSU by FSP), or ditch it completely and replace it with an mATX build instead - #AsRock B550M Pro4 mATX board, the same APU (or prolly my spare Ryzen 7 3700X instead), and a brand new #CoolerMaster or #Corsair ATX PSU (80 Plus Gold rated prolly, cos ATX PSUs are somehow more expensive than Flex ones?).

The latter route is def more expensive, but idk if running a #Proxmox node with an #RKE2 #Kubernetes cluster 24/7 in a mini ITX setup is the most brilliant idea...
Mika: I've successfully migrated my #ESXi #homelab server over to #Proxmox after a little bit of (unexpected) trouble - I haven't really even moved all of my old services or my #Kubernetes cluster back into it, but I'd say the part I was expecting to be the most challenging, #TrueNAS, has not only been migrated but also upgraded from TrueNAS Core 12 to TrueNAS Scale 24.10 (HUGE jump, I know).

Now then, I'm thinking about the best way to move forward now that I have 2 separate nodes running Proxmox. There are multiple things to consider. I suppose I could cluster 'em up so I can manage both of them under one roof, but from what I can tell, clustering on Proxmox works much like Kubernetes clusters such as #RKE2 or #K3s, whereby you'd want at least 3 nodes, if not just 1. I could build another server - I have the hardware parts for it - but I don't think I'd want to take up more space than I already do and have 3 PCs running 24/7.

I'm also thinking of possibly joining my 2 RKE2 clusters (1 on each node) into 1... but I'm not sure how I'd go about it having only 2 physical nodes. Atm, each cluster has 1 Master node and 3 Worker nodes (VMs ofc). With only 2 physical nodes, I'm not sure how I'd spread the Master/Worker nodes across the 2. Maintaining only 1 (joined) cluster would be helpful though, since it'd solve my current issue of not being able to publish services online from one of them using #Ingress "effectively" - I can only port forward the standard HTTP/S ports to a single endpoint (which means the secondary cluster has to use a non-standard port instead, i.e. `8443`).

This turned out pretty long - but yea... any ideas on the "best" way of moving forward if I only plan to retain 2 Proxmox nodes - Proxmox wise, and perhaps even Kubernetes wise?
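On the Proxmox quorum point: a two-node cluster can keep quorum with an external corosync QDevice - a small third machine that only casts a tie-breaking vote - which is the usual answer when you don't want a third full node. A rough sketch, with the cluster name and IPs as placeholders:

```sh
# Node 1: create the cluster
pvecm create homelab

# Node 2: join it, pointing at node 1's IP
pvecm add 192.168.1.10

# On the external tie-breaker host (any small always-on box, not a PVE node)
apt install corosync-qnetd

# On the Proxmox nodes: install the qdevice client, then register the tie-breaker once
apt install corosync-qdevice
pvecm qdevice setup 192.168.1.50

# Confirm expected/total votes
pvecm status
```

On the Kubernetes side, a single joined cluster spread across only 2 hosts still only tolerates losing whichever host holds the etcd majority, so the quorum trade-off doesn't really go away - it just moves from Proxmox to etcd.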
Mika: For some reason, I feel like the #RKE2 cluster on my #Proxmox node is more fragile than the cluster on my #ESXi node. On the latter, I can simply shut down and boot the nodes however I want, and everything seems to just get into a working state on its own. On the former, things seem to come back up in a non-running state with various statuses like `Unknown`, `CrashLoopBackOff`, etc. - some get solved by deleting/restarting the pods, though some require me to run the `killall` script and reboot the entire node. Pretty weird, when both clusters were deployed/configured the exact same way and run the exact same version.
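A minimal recovery/diagnosis sketch for a node that comes back in that state, assuming default RKE2 paths (the "killall" script ships with RKE2 as `rke2-killall.sh`; the pod name is a placeholder):

```sh
# On the affected node, as root
export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

# What exactly is unhealthy after boot?
kubectl get nodes -o wide
kubectl get pods -A | grep -vE 'Running|Completed'

# Gentle nudge: stuck pods get recreated by their controllers
kubectl -n kube-system delete pod <stuck-pod-name>

# Heavier hammer: kill everything RKE2 started, then restart the service
/usr/local/bin/rke2-killall.sh       # default install location
systemctl restart rke2-server        # rke2-agent on worker nodes

# Then compare boot-time logs between the fragile and the healthy cluster -
# slow virtual disks or VMs racing each other at startup tend to show up here
journalctl -u rke2-server -b --no-pager | grep -iE 'error|failed|timeout'
```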
Mika: Twice now, my secondary #RKE2 cluster running on my #Proxmox node has given me a bunch of unhelpful errors, mostly to do with #Longhorn, that are preventing my services from running 😴

Maybe it has to do with one of my SSDs, which Proxmox shows as having passed the #SMART test and being "healthy", yet it reports a `Media and Data Integrity Errors` value of `609`, which I assume is definitely concerning.
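That counter can be read straight from the drive to cross-check the Proxmox UI, and Longhorn's own view of the volumes usually narrows things down. A short sketch assuming the `nvme-cli` and `smartmontools` packages and Longhorn's default namespace (`/dev/nvme0` is a placeholder):

```sh
# Raw NVMe health counters - "media_errors" is the same counter Proxmox reports,
# and a non-zero, growing value generally does point at a failing drive
nvme smart-log /dev/nvme0
smartctl -a /dev/nvme0

# Longhorn's side of the story: degraded/faulted volumes and unhealthy pods
kubectl -n longhorn-system get volumes.longhorn.io
kubectl -n longhorn-system get pods | grep -vE 'Running|Completed'
```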
Scott Williams 🐧: Yesterday, I deployed a #Rancher #RKE2 #Kubernetes cluster with #Cilium and removed kube-proxy in the process. Maybe I'm behind the times, but it's the first time I've done Cilium (used to Flannel, Calico, Canal, etc.). Looking forward to a simpler and faster Kubernetes!
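For anyone curious what that combination typically looks like on RKE2, a minimal sketch assuming default paths (the apiserver address is a placeholder, and this isn't necessarily how Scott configured his cluster):

```sh
# /etc/rancher/rke2/config.yaml on the server nodes - set before starting rke2-server
cat >/etc/rancher/rke2/config.yaml <<'EOF'
cni: cilium
disable-kube-proxy: true
EOF

# Tell the bundled rke2-cilium chart to take over kube-proxy's duties
mkdir -p /var/lib/rancher/rke2/server/manifests
cat >/var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml <<'EOF'
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    kubeProxyReplacement: true   # older Cilium chart versions use "strict" here
    k8sServiceHost: 192.168.1.10 # reachable apiserver address (placeholder)
    k8sServicePort: 6443
EOF

systemctl restart rke2-server
```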
Scott Williams 🐧: Hello, fresh #Rancher #RKE2 cluster. #Kubernetes