101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2,048 characters.

Server stats: 496 active users

#esxi

0 posts · 0 participants · 0 posts today
The New Oil
Fake #KeePass password manager leads to #ESXi #ransomware attack
https://www.bleepingcomputer.com/news/security/fake-keepass-password-manager-leads-to-esxi-ransomware-attack/
#cybersecurity #FOSS #malware
gyptazy
Want to get rid of the license costs of your #VMware environment? Switching to #Proxmox and looking for enterprise features like DRS? #ProxLB (open source) has you covered!
With ProxLB you extend your Proxmox cluster with DRS-like features, including affinity and anti-affinity support, maintenance mode, and soon also power management (DPM-like) and automated security patching!
https://github.com/gyptazy/ProxLB
#foss #debian #proxmoxve #esx #esxi #vsphere #homelab #enterprise #virtualization #kvm
The New Oil
Hackers exploit #VMware #ESXi, #Microsoft #SharePoint zero-days at #Pwn2Own
https://www.bleepingcomputer.com/news/security/hackers-exploit-vmware-esxi-microsoft-sharepoint-zero-days-at-pwn2own/
#cybersecurity
The New Oil
New #VanHelsing #ransomware targets #Windows, #ARM, #ESXi systems
https://www.bleepingcomputer.com/news/security/new-vanhelsing-ransomware-targets-windows-arm-esxi-systems/
#cybersecurity
LinuxNews.de
You can talk down #opensource all you want, but I have never achieved an uptime like this on any #esxi host!
#uptimeporn #uptime #kvm #virtualmachine
The New Oil
Over 37,000 #VMware #ESXi servers vulnerable to ongoing attacks
https://www.bleepingcomputer.com/news/security/over-37-000-vmware-esxi-servers-vulnerable-to-ongoing-attacks/
#cybersecurity
Sascha Stumpler
Check the alarms on all ESXi hosts via PowerShell: http://dlvr.it/TJKSbF via PlanetPowerShell
#PowerShell #ESXi #vCenter #Coding
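(The linked article uses PowerShell/PowerCLI; purely as a rough point of comparison, here is a minimal pyVmomi sketch in Python that walks the hosts known to a vCenter and prints their triggered alarms. The hostname and credentials are placeholders, and this is not the article's script.)

```python
# Hedged sketch, not the linked article's PowerCLI code: list triggered alarms
# on every ESXi host via pyVmomi. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Container view over all HostSystem objects in the inventory
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for state in host.triggeredAlarmState or []:
            print(f"{host.name}: {state.alarm.info.name} ({state.overallStatus})")
    view.DestroyView()
finally:
    Disconnect(si)
```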
Mika
That's so freaking weird lol... on one of my #Proxmox nodes, I had started a test VM (#RockyLinux/#RHEL), and I was installing a single package on it, and for whatever reason that ended up powering off ALL other VMs on that Proxmox node other than that one test VM.
I've never had this happen before in my #homelab, on Proxmox or #ESXi, but it's incredibly concerning for sure lol. I have some critical stuff on it too, like my #TrueNAS server, some of my #Kubernetes nodes, etc., and to have them just... power off like that without any alerts or logs explaining it, not in the GUI anyway, is insane.
ck 👨‍💻
A new version of check_esxi_hardware, an #opensource monitoring plugin to monitor the hardware of Broadcom #ESXi servers, was just released.
The latest version improves exception handling for the pywbem Python module and also adds HTTP exception handling.
To ensure backward compatibility for users on both older and newer pywbem versions, the #monitoring plugin now requires the "packaging" Python module.
More details in the blog post. 👇
https://www.claudiokuenzler.com/blog/1473/check-esxi-hardware-20250221-release-pywbem-exception-improvements
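(As an aside on that backward-compatibility point: a plugin can branch on the installed pywbem version with the "packaging" module roughly like this. An illustrative sketch only, not check_esxi_hardware's actual code.)

```python
# Illustrative only, not check_esxi_hardware's actual code: gate behaviour on
# the installed pywbem version using the "packaging" module.
import pywbem
from packaging.version import Version

if Version(pywbem.__version__) >= Version("1.0.0"):
    print("newer pywbem detected: use its current exception classes")
else:
    print("older pywbem detected: fall back to the legacy exception handling")
```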
bazcurtis
I have an old NUC 11 that I think is failing. I run ESXi on it, the free version. It has 32GB of RAM, 1x 1TB SSD and 1x 2TB SSD. Can anyone recommend a good replacement?
I am sure a lot of people will also recommend Proxmox over ESXi, but I have to run some images that are ESXi-only.
Any suggestions would be most welcome.
#ESXi #IntelNUC #NUC #Proxmox
Alexander Bochmann
Always fun when you're doing the same thing as always, and suddenly a new problem appears...
In this case: a Fortigate VM cluster on VMware. There's a dvSwitch portgroup for the HA network with the recommended configuration for such a setup, connected to a dedicated network adapter in each VM.
The Fortigates start talking on their HA network, and after a couple of minutes, both dvSwitch ports go into the blocked state (not at the same time). I have rarely even seen that happen at all? And not with any of the other Fortigate VM clusters we run on our infrastructure?
No idea what's up there, and I have not found any events that shed light on a reason, though ESXi logs do say the dvSwitch port is being blocked. Thanks, that's great?
Looks like some network tracing is in my near future...
#vmware #esxi #dvswitch
(Note: I do know all about the shit that Broadcom is pulling, and if I was in a position to migrate our platform to a different hypervisor, I would. No need to tell me.)
Mika
One thing I find surprisingly difficult to determine is just how much storage space I have left to allocate to my VMs on #Proxmox. On #ESXi, this is displayed quite clearly on the main dashboard of the web interface, and the used/allocated capacity shown is accurate, matching my manual calculations of how much space I've allocated to each disk... but on Proxmox, I'm not quite sure where I'd find this?
There seem to be several places that show some form of storage space, but none of them seem to be what I'd expect. For example, I have a 100GB disk from which I've assigned a total of 30GB of space to 3 VMs, so I'd expect somewhere I can check that I have 70GB left to supply to my VMs... but I'm not seeing anything like that at all. Instead, what I could find is something that tells me I have ~90GB of space left rather than 70.
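(The gap is likely provisioned size versus actual usage: Proxmox's storage views report what thin-provisioned disks actually consume, not what has been allocated. A hedged sketch of tallying provisioned VM disk sizes against the datastore's reported usage via the Proxmox API with the proxmoxer library; the host, node, and storage names are placeholders, and this is not an official Proxmox tool.)

```python
# Hedged sketch (placeholders throughout, not an official Proxmox tool):
# compare the total *provisioned* size of VM disks on a node with the
# *used/total* space the datastore itself reports.
import re
from proxmoxer import ProxmoxAPI

px = ProxmoxAPI("pve.example.com", user="root@pam",
                password="secret", verify_ssl=False)
node, storage = "pve", "local-zfs"

provisioned_gib = 0.0
for vm in px.nodes(node).qemu.get():
    cfg = px.nodes(node).qemu(vm["vmid"]).config.get()
    for key, val in cfg.items():
        # disk entries look like "local-zfs:vm-100-disk-0,size=32G"
        if re.match(r"^(scsi|sata|virtio|ide)\d+$", key):
            m = re.search(r"size=(\d+)([MGT])", str(val))
            if m:
                factor = {"M": 1 / 1024, "G": 1, "T": 1024}[m.group(2)]
                provisioned_gib += int(m.group(1)) * factor

status = px.nodes(node).storage(storage).status.get()
print(f"provisioned to VM disks: {provisioned_gib:.1f} GiB")
print(f"datastore used/total:    {status['used'] / 2**30:.1f} / "
      f"{status['total'] / 2**30:.1f} GiB")
```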
Mika
#Proxmox clustering question - I plan to set up a cluster with 2 nodes (A and B) and a QDevice (Q). I don't plan to add shared storage, and would prefer to go the replication route with local disks instead for "HA".
As of right now, nodes A and B have a somewhat similar storage setup (somewhat, because capacity may be just slightly different due to different SSDs) - node A has a ZFS mirror pool with 2x 1TB disks (1x NVMe from vendor 1, 1x SATA from vendor 2). Node B has a ZFS mirror pool with 2x 1TB disks (2x NVMe from vendor 1).
I suspect that, with this setup, my plan for replication should work just fine. However, I do plan on adding another disk, 1x 1TB SATA from vendor 3, to node A. I've never done this on Proxmox before, only on #ESXi, so I'm not too sure what the disk format will be (or rather, what kind of setup I should pick for it) - but I imagine it has to be a single-disk ZFS pool, separate from the mirror pool?
The idea is that some VMs will be on the ZFS mirror pool, while some will be on that additional disk (on node A) - some might even be a combination of both (i.e. a VM with its primary disk on the ZFS mirror, but also a secondary disk on the additional disk). With that assumption, how would VM replication work in said cluster, or will it just not?
Would appreciate any insights on this as I try and devise a "plan" for my #homelab. At the moment, I've set up node A but am in the process of migrating some VMs over from node B, as I intend to empty and reinstall Proxmox on node B to prepare it to join node A in a cluster.
Mika
I've successfully migrated my #ESXi #homelab server over to #Proxmox after a little bit of (unexpected) trouble - I haven't really even moved all of my old services or my #Kubernetes cluster back into it, but I'd say the most challenging part I was expecting, which is #TrueNAS, has not only been migrated but also upgraded from TrueNAS Core 12 to TrueNAS Scale 24.10 (HUGE jump, I know).
Now then, I'm thinking about the best way to move forward, now that I have 2 separate nodes running Proxmox. There are multiple things to consider. I suppose I could cluster 'em up, so I can manage both of them under one roof, but from what I can tell, clustering on Proxmox works the same way as Kubernetes clusters like #RKE2 or #K3s, whereby you'd want at least 3 nodes (or otherwise just 1). I could build another server, I have the hardware parts for it, but I don't think I'd want to take up more space than I already do and have 3 PCs running 24/7.
I'm also thinking of possibly joining my 2 RKE2 clusters (1 on each node) into 1... but I'm not sure how I'd go about it having only 2 physical nodes. At the moment, each cluster has 1 master node and 3 worker nodes (VMs, of course). Having only 2 physical nodes, I'm not sure how I'd spread the master/worker nodes across the 2. Maintaining only 1 (joined) cluster would be helpful though, since it'd solve my current issue of not being able to use one of them to publish services online using #Ingress "effectively", since I can only port forward the standard HTTP/S ports to a single endpoint (which means the secondary cluster has to use a non-standard port instead, i.e. 8443).
This turned out pretty long - but yeah... any ideas on the "best" way of moving forward if I only plan to retain 2 Proxmox nodes - Proxmox-wise, and perhaps even Kubernetes-wise?
Mika
Man, I've probably hard-rebooted my #Proxmox server close to or over 10 times now - all because, somehow, after adding a PCI device (passthrough), particularly a SATA card, to my #TrueNAS VM, each time I boot that VM up Proxmox just... hangs forever and becomes inaccessible despite still "running".
The error it spits out is:

```
Pool 'rpool' has encountered an uncorrectable I/O failure and has been suspended
```

This move away from #ESXi is a lot more painful than I imagined.
RE: https://sakurajima.social/notes/a1vvx0jo6u
Mika
After having run a #Proxmox node for many months now, I'm convinced, and am in the process of migrating my main #homelab server, which was previously running #VMware #ESXi, over to Proxmox.
I've built it and installed Proxmox, but for an unknown (and definitely concerning) reason, even though I've yet to actually migrate more than a couple of VMs over (currently just a testing phase to figure things out), I've already had to hard reboot the new Proxmox server twice in the span of an hour or so.
Once right after I had finished migrating several disk backups over to the (new) server over SSH, and another time right after shutting down a VM. In both cases, the server hardware was still on, but neither the web interface nor the server itself (via ping or SSH) was reachable. Entering anything on the keyboard, via its own display (so not remotely), did nothing - like it just froze.
No idea wth is causing it.
Mika
I'm about to migrate my server physically (moving to a new case) and also software-wise (ditching #VMware #ESXi for #Proxmox), and ngl I'm kinda nervous about it lol, especially the physical part. Never built in a #Jonsbo or a server-minded PC case before.
gyptazy
Virtualization and Proxmox were trending topics this year, and an opportunity for #opensource projects to rise! With #ProxLB you can get features like #DRS (known from #VMware) in #Proxmox, plus several other features that are still missing in Proxmox. Open-source products give us the ability to extend them to our needs.
Project: https://github.com/gyptazy/ProxLB
#virtualization #vm #virtualmachine #opensource #homelab #python #fediverse #vsphere #esxi #alternatives #free #storage #nfs #community
ck 👨‍💻
A new version of check_esxi_hardware, an #opensource #monitoring plugin to monitor the hardware of VMware ESXi servers, is available.
The newest release fixes a #Python deprecation warning. More importantly, the new version removes support for the legacy, now-EOL Python 2 and pywbem 0.7.0.
As the purpose of check_esxi_hardware is to query the #CIM server on an #ESXi host - which is due to be removed from ESXi - this may be the final release of the plugin. 😿
https://www.claudiokuenzler.com/blog/1455/check_esxi_hardware-20241129-python2-support-removed
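(For anyone unfamiliar with what such a plugin does under the hood, querying an ESXi host's CIM server with pywbem looks roughly like this. A minimal sketch with placeholder host and credentials, not the plugin's actual code.)

```python
# Minimal sketch, not check_esxi_hardware itself: enumerate hardware sensor
# instances from an ESXi host's CIM server. Host and credentials are placeholders.
import pywbem

conn = pywbem.WBEMConnection(
    "https://esxi01.example.com",
    creds=("root", "secret"),
    default_namespace="root/cimv2",
    no_verification=True,  # lab only: skip TLS certificate checks
)
for sensor in conn.EnumerateInstances("CIM_NumericSensor"):
    print(sensor["ElementName"], sensor["HealthState"])
```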
Mika
I've been performing some migration tests before fully migrating away from #ESXi to #Proxmox - and while the built-in migration tool (on Proxmox, from ESXi) is nice, my god is it terrible for migrating huge VMs.
I tested it many times and it was fine for all my small-to-medium VMs, but when I was testing migrating a VM with multiple disks, or even a single but huge and filled-up one, one time it crashed my browser, and another time it straight up froze my entire #Linux #KDE Plasma desktop PC for over an hour, until I decided to just hard reboot the damn thing.
So yeah - how this is even possible, I'm not sure - I'd think the "taxing" part should only be on the Proxmox/ESXi side of things and not on the client end like my PC, but that's just how it is. I'll rely ONLY on local migrations from now on (like what I wrote on my wiki, linked below), but I still haven't been successful on that end when it comes to EFI-based VMs (BIOS-based ones work perfectly).
🔗 https://github.com/irfanhakim-as/homelab-wiki/blob/master/topics/proxmox.md#manual