I wrote up some notes on how to approach I/O and storage benchmarks in RFPs. I normally don't post here about updates to my digital garden, but I think this page is tidy and useful.

Google Cloud Managed Lustre, a managed high-performance parallel file system service, announced by Google and DDN
https://www.admin-magazine.com/News/Google-Cloud-Managed-Lustre-Now-Generally-Available?utm_source=mam
#HPC #EXAscaler #GoogleCloud #data #simulation #research #DDN
I am a sucker for photos of cool #HPC infrastructure, and here is a dense GB200 NVL72 cluster going up somewhere in Canada (I think). Impressive to see this many racks in a row; the DC must have facility water, which is still uncommon in hyperscale. Source: https://www.linkedin.com/posts/5cai_heres-a-peek-behind-the-curtain-at-the-early-activity-7350949703842189313-X3p_?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAU98N0BKzpkHNnW4i2sDGnIDTwgK7pQHXc
#JeRecrute ("I'm hiring"), well, not me but CNRS/Inria, to work on #Guix in #HPC with a dream team
in the public service.
https://recrutement.inria.fr/public/classic/fr/offres/2025-09146
Some #HPC fun for the week ahead:
a quiz on HPC leadership topics
Europe’s first federated AI platform, #Fact8ra, has picked #openSUSE as a core component of its sovereign AI stack! Eight #EU nations recognizing the benefits of sovereign #opensource #AI #HPC https://news.opensuse.org/2025/07/11/sovereign-ai-platform-picks-opensuse/
Want to join a dream team
to work with #Guix in #HPC? Let’s talk!
https://recrutement.inria.fr/public/classic/fr/offres/2025-09146
Shorter @glennklockwood: Academics care about knowledge creation and sharing, and when building computing systems tend to build them to maximize these values. Commercial enterprises care more about profits and build systems to maximize these. Are these approaches incompatible? No, but the window of overlap in #HPC and #AI between academia and hyperscale commerce was brief and may have closed. Also, you have to decide how to spend your time in this world, and he is moving on. https://mast.hpc.social/@glennklockwood/114832950520089114
In the few days I have between jobs, I wanted to share an unvarnished perspective on what I've learned after spending three years working on supercomputing in the cloud. It's hastily written and lightly edited, but I hope others find it interesting: https://blog.glennklockwood.com/2025/07/lessons-learned-from-three-years-in.html
So now that parallel processing in purrr is officially out I wanted to test mirai on an #HPC. So far, it seems to work relatively painlessly. Just set up your PBS config and mirai::daemons(n) will spin up n jobs.
Now, is it optimal for the number of "workers" to equal the number of jobs? If each job has 48 CPUs, then each job could easily host several workers itself. Does anyone know if it's possible to run something like "100 workers distributed across 3 remote jobs"?
#RStats
https://www.tidyverse.org/blog/2025/07/purrr-1-1-0-parallel/
Embedding digital-skills training in the university curriculum is what will help spread the use of #HPC in students' futures.
But also: how the machines have been designed matters for making them more accessible to new users (Snellius vs ARCHER2).
We didn't learn to use the command line in one day.
#stepup2025
I’ve also always been fascinated with, and know nothing about, #hpc infra and programming. It just seems real cool. When I was a kid, my friends and I always thought it would be cool to have a Beowulf cluster of DEC Alphas to... compile kernels or some other meaningless stuff.
I still think that’s cool, and just reading some docs for things like #openmp sounds NEAT.
Also a picture of the redwoods and ferns and moss because I miss home.
The new #GraceHopper #Superchip = faster, smarter, more efficient #OBS pipelines for #Armv9:
No more memory copy overhead
Unified CPU-GPU memory
Optimized #Tumbleweed builds
This is #opensource innovation at its best! #openSUSE #HPC #Linux #nvidia #SUSE #arm https://news.opensuse.org/2025/06/20/grace-hopper-to-boost-tw-armv9-builds/
NERSC just announced that IBM and VAST have been selected as the storage providers for the upcoming Doudna #HPC system. Strong statement since NERSC had long invested in Lustre (scratch) and GPFS (community). Very cool to see NERSC not settling for the status quo.
https://www.nersc.gov/news-and-events/news/doudna-storage-solutions
Scott Atchley, who co-keynoted #ISC25, posted a really meaningful response to my ISC25 recap blog post on LinkedIn (https://www.linkedin.com/posts/scottatchley_isc25-olcf-frontier-activity-7345786995765395457-lGoq). He specifically offered additional perspective on the 20 MW exascale milestone and the pitfalls of the Ozaki scheme. It's short but very valuable context.
@hpcnotes My 2c is that #HPC has *not* changed, mostly (which can be viewed as good or bad) apart from normal technology evolution, but the way that many large companies make money from activities that involve computing has changed and most of the consequences of that change are still being worked out. One major consequence is massive energy usage that is pushing back climate goals - not an #HPC topic per se, but suppressing progress on HPC energy usage. And #HPC still cannot afford #AI scales.
Has HPC Changed Forever? And What Is Next?
Looking forward to giving this keynote talk at the HPDC conference at Notre Dame later this month