101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2,048 characters.

Server stats: 482 active users

#hpc

3 posts · 3 participants · 0 posts today

I am a sucker for photos of cool #HPC infrastructure, and here is a dense GB200 NVL72 cluster going up somewhere in Canada (I think). Impressive to see this many racks in a row; the DC must have facility water, which is still uncommon in hyperscale.

Source: linkedin.com/posts/5cai_heres-

Shorter @glennklockwood: Academics care about knowledge creation and sharing, and when building computing systems they tend to build them to maximize those values. Commercial enterprises care more about profits and build systems to maximize those. Are these approaches incompatible? No, but the window for overlap in #HPC and #AI between academia and hyperscale commerce was brief and may have closed. Also, you have to decide how to spend your time in this world, and he is moving on. mast.hpc.social/@glennklockwoo

HPC.social Mastodon · Glenn K. Lockwood (@glennklockwood@mast.hpc.social): In the few days I have between jobs, I wanted to share an unvarnished perspective on what I've learned after spending three years working on supercomputing in the cloud. It's hastily written and lightly edited, but I hope others find it interesting: https://blog.glennklockwood.com/2025/07/lessons-learned-from-three-years-in.html #HPC

In the few days I have between jobs, I wanted to share an unvarnished perspective on what I've learned after spending three years working on supercomputing in the cloud. It's hastily written and lightly edited, but I hope others find it interesting: blog.glennklockwood.com/2025/0

blog.glennklockwood.com · Lessons learned from three years in cloud supercomputing: I recently decided to leave Microsoft after having spent just over three years there, first as a storage product manager, then as a compute ...

So now that parallel processing in purrr is officially out, I wanted to test mirai on an #HPC. So far, it seems to work relatively painlessly. Just set up your PBS config and mirai::daemons(n) will spin up n jobs.

Now, is it optimal to have the number of "workers" equal the number of jobs? If each job has 48 CPUs, then each job could easily host several workers itself. Does anyone know if it's possible to run something like "100 workers distributed across 3 remote jobs"?
#RStats

tidyverse.org/blog/2025/07/pur

www.tidyverse.org · Parallel processing in purrr 1.1.0: The functional programming toolkit for R gains new capabilities for parallel processing and distributed computing using mirai.
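For anyone wanting to try the same thing, here is a minimal sketch of the workflow the post above describes, assuming purrr >= 1.1.0 and mirai are installed and that the PBS launcher configuration the author mentions is already in place (that part is not shown here); the worker count and the toy task are illustrative, not the author's actual setup:

library(mirai)
library(purrr)

# Spin up 6 parallel workers; with a PBS/remote launcher configured,
# each daemon runs inside its own scheduler job.
daemons(6)

# purrr 1.1.0: wrap the function in in_parallel() so map() dispatches
# elements to the mirai daemons instead of iterating sequentially.
results <- map(1:100, in_parallel(function(x) {
  Sys.sleep(1)  # stand-in for a slow, CPU-bound task
  x^2
}))

# Tear the workers (and their scheduler jobs) down when done.
daemons(0)

Whether one worker per 48-CPU job is optimal is exactly the open question in the post; this sketch just follows the one-daemon-per-job pattern it describes.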
Continued thread

Embedding digital-skills training in the university curriculum is what will help spread the use of #HPC in the future.
But it also matters how the machines are designed, so that they become more accessible to new users (Snellius vs ARCHER2).
We didn't learn to use the command line in one day.
#stepup2025

I’ve also always been fascinated by, and know nothing about, #hpc infra and programming. It just seems real cool. When I was a kid, my friends and I always thought it would be amazing to have a Beowulf cluster of DEC Alphas to... compile kernels or some other meaningless stuff.

I still think that’s cool, and even just reading the docs for things like #openmp sounds NEAT.

Also a picture of the redwoods and ferns and moss because I miss home.

Scott Atchley, who co-keynoted #ISC25, posted a really meaningful response to my ISC25 recap blog post on LinkedIn (linkedin.com/posts/scottatchle). He specifically offered additional perspective on the 20 MW exascale milestone and the pitfalls of the Ozaki scheme. It's short but very valuable context.

www.linkedin.com · Scott Atchley: I always enjoy reading Glenn K. Lockwood's conference recaps, with his #ISC25 recap being the latest. First, I was honored that AMD's Mark Papermaster invited me to share some science highlights from OLCF's Frontier. I shared that Frontier has grown slightly in the last year with the integration of the test and development system into the full system. My science highlights included:
• GE Aerospace's efforts to reduce the noise generated by their RISE engine, which will allow GE Aero to bring it to market sooner,
• NASA's work to understand how to use retro-propulsion to land humans and their gear on Mars,
• Researchers refining the phase diagram for carbon by identifying the narrow region in pressure and temperature that would allow body-centered cubic (BC8) carbon to exist; this material is expected to be 30% harder than diamond, and
• Efforts to understand how drug candidates interact with proteins. Unlike AI efforts such as AlphaFold that approximate protein docking, this effort uses molecular dynamics to get the two items close together, then switches to quantum mechanics to get an exact docking. This application actually used over 1 exaflop (1 EF) of full precision (FP64) on Frontier.
I also highlighted what Frontier's replacement, Discovery, will need to support modeling/simulation as well as artificial intelligence. It will need bandwidth everywhere, from processors to scale-up bandwidth between processors to scale-out bandwidth across the whole system, in addition to lots of high-precision and low-precision FLOPS. I will reply with more comments. 🧵 #OLCF #Frontier

@hpcnotes My 2c is that #HPC has mostly *not* changed (which can be viewed as good or bad), apart from normal technology evolution; what has changed is the way that many large companies make money from activities that involve computing, and most of the consequences of that change are still being worked out. One major consequence is massive energy usage that is pushing back climate goals - not an #HPC topic per se, but one that suppresses progress on HPC energy usage. And #HPC still cannot afford #AI scales.