101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2,048 characters.

Server stats: 518 active users

#gpgpu

Replied in thread

@enigmatico @lispi314 @kimapr @bunnybeam case in point:

  • #Bloatedness was the original post topic, and yes, due to the #TechBros "#BuildFastBreakThings" mentality, #Bloatware is increasing, given that a shitty, bloated 50+MB "#WebApp" built with something like nw.js is easier to slap together (and yes, I did so myself!) than to put in way more thought and effort (as you can see in the slow progression of OS/1337...

  • Yes, #Accessibility is something that needs to be taken more seriously, and it's good to see that there are at least some attempts at making #accessibility mandatory (at least in #Germany, where I know from an insider that a big telco is investing a lot in that!) for a growing number of industries and websites...

  • And whilst one can slap an #RTX5090 onto any laptop that has a fully-functional #ExpressCard slot (with a #PCIe interface, using some janky adaptors!), that'll certainly not make sense beyond some #CUDA or other #GPGPU-style workloads, as it's bottlenecked to a single PCIe lane at 2.0 (500 MB/s) or just 1.0a (250 MB/s) speeds.

Needless to say, there is a need to THIN DOWN things, cuz the current speed of #Enshittification and bloatedness, combined with #AntiRepairDesign and overpriced yet worse #tech in general, makes it unsustainable for an ever-increasing population!

  • Not everyone wants (or even can!) indebt themselves just to have a phone or laptop!

Should we aim for more "#FrugalComputing"?

  • Absolutely!

Is it realistic to expect things to be in a perfectly accessible TUI that every screen reader can handle?

  • No!

That being said the apathy of consumers is real, and very frustrating:

People get nudged into accepting all the BS, and it really pisses me off, because they want me to look like an outsider / asshole for not submitting to #consumerism and #unsustainable shite...

ぷにすきーENIGMATICO :flag_bisexual: :flag_nonbinary: (@enigmatico) I get this is a joke, but here is the thing (aside from the joke). People don't use crappy laptops anymore. People move on to phones/tablets, or, if they want something more serious, something like a gamer PC. Most people will buy a console if they want to play games, though. In that context, nobody cares anymore about bloat. If you are a developer, it's easier for you to use some bloaty framework that gets the job done in a couple of days, because at the end of the day, if you're going to be exploited and crunched to death, you might as well make it as short as possible. And as a consumer, nobody really cares. You buy whatever allows you to do what you want, and that's it. Or whatever your pocket allows you. And to be completely honest with you all, it has always been like this. You have to do with what you have. Could the world be better if everyone used pure C and assembly? Maybe... if companies had the intention to spend years developing their products and fixing critical bugs before launch. By the time of the launch they would be obsolete. Kinda what happened to Duke Nukem Forever.

Uploaded a new demo/example showing how to perform GPU-side data reductions using the thi.ng/shader-ast & thi.ng/webgl multi-pass pipeline. Arbitrary reduction functions supported. If there's interest, this could be expanded & packaged up as a library... 90% of this example is boilerplate, 9.9% benchmarking & debug outputs...

Demo:
demo.thi.ng/umbrella/gpgpu-red

Source code:
github.com/thi-ng/umbrella/blo

Readme w/ benchmark results:
github.com/thi-ng/umbrella/tre

Related discussion:
github.com/thi-ng/umbrella/iss

OK, so I'm ready for today's #GPGPU lesson with the new laptop. My only gripe for the lesson is that #Rusticl in #Mesa 23.2 doesn't support #profiling information. Apparently the feature was merged in a later commit
gitlab.freedesktop.org/mesa/me
and I even tried upgrading to my distro's experimental 23.3-rc1 packages, but trying to use rusticl on those packages segfaults. So either I've messed up something with this mixed upgrade, or I've hit an actual bug.

GitLab — rusticl: Add profiling support (!24101) · Merge requests · Mesa / mesa: "What does this MR do and why? Add profiling support; I think in a pretty similar way to clover."

James Reinders et al. have released the second edition of their SYCL book "Data Parallel C++", available for free in PDF and EPUB: link.springer.com/book/10.1007

"SYCL is a royalty-free open standard developed by the Khronos Group that allows developers to program heterogeneous architectures [such as CPUs, GPUs, and FPGAs] in standard C++."

SpringerLink — Data Parallel C++: This open access book teaches data-parallel programming using C++ with SYCL and walks through everything needed to program accelerated systems.
#SYCL #Cpp #HPC

Intel's Codeplay Announces oneAPI Construction Kit For Bringing SYCL To New Hardware
phoronix.com/news/oneAPI-Const

"This open-source project aims to help ease bringing up SYCL on new processor/accelerator architectures, particularly around HPC and AI. The oneAPI Construction Kit also has a reference implementation for RISC-V."


Zero in on Level Zero: An Open, Backend Approach to Compute Anywhere
intel.com/content/www/us/en/de

"Intel’s first implementation of Level Zero targets Intel GPUs. However, the vision and potential of Level Zero goes far beyond that. [...] The API is designed to work across a variety of compute devices, including CPUs, GPUs, Field Programmable Gate Arrays (FPGAs), and other accelerator architectures."

Intel — Get Started Using Level Zero API Backend to Manage Offload Devices: Get an overview of the Level Zero hardware abstraction layer -- how it makes oneAPI incredibly versatile and how to use it across compute resources.

One of the reasons why I still haven't done my long-promised mini-thread on #GPGPU is that every time I find some time to brainstorm the thing, I get indescribably angry at vendors, and I realize that the tone of the thread would take a bad turn I'd rather avoid.
Mind you, I love the field, but I have a lot of pent-up frustration due to how much vendors keep (intentionally or not) making things much harder than they should be.

Now that our instance has a higher size limit for toots, time for a re-#introduction. This time with more hashtags!

Hi! I'm Jeff. :blobcatwave:

I've been a software engineer since around 1999 I guess. I started with #WebDev back in the early days of applets, DHTML, and Flash. I've since moved on to #FullStack work on just about anything that has a compiler or an interpreter. I've even recently dabbled in #PCB design and #3DPrinting.

My software specialties are high-performance computing (#HPC), #GPGPU, and #ComputationalChemistry, although I usually enjoy any programming problem with a good challenge to it. I spent waaay too much time in school and got all the degrees in computer science. I still work in #academia part-time writing research software.

My favorite programming languages at the moment are #Rust and #Kotlin, although I've spent a lot of time writing #JavaScript lately. With the right tooling it's not completely terrible.

More recently, I've been interested in online #privacy, #cryptography, and #SocialNetworks.

Replied in thread

These two properties give #SPH some advantages over more traditional methods (finite differences, finite element, finite volume), such as automatic mass conservation and natural (often implicit) handling of interfaces, large deformations or fragmentation.

In addition, the standard weakly-compressible formulation is embarrassingly parallel in nature, making it fit for implementation on high-performance parallel computing hardware. #GPGPU in particular has been a boon for SPH.

#introduction

I work at the Osservatorio Etneo, Catania section of the Italian National Institute for Geophysics and Volcanology #INGV.
Mathematician by formation, scientific software developer by necessity, I work on #lava flow #simulation, #hazard assessment, #risk mitigation.
Much of my work revolves around #ComputationalFluidDynamics (#CFD), w/ a preference for #SmoothedParticleHydrodynamics (#SPH).
I should probably mention my interest in #HPC and #GPGPU, but I ran out of characters …

In case anyone wants to check it out, this is my dev computer. I work on massively parallel algorithms a lot of the time, so I need some really heavy-duty GPUs for GPGPU (those are 4 watercooled Radeon Vega Frontier Edition GPUs), a 32-core 3.7 GHz Ryzen Threadripper CPU, and 64 gigs of the fastest RAM around. This baby is a **beast**.

inb4 "why the Radeon GPUs and not NVIDIA or an Intel chip"... simple: they are better for many/most applications, though not all. The CPU and motherboard support the full number of channels to maximize throughput into and out of the GPUs. So while they have fewer cores compared to a Tesla, the cores don't tend to be the bottleneck; the I/O is. So I sacrificed cores to max out the I/O channels, where the bottleneck tends to be. AMD and the Ryzen Threadripper CPU were the only ones doing that.

#GPGPU #Parallel #Parallelism #GPU #programming #AI #MachineLearning #Algo #algos #Algorithms @Science

Spent a good part of the day yesterday writing up some neat Docker images to run GPU-enabled OpenCL out of. It wasn't too hard to figure out how to give the Docker image the correct access to the GPU to enable GPU acceleration or anything. In fact, the trickier part was figuring out how to write the .gitlab-ci.yml file and the respective Dockerfile to be parameterized, to minimize work.

It's a cool little trick I used: it basically looks at the branch or tag name to figure out how to tag the Docker images. If the branch is develop, the image is tagged as "aparapi/aparapi-nvidia:git", with another one as "aparapi/aparapi-amdgpu:git". Similarly, if the branch is master, then it will be "latest" instead of "git". However, if it's a git tag, then that tag is used in its place. So when using Aparapi version 2.0.0 with amdgpu, and the first revision of the Dockerfile, it would look like "aparapi/aparapi-amdgpu:2.0.0-1". This means minimal work for me: when I want to use a new version, I just change the Aparapi version and push it to a new tag, and it does all the work to compile it.
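The branch/tag-to-image-tag logic above can be sketched with GitLab's predefined `CI_COMMIT_TAG` and `CI_COMMIT_BRANCH` variables. This is a hypothetical reconstruction, not the actual aparapi-docker pipeline; the job name, `DOCKERFILE_REV` variable, and image names are mine:

```yaml
# Hypothetical sketch of the tagging trick (not the real pipeline).
variables:
  DOCKERFILE_REV: "1"   # bumped when the Dockerfile itself is revised

build:
  image: docker:latest
  services: ["docker:dind"]
  script:
    # develop -> :git, master -> :latest, git tag "2.0.0" -> :2.0.0-1
    - |
      if [ -n "$CI_COMMIT_TAG" ]; then TAG="$CI_COMMIT_TAG-$DOCKERFILE_REV"
      elif [ "$CI_COMMIT_BRANCH" = "master" ]; then TAG="latest"
      else TAG="git"
      fi
    - docker build -t "aparapi/aparapi-amdgpu:$TAG" .
    - docker push "aparapi/aparapi-amdgpu:$TAG"
```

The same job can be duplicated (or templated) per base image to produce the matching aparapi-nvidia variant.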

It even automatically pushes it to docker hub for me!

git.qoto.org/aparapi/aparapi-d

git.qoto.org — Projects · Aparapi / aparapi-docker: A Docker image for easily testing Aparapi-based applications in an OpenCL and GPU enabled environment. Provided the host system has the appropriate hardware, Aparapi applications can utilize GPU acceleration...