101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2,048 characters.

Server stats:

572 active users

#codegeneration

Frontend Dogma<p>Tool: WordPress Child Theme Generator, by @wpmarmite_en@x.com:</p><p><a href="https://wpmarmite.com/en/child-theme-generator/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">wpmarmite.com/en/child-theme-g</span><span class="invisible">enerator/</span></a></p><p><a href="https://mas.to/tags/tools" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tools</span></a> <a href="https://mas.to/tags/exploration" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>exploration</span></a> <a href="https://mas.to/tags/codegeneration" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>codegeneration</span></a> <a href="https://mas.to/tags/wordpress" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>wordpress</span></a> <a href="https://mas.to/tags/themes" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>themes</span></a></p>
Kevin Darty<p>Getting Started With Claude Code</p><p><a href="https://hachyderm.io/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://hachyderm.io/tags/Coding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Coding</span></a> <a href="https://hachyderm.io/tags/Programming" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Programming</span></a> <a href="https://hachyderm.io/tags/CodingWithAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CodingWithAI</span></a> <a href="https://hachyderm.io/tags/Claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Claude</span></a> <a href="https://hachyderm.io/tags/CodeGeneration" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CodeGeneration</span></a> <a href="https://hachyderm.io/tags/Refactoring" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Refactoring</span></a> <a href="https://hachyderm.io/tags/generativeAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>generativeAI</span></a> <a href="https://hachyderm.io/tags/aiagents" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>aiagents</span></a> </p><p><a href="https://youtu.be/z37_rONQof8?si=1Iee0KNevN1E4FRx" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">youtu.be/z37_rONQof8?si=1Iee0K</span><span class="invisible">NevN1E4FRx</span></a></p>
HGPU group<p>TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators</p><p><a href="https://mast.hpc.social/tags/CUDA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CUDA</span></a> <a href="https://mast.hpc.social/tags/CodeGeneration" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CodeGeneration</span></a> <a href="https://mast.hpc.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> <a href="https://mast.hpc.social/tags/DeepLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DeepLearning</span></a> <a href="https://mast.hpc.social/tags/DL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DL</span></a> <a href="https://mast.hpc.social/tags/Python" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Python</span></a> <a href="https://mast.hpc.social/tags/Package" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Package</span></a></p><p><a href="https://hgpu.org/?p=29794" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">hgpu.org/?p=29794</span><span class="invisible"></span></a></p>
Caiming XiongGenerating code with LLMs poses risks such as security vulnerabilities, logical errors, and context misinterpretation. It is critical for developers to scrutinize and validate AI-generated code to ensure safety and correctness. We introduce #INDICT, a novel multi-agent cooperative framework that enhances #LLMs for secure & helpful code generation. Utilizing dual critics for safety and helpfulness, INDICT leverages external tools for grounded feedback, significantly improving code security across diverse programming languages. #AI #CyberSecurity #CodeGeneration For more details, read the full blog: https://t.co/nSKZ59w4Lu paper: https://t.co/TW8z7H8ngC code: https://t.co/1qa5lkXM7B
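The dual-critic idea can be pictured with a toy review loop. Everything below (the `Critique` type, the regex-based `safetyCritic`/`helpfulnessCritic`, `reviewCode`) is an illustrative sketch only; the actual INDICT framework drives two LLM critics with tool-grounded feedback, not hard-coded checks.

```typescript
// Hypothetical sketch of a dual-critic review loop in the spirit of INDICT.
// All names here are invented for illustration; the real framework uses two
// LLM critics grounded by external tools, not the regex checks below.

interface Critique {
  critic: "safety" | "helpfulness";
  issues: string[];
}

// Toy "safety critic": flags obviously dangerous constructs.
function safetyCritic(code: string): Critique {
  const issues: string[] = [];
  if (/\beval\s*\(/.test(code)) issues.push("avoid eval()");
  if (/\bexec\s*\(/.test(code)) issues.push("avoid exec()");
  return { critic: "safety", issues };
}

// Toy "helpfulness critic": checks the code addresses the task at all.
function helpfulnessCritic(code: string, task: string): Critique {
  const issues: string[] = [];
  if (code.trim().length === 0) issues.push("empty answer for task: " + task);
  return { critic: "helpfulness", issues };
}

// One review round: collect non-empty critiques. A real system would feed
// them back to the generator and iterate until both critics are satisfied.
function reviewCode(code: string, task: string): Critique[] {
  return [safetyCritic(code), helpfulnessCritic(code, task)].filter(
    (c) => c.issues.length > 0
  );
}
```

The point of the split is that "safe" and "helpful" pull in different directions; keeping them as separate critics prevents one objective from silently dominating the other.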
Karsten Schmidt<p>Some previews of recent <a href="https://mastodon.thi.ng/tags/ThingUmbrella" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ThingUmbrella</span></a> updates/additions:</p><p>1) The declarative &amp; fully typed CLI arg parser <a href="https://thi.ng/args" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">thi.ng/args</span><span class="invisible"></span></a> now has a nice `cliApp()` wrapper (also largely declarative), supporting multiple sub-commands (with shared and command-specific args/options), automated usage generation, <a href="https://no-color.org" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">no-color.org</span><span class="invisible"></span></a> support/detection. Still doing more testing with various CLI tools of mine (from which this all has been extracted), but planning to release in next couple of days...</p><p>2) People who've been following the project will know that for years I've been a big fan of Tachyons CSS[1], which is pretty much used for all ~150 examples in the repo. As nice as it is, it's also unmaintained by now and there're various more modern features missing (e.g. grids) and there're also general issues with the overall approach. Switching to Tailwind would mean having to install a whole boatload of additional tooling so is anathema and also doesn't address some of the features I've been wanting to explore: E.g. Generating entire CSS frameworks from a bunch of wildly combinatorial rules, options &amp; lookup tables, keeping literally _everything_ customizable, combinable and purely data-driven (i.e. generated from a JSON file). Similar to Tachyons CSS, these custom generated frameworks are based on standalone CSS utility classes (hence the original particle-inspired naming). 
However, I'm aiming for a different usage and instead of assigning them directly to an HTML element's `class` attrib, here we can assign them to (nested) CSS selectors to define fully inlined declarations. The additional killer feature is that each of these classes can be prefixed with an arbitrary number of freely defined media queries, making it trivial to add additional responsive/accessible features _without_ requiring megabytes of raw CSS to cover the combinatorial explosion!</p><p>For the past few days I've been trialling this new approach and I very much like where this is going... Take a look at the basic example in the new <a href="https://thi.ng/meta-css" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">thi.ng/meta-css</span><span class="invisible"></span></a> package &amp; readme. I will also write more about this in coming days. All in all, it's another example where code generation and a domain specific language approach is super powerful again... limits of my world and such...</p><p>Also, speaking of bloatware (earlier above), the entire toolchain (incl. 
CLI &amp; all dependent packages) is a mere 21KB (minified) and it already can do a ton!</p><p><a href="https://mastodon.thi.ng/tags/ThingUmbrella" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ThingUmbrella</span></a> <a href="https://mastodon.thi.ng/tags/CSS" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CSS</span></a> <a href="https://mastodon.thi.ng/tags/CLI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CLI</span></a> <a href="https://mastodon.thi.ng/tags/DSL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DSL</span></a> <a href="https://mastodon.thi.ng/tags/TypeScript" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TypeScript</span></a> <a href="https://mastodon.thi.ng/tags/JavaScript" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>JavaScript</span></a> <a href="https://mastodon.thi.ng/tags/CodeGeneration" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CodeGeneration</span></a></p>
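The "CSS framework generated from pure data" idea described above can be sketched in a few lines. The rule tables, class naming scheme, and `generateCSS()` below are hypothetical stand-ins; the actual thi.ng/meta-css package uses its own format and far richer options.

```typescript
// Sketch of data-driven CSS utility class generation, loosely inspired by
// the approach described above (NOT the actual thi.ng/meta-css syntax).
// Rules and breakpoints are plain data, so the whole framework is generated.

type Rules = Record<string, Record<string, string>>;

// Combinatorial rule table: class prefix -> suffix -> declaration value
const rules: Rules = {
  ma: { 0: "0", 1: "0.25rem", 2: "0.5rem" }, // margin scale
  pa: { 0: "0", 1: "0.25rem", 2: "0.5rem" }, // padding scale
};

const props: Record<string, string> = { ma: "margin", pa: "padding" };

// Named media queries that can prefix any generated class.
const breakpoints: Record<string, string> = {
  ns: "(min-width: 30rem)", // "not small"
};

// Expand the tables into CSS, optionally wrapped in a media query.
function generateCSS(prefix?: string): string {
  const body = Object.entries(rules)
    .flatMap(([cls, variants]) =>
      Object.entries(variants).map(
        ([k, v]) =>
          `.${cls}${k}${prefix ? "-" + prefix : ""} { ${props[cls]}: ${v}; }`
      )
    )
    .join("\n");
  return prefix ? `@media ${breakpoints[prefix]} {\n${body}\n}` : body;
}
```

Because everything is data, adding a scale step or a breakpoint is a one-line table edit, and only the classes actually referenced need to be emitted.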
Karsten Schmidt<p><a href="https://mastodon.thi.ng/tags/HowToThing" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HowToThing</span></a> #026 — Shader meta-programming techniques (functional composition, higher-order functions, compile-time evaluation, dynamic code generation etc.) to generate animated plots/graphs of 16 functions (incl. dynamic grid layout generation) within a single WebGL fragment shader.</p><p>Today's key packages:</p><p>- <a href="https://thi.ng/shader-ast" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">thi.ng/shader-ast</span><span class="invisible"></span></a>: DSL to write (fully type-checked) shaders directly in TypeScript and later compile them to GLSL, JS (and other target languages, i.e. there's partial support for Houdini VEX and [very] early stage WGSL...)<br>- <a href="https://thi.ng/shader-ast-stdlib" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">thi.ng/shader-ast-stdlib</span><span class="invisible"></span></a>: Collection of ~220 re-usable shader functions &amp; configurable building blocks (incl. SDFs primitives/ops, raymarching, lighting, matrix ops, etc.)<br>- <a href="https://thi.ng/webgl-shadertoy" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">thi.ng/webgl-shadertoy</span><span class="invisible"></span></a>: Minimal scaffolding for experimenting with fragment shaders (supports both normal GLSL or shader-ast flavors/compilation)</p><p>If you're new to the Shader-AST approach (highly likely!), this example will again introduce a lot of new concepts, hopefully in digestible manner! Please also always consult the package readmes (and other linked examples) for more background info... There're numerous benefits to this approach (incl. 
targeting different target languages, and compositional &amp; optimization aspects which are impossible to achieve (at least not elegantly) via just string concatenation/interpolation of shader code, as is much more commonplace...)</p><p>This example comes fresh off the back of yesterday's new easing function additions (by <span class="h-card" translate="no"><a href="https://mastodon.world/@Yura" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>Yura</span></a></span>), though we're only showing a subset here...</p><p>Demo:<br><a href="https://demo.thi.ng/umbrella/shader-ast-easings/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">demo.thi.ng/umbrella/shader-as</span><span class="invisible">t-easings/</span></a><br>(Check the console to view the generated GLSL shader)</p><p>Source code:<br><a href="https://github.com/thi-ng/umbrella/tree/develop/examples/shader-ast-easings/src/index.ts" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">github.com/thi-ng/umbrella/tre</span><span class="invisible">e/develop/examples/shader-ast-easings/src/index.ts</span></a></p><p>If you have any questions about this topic or the packages used here, please reply in thread or use the discussion forum (or issue tracker):</p><p>github.com/thi-ng/umbrella/discussions</p><p><a href="https://mastodon.thi.ng/tags/ThingUmbrella" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ThingUmbrella</span></a> <a href="https://mastodon.thi.ng/tags/WebGL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>WebGL</span></a> <a href="https://mastodon.thi.ng/tags/Shader" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Shader</span></a> <a href="https://mastodon.thi.ng/tags/GLSL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GLSL</span></a> <a 
href="https://mastodon.thi.ng/tags/FunctionalProgramming" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FunctionalProgramming</span></a> <a href="https://mastodon.thi.ng/tags/GraphicsProgramming" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GraphicsProgramming</span></a> <a href="https://mastodon.thi.ng/tags/Plot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Plot</span></a> <a href="https://mastodon.thi.ng/tags/CodeGeneration" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CodeGeneration</span></a> <a href="https://mastodon.thi.ng/tags/DSL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DSL</span></a> <a href="https://mastodon.thi.ng/tags/TypeScript" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TypeScript</span></a> <a href="https://mastodon.thi.ng/tags/JavaScript" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>JavaScript</span></a> <a href="https://mastodon.thi.ng/tags/Tutorial" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Tutorial</span></a></p>
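The core Shader-AST idea, building a typed expression tree in TypeScript and compiling it to GLSL text, can be illustrated with a toy AST. All names below are invented for this sketch; the real thi.ng/shader-ast node model and code generators are much richer (types, statements, multiple target languages).

```typescript
// Toy illustration of the "write shaders in TypeScript, compile to GLSL"
// approach. NOT the actual thi.ng/shader-ast API; just the underlying idea:
// shader code as a data structure, with a code generator walking the tree.

type Expr =
  | { op: "lit"; value: number }
  | { op: "var"; name: string }
  | { op: "add" | "mul"; a: Expr; b: Expr }
  | { op: "call"; fn: string; args: Expr[] };

// Small combinators for building expression trees compositionally.
const lit = (value: number): Expr => ({ op: "lit", value });
const sym = (name: string): Expr => ({ op: "var", name });
const add = (a: Expr, b: Expr): Expr => ({ op: "add", a, b });
const mul = (a: Expr, b: Expr): Expr => ({ op: "mul", a, b });
const call = (fn: string, ...args: Expr[]): Expr => ({ op: "call", fn, args });

// Recursively emit GLSL source from the expression tree.
function emitGLSL(e: Expr): string {
  switch (e.op) {
    case "lit":
      return e.value.toFixed(1); // GLSL float literal
    case "var":
      return e.name;
    case "add":
      return `(${emitGLSL(e.a)} + ${emitGLSL(e.b)})`;
    case "mul":
      return `(${emitGLSL(e.a)} * ${emitGLSL(e.b)})`;
    case "call":
      return `${e.fn}(${e.args.map(emitGLSL).join(", ")})`;
  }
}

// e.g. a sine-based easing curve assembled from reusable parts:
const easing = mul(lit(0.5), add(lit(1), call("sin", sym("t"))));
```

Because the shader exists as data before it exists as text, higher-order functions, compile-time evaluation, and retargeting to other languages all become ordinary tree transformations.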
Benjamin Han<p>10/end</p><p>[4] <a href="https://www.linkedin.com/posts/benjaminhan_reasoning-gpt-gpt4-activity-7060428182910373888-JnGQ" rel="nofollow noopener" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">linkedin.com/posts/benjaminhan</span><span class="invisible">_reasoning-gpt-gpt4-activity-7060428182910373888-JnGQ</span></a></p><p>[5] Zhijing Jin, Jiarui Liu, Zhiheng Lyu, Spencer Poff, Mrinmaya Sachan, Rada Mihalcea, Mona Diab, and Bernhard Schölkopf. 2023. Can Large Language Models Infer Causation from Correlation? <a href="http://arxiv.org/abs/2306.05836" rel="nofollow noopener" target="_blank"><span class="invisible">http://</span><span class="">arxiv.org/abs/2306.05836</span><span class="invisible"></span></a> </p><p><a href="https://sigmoid.social/tags/Paper" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Paper</span></a> <a href="https://sigmoid.social/tags/NLP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NLP</span></a> <a href="https://sigmoid.social/tags/NLProc" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NLProc</span></a> <a href="https://sigmoid.social/tags/CodeGeneration" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CodeGeneration</span></a> <a href="https://sigmoid.social/tags/Causation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Causation</span></a> <a href="https://sigmoid.social/tags/CausalReasoning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CausalReasoning</span></a> <a href="https://sigmoid.social/tags/reasoning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>reasoning</span></a> <a href="https://sigmoid.social/tags/research" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>research</span></a></p>
Benjamin Han<p>9/</p><p>[2] Antonio Valerio Miceli-Barone, Fazl Barez, Ioannis Konstas, and Shay B. Cohen. 2023. The Larger They Are, the Harder They Fail: Language Models do not Recognize Identifier Swaps in Python. <a href="http://arxiv.org/abs/2305.15507" rel="nofollow noopener" target="_blank"><span class="invisible">http://</span><span class="">arxiv.org/abs/2305.15507</span><span class="invisible"></span></a> </p><p>[3] Emre Kıcıman, Robert Ness, Amit Sharma, and Chenhao Tan. 2023. Causal Reasoning and Large Language Models: Opening a New Frontier for Causality. <a href="http://arxiv.org/abs/2305.00050" rel="nofollow noopener" target="_blank"><span class="invisible">http://</span><span class="">arxiv.org/abs/2305.00050</span><span class="invisible"></span></a> </p><p><a href="https://sigmoid.social/tags/Paper" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Paper</span></a> <a href="https://sigmoid.social/tags/NLP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NLP</span></a> <a href="https://sigmoid.social/tags/NLProc" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NLProc</span></a> <a href="https://sigmoid.social/tags/CodeGeneration" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CodeGeneration</span></a> <a href="https://sigmoid.social/tags/Causation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Causation</span></a> <a href="https://sigmoid.social/tags/CausalReasoning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CausalReasoning</span></a> <a href="https://sigmoid.social/tags/reasoning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>reasoning</span></a> <a href="https://sigmoid.social/tags/research" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>research</span></a></p>
Benjamin Han<p>8/</p><p>[1] Xiaojuan Tang, Zilong Zheng, Jiaqi Li, Fanxu Meng, Song-Chun Zhu, Yitao Liang, and Muhan Zhang. 2023. Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners. <a href="http://arxiv.org/abs/2305.14825" rel="nofollow noopener" target="_blank"><span class="invisible">http://</span><span class="">arxiv.org/abs/2305.14825</span><span class="invisible"></span></a> </p><p><a href="https://sigmoid.social/tags/Paper" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Paper</span></a> <a href="https://sigmoid.social/tags/NLP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NLP</span></a> <a href="https://sigmoid.social/tags/NLProc" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NLProc</span></a> <a href="https://sigmoid.social/tags/CodeGeneration" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CodeGeneration</span></a> <a href="https://sigmoid.social/tags/Causation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Causation</span></a> <a href="https://sigmoid.social/tags/CausalReasoning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CausalReasoning</span></a> <a href="https://sigmoid.social/tags/reasoning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>reasoning</span></a> <a href="https://sigmoid.social/tags/research" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>research</span></a></p>
Benjamin Han<p>7/</p><p>The results? Both <a href="https://sigmoid.social/tags/GPT4" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPT4</span></a> and <a href="https://sigmoid.social/tags/Alpaca" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Alpaca</span></a> perform worse than BART fine-tuned with MNLI, and not much better than the uniform random baseline (screenshot).</p><p><a href="https://sigmoid.social/tags/Paper" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Paper</span></a> <a href="https://sigmoid.social/tags/NLP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NLP</span></a> <a href="https://sigmoid.social/tags/NLProc" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NLProc</span></a> <a href="https://sigmoid.social/tags/CodeGeneration" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CodeGeneration</span></a> <a href="https://sigmoid.social/tags/Causation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Causation</span></a> <a href="https://sigmoid.social/tags/CausalReasoning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CausalReasoning</span></a> <a href="https://sigmoid.social/tags/reasoning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>reasoning</span></a> <a href="https://sigmoid.social/tags/research" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>research</span></a></p>
Benjamin Han<p>4/</p><p>The same tendency is borne out by another paper focusing on testing code-generating LLMs when function names are *swapped* in the input [2] (screenshot 1). They found that not only did almost all models fail completely, but most of them also exhibited an “inverse scaling” effect: the larger a model is, the worse it gets (screenshot 2).</p><p><a href="https://sigmoid.social/tags/Paper" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Paper</span></a> <a href="https://sigmoid.social/tags/NLP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NLP</span></a> <a href="https://sigmoid.social/tags/NLProc" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NLProc</span></a> <a href="https://sigmoid.social/tags/CodeGeneration" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CodeGeneration</span></a> <a href="https://sigmoid.social/tags/Causation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Causation</span></a> <a href="https://sigmoid.social/tags/CausalReasoning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CausalReasoning</span></a> <a href="https://sigmoid.social/tags/reasoning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>reasoning</span></a> <a href="https://sigmoid.social/tags/research" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>research</span></a></p>
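The identifier-swap perturbation from [2] is easy to sketch: exchange two function names throughout a snippet so that the definitions no longer match their conventional meanings. The naive textual `swapIdentifiers` below only illustrates the idea; the paper's actual pipeline is more careful about tokenization.

```typescript
// Toy version of the identifier-swap perturbation described in [2]: swap two
// function names everywhere in a source snippet. A model that truly tracks
// the (swapped) definitions should adapt; one that memorized the usual
// meanings of the names will fail. This is a naive substring swap, so it
// would also hit occurrences inside longer identifiers; illustration only.

function swapIdentifiers(src: string, a: string, b: string): string {
  const placeholder = "\u0000SWAP\u0000"; // sentinel not present in source
  return src
    .split(a).join(placeholder) // a -> sentinel
    .split(b).join(a)           // b -> a
    .split(placeholder).join(b); // sentinel -> b
}

// e.g. Python where `len` and `sum` have been exchanged:
const original = "def f(x): return len(x) + sum(x)";
const swapped = swapIdentifiers(original, "len", "sum");
```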

2/

They use a symbolic dataset and a semantic dataset to test models’ abilities on memorization and reasoning (screenshot 1). For each dataset they create a corresponding one in the other modality, e.g., replacing the natural-language labels of the relations and entities with abstract symbols to produce a symbolic version of a semantic dataset (screenshot 2).
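This semantic-to-symbolic conversion can be sketched as relabeling: map every distinct entity/relation label to an opaque symbol while preserving the relational structure. The `Triple` format and `symbolize()` below are hypothetical, not the paper's actual dataset schema.

```typescript
// Sketch of the semantic -> symbolic dataset conversion described above:
// replace natural-language labels with opaque symbols (e1, e2, ...) while
// keeping the triple structure intact, so any remaining model performance
// must come from structural reasoning rather than memorized word meanings.

type Triple = [string, string, string]; // [head, relation, tail]

function symbolize(triples: Triple[]): Triple[] {
  const table = new Map<string, string>();
  let n = 0;
  // Assign each distinct label a fresh abstract symbol, reused consistently.
  const sym = (label: string): string => {
    if (!table.has(label)) table.set(label, "e" + ++n);
    return table.get(label)!;
  };
  return triples.map(([h, r, t]) => [sym(h), sym(r), sym(t)]);
}

const semantic: Triple[] = [
  ["Alice", "parentOf", "Bob"],
  ["Bob", "parentOf", "Carol"],
];
const symbolic = symbolize(semantic);
```

The crucial property is consistency: "Bob" maps to the same symbol in both triples, so multi-hop structure (grandparenthood here) survives the relabeling even though all lexical cues are gone.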

#Paper #NLP #NLProc

1/

When performing reasoning or generating code, do #LLMs really understand what they’re doing, or do they just memorize? Several new results seem to have painted a not-so-rosy picture.

The authors in [1] are interested in testing LLMs on “semantic” vs. “symbolic” reasoning: the former involves reasoning with language-like input, and the latter is reasoning with abstract symbols.

#Paper #NLP #NLProc

From #API specifications to code with OpenAPI: generating client and server source code.

Last 'piece' of the year diving into what I presented at #apidays Paris. The role of #OpenAPI, the power of #CodeGeneration, the way to manage SDKs and deliver valuable #DeveloperExperience.

medium.com/geekculture/from-ap

Geek Culture (Medium): From API specifications to code with OpenAPI, by Beppe Catanese
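The essence of OpenAPI client generation, turning path/operation descriptions into callable functions, can be sketched as follows. The `Operation` slice and the emitted fetch wrapper are deliberately minimal and hypothetical; real generators (openapi-generator and friends) handle schemas, auth, servers, and much more.

```typescript
// Toy illustration of client code generation from an OpenAPI-style fragment.
// The spec slice and emitted function shape are hypothetical; this only
// shows the core move: spec data in, source code out.

interface Operation {
  operationId: string;
  method: "get" | "post";
  path: string;
}

// Minimal slice of an OpenAPI document's paths section.
const operations: Operation[] = [
  { operationId: "getUser", method: "get", path: "/users/{id}" },
  { operationId: "listPets", method: "get", path: "/pets" },
];

// Emit one fetch-based client function per operation; path template
// parameters like {id} are filled from the params record at call time.
function generateClient(ops: Operation[]): string {
  return ops
    .map(
      (op) =>
        `export async function ${op.operationId}(params: Record<string, string>) {\n` +
        `  const path = "${op.path}".replace(/\\{(\\w+)\\}/g, (_, k) => params[k]);\n` +
        `  return fetch(path, { method: "${op.method.toUpperCase()}" });\n}`
    )
    .join("\n\n");
}
```

Because the generator consumes the same spec that documents the API, the generated SDK cannot drift from the contract, which is exactly the DeveloperExperience argument the post makes.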

Most software interfaces are defined only by their signature, without a formal description of the admissible behavior and timing assumptions.

#ComMA provides a family of domain-specific languages that integrate existing techniques for formal behavioral and timing modeling, and it is easily extensible.

youtu.be/-bbJTg7pJ-k

#SoftwareEngineering
#Interfaces
#Modelling
#ModelChecking
#CodeGeneration