mathstodon.xyz is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Mastodon instance for maths people. We have LaTeX rendering in the web interface!


#vibecoding

34 posts · 34 participants · 0 posts today

here's my "end-of-the-week" hot take on genAI-driven coding assistants, vibeCoding, etc.:

it looks to me right now like an IT-bent version of the movie "How to Train Your Dragon"

as one summary puts it:

"DreamWorks' How to Train Your Dragon is an animated coming-of-age story in which the hero uses behavioral techniques to befriend and then to train an adversary."

see-whuda-mean<g>?

#api360 #genAI #LLMs

So I've recently been made aware of #vibecoding -- this notion that you can use "AI" to somehow forgo any understanding of programming techniques.

What a load of utter fucking bollocks.

This is not how programming is done.

Programming is an exercise in logic and understanding, not about accepting some "AI"'s understanding of something.

Think for yourselves...

You'll be better off for it.

Yes, AI has its place, maybe to help reduce boilerplate code like unit tests, etc...

But be savvy!
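On the "boilerplate" point: the kind of thing an assistant genuinely drafts well is a test skeleton like this (the `slugify` function here is a made-up example, not from the thread). The human's job doesn't go away: you still have to check that every expected value is actually correct.

```python
import unittest

def slugify(title):
    """Hypothetical function under test: lowercase a title and
    join its words with hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # An assistant can crank out cases like these in seconds --
    # but each expected string below still needs a human eyeball.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  a  b "), "a-b")

if __name__ == "__main__":
    unittest.main()
```

That's the "reduce boilerplate" use case: the assistant types, you verify.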

I'm not sure if there's a place for auto admin panels (active_admin in Ruby or kaffy in Elixir or the OG Django Admin) anymore given that LLMs can *easily* generate kick-ass and fully dedicated admin panels in minutes 🤔

I've built two pretty advanced admin panels for my Phoenix apps (so, still kinda niche tech stack) with little to no effort. Apart from regular CRUD stuff, I've got advanced features like syncing data with Stripe, or lately I built a mailing list sync with MailerLite in like 2 hours.

Given this experience I really don't see why I would need a solution like Kaffy (I used it initially in @justcrosspost and then rebuilt the whole admin panel in literally less than an hour, with a much better end result).
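For contrast, the "auto admin" approach being questioned here amounts to a registration like this (Django shown, since Django Admin is mentioned above; the model and fields are hypothetical). It's a config fragment, not a runnable script -- it needs a Django project around it:

```python
# admin.py -- a minimal "auto admin" registration in Django.
# The Subscriber model and its fields are illustrative only.
from django.contrib import admin
from .models import Subscriber

@admin.register(Subscriber)
class SubscriberAdmin(admin.ModelAdmin):
    # A few lines buy you list views, search, and CRUD forms for free.
    list_display = ("email", "created_at")
    search_fields = ("email",)
```

The trade-off the post describes: this costs almost nothing to write, but a fully bespoke panel (Stripe sync, mailing-list sync) goes beyond what any auto-generated admin gives you.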

What are your thoughts on this topic? 👍🏻 or 👎🏻?

Unsolicited advice to VibraCoders:

- do not get emotionally attached to output

- pay a professional to check your thing before launch

- what you think you're saving in time/money building that toy is nowhere near what you'll spend to make a real product

> [there is a] new type of threat to the software supply chain: package hallucinations. These hallucinations, which arise from fact-conflicting errors when generating code using LLMs, represent a novel form of package confusion attack that poses a critical threat to the integrity of the software supply chain.

arxiv.org/abs/2406.10279
theregister.com/2025/04/12/ai_
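One cheap guard against this class of attack (a sketch, not a full defense): before running `pip install` on a package name an LLM suggested, check whether the module already resolves in your environment, and treat anything unknown as needing manual verification rather than blind installation.

```python
from importlib.util import find_spec

def resolvable(module_name):
    """Return True only if the module resolves in the local environment.

    A pre-install sanity check against package hallucinations: if an
    LLM suggests an import you've never heard of, verify it against
    your environment (or an allowlist) before installing anything.
    """
    return find_spec(module_name) is not None

print(resolvable("json"))                     # stdlib module: True
print(resolvable("totally_made_up_pkg_xyz"))  # hallucinated name: False
```

This won't catch a hallucinated name that an attacker has already squatted on a registry, so it's a first filter, not a substitute for actually vetting dependencies.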

arXiv.org: "We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs"

From the abstract: using 16 popular LLMs for code generation and two unique prompt datasets, the authors generated 576,000 code samples in two programming languages and analyzed them for package hallucinations. The average percentage of hallucinated packages was at least 5.2% for commercial models and 21.7% for open-source models, including 205,474 unique examples of hallucinated package names. Several mitigation strategies significantly reduced the number of hallucinations while maintaining code quality; the authors describe package hallucination as a persistent, systemic phenomenon in state-of-the-art code-generating LLMs.