#generativeai

100 posts · 70 participants · 13 posts today
My half-baked deep thought of the day is that we are living through a time in which imagination has been de-legitimized generally, and we are reaching the necessarily absurd crescendo of this process. It is taking physical form in technologies such as generative AI, literal Anti-Imagination generators.

Let me be clear what I (don't) mean by "imagination". I do not mean "creative", "fictional", "imaginary", or "artistic"--those words can be related, but they've also been co-opted into exactly the trend I'm calling out. I also don't mean interesting images that appear in your mind or dreams but are then dismissed as lacking significance. I do mean truly imaginative acts arising within and from the mind, and not subjected to editorial scrutiny by logic, empiricism, or other instrumentalized forms of reason. Dreams are one way to access imagination; active imagination--stream of consciousness directed but not edited by the conscious mind--can be a way to explore it. Religious, magical, mystical, or meditative practices can too, if that's your jam. So can psychoanalysis and some other forms of therapy. There are countless other ways, and I don't pretend to have any special knowledge of this; I'm just riffing on an idea.

More and more I believe that we have to rediscover and exercise this aspect of ourselves if we're to navigate the current crisis. (*) Our collective imagination lacks force at a time when generative Anti-Imagination is reaching industrial scale.

#AI #GenAI #GenerativeAI #imagination #ActiveImagination

(*) "Crisis" has a medical definition: "that change in a disease which indicates whether the result is to be recovery or death" (Webster's dictionary). Its Greek root can also mean "decision", which I like to think about when considering "crises". They are decisions that need to be made.

"When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.

So earlier this week, when members of a popular subreddit learned that their community had been infiltrated by undercover researchers posting AI-written comments and passing them off as human thoughts, the Redditors were predictably incensed. They called the experiment “violating,” “shameful,” “infuriating,” and “very disturbing.” As the backlash intensified, the researchers went silent, refusing to reveal their identity or answer questions about their methodology. The university that employs them has announced that it’s investigating. Meanwhile, Reddit’s chief legal officer, Ben Lee, wrote that the company intends to “ensure that the researchers are held accountable for their misdeeds.”

Joining the chorus of disapproval were fellow internet researchers, who condemned what they saw as a plainly unethical experiment. Amy Bruckman, a professor at the Georgia Institute of Technology who has studied online communities for more than two decades, told me the Reddit fiasco is “the worst internet-research ethics violation I have ever seen, no contest.” What’s more, she and others worry that the uproar could undermine the work of scholars who are using more conventional methods to study a crucial problem: how AI influences the way humans think and relate to one another."

theatlantic.com/technology/arc

The Atlantic · The Secret AI Experiment That Sent Reddit Into a Frenzy · By Tom Bartlett
Please stop saying "AI powered". It's a mixed-up way to talk that allows for a lot of mischief:

1. It's a mixed metaphor. AI is reactive, not propulsive (think through what the word "power" means).
2. AI is a mixed bag of a large number of different technologies, making the term imprecise at best ("AI" in video games frequently leans on old-school A*, quite different from the LLMs that make the news).
3. LLM-based AI mixes up other people's words into a slurry that is full of content-free phrases like "AI powered". Why ape that?
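To make point 2 concrete, here is a minimal sketch of the kind of "old-school AI" that game pathfinding relies on: A* search on a small grid. The grid, heuristic, and function names are illustrative, not taken from any particular engine, and the sketch assumes 4-way movement with unit step cost.

```python
import heapq

def astar(grid, start, goal):
    """A* pathfinding on a 2D grid (0 = open cell, 1 = wall).

    Uses Manhattan distance as the admissible heuristic and
    returns the path as a list of (row, col) cells, or None.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Heap entries: (f = g + h, g = cost so far, cell, path to cell)
    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, cell, path = heapq.heappop(open_heap)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(open_heap,
                               (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

grid = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],  # wall forces a detour through the right column
    [0, 0, 0, 0],
]
path = astar(grid, (0, 0), (2, 0))
print(path)  # the only route goes around the wall: 9 cells
```

Nothing here learns from data or generates text; it is deterministic graph search, which is exactly why lumping it under the same "AI" label as an LLM makes the term imprecise.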

#AI #GenAI #GenerativeAI

"Apple Inc. is teaming up with startup Anthropic PBC on a new “vibe-coding” software platform that will use artificial intelligence to write, edit and test code on behalf of programmers.

The system is a new version of Xcode, Apple’s programming software, that will integrate Anthropic’s Claude Sonnet model, according to people with knowledge of the matter. Apple will roll out the software internally and hasn’t yet decided whether to launch it publicly, said the people, who asked not to be identified because the initiative hasn’t been announced.

The work shows how Apple is using AI to improve its internal workflow, aiming to speed up and modernize product development. The approach is similar to one used by companies such as Windsurf and Cursor maker Anysphere, which offer advanced AI coding assistants popular with software developers."

bloomberg.com/news/articles/20

"So this is why you keep invoking AI by accident, and why the AI that is so easy to invoke is so hard to dispel. Like a demon, a chatbot is much easier to summon than it is to rid yourself of.

Google is an especially grievous offender here. Familiar buttons in Gmail, Gdocs, and the Android message apps have been replaced with AI-summoning fatfinger traps. Android is filled with these pitfalls – for example, the bottom-of-screen swipe gesture used to switch between open apps now summons an AI, while ridding yourself of that AI takes multiple clicks.

This is an entirely material phenomenon. Google doesn't necessarily believe that you will ever want to use AI, but they must convince investors that their AI offerings are "getting traction." Google – like other tech companies – gets to invent metrics to prove this proposition, like "how many times did a user click on the AI button" and "how long did the user spend with the AI after clicking?" The fact that your entire "AI use" consisted of hunting for a way to get rid of the AI doesn't matter – at least, not for the purposes of maintaining Google's growth story.

Goodhart's Law holds that "When a measure becomes a target, it ceases to be a good measure." For Google and other AI narrative-pushers, every measure is designed to be a target, a line that can be made to go up, as managers and product teams align to sell the company's growth story, lest we all sell off the company's shares."

pluralistic.net/2025/05/02/kpi

pluralistic.net · Pluralistic: AI and the fatfinger economy (02 May 2025) · Daily links from Cory Doctorow

🧠 At Google Cloud Next, a demo showed #Gemini + #Imagen + #Veo on #VertexAI being used together to bring a creative experience to life.

✨ Enormous potential.

___  

✉️ If you want to stay up to date on these topics, subscribe to my newsletter: bit.ly/newsletter-alessiopomar

Artificial intelligence is becoming a game-changer across nearly every industry, and fashion is no exception. A recent industry report highlights how the fashion sector is evolving quickly thanks to globalization, the boom in online shopping, and the rise of advanced tech like AI.
retroworldnews.com/generative-

Retroworldnews · Generative AI in Fashion: The Tech That's Changing Everything

🧠 #Google has just published a second whitepaper on #AIAgents: essential reading for anyone who wants to dig into the concrete application of #AI agents in complex settings.

👉 Details: linkedin.com/posts/alessiopoma


"After poring through a century of varied conceptualizations, I’ll write out my current stance, half-baked as it is:

I think “AGI” is better understood through the lenses of faith, field-building, and ingroup signaling than as a concrete technical milestone. AGI represents an ambition and an aspiration; a Schelling point, a shibboleth.

The AGI-pilled share the belief that we will soon build machines more cognitively capable than ourselves—that humans won’t retain our species hegemony on intelligence for long. Many AGI researchers view their project as something like raising a genius alien child: We have an obligation to be the best parents we can, instilling the model with knowledge and moral guidance, yet understanding the limits of our understanding and control. The specific milestones aren’t important: it’s a feeling of existential weight.

However, the definition debates suggest that we won’t know AGI when we see it. Instead, it’ll play out more like this: Some company will declare that it reached AGI first, maybe an upstart trying to make a splash or raise a round, maybe after acing a slate of benchmarks. We’ll all argue on Twitter over whether it counts, and the argument will be fiercer if the model is internal-only and/or not open-weights. Regulators will take a second look. Enterprise software will be sold. All the while, the outside world will look basically the same as the day before.

I’d like to accept this anti-climactic outcome sooner than later. Decades of contention will not be resolved next year. AGI is not like nuclear weapons, where you either have it or you don’t; even electricity took decades to diffuse. Current LLMs have already surpassed the first two levels on OpenAI and DeepMind’s progress ladders. A(G)I does matter, but it will arrive—no, is already arriving—in fits and starts."

jasmi.news/p/agi

@jasmine · AGI (disambiguation) · By Jasmine Sun

"In the Patel interview, Zuckerberg cites a statistic “from working on social media for a long time” that “the average American has fewer than three friends, fewer than three people they would consider friends. And the average person has demand for meaningfully more. I think it's something like 15 friends or something.” The closest source I could find where he could be pulling this statistic from is a study commissioned by virtual therapy company Talkspace in 2024, which specifically surveyed men, and found that men have five “general” friends, three close friends and two best friends, on average.

Zuckerberg goes on to say:

“But the average person wants more connection than they have. There's a lot of concern people raise like, ’Is this going to replace real-world, physical, in-person connections?’ And my default is that the answer to that is probably not. There are all these things that are better about physical connections when you can have them. But the reality is that people just don't have as much connection as they want. They feel more alone a lot of the time than they would like.”

He said he thinks things like AI companions have a “stigma” around them now, but that society will eventually “find the vocabulary” to describe why people who turn to chatbots for socialization are “rational” for doing so.

His view of real-world connections seems to have shifted a lot in recent years, after lighting billions of dollars on fire for a failed metaverse gambit. Patel asked Zuckerberg about his role as CEO, and he said—among things like managing across projects and infrastructure—that he sees his place in the company as a tastemaker. “Then there's this question around taste and quality. When is something good enough that we want to ship it? In general, I'm the steward of that for the company,” he said."

404media.co/mark-zuckerberg-ai

404 Media · Mark Zuckerberg Thinks You Don't Have Enough Friends and His Chatbots Are the Answer · The CEO of Meta says "the average American has fewer than three friends, fewer than three people they would consider friends. And the average person has demand for meaningfully more."

"Asking scientists to identify a paradigm shift, especially in real time, can be tricky. After all, truly ground-shifting updates in knowledge may take decades to unfold. But you don’t necessarily have to invoke the P-word to acknowledge that one field in particular — natural language processing, or NLP — has changed. A lot.

The goal of natural language processing is right there on the tin: making the unruliness of human language (the “natural” part) tractable by computers (the “processing” part). A blend of engineering and science that dates back to the 1940s, NLP gave Stephen Hawking a voice, Siri a brain and social media companies another way to target us with ads. It was also ground zero for the emergence of large language models — a technology that NLP helped to invent but whose explosive growth and transformative power still managed to take many people in the field entirely by surprise.

To put it another way: In 2019, Quanta reported on a then-groundbreaking NLP system called BERT without once using the phrase “large language model.” A mere five and a half years later, LLMs are everywhere, igniting discovery, disruption and debate in whatever scientific community they touch. But the one they touched first — for better, worse and everything in between — was natural language processing. What did that impact feel like to the people experiencing it firsthand?

Quanta interviewed 19 current and former NLP researchers to tell that story. From experts to students, tenured academics to startup founders, they describe a series of moments — dawning realizations, elated encounters and at least one “existential crisis” — that changed their world. And ours."

quantamagazine.org/when-chatgp

Quanta Magazine · When ChatGPT Broke an Entire Field: An Oral History · Researchers in "natural language processing" tried to tame human language. Then came the transformer.

"Usually, AdSense ads appear in search results and are scattered around websites. Google ran a small test of chatbot ads late last year, partnering with select AI startups, including AI search apps iAsk and Liner.

The testing must have gone well because Google is now allowing more chatbot makers to sign up for AdSense. "AdSense for Search is available for websites that want to show relevant ads in their conversational AI experiences," said a Google spokesperson.

If people continue shifting to using AI chatbots to find information, this expansion of AdSense could help prop up profits. There's no hint of advertising in Google's own Gemini chatbot or AI Mode search, but the day may be coming when you won't get the clean, ad-free experience at no cost."

arstechnica.com/ai/2025/05/goo

Ars Technica · Google is quietly testing ads in AI chatbots · By Ryan Whitwam

"I think there is a real need for a book on actual vibe coding: helping people who are not software developers—and who don’t want to become developers—learn how to use vibe coding techniques safely, effectively and responsibly to solve their problems.

This is a rich, deep topic! Most of the population of the world are never going to learn to code, but thanks to vibe coding tools those people now have a path to building custom software.

Everyone deserves the right to automate tedious things in their lives with a computer. They shouldn’t have to learn programming in order to do that. That is who vibe coding is for. It’s not for people who are software engineers already!

There are so many questions to be answered here. What kind of projects can be built in this way? How can you avoid the traps around security, privacy, reliability and a risk of over-spending? How can you navigate the jagged frontier of things that can be achieved in this way versus things that are completely impossible?

A book for people like that could be a genuine bestseller! But because three authors and the staff of two publishers didn’t read to the end of the tweet we now need to find a new buzzy term for that, despite having the perfect term for it already."

simonwillison.net/2025/May/1/n

Simon Willison’s Weblog · Two publishers and three authors fail to understand what “vibe coding” means · Vibe coding does not mean “using AI tools to help write code”. It means “generating code with AI without caring about the code that is produced”. See Not all AI-assisted …

The thesis that each unauthorized use of a copyrighted work amounts to a lost sale is going down the drain...

"At times, it sounded like the case was the authors’ to lose, with Chhabria noting that Meta was “destined to fail” if the plaintiffs could prove that Meta’s tools created similar works that cratered how much money they could make from their work. But Chhabria also stressed that he was unconvinced the authors would be able to show the necessary evidence. When he turned to the authors’ legal team, led by high-profile attorney David Boies, Chhabria repeatedly asked whether the plaintiffs could actually substantiate accusations that Meta’s AI tools were likely to hurt their commercial prospects. “It seems like you’re asking me to speculate that the market for Sarah Silverman’s memoir will be affected,” he told Boies. “It’s not obvious to me that is the case.”

When defendants invoke the fair use doctrine, the burden of proof shifts to them to demonstrate that their use of copyrighted works is legal. Boies stressed this point during the hearing, but Chhabria remained skeptical that the authors’ legal team would be able to successfully argue that Meta could plausibly crater their sales. He also appeared lukewarm about whether Meta’s decision to download books from places like LibGen was as central to the fair use issue as the plaintiffs argued it was. “It seems kind of messed up,” he said. “The question, as the courts tell us over and over again, is not whether something is messed up but whether it’s copyright infringement.”

A ruling in the Kadrey case could play a pivotal role in the outcomes of the ongoing legal battles over generative AI and copyright."

wired.com/story/meta-lawsuit-c

WIRED · A Judge Says Meta’s AI Copyright Case Is About ‘the Next Taylor Swift’ · By Kate Knibbs

"The strong interpretation of this graph is that it’s exactly what one would expect to see if firms replaced young workers with machines. As law firms leaned on AI for more paralegal work, and consulting firms realized that five 22-year-olds with ChatGPT could do the work of 20 recent grads, and tech firms turned over their software programming to a handful of superstars working with AI co-pilots, the entry level of America’s white-collar economy would contract. The chaotic Trump economy could make things worse. Recessions can accelerate technological change, as firms use the downturn to cut less efficient workers and squeeze productivity from whatever technology is available. And even if employers aren’t directly substituting AI for human workers, high spending on AI infrastructure may be crowding out spending on new hires.

Luckily for humans, though, skepticism of the strong interpretation is warranted. For one thing, supercharged productivity growth, which an intelligence explosion would likely produce, is hard to find in the data. For another, a New York Fed survey of firms released last year found that AI was having a negligible effect on hiring. Karin Kimbrough, the chief economist at LinkedIn, told me she’s not seeing clear evidence of job displacement due to AI just yet. Instead, she said, today’s grads are entering an uncertain economy where some businesses are so focused on tomorrow’s profit margin that they’re less willing to hire large numbers of entry-level workers, who “often take time to learn on the job.”"

theatlantic.com/economy/archiv

The Atlantic · Something Alarming Is Happening to the Job Market · By Derek Thompson