#LLMs

🔍 Large-Scale Text Analysis & Cultural Change

In their talk at the workshop “Large Language Models for the HPSS” at @tuberlin, Pierluigi Cassotti and Nina Tahmasebi presented a multi-method approach to studying cultural and societal change through large-scale text analysis.

By combining close reading with computational techniques, including but not limited to #LLMs, they demonstrated how diverse tools can be integrated to uncover shifts in language. #DigitalHumanities

The insatiable hunger to feed #LLMs and #AI is parasitically draining the commons and public internet. Bandwidth costs are spiking as crawlers take data for training and information. For Wikipedia, the lack of attribution means no visitors, no donors, just cost. The #ethics of AI are failing here.

I saw Tim Karr on Bluesky suggest that AIs should pay fees or a tax (should that be tariffs?) into a fund that supports public content. Bot-defense services like Cloudflare and Fastly are evolving to handle crawlers. In #identity, the implications for #AgenticAI, #AI, and #NHI are vast.

diff.wikimedia.org/2025/04/01/

Diff · How crawlers impact the operations of the Wikimedia projects — Since the beginning of 2024, the demand for the content created by the Wikimedia volunteer community – especially for the 144 million images, videos, and other files on Wikimedia Commons – has grow…

"Prompt Engineering" for AI is today's version of "Don't hold it that way" for the iPhone 4.

Users are misassigned blame for fundamental flaws in the technology and are instructed to adopt behavioural workarounds. These improvised habits lack the causal power to fix the underlying problems, but they reinforce the notion that the new tech is superior to the tech it is trying to replace or "disrupt". Furthermore, users are taught to "just keep trying and you'll get it right", without questioning whether the new tech itself is the problem, or whether it can ever deliver on its promises.

A crucial difference between early smartphones and the hope that LLMs are a route to "Thinking Machines": later phone models matured the engineering of antennas and improved mobile reception, whereas LLMs are a dead end that can never lead to real Artificial Intelligence.

This can be summarised by the AM/FM Principle: Actual Machines in contrast to Fucking Magic.

#Prompt -> #Script -> Recently Used

A field report on #Vibe-Coding, trust in #LLMs, and how a single #Prompt creates real value.

On Windows, the taskbar context menu offers quick access to recently used files – a feature I use often. GNOME unfortunately has nothing comparable, so I built the feature myself – with #Bash and a little help from #ChatGPT.

gnulinux.ch/prompt-skript-zule

GNU/Linux.ch · Prompt -> Script -> Recently Used — A field report on vibe coding, trust in LLMs, and how a single prompt creates real value.
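The post's approach can be sketched in a few lines of Bash. GNOME (via GTK) records recently opened files in an XBEL bookmark file; the path and the quick-and-dirty parsing below are assumptions based on the standard recent-files location, not the author's actual script:

```shell
#!/usr/bin/env bash
# Sketch: list recently used files recorded by GNOME.
# GTK applications append entries to this XBEL bookmark file
# (path is the standard default; adjust if yours differs).
RECENT_FILE="${XDG_DATA_HOME:-$HOME/.local/share}/recently-used.xbel"

# Extract file:// hrefs, strip the scheme, decode %20 escapes
# (a simplification: full URI decoding needs more than this),
# and print up to ten entries.
grep -o 'href="file://[^"]*"' "$RECENT_FILE" \
  | sed -e 's|^href="file://||' -e 's|"$||' -e 's|%20| |g' \
  | head -n 10
```

From here, a wrapper could feed the list to a menu tool such as zenity or rofi to approximate the Windows taskbar jump list.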

"In a new joint study, researchers with OpenAI and the MIT Media Lab found that this small subset of ChatGPT users engaged in more "problematic use," defined in the paper as "indicators of addiction... including preoccupation, withdrawal symptoms, loss of control, and mood modification."

To get there, the MIT and OpenAI team surveyed thousands of ChatGPT users to glean not only how they felt about the chatbot, but also to study what kinds of "affective cues," which was defined in a joint summary of the research as "aspects of interactions that indicate empathy, affection, or support," they used when chatting with it.

Though the vast majority of people surveyed didn't engage emotionally with ChatGPT, those who used the chatbot for longer periods of time seemed to start considering it to be a "friend." The survey participants who chatted with ChatGPT the longest tended to be lonelier and get more stressed out over subtle changes in the model's behavior, too."

futurism.com/the-byte/chatgpt-

Futurism · Something Bizarre Is Happening to People Who Use ChatGPT a Lot — By Noor Al-Sibai

🔴 💻 **Are chatbots reliable text annotators? Sometimes**

“_Given the unreliable performance of ChatGPT and the significant challenges it poses to Open Science, we advise caution when using ChatGPT for substantive text annotation tasks._”

Ross Deans Kristensen-McLachlan, Miceal Canavan, Marton Kárdos, Mia Jacobsen, Lene Aarøe, Are chatbots reliable text annotators? Sometimes, PNAS Nexus, Volume 4, Issue 4, April 2025, pgaf069, doi.org/10.1093/pnasnexus/pgaf.

#OpenAccess #OA #Article #AI #ArtificialIntelligence #LargeLanguageModels #LLMS #Chatbots #Technology #Tech #Data #Annotation #Academia #Academics @ai

Happy birthday to Cognitive Design for Artificial Minds (lnkd.in/gZtzwDn3) that was released 4 years ago!

Since then its ideas have been presented and discussed widely in AI, Cognitive Science, and Robotics research, and - nowadays - both the possibilities and the limitations of #LLMs, #GenerativeAI and #ReinforcementLearning (already envisioned and discussed in the book) have become common topics of research interest in the AI community and beyond.
Similarly, the evaluation of current AI systems - in human-like and human-level terms - has become a critical theme, related to the problem of anthropomorphic interpretation of AI output (see e.g. lnkd.in/dVi9Qf_k).
Book reviews have been published in ACM Computing Reviews (2021): lnkd.in/dWQpJdkV and in Argumenta (2023): lnkd.in/derH3VKN

I have been invited to present the content of the book at over 20 scientific events - international conferences and Ph.D. schools - in the US, China, Japan, Finland, Germany, Sweden, France, Brazil, Poland, Austria and, of course, Italy.

Some news I am happy to share: Routledge/Taylor & Francis contacted me a few weeks ago about a second edition! Stay tuned!

The #book is available in many webstores:
- Routledge: lnkd.in/dPrC26p
- Taylor & Francis: lnkd.in/dprVF2w
- Amazon: lnkd.in/dC8rEzPi

@academicchatter @cognition
#AI #minimalcognitivegrid #CognitiveAI #cognitivescience #cognitivesystems

STP: Self-play LLM theorem provers with iterative conjecturing and proving. ~ Kefan Dong, Tengyu Ma. arxiv.org/abs/2502.00212

arXiv.org · STP: Self-play LLM Theorem Provers with Iterative Conjecturing and Proving

A fundamental challenge in formal theorem proving by LLMs is the lack of high-quality training data. Although reinforcement learning or expert iteration partially mitigates this issue by alternating between LLMs generating proofs and finetuning them on correctly generated ones, performance quickly plateaus due to the scarcity of correct proofs (sparse rewards). To keep improving the models with limited data, we draw inspiration from mathematicians, who continuously develop new results, partly by proposing novel conjectures or exercises (which are often variants of known results) and attempting to solve them. We design the Self-play Theorem Prover (STP) that simultaneously takes on two roles, conjecturer and prover, each providing training signals to the other. The conjecturer is trained iteratively on previously generated conjectures that are barely provable by the current prover, which incentivizes it to generate increasingly challenging conjectures over time. The prover attempts to prove the conjectures with standard expert iteration. We evaluate STP with both the Lean and Isabelle formal verifiers. With 51.3 billion tokens generated during training in Lean, STP proves 28.5% of the statements in the LeanWorkbook dataset, doubling the previous best result of 13.2% achieved through expert iteration. The final model achieves state-of-the-art performance among whole-proof generation methods on miniF2F-test (65.0%, pass@3200), ProofNet-test (23.9%, pass@3200) and PutnamBench (8/644, pass@3200). We release our code, model, and dataset at https://github.com/kfdong/STP.
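STP's training signal comes from the formal verifier: a generated conjecture only contributes if the prover's proof actually checks. A toy Lean 4 illustration of the kind of statement/proof pair such a verifier would accept (the lemmas are illustrative, not drawn from the STP dataset):

```lean
-- A known result, as it might appear in the training data:
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A conjectured variant of the known result, as the conjecturer
-- might propose it; the prover must supply a proof the Lean
-- kernel accepts, here via the linear-arithmetic tactic `omega`:
theorem conj_variant (a b c : Nat) : a + (b + c) = c + (b + a) := by
  omega
```

In STP, the conjecturer is rewarded for variants that are barely provable by the current prover, so the difficulty of accepted statements ratchets up over successive iterations.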