#aisearch


"It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It’s also wrong, at least in the sense that the overview creates the impression that these are common phrases and not a bunch of random words thrown together. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with a biblical derivation, it’s also a tidy encapsulation of where generative AI still falls short.

As a disclaimer at the bottom of every AI Overview notes, Google uses “experimental” generative AI to power its results. Generative AI is a powerful tool with all kinds of legitimate practical applications. But two of its defining characteristics come into play when it explains these invented phrases. First is that it’s ultimately a probability machine; while it may seem like a large-language-model-based system has thoughts or even feelings, at a base level it’s simply placing one most-likely word after another, laying the track as the train chugs forward. That makes it very good at coming up with an explanation of what these phrases would mean if they meant anything, which again, they don’t."
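To make the "probability machine" point concrete, here is a toy sketch (purely illustrative, nothing like a real LLM in scale or method): a tiny next-word sampler that, at each step, simply picks one likely word after another.

```python
# Toy next-word sampler (illustrative only; real LLMs use neural networks
# over huge vocabularies, but the word-by-word sampling idea is the same).
import random

# Hypothetical word-to-next-word probabilities, made up for this example.
next_word_probs = {
    "never": {"throw": 0.6, "lick": 0.4},
    "throw": {"a": 1.0},
    "lick": {"a": 1.0},
    "a": {"poodle": 0.5, "badger": 0.5},
    "poodle": {"at": 1.0},
    "badger": {"twice": 1.0},
    "at": {"a": 1.0},
}

def generate(start, steps=6):
    """Lay the track as the train chugs forward: sample one word at a time."""
    words = [start]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("never"))  # e.g. "never throw a poodle at a badger"
```

Nothing in the sampler knows whether the resulting phrase means anything; it only knows which word tends to follow which, which is exactly why an explanation of a nonsense proverb can still come out sounding fluent.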

wired.com/story/google-ai-over

WIRED · ‘You Can’t Lick a Badger Twice’: Google Failures Highlight a Fundamental AI Flaw
By Brian Barrett

"The bizarre replies are the perfect distillation of one of AI's biggest flaws: rampant hallucinations. Large language model-based AIs have a long and troubled history of rattling off made-up facts and even gaslighting users into thinking they were wrong all along.

And despite AI companies' extensive attempts to squash the bug, their models continue to hallucinate. Even OpenAI's latest reasoning models, dubbed o3 and o4-mini, tend to hallucinate even more than their predecessors, showing that the company is actually headed in the wrong direction.

Google's AI Overviews feature, which the company rolled out in May of last year, still has a strong tendency to hallucinate facts as well, making it far more of an irritating nuisance than a helpful research assistant for users."

futurism.com/google-ai-overvie

Futurism · "You Can’t Lick a Badger Twice": Google's AI Is Making Up Explanations for Nonexistent Folksy SayingsBy Victor Tangermann

"For the past two and a half years the feature I’ve most wanted from LLMs is the ability to take on search-based research tasks on my behalf. We saw the first glimpses of this back in early 2023, with Perplexity (first launched December 2022, first prompt leak in January 2023) and then the GPT-4 powered Microsoft Bing (which launched/cratered spectacularly in February 2023). Since then a whole bunch of people have taken a swing at this problem, most notably Google Gemini and ChatGPT Search.

Those 2023-era versions were promising but very disappointing. They had a strong tendency to hallucinate details that weren’t present in the search results, to the point that you couldn’t trust anything they told you.

In this first half of 2025 I think these systems have finally crossed the line into being genuinely useful."

simonwillison.net/2025/Apr/21/

Simon Willison’s Weblog · AI assisted search-based research actually works now
Continued thread

#PennedPossibilities 649 2/2 — What research did you conduct for your WIP, and did you uncover anything surprising or fascinating?

This answer, however, should interest any authors wanting to learn something from another author's search behavior. I finally got completely fed up with the substandard results from DuckDuckGo and the on-again, off-again AI search creeping into Google results, even with &udm=14.
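For context, udm=14 is the URL parameter that switches Google to its plain "Web" results view, which currently omits the AI Overview panel. A minimal sketch of building such a URL (the query is only an example echoing this post):

```python
# Build a Google search URL with udm=14 (plain "Web" results, no AI Overview).
from urllib.parse import urlencode

query = "words to describe how to rock a baby"  # example query
url = "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})
print(url)  # ...search?q=words+to+describe+how+to+rock+a+baby&udm=14
```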

A few days ago I decided to research paid search. All FREE search, it goes without saying, monetizes your behavior, time, or attention, so I understand it isn't really free. How much do you make per hour? When I search for anything that could be construed as a product or service someone could SELL, it's impossible to find answers. Try looking for words to describe how to rock a baby, for example. I'm sure you have a slew of searches you've given up on.

I am trialing kagi.com. I am NOT advertising it; I'm not endorsing it. I've only tried two searches of the 100 allocated to me so far. However, those two have been so full of useful results that I'm still mining them the next day.

I'll report back after I use it more.

[Author retains copyright (c)2025 R.S.]

#BoostingIsSharing

#gender #fiction #writer #author
#writing #writingcommunity #writersOfMastodon #writers
#RSdiscussion
#search #kagi #google #duckDuckGo #ai #aisearch

"In March 2024, website owner Morgan McBride was posing for photos in her half-renovated kitchen for a Google ad celebrating the ways the search giant had helped her family’s business grow.

But by the time the ad ran about a month later, traffic from Google had fallen more than 70%, McBride said. Charleston Crafted, which features guides on do-it-yourself home improvement projects, had weathered algorithm changes and updates in the past; this time, it didn’t recover. McBride suspected people were getting more of their renovation advice from the artificial intelligence answers at the top of Google search.

The now-ubiquitous AI-generated answers — and the way Google has changed its search algorithm to support them — have caused traffic to independent websites to plummet, according to Bloomberg interviews with 25 publishers and people who work with them. That’s disrupting a delicate symbiotic relationship that’s existed for years: if businesses create good content, Google sends them traffic.

Many of the publishers said they have to either shut down or reinvent their distribution strategy, a cycle experts say could eventually degrade the quality of information Google can access for its search results — and to feed its AI answers, which have still at times contained inaccuracies that have made them a poor substitute for publishers’ content. For home-renovation questions, Google’s AI may give advice that’s unsafe or simply inaccurate, such as recommending specific products that don’t exist, McBride said."

bloomberg.com/news/articles/20

Continued thread

🧵 …the view that AI (as mentioned several times above) is a tool for misinformation is slowly gaining scientific support.

»AI search engines cite incorrect sources at an alarming 60% rate, study says:
CJR study shows AI search services misinform users and ignore publisher exclusion requests.«

🎯 arstechnica.com/ai/2025/03/ai-

[Image: a dartboard with only a few darts hitting it and many misses beside it.]
Ars Technica · AI search engines cite incorrect news sources at an alarming 60% rate, study says
By Benj Edwards

"Building on our previous research, the Tow Center for Digital Journalism conducted tests on eight generative search tools with live search features to assess their abilities to accurately retrieve and cite news content, as well as how they behave when they cannot.

We found that…

- Chatbots were generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.
- Premium chatbots provided more confidently incorrect answers than their free counterparts.
- Multiple chatbots seemed to bypass Robot Exclusion Protocol preferences.
- Generative search tools fabricated links and cited syndicated and copied versions of articles.
- Content licensing deals with news sources provided no guarantee of accurate citation in chatbot responses.

Our findings were consistent with our previous study, proving that our observations are not just a ChatGPT problem, but rather recur across all the prominent generative search tools that we tested."
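For reference, the Robots Exclusion Protocol preferences mentioned in the findings live in a site's robots.txt file; a hypothetical entry asking one crawler to stay away looks roughly like this (the bot name is illustrative, not any vendor's real crawler):

```
# Hypothetical robots.txt entry: ask one crawler not to fetch any pages.
User-agent: ExampleAIBot
Disallow: /
```

The Tow Center's point is that several chatbots appeared to ignore opt-outs expressed this way.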

cjr.org/tow_center/we-compared

Columbia Journalism Review · AI Search Has A Citation Problem
We Compared Eight AI Search Engines. They’re All Bad at Citing News.