More of this please.

Tiera Tanksley published a paper pushing back on the 'only students with low AI literacy fear AI' narrative that we are starting to see and hear. She worked with Black high schoolers, focusing on data ethics.

"That is, they went from 'I'm terrified of AI becoming sentient and killing us' to 'I no longer think AI is sentient, but I'm very concerned that these technologies are doing irreparable harm to people and to the environment.'"

Paper: tieratanksley.com/_files/ugd/e

Matt McIrvin

@skinnylatte the dangers I worry about almost all boil down to the gap between what actually existing "AI" can do, and what people think it can do. People ranging from children to venture capitalists. LLMs seem to have a powerful ability to generate misplaced trust.

@skinnylatte My wife had an interesting observation along those lines, which is that managers and investors seem to see AI as a tool to give cheap junior employees the productive power of more senior ones; but really junior workers are the ones least equipped with the judgment to see whether the AI is leading them down a wrong path. What they need is something the AI doesn't have.

@mattmcirvin yes, and I am personally most wary of LLMs for low-end productivity gains. The motivation for doing that is, of course, to replace or cheapen labor, and at this point I really doubt the tech will be good enough even for that. But capitalists won't care

@mattmcirvin @skinnylatte I have considered looking into whether AI would make me as productive as, or more productive than, hiring an intern to take up some of the easy stuff. It would leave me more time for planning and architecture.

It's possible it could serve that purpose OK. The thing is that it would never be any better than an intern and would oftentimes be worse. So I don't know.

@crazyeddie @skinnylatte A generalized LLM like ChatGPT is actually pretty good at generating repetitive boilerplate code, the same kind of thing you'd do with a code template if you had one, only it can just take a text prompt and go. Of course, that's just step one, getting past the blank screen with fewer typos than a human would probably make. It's none of the hard parts of building a software product.

But you still need enough knowledge to vet everything that comes out of it. If you ask it to, say, write some code to use a certain API to do something, and the API can't actually do that, it *will* just make up some bullshit that looks very convincingly like how the API would do it if it could. And that kind of thing can be maddeningly deceptive, especially if you're not wary of it.
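As a minimal sketch of the vetting problem described above (my own illustration, not something from the thread): one small mechanical check you can run on generated code is to parse it and flag calls to attributes the real module doesn't actually expose. The `flag_unknown_calls` helper and the example snippet here are hypothetical; this catches only the crudest made-up-API case, and no static check replaces knowing the API yourself.

```python
# Sketch: flag calls like `mod.something(...)` in generated source where the
# real module `mod` has no attribute `something` -- the "made-up API" smell.
import ast
import json  # stand-in for whatever real API the LLM was asked to use

def flag_unknown_calls(source: str, module) -> list[str]:
    """Return names of module-level attribute calls that `module` lacks."""
    mod_name = module.__name__
    unknown = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == mod_name
                and not hasattr(module, node.func.attr)):
            unknown.append(node.func.attr)
    return unknown

# Hypothetical LLM output: json.loads is real, json.validate_schema is not.
generated = "data = json.loads(text)\njson.validate_schema(data, schema)\n"
print(flag_unknown_calls(generated, json))  # -> ['validate_schema']
```

Of course this only spots names that don't exist at all; it says nothing about a real function being used in a way that can't do what was asked, which is the more deceptive case.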