I tried https://notebooklm.google on two papers of mine. It's advertised as "Your Personalized AI Research Assistant". The short summary: the tool is exactly as good as an insolent, incompetent science journalist. When confronted with its own factual mistakes, it tries to blame the paper instead. (1/3)
@andrejbauer But even if it were good, why would I speak to it instead of my coworkers or colleagues? Maybe I am just growing old and jaded about AI tools, but I don't see the use cases for most of them.
@antopatriarca Because if it were good you could tell it to read 10000 papers and tell you which ones are relevant for the problem you're trying to solve.
@andrejbauer @antopatriarca I am very concerned about what happens when time-poor reviewers use this as a tool to help them. My students have already had to deal with a number of AI-generated reviews, and the responses from editors/area chairs so far can best be described as crickets...
@andrejbauer Is that a real or hypothetical scenario? Do you really need to read so many papers to see which ones are related to your problem? In my experience the people working on the same problem domain are usually quite limited in number, and they all know each other. But maybe I'm biased and it depends on the field. The AI field is surely overcrowded right now. Let's assume we really have that many papers. How many of them are really worth publishing? How many of them are actually saying the same thing in different words? Is this tool actually helping with the real problem (IMHO too many papers) or making it worse?
@antopatriarca It's a realistic scenario, except that I won't be the one doing the reading. It's essentially a better Google search engine: the AI should read all the papers and then just tell me which ones I should read.
@andrejbauer Yes, as a search engine it could be useful.