Eudaimon ꙮ🤖 🧠 This is a VERY succulent post about <a class="hashtag" href="https://fe.disroot.org/tag/ai" rel="nofollow noopener noreferrer" target="_blank">#AI</a>, future trends, and dangers. First, an entire website dedicated to the current status and more-than-probable future developments and achievements of AI, by Leopold Aschenbrenner (who used to work on the Superalignment team at OpenAI, so he knows the issue deeply). The site is <a href="https://situational-awareness.ai/" rel="nofollow noopener noreferrer" target="_blank">https://situational-awareness.ai/</a> and, so far, I've only read chapter 1, "From GPT-4 to AGI: Counting the OOMs" (Orders Of Magnitude), and it is already a shocker and an eye-opener. tl;dr: there is a good chance that by 2027 we will have so-called <a class="hashtag" href="https://fe.disroot.org/tag/agi" rel="nofollow noopener noreferrer" target="_blank">#AGI</a>, or Artificial General Intelligence. As in real intelligence, way beyond <a class="hashtag" href="https://fe.disroot.org/tag/chatgpt4" rel="nofollow noopener noreferrer" target="_blank">#ChatGPT4</a>.<br><br>This could look like "yay, unicorns!", but there are grave problems behind it. One of the main ones: <a class="hashtag" href="https://fe.disroot.org/tag/alignment" rel="nofollow noopener noreferrer" target="_blank">#Alignment</a>, that is, restricting the AI to do what we would like it to do and not, say, exterminate humans or make any other catastrophic decision. This article argues it is outright impossible: <a href="https://www.mindprison.cc/p/ai-alignment-why-solving-it-is-impossible" rel="nofollow noopener noreferrer" target="_blank">https://www.mindprison.cc/p/ai-alignment-why-solving-it-is-impossible</a> Not just hard, but impossible. As in: <br><br>«“Alignment, which we cannot define, will be solved by rules on which none of us agree, based on values that exist in conflict, for a future technology that we do not know how to build, which we could never fully understand, must be provably perfect to prevent unpredictable and untestable scenarios for failure, of a machine whose entire purpose is to outsmart all of us and think of all possibilities that we did not.”»<br><br>This is analyzed in depth in that article (which I haven't fully read; I felt the urge to write this post first). <br><br>Now, it is also very interesting and fearsome to read, on the first site I mentioned (<a href="https://situational-awareness.ai/" rel="nofollow noopener noreferrer" target="_blank">https://situational-awareness.ai/</a>), the chapter called «Lock Down the Labs: Security for AGI». He himself says "We’re counting way too much on luck here." This is not to be taken lightly, I'd say. <br><br>All this said, I think he presents a very naive view of the world in one of the site's chapters, "The Free World Must Prevail". He seems to think "liberal democracies" (what I'd call "global North" states) are a model of freedom and respect for human rights, and I don't think so at all. That there are worse places, sure. But also: these "liberal democracies" rely on a very heavy externalization of criminal abuses of power, which would seem to have nothing to do with them, but I'd say has everything to do with them: slavery, natural-resource exploitation, pollution and waste. And progressively this externalization is coming home, where more and more people are being left destitute, and where the fraction of miserable and exploited people is growing larger and larger.
At the same time, there exists a very powerful propaganda machine that generates a very comforting discourse and story for the citizens of these countries, so we remain oblivious to the real pillars of our system (who is aware of the revoltingly horrendous conditions of most animals in industrial farming, for example? Most of us just get to see nicely packaged goods in the supermarkets, and that's the image we extrapolate to the whole chain of production). I guess that, despite his brilliant intelligence, he has fallen prey to such propaganda (which, notably, uses emotional levers and other cognitive biases that bypass reason).<br><br>Finally, Robert Miles has published a new video after more than a year of silence (I feared depression!): <a href="https://www.youtube.com/watch?v=2ziuPUeewK0" rel="nofollow noopener noreferrer" target="_blank">https://www.youtube.com/watch?v=2ziuPUeewK0</a>, which is yet another call to SERIOUSLY CONSIDER AI SAFETY FFS. If you haven't checked out his channel, he has very funny and also bright, informative and concise videos about <a class="hashtag" href="https://fe.disroot.org/tag/aisafety" rel="nofollow noopener noreferrer" target="_blank">#AIsafety</a> and, in particular, <a class="hashtag" href="https://fe.disroot.org/tag/aialignment" rel="nofollow noopener noreferrer" target="_blank">#AIAlignment</a>. Despite the humour, he is clear about the enormous dangers of AI misalignment.<br><br>There you go, this has been a long post, but I think it's important that we all see where this is going. As for the "what could I do?" part... shit. I really can't tell. As Leopold says, AI research is (unlike some years ago) currently run by private, opaque and proprietary AI labs, funded by big capital, which will only increase inequality and shift the power balance even further toward an ever-tinier elite (tiny in numbers, huge in power). I can't see how this might end well, I'm sorry. Maybe the only things that might stop this evolution are natural or man-made disasters such as the four horsemen of the apocalypse. Am I being too pessimistic here? Is there no hope? Well, still, I'll go on correcting my students' exams now, after writing this, and I'll keep trying to make their lives, and those of the people around me, more wonderful (despite the exams XD). I refuse to capitulate: I want to keep sending signals of the wonderfulness of human life, and of all life, despite the state of the world (or part of it: the world is much more than just wars and extinction threats, though those exist).<br><br>Please boost if you found this long post interesting and worthwhile.