Wulfy<p><span class="h-card" translate="no"><a href="https://tech.lgbt/@gnat" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>gnat</span></a></span> </p><p>So I code with ChatGpt/Claude.</p><p>First, it's not like ordinary coding.<br>If you expect to vibe code, you are going to have a very bad time.</p><p>Second. The more definitions you give the <a href="https://infosec.exchange/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a>, the better.<br>Give parameters what you want to expect.</p><p>Third, spec it. Give as much specifications as you can. You want that text window to scroll?<br>Propose an array or a list structure.<br>Leave as little to imagination as possible, the thing has very little of it and it will try to please you hard, it will make shit up.</p><p>Fourth. Give overall instructions. I usually say something along the lines of "Do not code unless clear instructions are given". Else the thing will launch into code at the first prompt.</p><p>Fifth, I used to get it to Pseudocode. Now I just usually say "Restate the problem". Just to make sure the machine understands what it's doing.</p><p>Checkpoint. When you have code that works, designated it as "Version X.1" because inevitably the machine will fuck it, esp if you're introducing a notable change.</p><p>Seventh, learn <a href="https://infosec.exchange/tags/promptengineering" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>promptengineering</span></a>, most people have NFI how to use the <a href="https://infosec.exchange/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a> esp. if they are naturally hostile towards the tech.<br>E.g. If I really want the model to pay attention, I will say something like: DIRECTIVE: Blah blah.</p><p>Lastly, this should go without saying, the free models suck, pay the broligarch tax for the smarter engine.</p><p>It helps if you understand a little how LLMs work, today for example I gave a prompt to just keep latest checkpoint and specs and flush everything else from the session context as it tied itself into knots </p><p>There are other tips.</p><p><a href="https://infosec.exchange/tags/aiprogramming" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>aiprogramming</span></a> </p><p>P.S. If this is not your sport, just mute and move on, don't be rude</p>