Paco Hope #resist

I have seen a lot of efforts to use an #LLM to create a #ThreatModel. I have some insights.

Attempts at #AI #ThreatModeling tend to do 3 things wrong:

1. They assume the user's input is both complete and correct. The LLM (in the implementations I've seen) never asks "are you sure?" and never prompts the user with "you haven't told me X, what about X?" (One way to force that elicitation is sketched after this post.)
2. Lots of teams treat a threat model as a deliverable. Like we go build our code, get ready to ship, and then "oh, shit! Security wants a threat model. Quick, go make one." So it's not something that informs development choices *during development*. It's an afterthought that gets built just prior to #AppSec review.
3. Lots of people think you can do an adequate threat model with only technical artifacts (code, architecture, data flow, documentation, etc.). There's business context that needs to be part of every decision, and teams are just ignoring it.

1/n
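Here is a minimal sketch of the elicitation idea from point 1, in Python. The `REQUIRED_CONTEXT` checklist and `missing_context` helper are hypothetical, not from any real tool: the point is that before any model call, the tool flags inputs the user hasn't supplied and turns them into "you haven't told me X" questions, instead of silently assuming completeness.

```python
# Hypothetical sketch: make the tool ask "what about X?" before it ever
# drafts a threat model, rather than assuming the input is complete.

# Context every threat model needs; "business_impact" covers point 3 above.
# These keys and questions are illustrative assumptions, not a standard.
REQUIRED_CONTEXT = {
    "assets": "What data or systems are worth protecting, and why?",
    "trust_boundaries": "Where does data cross between trust zones?",
    "entry_points": "How do users and other systems reach the application?",
    "business_impact": "What does the business lose if this is compromised?",
}

def missing_context(user_input: dict) -> list[str]:
    """Return clarifying questions for anything the user left out."""
    return [
        question
        for key, question in REQUIRED_CONTEXT.items()
        if not user_input.get(key)  # absent or empty counts as missing
    ]

# Example: a typical incomplete submission.
user_input = {
    "assets": "customer PII in Postgres",
    "entry_points": "public REST API",
}

questions = missing_context(user_input)
if questions:
    # Push back before generating anything, instead of filling gaps itself.
    print("Before I draft a threat model, I need a few answers:")
    for q in questions:
        print(f"- {q}")
else:
    print("Context looks complete; only now hand it to the LLM.")
```

The checklist itself isn't the point; what matters is that the tool refuses to proceed on incomplete input rather than hallucinating the gaps.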