Wulfy<p>The <a href="https://infosec.exchange/tags/o3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>o3</span></a> model from Open <a href="https://infosec.exchange/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> is now <a href="https://infosec.exchange/tags/superhuman" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>superhuman</span></a> (albeit "only" by a couple of percent).</p><p>That's as evaluated by the ARC-AGI benchmark, which was expressly designed to be easy for humans and hard for AI.</p><p>So if you belong to the poorly informed <a href="https://infosec.exchange/tags/aihate" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>aihate</span></a> crowd, who believe that AI is a scam, that it's reached its peak, or that it will never be useful...</p><p>...you need to muscle up... Because we are likely in the final years, if not months, of human supremacy.<br>Better start burning those data centres, because soon your cognitive power will compare to the model's the way a chicken's compares to a human's 🤡</p><p>There is an upside, if you can call it that: the model "only" performs this well with massive compute...<br>But that's only a problem (IMHO) if you think an F1 engine outperforms a family sedan merely because of expensive engineering.</p><p><a href="https://infosec.exchange/tags/agi" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>agi</span></a> <a href="https://infosec.exchange/tags/llm" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>llm</span></a></p>