
#RTX3090

Winbuzzer
Google Releases Gemma 3 QAT AI Models for Consumer GPUs

#AI #AIModels #GoogleAI #Gemma3 #LLM #OpenSourceAI #GPUs #QAT #Quantization #DeepLearning #MachineLearning #NVIDIA #RTX3090 #Kaggle

https://winbuzzer.com/2025/04/20/google-releases-gemma-3-qat-ai-models-for-consumer-gpus-xcxwbn/
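
For readers who want to try these on a single consumer card: the QAT releases are distributed as int4 (q4_0) GGUF checkpoints, so they can be loaded with llama.cpp-based tooling. Below is a minimal sketch using llama-cpp-python; the repo id and filename pattern are assumptions based on the Hugging Face naming scheme, not confirmed by the post.

```python
# Minimal sketch: load a Gemma 3 QAT (int4 GGUF) checkpoint on a single consumer GPU
# with llama-cpp-python. Repo id and filename pattern are assumptions; check the
# actual Gemma 3 QAT listings on Hugging Face / Kaggle before running.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="google/gemma-3-12b-it-qat-q4_0-gguf",  # assumed repo id
    filename="*q4_0.gguf",                          # assumed filename pattern
    n_gpu_layers=-1,   # offload all layers to the GPU (e.g. an RTX 3090's 24 GB VRAM)
    n_ctx=8192,        # context window; lower this if you run out of VRAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what does quantization-aware training do?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```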
WinFuture.de
With a bit of luck, Frame Generation could soon land on the #GeForce #RTX3090 and the other #GPUs of #Nvidia's 3000 series. Statements the company made in a recent interview are now raising hopes. https://winfuture.de/news,148274.html?utm_source=Mastodon&utm_medium=ManualStatus&utm_campaign=SocialMedia
Benjamin Carr, Ph.D. 👨🏻‍💻🧬
#Benchmarks show even an old #Nvidia #RTX3090 is enough to serve #LLM to thousands
For 100 concurrent users, the card delivered 12.88 tokens per second, just slightly faster than average human reading speed
https://www.theregister.com/2024/08/23/3090_ai_benchmark/ #AI #ML
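
A quick back-of-envelope check of those figures, assuming 12.88 tokens/s is the per-user decode rate while the card batches 100 concurrent requests (that interpretation is an assumption; see the linked article for the full setup):

```python
# Back-of-envelope check: the per-user rate at 100 concurrent requests implies the
# aggregate throughput the RTX 3090 sustains across the whole batch.
concurrent_users = 100
per_user_tok_s = 12.88  # per-user decode speed quoted in the post

aggregate_tok_s = concurrent_users * per_user_tok_s
print(f"Aggregate throughput: ~{aggregate_tok_s:.0f} tokens/s")  # ~1288 tokens/s
```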