#modelcollapse


#KINews 🤪 #Retröt
The uncontrolled use of #KI-generated (AI-generated) content in #Training can lead to irreversible defects in models such as #LLMs, an effect known as "#ModelCollapse". Rare content disappears as a result, which degrades the fairness of the models' predictions. Preserving genuine human #Datenquellen (data sources) over the long term is essential, and to make that possible the provenance of online content should be tracked.

#Datenqualität #Science #AI

tino-eberl.de/ki-news/model-co

Tino Eberl · Model Collapse: The Danger of AI Being Flooded with Machine-Generated Content
Replied in thread
arXiv.org · The Curse of Recursion: Training on Generated Data Makes Models Forget
Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.
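To make the abstract's tail-disappearance claim concrete, here is a minimal toy sketch in Python/NumPy (my illustration, not the paper's code): fit a Gaussian to data, sample the next generation's "training set" from that fit, and repeat. The sample size, generation count, and print cadence are arbitrary choices for the demo.

```python
# Toy analogue of model collapse: each generation is "trained" (fitted) only on
# samples drawn from the previous generation's model. With finite samples the
# fitted spread shrinks and extreme values (the tails) vanish over time.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 100       # size of each generation's training set (arbitrary)
n_generations = 300   # number of fit -> sample -> refit rounds (arbitrary)

data = rng.standard_normal(n_samples)  # generation 0: genuine N(0, 1) data

for gen in range(1, n_generations + 1):
    mu, sigma = data.mean(), data.std()      # the "model": a Gaussian MLE fit
    data = rng.normal(mu, sigma, n_samples)  # next generation sees only model output
    if gen % 50 == 0:
        # Both the fitted spread and the largest observed value shrink as
        # synthetic data compounds on synthetic data.
        print(f"gen {gen:3d}: sigma={sigma:.3f}  max|x|={np.abs(data).max():.3f}")
```

Running it typically shows sigma and max|x| shrinking over the generations, which is the same qualitative effect the paper formalises for VAEs, Gaussian Mixture Models and LLMs.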
Replied in thread

@dragfyre

What makes that inevitable? I'm not disputing the effect you describe. Model collapse makes sense.

But developers are rational actors. I think the last year has already seen considerable movement towards more curated data sets. (After CSAM was found in LAION, for example.)

Foundation models like GPT-4 may have needed to consume everything to get where they are, but that's not necessarily true of models more broadly.

SHOT:

“Against the threat of model collapse, what is a hapless machine-learning engineer to do? The answer could be the equivalent of prewar steel in a Geiger counter: data known to be free (or perhaps as free as possible) from generative AI’s touch.”

scientificamerican.com/article

CHASER:

#PreWarSteel is the equivalent of clean, human-generated data.

superversive.co/blog/synthetic

Scientific American · AI-Generated Data Can Poison Future AI Models
By Rahul Rao

Synthetic machines purpose-built to automate synthetic websites full of synthetic content (which is not protected IP), whose output is then re-ingested by other synthetic machines to generate yet more unprotected synthetic content, are a good way to cause #ModelCollapse and to pollute our information ecosystem.

I feel like #LLMs were the Chernobyl of the internet and everyone is inside the containment zone.

www-bbc-com.cdn.ampproject.org

BBC News · NY Times sues Microsoft and OpenAI for 'billions'
The US news organisation claims millions of its articles were used without permission to train ChatGPT.