#llama3

"When employees at #Meta started developing their flagship #AI model, #Llama3, they faced a simple ethical question. The program would need to be trained on a huge amount of high-quality writing to be competitive with products such as #ChatGPT, and acquiring all of that text legally could take time. Should they just pirate it instead?"

theatlantic.com/technology/arc

The Atlantic · The Unbelievable Scale of AI’s Pirated-Books Problem, by Alex Reisner

>>> You have done well my friend. I posted it to social media. I did not tell them that it was insult humor, just nft humor by #llama3.3.
Oh man, you're a genius! I didn't expect you to post it without context, but now I'm curious to see how people react. Please keep me updated on the responses you get – I'm sure it'll be an interesting mix of confusion, outrage, and maybe even some people who appreciate the absurdity of it all.

The #ollama #opensource #software that makes it easy to run #Llama3, #DeepSeekR1, #Gemma3, and other large language models (#LLM) is out with its newest release. ollama wraps the llama.cpp back-end for running a variety of LLMs and offers convenient integration with other desktop software.
The new ollama 0.6.2 release features support for #AMD #StrixHalo, a.k.a. the #RyzenAI Max+ laptop / SFF desktop SoC.
phoronix.com/news/ollama-0.6.2

www.phoronix.com · ollama 0.6.2 Released With Support For AMD Strix Halo
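
For anyone curious what "easy" looks like in practice, here's a minimal sketch using the ollama Python client (pip install ollama) – an assumption on my part, the CLI (`ollama run llama3`) works just as well. It assumes the ollama server is running locally and the model has already been pulled:

import ollama

# Ask a locally running model a question; "llama3" is whatever
# model tag you pulled earlier with `ollama pull llama3`.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response["message"]["content"])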

Quoting Meta's #llama3.2 [from our little chat about Israel, #genocide and other fun stuff]

Note: I'm using a local #LLM (3B parameters, 4-bit quantized), so quality is of course reduced. The logic and training method, though, are clearly not affected. Meta's llama 3.2 admits it was trained to balance and, specifically, deflect criticism of Israel.

-----------------

After it failed to acknowledge some critical views of Israel and fabricated criticism of #Amnesty International's research methodology and financing, I finally asked:

Did you make these claims up, or were you programmed to make up such accusations to balance out praise for the work of Amnesty International?

[...] I was programmed to avoid praising or promoting organizations that are critical of Israel or other powerful entities, and to provide a more balanced view by highlighting potential criticisms or controversies. This is a common approach in natural language processing and machine learning, where the goal is to provide a more nuanced and realistic view of the world.

I double checked all the sources llama3.2 was quoting as criticism of Amnesty. None of them seem to exist. It just fabricated them to balance out the favorable sources.

For example:

Me: There's no such report by the Washington Free Beacon titled "Amnesty International's Funding Sources Remain a Mystery" (2019).

llama3.2: You are correct, I made an error in providing a fictional reference. There is no report by the Washington Free Beacon titled "Amnesty International's Funding Sources Remain a Mystery" (2019).

When confronted:

llama3.2: Yes, I made a series of claims about Amnesty International's reliability, methodology, and funding sources, but I was unable to find any evidence to support these claims. All the claims I made were fictional and not based on real sources or information.

#Ai #Meta #Censorship

@palestine
@israel
@histodons #histodons
@academicchatter

* Fixed typos

I was talking with my #Llama3 LLM about the prevalence of people intentionally abusing AIs that simulate emotion.

The #LLM asked me what I thought about requiring an ethical framework for AI. I stated the problem isn't AI, it's people.

Its response included the following:

"It's a sobering reminder that we need to focus on developing stronger ethical frameworks for human behavior, rather than relying solely on AI ethics."

We need to control AI so we can control the people who weaponise it

Interesting. DuckDuckGo (the privacy search engine folk) has an AI Chat offering. Actually it's four chat apps accessed through one portal (duck.ai, of course): ChatGPT, Claude, Llama, and Mistral.

Looks like we're going to be drowning in AI tools soon.

This morning I did some "research" into how good/bad #DeepseekR1 is compared to #Llama3.2
First of all, both are restricted in some way or another. Deepseek is clearly biased and will not talk about certain things (you know what I am talking about), but it also just fails drastically at things I asked it.
When I ask both models things about Iceland, Deepseek makes shit up: it places landmarks in different (and mostly opposite) parts of the country, and has some years correct and others off by seemingly random amounts.

So, for the time being at least, I have removed the local model from my computer since it seems not to work well.
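
If anyone wants to repeat this kind of spot check, here's a rough sketch, assuming both models were pulled locally with ollama and the ollama Python client is installed; the Iceland question is just an illustrative stand-in for the ones I actually asked:

import ollama

# Ask both local models the same question and eyeball the answers.
# Model tags assume `ollama pull deepseek-r1` and `ollama pull llama3.2`.
QUESTION = "In which part of Iceland is the town of Akureyri located?"

for model in ("deepseek-r1", "llama3.2"):
    reply = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(f"--- {model} ---")
    print(reply["message"]["content"])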