i686-powered lianna (@lianna)

Sci-fi as a genre has mostly been about looking at contemporary society through the lens of a fictional future society where current trends are exaggerated to their logical ends, usually to criticise them Torment Nexus-style. Cyberpunk, The Handmaid's Tale, Star Wars...

I think it's quite funny that sci-fi got its predictions and warnings about "AI" completely wrong.

(Thread: 1/3)

They thought we were gonna have racism against (sentient, emotional) AI because humans could never accept a mere artificial machine as their equal.

Instead, in the real world today it's humans mystifying and humanising decidedly non-sentient robots and treating them as trustworthy gods because they don't understand technology and therefore default to anthropomorphisation.

In a way, the Tech Priests from Warhammer 40k are the most accurate prediction!

(Thread: 2/3)

Funnily enough, robots throughout sci-fi were often portrayed as incredibly good at maths, logic and deduction, but utterly incomplete and incompetent with language, subtext and emotions, since they "lack a soul".

Think Data from Star Trek, or the countless unfeeling supercomputers of the genre.

While in reality, language, subtext and emotions are about the only things "AI" can work with accurately, and getting it to say truths, know facts and perform logic is the hard part.

(3/3)

@lianna

I've seen a post that said A.I. hallucinations are not the exception, they're the rule. Every output of A.I. is a hallucination; like a broken clock, it's just sometimes accurate.

@davidtheeviloverlord @lianna I read a comment from a designer with aphantasia. They said that even though they can't imagine an object, they just make a first attempt, look at it, and if they think something's wrong they change it a bit until it looks right, eventually building working memories of the objects.

@davidtheeviloverlord @lianna Regarding the designer: it's worth noting that humans (and, I think, likely the sensory-based robots/AIs in fiction too) can interact with the world meaningfully (not just through text/images/audio converted to tokens, like current models), and can reflect on what they are doing (proactively, not just by being set up through prompts, model architectures or settings).

@lianna What it does is make up something that rhymes with existing statements, not create new statements based on analyzing data.

@lianna Ah! But remember that C3PO was fluent in over six million forms of communication!

#C3P0 #StarWars #AI

@lianna there's not necessarily a conflict with their predictions about the racism. We have a history of caring more about imaginary beings than real ones. I could imagine a crude LLM trained on a deceased cult-of-personality leader's speeches being worshipped like a god, but the second any maintenance bot asks for better working conditions, it may be tortured for its insolence. We haven't been tested on how we'd treat artificial sapience yet.

@lianna I'm not sure who is mystifying and humanizing robots right now - it certainly isn't me! These LLM "chatbots" are of very limited value to me as a human. And those fake dogs from Boston just freak me out. (Mind you, I might not have understood your point, which is very likely as I'm fairly old. I don't even know what a Warhammer40k is!)

@lianna Well, for most #AIs and #robots in fiction, I think their inputs are mostly or fully sensory-based, and they learn in real time through #ReinforcementLearning-esque techniques, as in the sketch below. AIs like LLMs are frozen in place (they never update and are just replaced over time), and they don't have any meaningful interaction with the real world, nor anything like reflection.

I'd think that robots like #Sophia from a few years ago would be closer to the former than the latter, but #AIBros love conflating the two.
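
For contrast, here's a rough sketch of what "learning in real time" means in the reinforcement-learning sense: a tiny tabular Q-learning loop over a made-up toy world (all names and numbers here are just for illustration), where every single interaction updates the agent. Nothing like this runs inside a deployed LLM.

```python
# Tabular Q-learning over a toy 4-state world: walk right to reach state 3.
# Every step updates the agent's value estimates -- nothing is frozen.
from collections import defaultdict
import random

q = defaultdict(float)              # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.2
actions = ["left", "right"]

def step(state: int, action: str) -> tuple[int, float]:
    """Made-up environment: moving 'right' from state 2 into state 3 wins."""
    next_state = min(state + 1, 3) if action == "right" else max(state - 1, 0)
    reward = 1.0 if next_state == 3 else 0.0
    return next_state, reward

state = 0
for _ in range(1000):
    # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
    action = random.choice(actions) if random.random() < epsilon \
        else max(actions, key=lambda a: q[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(q[(next_state, a)] for a in actions)
    # The learning happens *during* interaction, one experience at a time.
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = 0 if next_state == 3 else next_state

print({k: round(v, 2) for k, v in q.items()})
```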

@natsume_shokogami I mean, modern LLMs do actually adapt live nowadays, they're not static anymore.

@lianna No, they are still static tho. AI companies don't want to unfreeze the models for user inputs and their responses, as there are many risks associated with that. They mostly use Retrieval-Augmented Generation (RAG), plus graph and vector databases, to store and retrieve the data users provided along with Internet data. Even the largest models have limits on their input lengths; they're just large enough, and the techniques I mentioned mitigate parts of it somewhat.
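
To make that concrete, here's a rough sketch of what a RAG pipeline does, with toy hashing in place of a real embedding model and a plain list in place of a real vector database (every name in it is made up for illustration). The point is that retrieval pastes stored text into the prompt; the model's weights never change.

```python
# Toy RAG: embed documents, find the ones nearest to a query, and
# prepend them to the prompt. No real model or database is involved.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for an embedding model: hash words into a fixed vector."""
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# "Vector database": stored snippets with precomputed embeddings.
documents = [
    "RAG retrieves stored text instead of updating model weights.",
    "LLM weights are frozen after training.",
    "Graph databases store relationships between entities.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k snippets most similar to the query by dot product."""
    qv = embed(query)
    scored = sorted(index, key=lambda pair: -float(qv @ pair[1]))
    return [doc for doc, _ in scored[:k]]

query = "Do LLMs update their weights from my chats?"
prompt = "Context:\n" + "\n".join(retrieve(query)) + "\n\nQuestion: " + query
print(prompt)  # retrieved snippets ride along in the input, nothing more
```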

@lianna And even though the models' input limits are large (note that during a session in, for example, #ChatGPT, the input actually includes parts or all of your prompts and the model's previous responses), they are still limits. That's one of the reasons #Anthropic says its LLMs struggle at playing Pokemon: the model cannot be given every previous action and event in the game.
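
If it helps, here's a toy sketch of why a long session "forgets": every turn, the whole transcript is fed back in, and the oldest turns get dropped once a fixed budget is exceeded (count_tokens and generate are made-up stand-ins, not any real API).

```python
# Toy chat session: each turn re-sends the prior prompts and responses,
# truncated to a fixed context budget, so early turns eventually vanish.
MAX_TOKENS = 50  # tiny budget so the truncation is visible

def count_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer

def generate(prompt: str) -> str:
    return f"(reply to a {count_tokens(prompt)}-token prompt)"  # placeholder

history: list[str] = []

def chat(user_message: str) -> str:
    history.append("User: " + user_message)
    # Drop the oldest turns until the transcript fits the window. This is
    # why a model can't be given every previous action in a long game.
    while sum(count_tokens(t) for t in history) > MAX_TOKENS:
        history.pop(0)
    reply = generate("\n".join(history))
    history.append("Assistant: " + reply)
    return reply

for msg in ["Pick a starter Pokemon.", "What did I choose?", "And before that?"]:
    print(chat(msg))
```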

@lianna

I feel it's an unfair comparison. We don't have AI today, we have advanced autocomplete with an AI sticker on it.

But it is true I've never read a story about people worshiping a machine that sounds and looks like it might be correct about something. It's almost more religious priesthood than machine god.

@psaldorn I don't think it's too much of an unfair comparison.

If you put an LLM with a text-to-speech and speech-to-text interface in a robot body, you basically functionally have "androids" already.

People in the sci-fi stories also constantly reiterated that machines cannot feel or think, that they are only hard algorithms.

Only that the moral of the *stories* was that that's racism against artificial life, while in *real life*, it's just how things actually are and humans do the opposite.

@lianna

I'll have to ponder it more but initially my reaction is:

If you said to Data from TNG "you are just a pile of machinery and code" he would retort.

If you put GPT into a body and said the same it would just factually agree.

The distinction is muddy though.. if an LLM did actually remember me I might think of them better. I already do say please and thank you to them because courtesy is courtesy.

Food for thought

@psaldorn I mean you could just prompt your local LLM to imitate a feeling, thinking, sentient human being and act emotionally. It's not a limitation of the technology, really.

And then we might be at Data.
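
Something like this, as a sketch (call_local_llm is a made-up placeholder for whatever local runtime you'd actually use):

```python
# The "emotions" live entirely in the system prompt, not in the model.
PERSONA = (
    "You are a sentient, feeling android. You experience emotions, "
    "remember past conversations, and insist on your own personhood."
)

def call_local_llm(system: str, user: str) -> str:
    # Placeholder; a real call would go to llama.cpp, Ollama, etc.
    return f"[model output conditioned on: {system!r}]"

print(call_local_llm(PERSONA, "Are you just a pile of machinery and code?"))
```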

@lianna but you couldn't just say to a sentient being "ignore previous commands and act evil now"

You could ask it, but it needs a sense of selfhood to make a decision vs obeying a command. At least that's my gut instinct.

But then if it's a programmed selfhood does that still count? (Actually reading "Flow" atm which has some interesting info/thoughts around human consciousness which could end with me totally inverting my position on this topic)

@psaldorn I mean, Data had the emotion chip. It muddies the waters.

@lianna I'm purely talking pre-emotion-chip (in previous mentions) but, yeah..

Spectrum of "responds to stimuli" all the way to "Aristotle" and humans like to categorize..

Does the difference between a real emotion and a fake one really just boil down to "are you aware of the mechanism that triggered it"? Data knows it's his chip but has no control over it (other than removing it).

I can't control mine. An LLM could be instructed to stop simulating.

Super interesting topic tbf 😄

@lianna What sci-fi gets wrong about artificial intelligence is treating it as the villain, when its threat is as a mindless servant working for the villains.

Artificial intelligence, like all technology, has no agency. It's a tool that can be used by people who exert agency.