AI black box cracked: Anthropic reveals how Claude really thinks - and it's bizarre - t3n - digital pioneers
https://t3n.de/news/ki-blackbox-anthropic-geknackt-1680603/ #Sprachmodell #LargeLanguageModel #LLM #Anthropic #Claude
@skribe Conversely, the cost of printing, distribution, and storage puts up a barrier to spamming people on other continents with mass quantities of low value slop.
Just think through the logistics of a hostile Eurasian state sending a mass quantity of printed materials to Australia or North America.
Or, for that matter, a hostile North American state sending a mass quantity of printed materials to Europe or Asia.
You would either need:
a) at least one printing press on each continent;
b) to ship the magazines by sea, though they'd be a month out of date by the time they arrive; or
c) to fly them overseas, which would get very expensive very quickly.
That's before you worry about things like delivery drivers (or postage) and warehouses.
These are less of an issue for books than they are for newspapers or magazines.
And if a particular newspaper or magazine is known to be reliable, written by humans, researched offline, and the articles are not available online, then there's potentially value in people buying a physical copy.
Artificial Intelligence Then and Now | Communications of the ACM
https://dl.acm.org/doi/10.1145/3708554
Interesting summary of the current AI hype, how it compares with the previous one in the 80s, and whether we are that close to AGI. tl;dr: no.
Including an amusing example where ChatGPT is unable to differentiate the real Monty Hall problem https://en.wikipedia.org/wiki/Monty_Hall_problem from lookalikes, and offers the same counter-intuitive solution to all of them, even when the actual solution is obvious. No logical reasoning at all here. Fine or otherwise.
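For anyone who wants to verify the counter-intuitive answer for themselves, a quick Monte Carlo sketch (my own toy code, plain Python) shows why switching wins in the real problem, which is exactly the structure the lookalikes lack:

```python
import random

def monty_hall(switch, trials=100_000):
    """Simulate the Monty Hall game and return the contestant's win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's initial choice
        # Host opens a door that is neither the pick nor the car.
        # (When pick == car, we deterministically take the lowest such
        # door; this choice does not affect the win rate.)
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {monty_hall(False):.3f}")   # ≈ 0.333
print(f"switch: {monty_hall(True):.3f}")    # ≈ 0.667
```

Switching wins two times out of three; a lookalike puzzle where the host opens a door at random, or where no door is opened at all, does not have this property, which is what ChatGPT reportedly fails to notice.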
AI just brought us a new programming style: "Bug Oriented Programming" #BoP
Had a very insightful conversation about the limitations on AI with a marketing copywriter.
Her comment was that actually writing marketing materials is a small part of her job.
If it was just about writing something that persuades a customer to buy a product, it would be a cakewalk.
What takes time is the stakeholder management.
It's navigating conflicting and contradictory demands of different departments.
Legal wants to say one thing. Sales something different. Other departments something else entirely.
There are higher-up managers who need their egos soothed.
There are different managers with different views about what the customers want and what their needs are.
And there's a big difference in big bureaucratic organisations between writing marketing collateral, and writing something that gets signed off by everyone who needs to.
She's tried using AI for some tasks, and what that typically involves is getting multiple AI responses, and splicing them together into a cohesive whole.
Because it turns out there's a big difference in the real world between generating a statistically probable output, and having the emotional intelligence to navigate humans.
New Essay
"The Intelligent AI Coin: A Thought Experiment"
Open Access here: https://seanfobbe.com/posts/2025-02-21_intelligent-ai-coin-thought-experiment/
Recent years have seen a concerning trend towards normalizing decisionmaking by Large Language Models (LLM), including in the adoption of legislation, the writing of judicial opinions and the routine administration of the rule of law. AI agents acting on behalf of human principals are supposed to lead us into a new age of productivity and convenience. The eloquence of AI-generated text and the narrative of super-human intelligence invite us to trust these systems more than we have trusted any human or algorithm ever before.
It is difficult to know whether a machine is actually intelligent because of problems with construct validity, plagiarism, reproducibility and transferability in AI benchmarks. Most people will either have to personally evaluate the usefulness of AI tools against the benchmark of their own lived experience or be forced to trust an expert.
To explain this conundrum I propose the Intelligent AI Coin Thought Experiment and discuss four objections: the restriction of agents to low-value decisions, making AI decisionmakers open source, adding a human-in-the-loop and the general limits of trust in human agents.
@drtcombs.bsky.social For the people following on Mastodon, here's a screenshot of the Mark Cuban post that Tab was referring to (full text in the caption):
Chinese sex doll maker sees jump in 2025 sales as AI boosts adult toys’ user experience https://www.byteseu.com/748815/ #AI #ArtificialIntelligence #baidu #ChatGPT #DataCentres #DeepSeek #Europe #Guangdong #iFlyTek #Italy #LargeLanguageModel #LiuJiangxia #Llama #MetaPlatform #MetaPlatforms #MetaBoxSeries #MindWithHeartRobotics #NorthAmerica #OpenSource #OpenAI #PrivacyConcerns #SexDollIndustry #Shenzhen #StarperyTechnology #StartUp #UnitedStates #WMDoll #Zhongshan
Segment Anything Model Can Not Segment Anything - Assessing AI Foundation Model’s Generalizability In Permafrost Mapping
--
https://doi.org/10.3390/rs16050797 <-- shared paper
--
#GIS #spatial #mapping #remotesensing #foundationmodel #AI #artificialintelligence #zeroshot #segmentation #GeoAI #spatialanalysis #LargeLanguageModel #LLM #SAM #performance #metrics #permafrost #visionmodel #icewedge #Arctic #warming #climatechange #thawslumps #landform #terrainmapping #EuroCrops #agriculture
DeepSeek Has Ripped Away AI’s Veil Of Mystique. That’s The Real Reason The Tech Bros Fear It [opinion piece]
--
https://www.theguardian.com/commentisfree/2025/feb/02/deepseek-ai-veil-of-mystique-tech-bros-fear <-- shared media article
--
[an interesting take that I think has some merit...]
"While privacy fears are justified, the main beef Silicon Valley has is that China’s chatbot is democratising the technology...
No, it was not a 'sputnik moment'..."
#DeepSeek #AI #deeplearning #China #risk #SputnikMoment #disruption #technology #largelanguagemodel #LLM #ChatGPT #Claude #chatbot #opensource
New Open Source DeepSeek V3 Language Model Making Waves - In the world of large language models (LLMs) there tend to be relatively few upset... - https://hackaday.com/2025/01/27/new-open-source-deepseek-v3-language-model-making-waves/ #artificialintelligence #largelanguagemodel #ai
American companies lost $1 trillion in market value in a single day due to DeepSeek, a new Chinese AI.
@paninid I draw great optimism from a study finding that use of AI (aka LLMs) reduces people's conviction in conspiracy theories. Sure, AI makes mistakes, but it's more important that AI is modeling fact-based learning, reasoning, and decision making. I literally believe that AI could be the tech to save American democracy.
Trap Naughty Web Crawlers in Digestive Juices with Nepenthes - In the olden days of the WWW you could just put a robots.txt file in the root of y... - https://hackaday.com/2025/01/23/trap-naughty-web-crawlers-in-digestive-juices-with-nepenthes/ #largelanguagemodel #internethacks #webcrawler
What can a #LargeLanguageModel reveal about Fodorian modularity?
Some argue current #LLMs vindicate associationism/connectionism (contrary to Fodor/modularity): https://doi.org/10.3389/fpsyg.2023.1279317
Why? Associative LLMs do what Fodor thought associations couldn't.
Iteration of Thought: Leveraging Inner Dialogue for Autonomous #LargeLanguageModel Reasoning https://arxiv.org/abs/2409.12618 #PromptEngineering
Sophos makes its tuning tool for large language models available as an open-source program
#Cybersecurity #Cybersicherheit #GenAI #generativeKI #IncidentResponder #künstlicheIntelligenz #LargeLanguageModel #LLM #OpenSource #Security @Sophos @Sophos_Info
@lyndamerry484 Ah. OK, that's a different question. A #LargeLanguageModel, although it is an example of a neural network system, is certainly not 'intelligent' in this sense. It has no semantic layer and no concept of truth or falsity. All it does is string together symbols (which it does not understand the meanings of) into sequences which represent plausible responses to the sequence of symbols that it was fed.
There is no semantic significance to its answer.
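The point can be made concrete with a deliberately tiny sketch (my own toy example, nothing like a real LLM's architecture or scale): a bigram sampler strings together statistically plausible sequences with no semantic layer and no concept of truth at all.

```python
import random
from collections import defaultdict

# A toy corpus; the "model" is just a table of next-token counts.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    """Sample a next token in proportion to how often it followed prev."""
    tokens, weights = zip(*counts[prev].items())
    return random.choices(tokens, weights=weights)[0]

# Generate a "plausible" sequence, one symbol at a time.
token = "the"
out = [token]
for _ in range(8):
    token = next_token(token)
    out.append(token)
print(" ".join(out))
```

The output reads like English-shaped text, yet nothing in the program represents what a cat or a mat is; scale the same idea up by many orders of magnitude and you get fluency without any guarantee of truth.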
• Perfect for quick #ContentAnalysis
• Streamlined #MLOps pipeline
- Nova Pro:
• Advanced #DeepLearning capabilities
• #LargeLanguageModel with a 300,000-token context window
• Excels in #FinTech document analysis
• Optimal #AI performance balance