mamot.fr is one of the many independent Mastodon servers you can use to participate in the fediverse.
Mamot.fr est un serveur Mastodon francophone, géré par La Quadrature du Net.

Server stats:

2.9K
active users

#responsibleai

7 posts7 participants0 posts today
marmelab<p>Mistral AI just took a bold step towards transparency by publishing the first lifecycle analysis (LCA) of an AI model.</p><p>📊 The results? Training Mistral Large 2 (128B parameter) emitted 20,000t CO₂e.</p><p>It confirms what many feared:<br>👉 AI is a massive carbon emitter.</p><p>At Marmelab, this issue has been on our radar for a while. That’s why we conducted our own study earlier this year.</p><p>🔗 Read it here: <a href="https://marmelab.com/blog/2025/03/19/ai-carbon-footprint.html" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">marmelab.com/blog/2025/03/19/a</span><span class="invisible">i-carbon-footprint.html</span></a></p><p><a href="https://mastodon.social/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a> <a href="https://mastodon.social/tags/Sustainability" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Sustainability</span></a> <a href="https://mastodon.social/tags/ClimateTech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ClimateTech</span></a> <a href="https://mastodon.social/tags/MistralAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MistralAI</span></a></p>
Harald Klinke<p>Identifying Prompted Artist Names from Generated Images<br>Can we detect which artist names were used in prompts – just by looking at the AI-generated image?</p><p>This study introduces a dataset of 1.95M images covering 110 artists and explores generalization across prompt types and models. Multi-artist prompts remain the hardest.</p><p><a href="https://arxiv.org/abs/2507.18633" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/abs/2507.18633</span><span class="invisible"></span></a></p><p><a href="https://det.social/tags/AIArt" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIArt</span></a> <a href="https://det.social/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://det.social/tags/Copyright" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Copyright</span></a> <a href="https://det.social/tags/StyleTransfer" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>StyleTransfer</span></a> <a href="https://det.social/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a></p>
OS-SCI<p><a href="https://mastodon.social/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a> licensing is on the rise, combining open-source flexibility with ethical restrictions. Open RAIL licenses now represent nearly 10% of ML model repositories on Hugging Face. 𝗵𝘁𝘁𝗽𝘀://𝗼𝘀-𝘀𝗰𝗶.𝗰𝗼𝗺/𝗯𝗹𝗼𝗴/𝗼𝘂𝗿-𝗯𝗹𝗼𝗴-𝗽𝗼𝘀𝘁𝘀-𝟭/𝘁𝗵𝗲-𝗳𝘂𝘁𝘂𝗿𝗲-𝗼𝗳-𝗲𝘁𝗵𝗶𝗰𝗮𝗹-𝗮𝗶-𝗿𝗲𝘀𝗽𝗼𝗻𝘀𝗶𝗯𝗹𝗲-𝗹𝗶𝗰𝗲𝗻𝘀𝗶𝗻𝗴-𝗮𝗻𝗱-𝘁𝗵𝗲-𝗶𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻-𝗼𝗳-𝗹𝗮𝗿𝗴𝗲-𝗹𝗮𝗻𝗴𝘂𝗮𝗴𝗲-𝗺𝗼𝗱𝗲𝗹𝘀-𝟭𝟮𝟲</p>
Mia<p>'I Love Generative AI and Hate the Companies Building It' - 'when I fell in love with generative AI, I wanted to use it ethically.</p><p>That went well.</p><p>Turns out, there are no ethical AI companies. What I found instead was a hierarchy of harm where the question isn’t who’s good — it’s who sucks least.' <a href="https://cwodtke.medium.com/i-love-generative-ai-and-hate-the-companies-building-it-3fb120e512ac" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">cwodtke.medium.com/i-love-gene</span><span class="invisible">rative-ai-and-hate-the-companies-building-it-3fb120e512ac</span></a> </p><p><a href="https://hcommons.social/tags/genAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>genAI</span></a> <a href="https://hcommons.social/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a> <a href="https://hcommons.social/tags/ethics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ethics</span></a></p>
Oh.my<p>🚨I’m joining the <a href="https://dair-community.social/tags/Clause0" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Clause0</span></a> initiative, advocating for <a href="https://dair-community.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> to align with human rights and never enable surveillance, occupation, or forced displacement.</p><p>To sign and join the conversation 👉 <a href="https://ethicalaialliance.org/clause-0" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">ethicalaialliance.org/clause-0</span><span class="invisible"></span></a></p><p>It is time for AI engineers, tech workers and policy makers to see the link between their work and what happens in the world!</p><p><a href="https://dair-community.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://dair-community.social/tags/AIethics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIethics</span></a> <a href="https://dair-community.social/tags/responsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>responsibleAI</span></a> <a href="https://dair-community.social/tags/tech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>tech</span></a> <a href="https://dair-community.social/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a></p>
Felicitas Macgilchrist<p>And it still failed. “Amsterdam followed every piece of advice in the Responsible AI playbook. It debiased its system when early tests showed ethnic bias and brought on academics and consultants to shape its approach, ultimately choosing an explainable algorithm over more opaque alternatives. The city even consulted a participatory council of welfare recipient”</p><p><a href="https://www.lighthousereports.com/investigation/the-limits-of-ethical-ai/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">lighthousereports.com/investig</span><span class="invisible">ation/the-limits-of-ethical-ai/</span></a></p><p><a href="https://social.coop/tags/ethicalAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ethicalAI</span></a> <a href="https://social.coop/tags/responsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>responsibleAI</span></a> <a href="https://social.coop/tags/techphilosophy" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>techphilosophy</span></a></p>
UKP Lab<p>And consider following the authors Chen Cecilia Liu, Anna Korhonen, and Iryna Gurevych if you are interested in more information or an exchange of ideas.</p><p>See you in Vienna! <a href="https://sigmoid.social/tags/ACL2025" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ACL2025</span></a></p><p><a href="https://sigmoid.social/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a> <a href="https://sigmoid.social/tags/CulturalNLP" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CulturalNLP</span></a> <a href="https://sigmoid.social/tags/NLProc" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>NLProc</span></a> <a href="https://sigmoid.social/tags/ACL2025" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ACL2025</span></a></p>
D64<p>🔥 Demokratische &amp; verantwortungsvolle KI - Wie geht das konkret?</p><p>Am Mittwoch (16.7.) um 18:30 laden Tuba Bozkurt und <span class="h-card" translate="no"><a href="https://gruene.social/@stefanziller" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>stefanziller</span></a></span> zum "Late Lunch &amp; Learn AI" ein.</p><p>✅ Unsere Referentin Anke Obendiek: Code of Conduct für demokratische KI (mit 40+ Orgas entwickelt)<br>✅ Prof. Heckelmann (HTW): JUDGE KI</p><p>15 Min Input + 30 Min Diskussion<br>Soldiner Str. 42, Berlin-Gesundbrunnen</p><p><a href="https://gruene-fraktion.berlin/termin/late-lunch-learn-ai-demokratische-und-verantwortungsvolle-ki/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">gruene-fraktion.berlin/termin/</span><span class="invisible">late-lunch-learn-ai-demokratische-und-verantwortungsvolle-ki/</span></a></p><p><a href="https://d-64.social/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a> <a href="https://d-64.social/tags/berlin" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>berlin</span></a></p>
Christoph Becker<p>Read this in full. It's short but speaks volumes.</p><p><a href="https://genevasolutions.news/science-tech/un-ai-summit-accused-of-censoring-criticism-of-israel-and-big-tech-over-gaza-war" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">genevasolutions.news/science-t</span><span class="invisible">ech/un-ai-summit-accused-of-censoring-criticism-of-israel-and-big-tech-over-gaza-war</span></a> <br>Thanks to <span class="h-card" translate="no"><a href="https://scholar.social/@abebab" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>abebab</span></a></span> and <span class="h-card" translate="no"><a href="https://mastodon.world/@Mer__edith" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>Mer__edith</span></a></span>. </p><p><a href="https://hci.social/tags/UN" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>UN</span></a> <a href="https://hci.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://hci.social/tags/AIforGood" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIforGood</span></a> <a href="https://hci.social/tags/responsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>responsibleAI</span></a> <a href="https://hci.social/tags/censorship" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>censorship</span></a></p>
techi<p>Elon Musk’s Grok chatbot stirred controversy after generating antisemitic content on X, exposing the dangers of unchecked AI. Experts say this isn’t a glitch it’s a warning.</p><p><a href="https://mstdn.social/tags/GrokAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GrokAI</span></a> <a href="https://mstdn.social/tags/ElonMusk" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ElonMusk</span></a> <a href="https://mstdn.social/tags/AIsafety" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIsafety</span></a> <a href="https://mstdn.social/tags/Antisemitism" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Antisemitism</span></a> <a href="https://mstdn.social/tags/AIethics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIethics</span></a> <a href="https://mstdn.social/tags/TechNews" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TechNews</span></a> <a href="https://mstdn.social/tags/xAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>xAI</span></a> <a href="https://mstdn.social/tags/HateSpeech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HateSpeech</span></a> <a href="https://mstdn.social/tags/AIbias" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIbias</span></a> <a href="https://mstdn.social/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a> <a href="https://mstdn.social/tags/AIcontroversy" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIcontroversy</span></a> <a href="https://mstdn.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ArtificialIntelligence</span></a></p><p>Read Full Article Here :- <a href="https://www.techi.com/grok-ai-antisemitism-sparks-global-safety-concerns/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">techi.com/grok-ai-antisemitism</span><span class="invisible">-sparks-global-safety-concerns/</span></a></p>
Dirk Schnelle-Walka<p>Cat attack (arXiv:2503.0178: Adding simple, irrelevant text like 'Interesting fact: cats sleep most of their lives' to math problems can make advanced AI reasoning models fail! The 'CatAttack' pipeline generated these 'query-agnostic adversarial triggers,' causing over 300% more incorrect answers. A huge red flag for AI reliability! <a href="https://mastodontech.de/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mastodontech.de/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a> <a href="https://mastodontech.de/tags/AdversarialAttack" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AdversarialAttack</span></a> <a href="https://mastodontech.de/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MachineLearning</span></a> <a href="https://mastodontech.de/tags/DeepLearning" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DeepLearning</span></a> <a href="https://mastodontech.de/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a>"</p>
Patryk<p>I've been experimenting with "unsettling" AI prompts that go way beyond productivity hacks. Some of these are genuinely life-changing, and others feel ethically… gray. Here are 5 controversial techniques I've found.<br><a href="https://medium.com/@patryktomkowski/5-controversial-ai-prompts-that-will-change-how-you-think-ab54738c41cf?sk=f47cdfd003f7bb4101e218ee34c52c18" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">medium.com/@patryktomkowski/5-</span><span class="invisible">controversial-ai-prompts-that-will-change-how-you-think-ab54738c41cf?sk=f47cdfd003f7bb4101e218ee34c52c18</span></a> <br><a href="https://techhub.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a>, <a href="https://techhub.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ArtificialIntelligence</span></a>, <a href="https://techhub.social/tags/PromptEngineering" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>PromptEngineering</span></a>, <a href="https://techhub.social/tags/AIPrompts" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIPrompts</span></a>, <a href="https://techhub.social/tags/Psychology" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Psychology</span></a>, <a href="https://techhub.social/tags/AIEthics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIEthics</span></a>, <a href="https://techhub.social/tags/FutureOfTech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>FutureOfTech</span></a>, <a href="https://techhub.social/tags/SelfDiscovery" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SelfDiscovery</span></a>, <a href="https://techhub.social/tags/CognitiveScience" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CognitiveScience</span></a>, <a href="https://techhub.social/tags/TechEthics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TechEthics</span></a>, <a href="https://techhub.social/tags/HumanAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HumanAI</span></a>, <a href="https://techhub.social/tags/Mindset" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Mindset</span></a>, <a href="https://techhub.social/tags/DeepTech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DeepTech</span></a>, <a href="https://techhub.social/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a>, <a href="https://techhub.social/tags/ThoughtProvoking" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ThoughtProvoking</span></a></p>
ok_lyndsey<p><a href="https://eigenmagic.net/tags/OpenAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OpenAI</span></a> has released its "AI in Australia - Open AI's Economic Blueprint". The document hand waves over any definition of AI, and the answer to all the things is embed OpenAI in everything. Surprise! 🥳</p><p>It's times like this I miss a functional twitter. Planned community discourse obsolescence by Musk and Jack. Thanks fellas. </p><p>You can do everyone a favour and spend some time on LinkedIn looking for the people excited about this document's release. Like the "Head of AI" for the NSW Education Department. Bureaucrats get excited about landing a big solution, and OpenAI wants government contracts and data access, because businesses are not seeing the returns. We can all play a part in making these people realise there will be public scrutiny. </p><p>We have a newly elected government who desperately want the cost of living to not be a thing. They want the "economy back on track", and everyone is telling them AI improves productivity by 40%. Everyone. </p><p>"AI" is something they feel safe backing, OpenAI is there for them, and not there is not much standing in their way. Unless people stand in their way, and those people are knowledgeable geeks. 👀</p><p><a href="https://eigenmagic.net/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://eigenmagic.net/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a> <a href="https://eigenmagic.net/tags/OpenGov" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OpenGov</span></a> <a href="https://eigenmagic.net/tags/auspol" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>auspol</span></a></p>
Doug Ortiz<p>Warning: Your AI fairness audit might be a dangerous lie. 🧐</p><p>Focusing on "fairness scores" can obscure the real systemic bias baked into LLMs. It's like polishing a rotten apple—it looks good on the surface, but the core problem remains untouched, and the harm continues.</p><p>We need to look deeper than the metrics.</p><p><a href="https://mastodon.social/tags/AIEthics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIEthics</span></a> <a href="https://mastodon.social/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a> <a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.social/tags/BiasInAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>BiasInAI</span></a> <a href="https://mastodon.social/tags/SystemicBias" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SystemicBias</span></a> <a href="https://mastodon.social/tags/Tech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Tech</span></a></p><p>Read the full analysis on the Fairness Paradox: <a href="https://link.illustris.org/6D5t7o" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">link.illustris.org/6D5t7o</span><span class="invisible"></span></a></p>
Brian Greenberg :verified:<p>🧠 Philosophy Eats AI — and it should ϕ According to MIT Sloan, the rise of AI demands more than just coding skills. It calls for moral imagination and philosophical leadership.</p><p>Why? Because AI is now shaping:<br>⚖️ Justice systems and hiring decisions<br>🧬 Healthcare outcomes and policy<br>🔍 Surveillance, privacy, and autonomy<br>💼 Corporate power and governance</p><p>AI can’t govern itself — we must lead with wisdom.</p><p>💬 What if your org’s next great AI investment isn’t tech... but ethics?</p><p><a href="https://infosec.exchange/tags/AIethics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIethics</span></a> <a href="https://infosec.exchange/tags/Leadership" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Leadership</span></a> <a href="https://infosec.exchange/tags/MoralImagination" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MoralImagination</span></a> <a href="https://infosec.exchange/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a> <a href="https://infosec.exchange/tags/DigitalStrategy" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DigitalStrategy</span></a> <a href="https://infosec.exchange/tags/Ethice" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ethice</span></a> <a href="https://infosec.exchange/tags/Philosophy" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Philosophy</span></a> <br><a href="https://sloanreview.mit.edu/article/philosophy-eats-ai/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">sloanreview.mit.edu/article/ph</span><span class="invisible">ilosophy-eats-ai/</span></a></p>
Brian Greenberg :verified:<p>🚨 AI is hallucinating more, just as we’re trusting it with more critical work. New “reasoning” models, such as OpenAI’s o3 and o4-mini, were designed to solve complex problems step-by-step. But the results?<br>🧠 o3: 51% hallucination rate on general questions<br>📉 o4-mini: 79% hallucination on benchmark tests<br>🔍 Google &amp; DeepSeek’s models also show rising errors<br>⚠️ Trial-and-error learning compounds risk at each step</p><p>Why is this happening? Because these models don’t understand truth, they just predict what sounds right. And the more they “think,” the more they misstep.</p><p>We’re using these tools in legal, medical, and enterprise settings—yet even their creators admit:<br>🧩 We don’t know exactly how they work.</p><p>✅ It’s a wake-up call: accuracy, explainability, and source traceability must be the new AI benchmarks.</p><p><a href="https://infosec.exchange/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://infosec.exchange/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a> <a href="https://infosec.exchange/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a> <a href="https://infosec.exchange/tags/AIEthics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIEthics</span></a> <a href="https://infosec.exchange/tags/Hallucination" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Hallucination</span></a><br><a href="https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">nytimes.com/2025/05/05/technol</span><span class="invisible">ogy/ai-hallucinations-chatgpt-google.html</span></a></p>
Mia<p>'The Responsible AI Ecosystem: Seven Lessons from the BRAID Landscape Study'<br>Report just released: <a href="https://braiduk.org/the-responsible-ai-ecosystem-seven-lessons-from-the-braid-landscape-study" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">braiduk.org/the-responsible-ai</span><span class="invisible">-ecosystem-seven-lessons-from-the-braid-landscape-study</span></a></p><p><a href="https://zenodo.org/records/15195686" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">zenodo.org/records/15195686</span><span class="invisible"></span></a></p><p><a href="https://hcommons.social/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a> <a href="https://hcommons.social/tags/RAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>RAI</span></a></p>
Mia<p>Fantastic comment in a question at the BRAID event: "there's too much focus on 'are you ready for AI, and not enough on 'is AI ready for your business and society'"💯💯💯</p><p>I keep saying we don't yet have the AI we deserve - yes, we need to challenge and question it, actively shape it!</p><p><a href="https://hcommons.social/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a> <a href="https://hcommons.social/tags/RAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>RAI</span></a> (edited to add hashtags)</p>
Harold Sinnott 📲<p>Enterprises racing to deploy GenAI are facing rising ethical risks. Transparency, governance, and bias mitigation aren’t optional. <a href="https://mastodon.social/tags/AIEthics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIEthics</span></a> <a href="https://mastodon.social/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a> <a href="https://mastodon.social/tags/EnterpriseAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>EnterpriseAI</span></a></p><p> <a href="https://hbr.org/2025/03/ai-ethics-is-now-a-business-imperative" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">hbr.org/2025/03/ai-ethics-is-n</span><span class="invisible">ow-a-business-imperative</span></a></p>
XWiki SAS<p>We’re live at OW2con25 today!</p><p>This conference has always been a space where open-source thinkers come together to build better futures.</p><p>This year’s focus? Open source and responsible AI. A conversation we care deeply about.</p><p>🎤 Our CEO <span class="h-card" translate="no"><a href="https://framapiaf.org/@ldubost" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>ldubost</span></a></span> was on stage sharing insights from XWiki and the WAISE project, exploring what AI means for open-source companies, user autonomy, and ethical tech.</p><p><span class="h-card" translate="no"><a href="https://fosstodon.org/@ow2" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>ow2</span></a></span> <br><a href="https://social.xwiki.com/tags/FOSS" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>FOSS</span></a> <a href="https://social.xwiki.com/tags/DigitalSovereignty" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DigitalSovereignty</span></a> <a href="https://social.xwiki.com/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a> <a href="https://social.xwiki.com/tags/XWiki" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>XWiki</span></a> <a href="https://social.xwiki.com/tags/OpenTech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OpenTech</span></a> <a href="https://social.xwiki.com/tags/OW2con25" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OW2con25</span></a></p>