#neuralnetworks


Your Wi-Fi may know who you are, literally. “WhoFi,” a new system from Rome’s La Sapienza University, identifies people with 95.5% accuracy using signals bouncing off their bodies. No cameras. No lights. Just basic routers and neural networks. It even works through walls. Groundbreaking tech, or surveillance nightmare? The line just got blurrier.

Gary Marcus is onto something here. Maybe true AGI is not so impossible to reach after all: probably not in the near future, but likely within 20 years.

"For all the efforts that OpenAI and other leaders of deep learning, such as Geoffrey Hinton and Yann LeCun, have put into running neurosymbolic AI, and me personally, down over the last decade, the cutting edge is finally, if quietly and without public acknowledgement, tilting towards neurosymbolic AI.

This essay explains what neurosymbolic AI is, why you should believe it, how deep learning advocates long fought against it, and how in 2025, OpenAI and xAI have accidentally vindicated it.

And it is about why, in 2025, neurosymbolic AI has emerged as the team to beat.

It is also an essay about sociology.

The essential premise of neurosymbolic AI is this: the two most common approaches to AI, neural networks and classical symbolic AI, have complementary strengths and weaknesses. Neural networks are good at learning but weak at generalization; symbolic systems are good at generalization, but not at learning."

garymarcus.substack.com/p/how-

Marcus on AI · How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI · By Gary Marcus

If we ever see a real artificial mind, some kind of LLM will probably be a small but significant component of it, but the current wave of machine learning will most likely grind to a halt very soon for lack of cheap training data.

The reason all of this is happening now is simple: the technologies behind machine learning have been around for decades, but computers weren't fast enough and didn't have enough memory for those tools to become really powerful until the early 2000s. Around the same time, the Internet went mainstream and filled up with all kinds of data that could be mined for training sets. Now there is so much synthetic content out there that automated data mining won't work much longer; you need humans to curate and clean the training data, which makes the process slow and expensive. I expect another decades-long AI winter after the commercial hype is over.

If you're looking for real intelligence, look at autonomous robots and computer-game NPCs. There you can find machine learning and artificial neural networks applied to actual cognitive tasks, in which an agent interacts with its environment. Those things may not even be as intelligent as a rat yet, but they are actually intelligent, unlike LLMs.

#llm #LLMs #ai

Transfer Learning in Machine Learning

Transfer learning is a technique in machine learning where a model developed for one task is reused as the starting point for a model on a second task. Rather than training a model entirely from scratch, which often requires large amounts of labeled data and computational resources, transfer learning enables a more efficient approach by leveraging previously learned features.

ml-nn.eu/a1/86.html

ml-nn.eu · Transfer Learning in Machine Learning · Machine Learning & Neural Networks Blog
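
As a rough sketch of the idea (my own illustration, not taken from the linked post): in PyTorch you might take an ImageNet-pretrained backbone, freeze its weights, and train only a new task-specific head. The dataset, class count, and hyperparameters below are placeholders.

# Minimal transfer-learning sketch in PyTorch (illustrative only).
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so only the new head learns.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier with one sized for the new task
# (here: a hypothetical 10-class problem).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new layer's parameters are passed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
x = torch.randn(8, 3, 224, 224)   # fake images
y = torch.randint(0, 10, (8,))    # fake labels
loss = criterion(model(x), y)
loss.backward()
optimizer.step()

Because the backbone is frozen, the only weights being fit are those of the new head, which is what makes the approach cheap in both data and compute.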

🧠 Just wrapped up my incredible journey at the International Joint Conference on Neural Networks 2025! An exceptional experience that reinforces why this conference is a cornerstone of AI research.
🎯 Outstanding organization
🚀 World-class content
🤝 Invaluable networking with global researchers
Neural networks remain one of the most dynamic areas of AI. Events like #IJCNN2025 are essential for advancing the field.
Already looking forward to Maastricht 2026! 🔬✨

Looks interesting.

"[N]eural networks are compositions of differentiable primitives, and studying them means learning how to program and how to interact with these models, a particular example of what is called differentiable programming.

This primer is an introduction to this fascinating field imagined for someone, like Alice, who has just ventured into this strange differentiable wonderland."

arxiv.org/abs/2404.17625

Via Hacker News [ news.ycombinator.com/item?id=4 ]

arXiv.org · Alice's Adventures in a Differentiable Wonderland -- Volume I, A Tour of the Land

Neural networks surround us, in the form of large language models, speech transcription systems, molecular discovery algorithms, robotics, and much more. Stripped of anything else, neural networks are compositions of differentiable primitives, and studying them means learning how to program and how to interact with these models, a particular example of what is called differentiable programming. This primer is an introduction to this fascinating field imagined for someone, like Alice, who has just ventured into this strange differentiable wonderland. I overview the basics of optimizing a function via automatic differentiation, and a selection of the most common designs for handling sequences, graphs, texts, and audios. The focus is on an intuitive, self-contained introduction to the most important design techniques, including convolutional, attentional, and recurrent blocks, hoping to bridge the gap between theory and code (PyTorch and JAX) and leaving the reader capable of understanding some of the most advanced models out there, such as large language models (LLMs) and multimodal architectures.
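
To make the "compositions of differentiable primitives" idea concrete, here is a minimal sketch of my own (not from the book) using PyTorch autograd:

# A tiny "network": a composition of differentiable primitives
# (my own example, not taken from the primer).
import torch

w = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(0.5, requires_grad=True)
x = torch.tensor(3.0)

y = torch.tanh(w * x + b)   # compose multiply, add, and tanh

# Automatic differentiation traverses the composition in reverse,
# producing exact gradients without any manual calculus.
y.backward()
print(w.grad, b.grad)

Every building block in the book's tour (convolutional, attentional, recurrent) is "programmable" in exactly this sense: as long as each primitive is differentiable, autograd can optimize the whole composition.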