• 0 Posts
  • 62 Comments
Joined 5 months ago
Cake day: September 2nd, 2025


  • Before Facebook etc., forums were really small and plentiful. Each might be about a certain topic, but you were there for the few loud people who kept it going. Following them, pretty much. If you didn’t like them, you went to another forum on that topic, same deal there.

    It’s nothing like today’s forums, full of new accounts asking one question and moving on.

    Lemmy is also not really anything like those forums. The reason you call Lemmy a forum but not Facebook is mostly accidental, maybe down to marketing, and definitely not the result of rigorous definitions.

    In any case, @toiletobserver@lemmy.world’s observation applies to social media, forums, news sites with a comment section, and any other site.




  • As you can learn from reading the article, they do also store the information itself.

    They learn and store a compression algorithm that fits the data, then use it to store that data. The first part isn’t new; the ties between AI and compression theory go back decades. What’s new and surprising is that you can get the original work back out of attention transformers. Even in traditionally overfit models that isn’t a given, and attention transformers shine at generality, so it’s not obvious they should do this. Yet every model tested does it, so maybe it’s even necessary? (There’s a toy sketch of the compress-and-memorize idea at the end of this comment.)

    Storing the data isn’t a theoretical failure; some very useful AI algorithms do it by design. It’s a legal and ethical failure, because OpenAI etc. have been claiming from the beginning that this isn’t happening, and it also provides proof of the pirated works the models were trained on.
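
    Not from the article, just a minimal sketch of that “learned compressor that ends up storing its data” idea, under loud assumptions: an order-8 character model instead of an attention transformer, and a made-up 45-character training text. Given the fitted model, encoding the training text costs roughly zero extra bits (the information has moved into the model), and a short prefix is enough to regenerate the memorized text verbatim:

    ```python
    # Toy sketch (assumption: an order-8 character model, NOT an attention
    # transformer) of the "learned compressor that stores its training data" idea.
    from collections import defaultdict, Counter
    from math import log2

    K = 8  # context length, chosen so every context in this toy text is unambiguous
    training_text = "the quick brown fox jumps over the lazy dog. " * 3

    # "Training": count which character follows each K-character context.
    counts = defaultdict(Counter)
    for i in range(K, len(training_text)):
        counts[training_text[i - K:i]][training_text[i]] += 1

    # (1) Compression view: an arithmetic coder driven by these probabilities
    # would spend about -log2 P(char | context) bits per character. Here the
    # total comes out near zero because the model has fully memorized the text:
    # the information now lives in the model itself.
    bits = 0.0
    for i in range(K, len(training_text)):
        ctx, ch = training_text[i - K:i], training_text[i]
        bits += -log2(counts[ctx][ch] / sum(counts[ctx].values()))
    print(f"bits needed given the model: {bits:.1f} (raw: {8 * (len(training_text) - K)})")

    # (2) Extraction view: greedy generation from a short prefix of the
    # training data reproduces the memorized work character by character.
    out = training_text[:K + 2]  # "the quick "
    for _ in range(80):
        out += counts[out[-K:]].most_common(1)[0][0]
    print(out)
    ```

    A real transformer is nothing like this tiny lookup table, but the two printouts map onto the two claims above: the fitted model acts as a compressor for its own training data, and prompting with a fragment can surface the stored original.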