

Rambling about something for long enough that people should be able to tell is how I do it.
Well, at least the advertising companies will lose money this way


That kind of painting seems more likely to come alive
eating grass will destroy your teeth
Quickly and effortlessly get some music playing that can act as a backdrop for your real activity, such as working, driving, cooking, or hosting friends. Keep it rolling indefinitely.
“Discover” new music by statistical means based on your average tastes.
This is the main thing I want out of music software tbh.
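For what it's worth, the "statistical means" part doesn't have to be anything fancy. Here's a toy sketch of the shape of it; the track names and feature numbers are invented, and a real system would get its vectors from audio analysis or listening-history embeddings:

```python
import numpy as np

# Invented per-track feature vectors (think tempo, energy, valence...).
# In a real system these would come from audio analysis or embeddings.
library = {
    "track_a": np.array([0.9, 0.1, 0.4]),
    "track_b": np.array([0.2, 0.8, 0.5]),
    "track_c": np.array([0.85, 0.2, 0.3]),
    "track_d": np.array([0.1, 0.9, 0.6]),
}
history = ["track_a"]  # tracks the user has already played

def recommend(library, history, k=2):
    """Rank unplayed tracks by cosine similarity to the user's
    'average taste' (the centroid of their listening history)."""
    taste = np.mean([library[t] for t in history], axis=0)
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    candidates = [t for t in library if t not in history]
    return sorted(candidates, key=lambda t: cos(library[t], taste), reverse=True)[:k]

print(recommend(library, history))  # ['track_c', 'track_b']
```

Averaging the history into one centroid is the crudest possible taste model, but it's exactly the "based on your average tastes" behavior, and it keeps music rolling indefinitely if each pick gets fed back into the history.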


I think for some people the only way they can think of to help is attempting to bully someone over the internet, and it ends up being directed at whoever happens to be around who disagrees with them, even though that makes zero sense as a strategy.
Do you care about having decent enough devices to enjoy it or do you just buy the cheapest pair of earbuds to silence the world around you?
I have adopted a standard for headphones: they're not good enough unless this album sounds ok.


I think maybe they wouldn't if they're trying to scale their operations to scanning through millions of sites and your site is just one of them.


If there’s one person who knows their applied zk proofs, it’s that guy.


There are some pretty strong arguments that even zk proof is a flawed way of preserving privacy though, in a variety of ways. It prevents pseudonymity by enabling one-user-one-account, and it leaves users vulnerable to being coerced to reveal their full online activities by handing over cryptographic keys.
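To make the one-user-one-account point concrete, here's a toy sketch of the linkage structure. A real system would prove knowledge of the secret in zero knowledge rather than hashing it directly, and all the names here are invented:

```python
import hashlib

def nullifier(user_secret: bytes, service_id: bytes) -> str:
    # Deterministic per-(user, service) tag: the same user always gets
    # the same tag for a given service, so the service can reject any
    # attempt to register a second (pseudonymous) account.
    return hashlib.sha256(user_secret + b"|" + service_id).hexdigest()

secret = b"credential-secret-issued-to-one-person"
print(nullifier(secret, b"forum.example"))   # stable: one account max here
print(nullifier(secret, b"market.example"))  # distinct tag per service

# The coercion problem: anyone who compels you to hand over `secret`
# can recompute every tag above and link all your accounts together.
```

The zk part hides *which* credential produced a given tag, but the determinism that enforces one-account-per-user is exactly what makes a surrendered key retroactively link everything.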


That’s literally what the comment above it was doing too though. It’s a very common anti-AI argument to appeal to social proof.


We can’t afford to make any of this. We don’t have the money for the compute required or to pay for the lawyers to make the law work for us
I don’t think this is entirely true; yes, large foundation models have training costs beyond the reach of individuals, but plenty can be done that isn’t, or that is within reach of a relatively small organization. I can’t find a direct price estimate for Apertus, and it looks like they used their own hardware, but it’s mentioned that they used ten million GPU hours on GH200 GPUs; I found a source online claiming a rental cost of $1.50 per hour for that hardware, so the training cost can be loosely estimated at around $15 million.
That is a lot of money if you are one person, but it’s a couple of orders of magnitude smaller than the billions of dollars in settlements paid so far by the biggest AI companies for their hasty unauthorized use of copyrighted materials. It’s easy to see how copyright and legal costs, rather than compute, could end up being the bottleneck that keeps smaller actors from participating.
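Back-of-envelope, since the $1.50/hour figure is the one unverified input:

```python
gpu_hours = 10_000_000       # reported GH200 GPU hours for Apertus
rate = 1.50                  # USD/hour rental price found online (unverified)
training_cost = gpu_hours * rate
print(f"${training_cost:,.0f}")               # $15,000,000

settlements = 1_500_000_000  # "billions" scale; rough placeholder figure
print(f"{settlements / training_cost:.0f}x")  # 100x
```

Even if the rental price is off by two or three times in either direction, the gap between training cost and copyright-liability cost stays enormous.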
It should benefit the people, so it needs to change. It needs to be “expanded” (I wouldn’t call it that, rather “modified” but I’ll use your word) in that it currently only protects the wealthy and binds the poor. It should be the opposite.
How would that even work though? Yes, copyright currently favors the wealthy, but that’s because the whole concept of applying property rights to ideas inherently favors the wealthy. I can’t imagine how it could be the opposite even in theory, but in practice, it seems clear that any legislation codifying limitations on use and compensation for AI training will be drafted by lobbyists of large corporate rightsholders, at the obvious expense of everyone with an interest in free public ownership and use of AI technology.


But we can’t afford to pay. I don’t think open models like the one in the OP article would be developed and released for free to the public if there was a complex process of paying billions of dollars to rightsholders in order to do so. That sort of model would favor a monopoly of centralized services run only by the biggest companies.


I am thankful for the safety feature where the food processor can only run when locking the lid depresses a button, but I also keep it unplugged whenever the lid is off as an extra layer of redundancy.


TikTok
I think you’re always going to have problems with a lack of authenticity on platforms where opaque algorithms do all the work of deciding what gets popular and whom it gets shown to.
but the kinds of people who grape others generally don’t feel shame
I think this is probably not true.
the primary tools society uses to respond to grape are assault, prison, ostracism, or murder, so, like, what, is there less shame?
Those tools aren’t equally available to everyone; they’re expressions of power, which some people have more access to than others.
Stuff like this makes me wonder: at what point is it bad enough that the truisms about leaving medical advice to licensed healthcare professionals become wrong, and everyone would be better off turning to anything else instead of engaging with the system? Are we not there yet? How much further would there be to go?