

CEO complains that his blatant circular investment infinite money glitch isn’t convincing people that these perpetually unprofitable businesses are actually worth further investment.


Like, there’s no way in hell these files haven’t been doctored, right? Months of obfuscation and deflection, and then suddenly Trump’s fine to sign their release? There’s no way.


Why the hell did The Guardian include comment from an Amazon spokesperson? “Nuh uh, that’s not true.” No fucking duh that’s their response.


Got a secondhand Pixel phone and installed GrapheneOS. I love it.


I’m so confused by this common sentiment in the community. I’ve been gaming on Arch / NixOS for the past several months with an NVIDIA card after I switched earlier this year. Basically no issues.
Meanwhile, my buddy converted to Manjaro, and has a Radeon. He’s been having awful issues. Several of the games he plays crash constantly, especially if they are multiplayer. He tried switching to openSUSE recently; no real improvements.
I wanted to buy AMD for my eventual next card, but now I’m terrified of doing so, and deeply confused why everyone says AMD is better for Linux.

Precisely this. Leftist rhetoric about wages is often framed for other leftists, without addressing the core arguments underpinning centrist and conservative views on why the rich “deserve” their wealth. People say “theft” without making arguments for why our definition of theft needs to change.

I’m in complete agreement with this perspective, but rarely do I see discussions like this address the sticking point centrists and conservatives get hung up on: they don’t believe this is “theft.”
When I told my coworker about the historic productivity-to-wages gap, she argued (paraphrasing), “Could it not be that the gap reflects CEOs innovating ways to make their workers increasingly productive, while the value of those workers’ labor hasn’t actually increased, which would explain why the minds behind those innovations deserve the wealth?”
This conversation will go nowhere if we keep throwing around terms like “wage theft” while skipping step one, where we make the moral argument for why it really is theft.


To say “that feeling” of indignation (at the letter’s inclusion in a gallery) is the same as other things that make him roll his eyes is reductionist. We regard things as stupid for different reasons; they’re not all the “same feeling.” As others have said, the artist’s intentionality in presenting something is part of its message. So the indignation he felt about a piece being put in a gallery is part of that piece’s effect on him, born from the artist’s choices. That feeling is different from hearing a moron say something dumb and thinking it’s stupid.
Intentionality is the key. Case in point: “language evolves” is a silly thing to say after a mistake, but many subcultures start misspelling things on purpose, and that intentionality is how language evolves.


I just gave up and pre-ordered the Light Phone 3. Anytime I truly need a mobile app, I can just use an old iPhone and a WiFi connection.


I feel conflicted. On one hand, people can regulate themselves, and Facebook becoming a bigoted cesspit may bring more people to a moderated Fediverse.
On the other hand, these major platforms having such user monopoly and influence can cause unfettered hate speech to breed violence.
I’m torn on the idea that an insidious for-profit megacorporation should be the one expected to uphold a moral responsibility to prevent violence; its failure to do so might be the necessary wake-up call that ultimately strips it of that problematic influence. Thoughts?


Agreed. The problem is that so many (including in this thread) argue that training AI models is no different than training humans—that a human brain inspired by what it sees is functionally the same thing.
My case for why there is still an ethical difference rests on two arguments: scale and profession.
Scale: AI models’ sheer image output makes them a threat to artists in a way that other human artists are not. One artist clearly profiting off another’s style can still be inspiration, and even part of the former’s path toward their own style; however, the functional equivalent of ten thousand artists doing the same is something else entirely. The art is produced at a scale that could drown out the original artist’s work, the very work without which such image generation wouldn’t be possible in the first place.
Profession: Those profiting from AI art, which relies on unpaid scraping of artists’ work for data sets, are not themselves artists. They are programmers, engineers, and the CEOs and stakeholders who can even afford the ridiculous capital necessary to utilize this technology at scale in the first place. The idea that this is just a “continuation of the chain of inspiration from which all artists benefit” is nonsense.
As the popular adage goes nowadays, “AI models allow wealth to access skill while forbidding skill to access wealth.”

I find it surreal and profound that there is now a form of cybercrime that is, literally, using poetic maledictions. The line between technology and classic depictions of magic blurs yet further.