• 10 Posts
  • 2.75K Comments
Joined 2 years ago
Cake day: March 22nd, 2024


  • Eh I disagree with the power usage point, specifically. Don’t listen to Altman lie through his teeth; generation and training should be dirt cheap.

    See the recent Z Image, which was trained on a shoestring budget and costs basically nothing to run: https://arxiv.org/html/2511.22699v2

    The task energy per image is less than what it took for me to type out this comment.


    As for whether we “need” it, yeah, that’s a good point and what I was curious about.

    But then again… I don’t get why people use a lot of porn services. As an example, I just don’t see the appeal of OF, yet it’s a colossal enterprise.


  • But if we can stop people from looking at the illegal/dangerous stuff, use AI to create it instead, and let those people watch that, I think that would be a net positive. Of course you’d want to identify them, tag them, and keep them separate from everyone else. It’s not a solution to the problem they create, but if you can reduce the demand for it… I dunno, I want nothing to do with that kind of stuff, but I feel like there’s a solution in there somewhere.

    CP detectors got really good well before image gen was even a thing. They had to, as image hosting sites had to filter it somehow. So that’s quite solvable.

    Look at CivitAI as a modern example.

    They filter deepfakes. They filter CP. They correctly categorize and tag NSFW, all automatically and (seemingly) very accurately. You are describing a long solved problem in any jurisdiction that will actually enforce their laws.


    If you’re worried about power/water usage, already solved too. See frugal models like this, that could basically serve porn to the whole planet for pennies: https://arxiv.org/html/2511.22699v2


    IMO the biggest sticking point is datasets… The Chinese are certainly using some questionable data for the base models folks tend to use, though the porn finetunes mostly draw on publicly hosted booru data and such.




  • Maybe just maybe they will see it as a waste of money and ditch it, just like Facebook’s metaverse or whatever it was.

    This is what I’m trying to tell you! The only way to do that is to tell them it doesn’t work for the intended purpose: helping customers buy Amazon stuff. They don’t care about people messing around with the bot; that’s trivially discarded noise.

    Also, I’m sure at this point all my conversations are being fed back in to train the next one.

    It is not. It is quickly classified (by a machine) and thrown out.

    If you want to fuck with the training set, get the bot to help you do something simple, then when it actually works, flag it as an error. Then cuss it out if you like. This either:

    • Pollutes the training set with a success as a “bad” response.

    • Creates a lot of work for data crunchers to look for these kinds of “feedback lies.”

    And it’s probably the former.


  • Don’t tell it it’s wrong; leave feedback in a separate box.

    Not in its chat, but with a feedback button.


    Let me emphasize: the LLM remembers nothing. Amazon does not care about an ‘adversarial’ response. All cussing it out possibly does is factor that into your Amazon ad profile, and not to your benefit.

    And if you tell the bot it did wrong, it does not care. It doesn’t factor into anything.

    But if you legitimately ask it to help you buy something, and it gets that wrong, and you leave dedicated feedback, that registers for Amazon. It tells them their chatbot isn’t working, and is actually frustrating customers trying to use it to buy something. That’s how you tank the program.


  • You aren’t talking to AI; you’re talking to chatbots with no memory and no ability to change their internal state, so you don’t have to worry about that. Honestly, it’s a waste of your keystrokes and brainpower, as you are shouting into a void.

    …If you want to attack it, try getting it to actually do something (like “find me an item with X requirements”), then give feedback that it’s wrong if there’s a button for it. That does get registered.












  • Every news site is biased. Read them with that in mind.

    As an example, one of my usual sources since like 2015 is Axios. Their site is clean, lean, and they are extremely well sourced in Washington. But they recently got a big cash infusion from OpenAI. And, surprise surprise, they post a small but steady stream of Tech Bro evangelism on the side now.

    RT is generally awful, but sometimes their reporting outside of Russia, where they have incentive to dig, can be good.


    Hence, my bucket for the Guardian is “high-class liberal catnip.” They are clickbaity. That’s why they trend so much here on Lemmy.

    They’re well sourced. Their integrity is leagues beyond, say, Raw Story or the Daily Beast, which get spammed on Lemmy. But you have to filter their stories with that in mind.


    And this is pretty much what ALL written news is doing to survive, if they can. Because their competition on YouTube/Facebook/whatever is not bound to the same standards they are.

    If they don’t, they die.

    I used to write small articles for a tech hardware site. The owner chose to take the site down rather than chase the clickbait game.