• 0 Posts
  • 89 Comments
Joined 3 years ago
Cake day: June 11th, 2023



  • Ha!

    Hahahahahaha!

    Hahahahahahaahahahahahahahahahahahahahahahahhahahahahahahahahhahahahahahahahahahahahhahahagahahahahahahahahahahahahahahahhhaahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahaahahahahahahhhahahahahahahahahahahahahahahahahahahahahahahahahahahahahahabahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahabahahahahahahahahahahahahahahahahahahahahahahahababahababababahahahahahahahabaahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahaha!!!

    Hahaha…

    Haaaaaaaaaaaaaaahhhhhhh…

    Ha.

    No.


  • If you are wondering why your cookies come out different every time you bake, it isn’t due to variance of temperature and humidity – IT IS BECAUSE YOU ARE USING WILDLY DIFFERENT AMOUNTS OF FLOUR.

    And yes you ducking can tell the difference between a batch of cookies where the flour is weighed vs scooped.

    You can’t accurately measure flour by volume. The amount you get in a scoop will vary depending on how compressed it is. You weigh flour to remove that variance, which can be far greater than 5%. Don’t believe me? Put a cup of flour in a measuring cup, then start pressing on it to pack it down – you won’t have anywhere near a cup anymore. Controlling for flour density (i.e., measuring consistently by volume) is nearly impossible.

    Brown sugar is similar but easier to manage (most recipes tell you to use packed measures instead of scooping).

    Things like white sugar, sure – scoop away.
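
    The weighed-vs-scooped argument above comes down to simple arithmetic. A quick sketch (the gram figures are rough, commonly cited ballpark weights for a US cup of all-purpose flour, not from this comment):

    ```python
    # Rough figures: a "cup" of all-purpose flour weighs about 120 g when
    # spooned and leveled, but packing it down can push it toward ~180 g.
    # These numbers are illustrative approximations, not exact values.
    spooned_g = 120
    packed_g = 180

    # Percent extra flour relative to the lighter, spooned measure
    variance_pct = (packed_g - spooned_g) / spooned_g * 100
    print(f"Packing can add up to {variance_pct:.0f}% more flour than spooning")
    ```

    Even with less extreme packing, the swing dwarfs the ~5% figure, which is why recipes that list flour in grams are far more repeatable.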





  • They’re supposed to be good at transformation tasks: language translation, creating x in the style of y, replicating a pattern, etc. LLMs are outstandingly good at language transformation tasks.

    Using an LLM as a fact-generating chatbot is actually a misuse. But they were trained on such a large dataset and have such a large number of parameters (175 billion!?) that they perform passably in that role… which is, at its core, to fill in a call+response pattern in a conversation.

    At a fundamental level, it will never generate factually correct answers 100% of the time. That it generates correct answers more than 50% of the time is actually quite a marvel.
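
    The "fill in a call+response pattern" framing above can be sketched concretely. This is an illustrative toy (the function name and speaker tags are hypothetical, not any real chatbot API): the conversation is flattened into plain text, and the model's only job is to continue it after the final tag.

    ```python
    # Toy sketch of the call+response pattern an LLM is asked to complete.
    # Nothing here calls a real model; it just shows the text-completion
    # framing that chat interfaces are built on.
    def format_prompt(history, user_message):
        """Flatten a conversation into the text an LLM would continue."""
        lines = [f"{speaker}: {text}" for speaker, text in history]
        lines.append(f"User: {user_message}")
        lines.append("Assistant:")  # the model generates whatever follows this tag
        return "\n".join(lines)

    history = [
        ("User", "Translate 'bonjour' to English."),
        ("Assistant", "Hello."),
    ]
    prompt = format_prompt(history, "Now say it in the style of a pirate.")
    print(prompt)
    ```

    Seen this way, a "factual answer" is just the statistically likely continuation of the pattern, which is why correctness can never be guaranteed.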