Hidden State

LLMs: Tokens + Training = multi-dimensional model

Oh hi, internet. I made the posts with the podcasts work again. Don't thank me all at once, but remember: this is fucking MYSPACE TIME. FUCK FACEBOOK.

Howdy all y’all.

Did you know that a Large Language Model is simply a multidimensional, forward-and-reverse dot-product matrix of the probability that something will follow something else? The model simply says what the next word token will be based on the previous word tokens, where the relationship is modeled through "96 hidden state transformers" (a stack of transformer layers, each carrying a hidden state). The model is generated by relating the current symbols at each step (0…95) to all the others at that point, creating a fuzzy forward-and-reverse relationship between the likely next tokens in the language and all other tokens. Each token path is carved and traced via reinforcement training that says: the next word in this sentence makes…
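If you want that idea in miniature, here's a toy sketch. Nothing in it is the real thing: the vocabulary, the embeddings, and the single mean-pooled "hidden state" are all made up for illustration, and a real model would run many transformer layers instead of one averaging step.

```python
# Toy sketch: next-token prediction as dot products over token embeddings,
# with softmax turning scores into probabilities. All names and numbers here
# are illustrative, not any real model's.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "next", "word", "token", "follows"]   # toy vocabulary
d_model = 8                                            # tiny embedding size
embed = rng.normal(size=(len(vocab), d_model))         # one vector per token

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def next_token_probs(context_ids):
    # Collapse the context into one "hidden state" vector; a real transformer
    # refines this through a deep stack of attention layers (e.g. 96 of them).
    hidden = embed[context_ids].mean(axis=0)
    # Dot product of that hidden state against every token embedding gives a
    # score per candidate next token; softmax turns the scores into probabilities.
    scores = embed @ hidden
    return softmax(scores)

probs = next_token_probs([vocab.index("the"), vocab.index("next")])
for tok, p in sorted(zip(vocab, probs), key=lambda t: -t[1]):
    print(f"{tok:8s} {p:.3f}")
```

The final step is the part worth noticing: scoring every token in the vocabulary by dot product against a hidden state, then softmaxing into a probability distribution, which is the "probability that something will follow something else" the paragraph above is describing.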

Since it's been a while: hi, y'all.

CRB JOEKILLER PODCAST REPOST

Talking about hidden state. What is the additive combination of everything in us, if not our input queries, our understanding, and our context?

Want less? Nah. You cannot oversimplify the fact that we are more. We are the combination of all the existence there is, and we are composed of all that there will be. Make your channel and path.