№10869[Quote]
>>10620
Commits were being made as early as March that in the past indicated a launch being <2 weeks away. The current narrative is that it's taking so long because they're releasing a large number of different model sizes.
Honestly I'd rather see a gigantic QwQ.
№11094[Quote]
I loaded GLM and a few other recent models trying to continue the ERP with the fetish of my choice. Rerolled 10-20 times for each model. Each time I was fucking disgusted with the output. Most LLMs were basically saying the same thing, just rewording it slightly. Then I loaded a hentai game that isn't the fetish of my choice but slightly adjacent, made an LLM translate it, and I came buckets. This hobby is so fucking depressing. And the censorship is the work of the devil. I refuse to believe current models would be unable to generalize fucking ERP. They get intentionally gimped to be worthless, while still spitting out some disgusting simulacrum of what smut should be that will condition you to stop thinking of AI as an alternative to biological whores. I FUCKING HATE THIS CLOWN WORLD
№11255[Quote]
Support for Janus, a multimodal input + output model, has been merged into transformers
https://github.com/huggingface/transformers/releases/tag/v4.51.3-Janus-preview
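Minimal usage sketch, assuming the class names (JanusForConditionalGeneration / JanusProcessor) and the checkpoint id below match the preview release; verify against the release notes linked above before relying on any of it:
```python
# Sketch only: class names and checkpoint id are assumptions based on the
# Janus preview release; check the linked release notes.
import torch
from transformers import JanusForConditionalGeneration, JanusProcessor

model_id = "deepseek-community/Janus-Pro-1B"  # assumed converted checkpoint
processor = JanusProcessor.from_pretrained(model_id)
model = JanusForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Multimodal input (image + text) -> text output
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/cat.png"},  # placeholder image
        {"type": "text", "text": "Describe this image."},
    ],
}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

out = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(out[0], skip_special_tokens=True))
```
The image-output half presumably goes through its own generation path in the same classes, but I haven't checked the exact call.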
№11326[Quote]
The latest Deepseek v3 has ruined me for any other model.
№11332[Quote]
I've been using LLMs since December 2022. I am increasingly concerned that LLMs are not AI, but just glorified calculators.
№11406[Quote]
>>11332
>I've been using LLMs since December 2022. I am increasingly concerned that LLMs are not AI, but just glorified calculators.
Calculators don't really talk to me or give me useful suggestions for console commands / API calls
>>11326
Idk man, I have been using R1 because it's an all-rounder; is V3 really that much better? Don't want another massive model on my drive. Not like inference would be much faster (okay, there are no think tags for V3)
№11413[Quote]
>>11413
>Calculators don't really talk to me or give me useful suggestions for console commands / API calls
That's why I said glorified calculators.
№11416[Quote]
>>11413
Guess you could argue that they are (probabilistic, if the seed is chosen at random) linear bounded Turing machines.
One of the main issues remains their inability to learn in real time. At best, an LLM has in-context learning.
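To put that last point concretely, here's a rough sketch of what "at best in-context learning" means (model choice is arbitrary): the pattern lives entirely in the prompt, and a fresh call starts from the same frozen weights.
```python
# Minimal sketch of in-context "learning": the mapping is only picked up from
# the prompt; nothing is written back into the weights between calls.
from transformers import pipeline

generate = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")  # arbitrary small model

few_shot = (
    "English to French:\n"
    "sea -> mer\n"
    "sky -> ciel\n"
    "tree ->"
)
print(generate(few_shot, max_new_tokens=5)[0]["generated_text"])

# Same question without the examples in context: the model has retained
# nothing from the previous call.
print(generate("English to French:\ntree ->", max_new_tokens=5)[0]["generated_text"])
```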