This year, I’m going to try an experiment: using this blog in notebook mode, posting very short shitposty things at a higher frequency.
Let’s kick things off with this screenshot of a prompt I tried in Dall-E this morning, inspired by a conversation about the implications of LxMs being really bad at repeating things exactly or maintaining invariants across responses (such as a series of images that feature the exact same object). Like humans, and unlike traditional computers, LxMs are very bad at producing deterministic, reproducible behavior (modulo a fixed random-number seed at the start of a blank-slate, empty-context generation attempt with a fixed-weights model). Based on these results, I have reached no conclusion on whether AI has Buddha nature.
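The seed caveat is worth spelling out: pin the seed, the weights, and the (empty) context, and you do get bit-for-bit reproducibility. A toy sketch, where a seeded RNG stands in for a fixed-weights model (the `sample_tokens` function and its vocab are my own illustration, not any real model API):

```python
import random

def sample_tokens(seed, n=5, vocab=("red", "green", "blue", "cyan")):
    # Toy stand-in for a fixed-weights model: seeding the RNG at the
    # start of a blank-slate generation makes the output reproducible.
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(n)]

# Same seed, same "weights", empty context: identical sequences every time.
print(sample_tokens(42) == sample_tokens(42))  # True
```

Change the seed, add context, or update the weights, and the guarantee evaporates, which is the normal operating condition for a hosted LxM.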