Tweet
Running a (tiny) LLM locally (even though the weights are fetched remotely, straight from @HuggingFace) in an immersive environment.
Here it's running tinyllamas/stories15M on the Quest 3 in #WebXR thanks to wllama (a WASM binding for llama.cpp), so no remote API is called.
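For reference, loading a GGUF straight from HuggingFace with wllama looks roughly like this. A minimal sketch: the wasm paths and the exact model URL are assumptions to adapt to your setup, not the code from the demo.

```ts
import { Wllama } from "@wllama/wllama";

// Paths to the wllama wasm binaries (illustrative; match your bundler/static setup).
const CONFIG_PATHS = {
  "single-thread/wllama.wasm": "/wllama/single-thread/wllama.wasm",
  "multi-thread/wllama.wasm": "/wllama/multi-thread/wllama.wasm",
};

const wllama = new Wllama(CONFIG_PATHS);

// The GGUF weights are fetched once from HuggingFace; inference then runs
// entirely in the browser via llama.cpp compiled to WASM, no remote API.
// (Model URL is an assumption -- point it at the actual stories15M GGUF.)
await wllama.loadModelFromUrl(
  "https://huggingface.co/ggml-org/models/resolve/main/tinyllamas/stories15M-q4_0.gguf"
);

const output = await wllama.createCompletion("Once upon a time,", {
  nPredict: 64,
  sampling: { temp: 0.7, topK: 40, topP: 0.9 },
});
console.log(output);
```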