Tweets
-
What if, arguably seen through Paul Virilio’s dromology, things didn’t actually speed up; rather, we just had to continuously learn about new things but gradually got lazier because they didn’t fit our model, until now? What if everything moved at a constant speed but we slowed down? https://twitter.com/utopiah/status/1646818466106621954
(original)
-
Replying to @DaltonKern, @VoidOfSpace, @anders_iversen, @nfinf5, @IGrowNeo and @nearcyan
Sorry, missing context 😅 https://twitter.com/utopiah/status/1646882778766622721
(original)
-
Replying to @DaltonKern, @VoidOfSpace, @anders_iversen, @nfinf5, @IGrowNeo and @nearcyan
That’s actually why I tried it: a normal camera, here from an iPhone XS.
(original)
-
Replying to @utopiah
The source video behind all this.
(original)
-
Replying to @utopiah
PPS: why did I use @rgibli and @doctorow’s “Chokepoint Capitalism” as a prop📙? Well, because besides being an important book, the entire pipeline is also open source: all the tools mentioned, and everything done on Linux.🐧
(original)
-
Replying to @utopiah
PS: I kept part of the table on purpose; it could have easily been removed. Also the “joke” about mixing the real and the virtual would be funnier with color passthrough, but you still get the idea, I’m sure.😅
(original)
-
Replying to @utopiah
Is it a “good” model? No, it’s not. Is it a usable model though? Yes, and based on the amount of time spent, both mine and the machine’s, I’d argue that in some situations it’s not a bad compromise; especially once the workflow is properly set up, it gets more and more convenient.
(original)
-
Replying to @utopiah
Little bit of “behind the scenes”, namely NeRFstudio viewer and Blender.
(original)
-
Replying to @gfodor
Precisely because they stopped? FWIW I’m not saying learning, from text or otherwise, has to be a sigmoid, but rather that learning, once you take into account the resources used, probably does show decreasing return on investment, both economic and energetic.
(original)
-
What’s real, what’s virtual, can you even tell anymore?!
Well… yes you can. That wasn’t exactly THE best conversion but it was mine😅
🤳📙video > NeRFstudio with COLMAP > NeRF > obj mesh > obj2gltf > Blender to remove parts > gltf-transform to optimize > WebXR to manipulate👌
(original)
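The pipeline in the tweet above could be sketched as a shell session. The tools are the real CLIs mentioned (nerfstudio’s `ns-*` commands, `obj2gltf`, `gltf-transform`), but file names, paths and the chosen export method are assumptions, not the exact commands used:

```shell
# Hypothetical end-to-end sketch of the capture-to-WebXR pipeline.
# Assumes nerfstudio (with COLMAP), obj2gltf and gltf-transform are installed;
# all file names are placeholders.

# 1. Extract frames and estimate camera poses from the phone video (uses COLMAP)
ns-process-data video --data capture.mp4 --output-dir data/book

# 2. Train a NeRF on the processed data
ns-train nerfacto --data data/book

# 3. Export a mesh from the trained NeRF (e.g. Poisson surface reconstruction)
ns-export poisson --load-config outputs/book/nerfacto/<run>/config.yml \
  --output-dir exports/book

# 4. Convert the OBJ mesh to glTF
obj2gltf -i exports/book/mesh.obj -o book.gltf

# (manual step: open book.gltf in Blender, delete unwanted geometry, re-export)

# 5. Optimize the glTF for delivery to a WebXR scene
gltf-transform optimize book.gltf book-optimized.glb
```

The manual Blender step sits between the conversion and the optimization, matching the order given in the tweet.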
-
Replying to @VoidOfSpace, @anders_iversen, @nfinf5, @IGrowNeo and @nearcyan
Glad I tried: a NeRF straight from the image stream of my phone, captured as a single video.
COLMAP was tricky to install, compiling with the right CUDA architecture for Ninja, but it then makes for a convenient workflow.
I wrongly assumed that LiDAR or an IMU, via mobile apps, was required.
(original)
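The tricky COLMAP compilation mentioned above could look roughly like this; a sketch assuming a CMake + Ninja build, where the CUDA architecture value (75 here, e.g. an RTX 20xx card) is an example that must match your GPU:

```shell
# Hypothetical COLMAP build with CUDA support, using Ninja.
# 75 is a placeholder compute capability; look up the right value for
# your GPU (e.g. on NVIDIA's CUDA GPUs page) or the build may fail.
git clone https://github.com/colmap/colmap.git
cd colmap
mkdir build && cd build
cmake .. -GNinja -DCMAKE_CUDA_ARCHITECTURES=75
ninja
sudo ninja install
```

Getting that `CMAKE_CUDA_ARCHITECTURES` value wrong is a common source of the compilation trouble described in the tweet.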
-
Replying to @VoidOfSpace, @anders_iversen, @nfinf5, @IGrowNeo and @nearcyan
Let’s see…
(original)
-
Replying to @_userotti, @mrdoob and @threeja
This is a step done outside of WebXR, where the headset tries to recognize the environment. It will try to find the floor on its own, but if it’s off, usually a bit too high, you can bring your hand to the floor and it will adjust.
(original)
-
Replying to @_userotti, @mrdoob and @threeja
I’m not sure it knows about the walls, but for the floor at least it’s the room setup that puts it at y=0. It works here but wouldn’t work with a staircase going down.
(original)
-
Replying to @VoidOfSpace, @anders_iversen, @nfinf5, @IGrowNeo and @nearcyan
Seems so from https://github.com/nerfstudio-project/nerfstudio/#4-using-custom-data , trying.
(original)
-
Replying to @VoidOfSpace, @anders_iversen, @nfinf5, @IGrowNeo and @nearcyan
Thanks; without even getting into the processing step, is the capturing then exactly identical, namely taking photos of the target object or scene?
(original)
-
Replying to @bzzeowGhnGhnGhn and @_akhaliq
I mean… it could also include sex indeed, it’s important to learn about biology and reproduction too!😅
(original)
-
Replying to @gfodor
My bet is on opportunity cost. It makes me think of https://fabien.benetou.fr/ReadingNotes/Epistemetrics namely: can we assume infinite linear growth, or is it becoming a sigmoid that flattens regardless of the amount of resources, both compute and novel datasets left to train on, thrown at the problem?
(original)
-
Replying to @anders_iversen, @nfinf5, @IGrowNeo and @nearcyan
Thanks, can you please elaborate on the difference with photogrammetry specifically then?
(original)
-
Replying to @mrdoob, @threejs and @threeja
OMG is it a @threejs fork?😅
(original)
-
RT @mrdoob: Starting to update the @threeja WebXR examples so they just work in passthrough headsets.
Casting shadows and intersecting th…
(original)
-
Replying to @utopiah
Interesting to remain critical of individual examples https://twitter.com/mgrczyk/status/1646712007222251520 and look for flaws there too, yet the question remains: is it a fundamental limit of the technique regardless of specific implementations, and if so, what are the consequences at scale and over time?
(original)
-
⚠️"demonstrating such jailbreaks is to show a fundamental security vulnerability of LLM’s to logic manipulation
[…] such “toy” Jailbreak examples will be used to perform actual criminal activities and cyberattacks, which will be extremely hard to detect and prevent." https://twitter.com/alexalbert__/status/1646624856430215168
(original)
-
Replying to @yiliu_shenburke
Doesn’t really matter as long as you keep on building IMHO ;)
(original)
-
Replying to @utopiah
Also works with code e.g
(original)
-
Replying to @bitbybit_dev, @threejs, @ImmersiveWebW3C and @glTF3D
Appreciate the kind words, sent a DM.
(original)
-
Replying to @bitbybit_dev, @threejs, @ImmersiveWebW3C and @glTF3D
Very cool! I’ve done a few related prototypes, e.g. block snapping https://twitter.com/utopiah/status/1617955604739526658 direct meshing https://twitter.com/utopiah/status/1643512369237172225 or parametric with that basic 3D surface. I’ll try it in 2D and WebXR; thanks for the prompt reply and amazing work.
(original)
-
Replying to @_akhaliq
Immersive realtime photorealistic explorable explanation across multiscale simulations.
(original)
-
RT @_akhaliq: Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
abs: https://arxiv.org/abs/2304.06706
project page: https://jonbarron.info/zipnerf/…
(original)
-
Replying to @anders_iversen, @nfinf5, @IGrowNeo and @nearcyan
AFAIK this isn’t how NeRF capture works. Unlike photogrammetry, which tries to recover position from the images themselves, here it uses the device sensors, e.g. IMU and depth sensor, to get the relative position, then reconstructs from that too.
(original)
-
Replying to @CubanBTC
Fair, you are entitled to your own value system and priorities, even if they are different from mine.
(original)
-
Replying to @deliprao and @ipvkyte
Any benchmark I could check? Made me curious.
(original)
-
Replying to @CubanBTC
The point here, again, is NOT to play down the privacy issue, but rather that it is also an ecological problem.
(original)
-
Replying to @CubanBTC
I could list examples https://twitter.com/utopiah/status/1642401483751661569 or check model cards, but I bet others have done that better before.
(original)
-
Replying to @CubanBTC
Actually trivial: you estimate with something roughly equivalent, e.g. https://twitter.com/utopiah/status/1629405616564326400
It’s standard practice in the industry, when you are actually open, to clarify the process used to build the model, and it can include the CO2eq cost.
(original)
-
The Silicon Valley business model, really; nothing changes.
Establish a monopoly by “borderline illegally” scraping thanks to VC funds. Acknowledge it was a “mistake”, once legally forced, but only when you are so big compared to the competition that it does not matter anymore.
Profit. 🙄 https://twitter.com/utopiah/status/1646786402175074304
(original)
-
Replying to @CubanBTC
Also to clarify the /$ means sarcasm with a money twist.
(original)
-
Replying to @CubanBTC
I can’t say if data leakage is the worst part, but what I can say for sure is that one problem doesn’t cancel out another. So yes, data leakage is the focus and is terrible, but training models also has an environmental impact AND is also terrible and irreversible.
(original)
-
Another great step in the democratization of AI, e.g. LLMs: GPT4All-J chat UI installers by @nomic_ai
Download from https://github.com/nomic-ai/gpt4all , start the installer via the simple GUI in the bottom right, let it download the model while you grab a coffee, and minutes later chat, locally.👍
(original)
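For Linux users the GUI install described above boils down to a few commands; a sketch only, since the installer file name and download location change between releases (both below are assumptions; check the repository’s README for the current link):

```shell
# Hypothetical GPT4All chat UI install on Linux.
# The installer URL and file name are placeholders; the real ones are
# linked from https://github.com/nomic-ai/gpt4all
wget https://gpt4all.io/installers/gpt4all-installer-linux.run
chmod +x gpt4all-installer-linux.run
# Launches the graphical installer; the model itself is downloaded
# on first run, entirely locally after that.
./gpt4all-installer-linux.run
```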
-
Replying to @utopiah
i.e. a very small, unproven, not to say imagined, competitive advantage, at a huge cost both professionally (e.g. Samsung) and personally.
Again, this is NOT against the technology, i.e. LLMs, per se, but rather against HOW that specific for-profit implemented very important research! /🧵
(original)
-
Replying to @utopiah
processes, e.g. search in a generic search engine with keywords, in a dedicated search engine, completion, code completion, etc., BUT also other LLMs or even “just” AI tools, e.g. neural search, already included in professional tools.
IMHO the trade-off is absolutely not worth it…
(original)
-
Replying to @utopiah
I had a few discussions with acquaintances about this. Several were convinced that it would somehow send Italy🇮🇹 back to the dark ages. Let’s indeed check in a short while if the efficiency gains are real, not yet another cool demo. This is to benchmark not just against existing…
(original)
-
Who cares about privacy if a few people can earn amounts of money they couldn’t spend in a lifetime while burning tons of CO2eq🌍, letting the masses imagine they are going to gain some efficiency in writing emails or searching documentation via a black box with “Open” in its name.🤔/$ https://twitter.com/hackylawyER/status/1643614067448315907
(original)
-
Replying to @hackylawyER and @WIRED
cc @djleufer
(original)
-
Replying to @darkpatterns
Well, I don’t usually listen to audiobooks but I have to support @librofm and @doctorow’s work here!
(… plus I have to be consistent with my own principles https://twitter.com/utopiah/status/1306566303839391744 ;)
(original)
-
RT @darkpatterns: DRM is user hostile.
(original)
-
RT @bitbybit_dev: http://bitbybit.dev bridge demo project showcases the power of our web CAD app. Latest release includes powerful new…
(original)
-
Replying to @bitbybit_dev and @threejs
Very nice! Is there an easy way to see, or even edit, the result in XR? Either directly via WebXR support from @ImmersiveWebW3C (e.g. https://threejs.org/docs/#manual/en/introduction/How-to-create-VR-content ) or via export of the result to e.g. @glTF3D (e.g. https://threejs.org/docs/examples/en/exporters/GLTFExporter.html )?
(original)
-
RT @algoritmic: Create animations starring your own drawn characters https://fairanimateddrawings.com/site/home & https://github.com/facebookresearch/AnimatedDrawings https://t.co/AvDzK0H…
(original)
-
RT @utopiah: Documentation of the process https://fabien.benetou.fr/Cookbook/Electronics#SocialWebXRRPi0 including a 700Mb image to use with https://github.com/raspberrypi/rpi-imager
Suggestions…
(original)