Project Genie lets you generate new worlds 60 seconds at a time, but only if you pay for AI Ultra.
Generate a world free of AI slop.
And fascists.
Please
I am afraid Google Genie will feel like Wolfenstein 3D
> potentially the viability of the profession of game designer and 3D artist

My brother graduated university in 2022 and hasn't been able to find a single opening since, in Montréal, which has a bunch of studios. And it's not like he's choosy: he'd even go work for Ubisoft! Half his cohort has already pivoted.
> Is there a reason they use AI to generate video instead of AI to generate assets which are fed into a more traditional rendering engine?
>
> It would seem to me that solving for smaller parts of the whole would be cleaner and quicker than simulating the complete end result. Build the holodeck rather than a short YouTube video about the holodeck, in other words.

I've wondered the same on a couple of occasions. It seems they think consistent world models are important, probably for robotics. The visuals are a way to test it, maybe with some selling opportunities.
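A toy contrast of the two pipelines being debated, as a sketch. Every function name here is an invented stand-in, not a real API; the point is only where the model sits in the loop.

```python
# Invented stand-ins, not real APIs: the point is where the AI sits.

def world_model_frames(prompt: str, actions: list[str]) -> list[str]:
    """Genie-style: the model IS the simulator and the renderer.
    Every frame is an inference; physics, state, and pixels are
    entangled in one opaque latent, so coherence can drift."""
    latent = f"latent({prompt})"
    frames = []
    for a in actions:
        latent = f"step({latent},{a})"       # model call per frame
        frames.append(f"pixels<{latent}>")   # raw pixels, no scene graph
    return frames

def asset_pipeline_frames(prompt: str, actions: list[str]) -> list[str]:
    """The commenter's alternative: AI generates assets once, and a
    conventional engine owns physics and rendering, so coherence
    comes for free and the scene stays inspectable."""
    assets = [f"mesh({prompt},{i})" for i in range(3)]   # one-time model call
    scene = {"assets": assets, "camera": 0.0}            # explicit scene graph
    frames = []
    for a in actions:
        scene["camera"] += 1.0                           # engine-side physics
        frames.append(f"render({scene['camera']},{a})")  # deterministic render
    return frames

print(world_model_frames("forest", ["walk", "turn"]))
print(asset_pipeline_frames("forest", ["walk", "turn"]))
```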
> Damaging the actual world you live in to "create" a fantasy world?

Reminds me of the scene in Inception when the old man explains to Cobb that "for them, the dream is the reality". Seems like something lots of folks here could relate to.
> Is there a reason they use AI to generate video instead of AI to generate assets which are fed into a more traditional rendering engine?
>
> It would seem to me that solving for smaller parts of the whole would be cleaner and quicker than simulating the complete end result. Build the holodeck rather than a short YouTube video about the holodeck, in other words.

I'd agree.
It does seem obvious to me too: get these things generating assets and worlds in Unreal Engine and you get coherency, physics, rendering, etc. virtually for free.

However, until AI can create longer, repeatable, fully coherent simulations with different sensor types (point cloud, multiple cameras, IR), it's no more than a cool demo. The physics models for the autonomous systems need to be able to interact with the world, its objects, and surfaces. The sensor data needs to be returned in a form that closely mimics the real sensors. Engineers need to be able to run the same simulation dozens of times, tweaking the system and observing the results.

This can all be done right now with 3D environments and detailed sensor models.
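To make that bar concrete, here's a minimal sketch, with invented names and no real sensor library, of the two properties demanded above: pluggable sensor models that return data shaped like the real thing, and runs that replay identically so engineers tweak only the system under test.

```python
# A toy sketch (invented names, no real library): pluggable sensor
# models plus bit-for-bit repeatable simulation runs.
import random
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str
    data: list[float]

class Camera:
    name = "camera"
    def sample(self, state: dict, rng: random.Random) -> Reading:
        noise = rng.gauss(0.0, 0.01)   # mimic the real sensor's noise profile
        return Reading(self.name, [state["x"] + noise])

class Lidar:
    name = "lidar"
    def sample(self, state: dict, rng: random.Random) -> Reading:
        return Reading(self.name, [state["x"] * 2 + rng.gauss(0.0, 0.05)])

def run_sim(seed: int, steps: int, sensors: list) -> list[Reading]:
    rng = random.Random(seed)          # fixed seed => identical replay
    state = {"x": 0.0}
    log = []
    for _ in range(steps):
        state["x"] += 0.1              # deterministic physics step
        log.extend(s.sample(state, rng) for s in sensors)
    return log

# Same seed, same run: engineers change the system under test, not the
# world. A frame-by-frame generated video can't promise this today.
assert run_sim(7, 5, [Camera(), Lidar()]) == run_sim(7, 5, [Camera(), Lidar()])
```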
> It does seem obvious to me too: get these things generating assets and worlds in Unreal Engine and you get coherency, physics, rendering, etc. virtually for free.

You start with a single triangle. Add more triangles at random; if it looks more like the object, you give the AI a virtual cookie, and if it doesn't, you have it throw out the changes and add more triangles at random until it does. Everything that exists can be represented by a number of triangles greater than 1 and less than some upper bound I haven't quite pinned down yet. Once we do finally figure out that upper bound, the software will run a looooot smoother, what with not having to keep all those extra triangles in memory all the time until we need them.
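The joke describes a real technique: random hill climbing over a growing set of triangles, as in the old "evolve the Mona Lisa" demos. A toy version, where "looks more like the object" is stubbed out as matching a target total area rather than comparing rendered pixels:

```python
# Toy hill climbing over triangle sets. Real versions score candidate
# renderings against a target image; here "fitness" is just total area.
import random

def loss(triangles: list[tuple], target: float) -> float:
    area = sum(0.5 * b * h for b, h in triangles)
    return abs(area - target)          # smaller = "looks more like it"

def fit(target: float, max_triangles: int = 100, tries: int = 10_000) -> list[tuple]:
    best = [(1.0, 1.0)]                # start with a single triangle
    for _ in range(tries):
        candidate = best + [(random.random(), random.random())]
        if len(candidate) > max_triangles:    # the elusive upper bound
            break
        if loss(candidate, target) < loss(best, target):
            best = candidate           # virtual cookie: keep the change
        # else: throw the new triangle out and try again
    return best

print(len(fit(target=5.0)), "triangles")
```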
I wonder, though, if the issue is training: where would the data come from for the AI to learn to do that? Video is easy; they've got zillions of hours of YouTube, etc. Teaching it how Unreal works (not how the output looks, but what to do to get that output) might be a lot harder. Feed it the manuals and some sample projects? I dunno.
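For illustration only, a guess at what such a training example might look like if the target were editor operations instead of pixels. The "op" schema here is entirely made up, not anything Unreal actually exposes:

```python
# Hypothetical supervised pair: instruction in, engine operations out.
# (Unreal's real scripting surface is Blueprints/C++; this is illustrative.)
example = {
    "instruction": "a foggy forest clearing with a campfire",
    "target_actions": [                # engine operations, not pixels
        {"op": "spawn", "asset": "SM_PineTree", "count": 40, "scatter": 25.0},
        {"op": "spawn", "asset": "BP_Campfire", "at": [0, 0, 0]},
        {"op": "set_fog", "density": 0.7},
    ],
}
# The catch raised above: YouTube supplies endless video examples, but
# (instruction -> editor actions) pairs like this barely exist at scale.
```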
> At $250/month, I expect to be able to move into the damned thing. At least part time.

AI cloud computing timeshares coming soon.
> World models are exactly what they sound like—an AI that generates a dynamic environment on the fly. They're not technically 3D worlds, though.

So it's exactly the thing that it is not, technically, though.
> I wish tech companies would publish their internal project approval documents when they release a new product. Is there one? How did someone justify a team working on this? There has to be some sort of project qualification mechanism... right? It's neat but functionally pointless.

Functionally pointless today, but think about it from a dystopian, greedy business perspective: there is big marketing potential where a generated experience can have visuals (be it ads, product placement, psychological cues, etc.) tailored to individuals as they watch or interact in real time.
> There has to be some sort of project qualification mechanism... right? It's neat but functionally pointless.

Google has always prided itself on letting its developers work on things that are merely neat while deferring judgment on whether they turn out to be pointless. The hope is to be the only company that chances upon an amazing application of something everybody else had dismissed. The irony: Google published word2vec back in 2013, which was fairly foundational to modern LLMs even though it is now outdated, yet seemingly nobody there realised the next step that would unlock a money bubble.