-
I can't promise that a generative AI model will never be able to make 3D animation that looks right, but making intentional choices to solve artistic problems is not something these models can understand, at least not given how they're currently built @ChadNotChud/1633822252734066690
-
Neural networks do not understand the underlying meaning behind the data they are trained on. They model the mathematical relationship between vectorized inputs and outputs; in basic terms, they are big pattern-recognition machines. Which isn't nothing, but it is clearly limited
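-
To make the "pattern recognition" point concrete, here's a toy sketch (my own illustration, not from the thread): a tiny one-hidden-layer network trained to fit y = x² on x in [-1, 1]. Inside the training range it matches the pattern closely, but it never learned the *rule* "square the input", so asked about x = 3 it answers from the shape of the data it saw, not from understanding.

```python
# Toy illustration: a small MLP fits the pattern y = x^2 on [-1, 1],
# then fails badly outside that range, because it modeled the
# input/output relationship in the training data, not the rule behind it.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
Y = X ** 2

# 1 input -> 32 tanh units -> 1 output
W1 = rng.normal(0.0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.1
for _ in range(8000):  # plain full-batch gradient descent on MSE
    H, P = forward(X)
    G = 2.0 * (P - Y) / len(X)           # dLoss/dP
    gW2 = H.T @ G; gb2 = G.sum(0)
    GH = (G @ W2.T) * (1.0 - H ** 2)     # backprop through tanh
    gW1 = X.T @ GH; gb1 = GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

train_mse = float(np.mean((forward(X)[1] - Y) ** 2))
pred_at_3 = float(forward(np.array([[3.0]]))[1])  # true value: 9.0

print(f"in-range MSE: {train_mse:.5f}")
print(f"f(3) predicted {pred_at_3:.2f}, actual 9.00")
```

Inside [-1, 1] the fit is tight; at x = 3 the tanh units saturate and the prediction is nowhere near 9. That's the whole thread in miniature: good interpolation over the training distribution, no grasp of the underlying rule.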
-
A model like Stable Diffusion does not understand what lighting, construction, or composition is. It understands what images with certain descriptions tend to look like. It can't reliably reproduce details (like hands) that don't show up consistently in the overall pattern it is learning
-
I'm sure if you train a model with a text description as input, and skeleton animation and camera placement as outputs, it'll do... something. Maybe even something okay-looking! But it will not "understand" which rules are okay to break to achieve a specific effect