News from the RNDR Coin Development team on October 21, 2022
Earlier this week, Apple showcased Octane X on the iPad, bringing cinematic-quality rendering software to a whole new mobile platform. Alongside bringing all of Octane 2022 to the iPad, the Octane X app will integrate seamlessly with the Render Network, letting users tap its creative power without being held back by local hardware. Following the Octane X showcase, this week the Render Network previewed a new AI toolset in development, built on Stable Diffusion, that, combined with next-generation consumer workflows, will revolutionize the creative process. Let's dive into some of the details:
AI Neural Rendering
Part of porting Octane X to the iPad was creating a new UI to support a simpler workflow than the one currently offered on desktop. This retooling allowed the Render Network to be integrated more seamlessly into the Octane X experience, but that was not the only opportunity it opened. The development cycle also allowed something previously only previewed to be more fully realized: AI-rendered neural net objects.
Jules Urbach on Twitter: "3/10 In March at #GTC22, I first previewed our Neural Rendering work: AI and raytracing seamlessly blend in a single GPU rendering pipeline: #RNDR https://t.co/brTyypV8vD pic.twitter.com/EslYZlJ7iy"
These neural net objects are created with the realistic properties they would have in-universe baked in, including textures, lighting and more. In keeping with the ease of use of an iPad, the UI for these models is swapped out in Octane X for a simple voice/text input, similar to what's seen in AI projects like DALL·E. Simplifying the creation process will allow jobs to be created more efficiently, expanding the possible scale.
This might seem impossible given the limitations of the iPad GPU, but that's where the Render Network integration comes in. Throughout the creation process, hundreds of "micro-jobs" are performed on the Render Network to produce the final frame renders the AI creates. Think of the AI as a master delegator, parsing out work to Node Operator nodes without user intervention in order to produce the desired output. But what does this mean for Render Network users?
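The delegation pattern described above can be sketched in a few lines. Note that this is a hypothetical illustration, not the Render Network's actual API: `MicroJob`, `split_frame`, and the round-robin scheduler are stand-ins for how a frame might be sliced into tile-sized micro-jobs and handed out across node operators.

```python
from dataclasses import dataclass

@dataclass
class MicroJob:
    """A hypothetical slice of a frame render (e.g. one tile of pixels)."""
    job_id: int
    tile: tuple  # (x, y, width, height) region of the final frame

def split_frame(width, height, tile_size):
    """Split one frame into tile-sized micro-jobs for distribution."""
    jobs, job_id = [], 0
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            w = min(tile_size, width - x)
            h = min(tile_size, height - y)
            jobs.append(MicroJob(job_id, (x, y, w, h)))
            job_id += 1
    return jobs

def delegate(jobs, nodes):
    """Round-robin assignment of micro-jobs to node operators."""
    assignment = {node: [] for node in nodes}
    for i, job in enumerate(jobs):
        assignment[nodes[i % len(nodes)]].append(job)
    return assignment

# One 1080p frame in 256-px tiles yields 8 x 5 = 40 micro-jobs.
jobs = split_frame(1920, 1080, tile_size=256)
plan = delegate(jobs, nodes=["node-a", "node-b", "node-c"])
print(len(jobs))  # → 40
```

Even this toy version shows why a single frame can fan out into many small units of work, and why adding nodes scales throughput without any change on the user's side.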
A whole new kind of work and user
Expanding onto a device like the iPad brings more than just a new platform; it brings a whole new user base. With an easy-to-use system integrated seamlessly into the Render Network itself, more creators will be able to join the Render Network more easily than before, bringing with them more jobs for the current crop of Node Operators. Additionally, the workload demanded by AI modeling will increase demand for Nodes to process the AI's micro-jobs. Beyond the increased job numbers, the Render Network is currently testing support for multiple AI models, which would multiply the workload and the Nodes needed with each model supported.
A Revolutionary Creative Workflow
AI toolsets like Stable Diffusion, integrated into Octane and the Render Network, have the potential to augment the creative process in some very interesting ways. From a single scene or animation, artists can now create on-chain, prompt-based generative renders using Stable Diffusion. AI can also simplify time-consuming, laborious elements of the creative process, such as generic texture creation, making 3D artwork much more intuitive and accessible to a wider variety of artists. AI textures also have the potential to open up new creative concepts, letting artists rapidly iterate on styles using prompts and creating a new toolset for look and style development. Most interesting is how all of this can be explored on chain as new mediums emerge in the post-Internet web3 metaverse.
Jules Urbach on Twitter: "4/10 6 months later: RNDR (beta) supports dozens of AI models - run a #stableDifusion job from the #RNDR website just like a normal 3D render job ‼️ This alone *will* push demand-side much higher when it launches, e.g. we'd likely need ~2 million OB to power #dreamstudio today pic.twitter.com/Q3vk1wS50w"
The Future of AI, NFTs, Blockchain and Immersive Media
For NFT artists, the combination of Stable Diffusion and on-chain rendering could open up avenues to creating large, accessible editions of NFT releases, but with the uniqueness of a generative 1/1. This creates the potential for any scene to become a large-scale, individually unique, or even long-form generative series. Combining randomized on-chain scene deformations, Stable Diffusion rendering, and unique generative rarity opens a new universe for cryptoartists to explore. Because Stable Diffusion is prompt-based, it is possible to use on-chain oracles to generate prompts, creating a new vector for on-chain cryptoart where on-chain or oracle-based activity creates a feedback loop with artwork creation. Additionally, these artworks can become dynamic with ever-evolving prompts from oracles, creating a living, breathing work of art. For example, an artwork could evolve based on inputs like on-chain activity, or even data fed in from other sources like news, social media, image libraries or other public data sets.
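One way a seed-to-prompt mapping like this could work is sketched below. The vocabulary lists, the function name, and the use of a transaction hash as the seed are all hypothetical illustrations; the point is only that hashing an on-chain value gives each token a reproducible, individually unique prompt.

```python
import hashlib

# Hypothetical vocabulary an artist might define for a generative series.
STYLES = ["baroque", "vaporwave", "brutalist", "art nouveau"]
SUBJECTS = ["cathedral", "tidal wave", "neon garden", "orbital station"]
PALETTES = ["in gold and indigo", "in muted pastels", "in high-contrast monochrome"]

def prompt_from_seed(seed: str) -> str:
    """Deterministically map an on-chain seed (e.g. a tx hash) to a prompt.

    The same seed always yields the same prompt, so each token's artwork
    is reproducible while remaining unique to its seed."""
    digest = hashlib.sha256(seed.encode()).digest()
    style = STYLES[digest[0] % len(STYLES)]
    subject = SUBJECTS[digest[1] % len(SUBJECTS)]
    palette = PALETTES[digest[2] % len(PALETTES)]
    return f"a {style} {subject} {palette}"

# Two different seeds yield two stable, distinct-looking prompts.
print(prompt_from_seed("0xabc123"))
print(prompt_from_seed("0xdef456"))
```

Swapping the fixed seed for a live oracle feed is what turns a static edition into the "living, breathing" artwork described above: each new oracle value hashes to a new prompt, and the render evolves with it.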
Finally, as we look to the future of fully immersive real-time media, AI and Stable Diffusion based rendering can lead to fully immersive generative worlds — in which background AI rendering and procedural geometry creates continuous, open ended virtual experiences. Imagine exploring a VR scene that evolves based on prompts and activity as a user navigates through it. These could also be packaged as individually unique immersive experiences via unique on-chain seeds, giving artists a new avenue to create and sell individually unique immersive streaming experiences. It may sound far-fetched, but that future is coming soon.
While there is good reason to be concerned about the weaponization of AI, as well as the disruption to the future of labor that comes with AI automation, there are applications of the technology that will augment the creative process, creating a new type of creative labor that will become increasingly important for the global economy. As Jules recounted all the way back in 2017 in his "Life after Automation" piece about the future of the Render Network, "When AI has parity with more rarified levels of intelligent human skill and labor, it will only be commoditizing the value of human work, never human worth."
This idea stands at the foundation of this new AI-generated work: while the AI has the ability to craft, it does not have the ability to create, which still lies firmly in the hands of the human beings behind it. What generative art opens the door for is an enhanced creative workflow: one where new pieces can be formed more easily on multiple platforms; where the idea of collective creation is brought to life through AI and the Node Operators of the Render Network helping to render singular visions; where the limits of creative expression are eroded. The future of creativity is not to be feared:
“In a post-automation world, AI may ultimately evolve to the point where it can generate our media, art, stories, synthetic experiences and expose us to novel and meaningful truths. This may feel like it threatens the sacred final frontier of human creative output. But we should remember that even if AI one day powers the entire economy of creating thought and art, the only transactional unit of currency in such a system, will be when we, as humans, render it valuable.”
The rise of AI and immersive media adds urgency to concerns about centralization risks, technological control and inequality, which makes it even more imperative to build open platforms for a shared and open decentralized metaverse. Making AI toolsets available in Octane and on the Render Network provides more widespread access to this technology, democratizing who can create with cutting-edge tools. It is an essential part of the Render Network's mission.
Jules will be speaking about AI Neural net rendering, Octane X on the iPad and more during his talk at Solana Breakpoint 2022. Be on the lookout for more information about his talk, streaming links and more on the Render Network social channels.