The bleeding edge: Adobe held its annual MAX conference this week and, as usual, closed the event with its Sneaks presentation. Sneaks is a showcase of experimental features that Adobe developers have been working on for Creative Cloud throughout the year. This year's Sneaks theme was the Multiverse, a nod to the ongoing enthusiasm over metaverse technologies.
Most of this year's Sneaks projects have some application in the metaverse, whether it's creating 3D environments from a 2D image or compositing images to use in a metaverse presentation. The features are in various stages of development, so they don't always work as well as you might expect, and most might never see the light of day again.
However, three of the projects stood out for us this year: Project Clever Composites, Project Vector Edge, and Project All of Me. Each uses Adobe's Sensei AI to make things much easier for content creators, and each looks polished enough to have a good chance of making it into Creative Cloud at some point.
Project Clever Composites
Clever Composites makes compositing multiple images a breeze. If you've ever tried to place an object or person onto the background of a different photo, you know it's a challenge. First, you have to isolate the subject and cut it from one image, then place it on the background, and that's the easy part. Adjusting the lighting, shadows, color saturation, contrast, and many other settings to match the background image can sometimes take hours.
Clever Composites does a few cool things. First, using the Creative Cloud search function to find suitable composite images can be tricky and time-consuming. With Clever Composites, selecting an area of the background image acts as a filter, returning only images that make sense for that visual context.
After finding a suitable subject for the composite, fitting it to the background is as simple as dragging and dropping. The AI will automatically adjust the lighting, scale, and all other effects, including creating a drop shadow.
Project Vector Edge
Vector Edge is another feature tailored to creating composites. Placing logos and other visuals on three-dimensional surfaces within a photo can be complex. The perspective tool is your friend with tasks like placing an image on a television screen. However, things get trickier when you start dealing with edges. For example, situating a logo over the edge of a box requires some deft photoshopping magic. Vector Edge does all of this for you.
The Vector Edge tool uses AI to eliminate the need for manually clipping foreground images and adjusting for multiple perspectives. To demonstrate, Adobe started with a background image containing several objects. Sensei analyzes the photo and determines each object's dimensionality. Then, when you place a foreground image, like a logo, on a flat surface, it automatically adjusts for perspective.
Furthermore, it detects when the foreground subject overlaps the edge of a 3D surface in the background and corrects the perspective across all three dimensional planes. It can also realistically affix content to rounded objects. It does all of this in real time, so you can quickly drag the foreground image wherever it needs to go and watch it morph based on where it sits.
Project All of Me
Finally, there was All of Me. You could call All of Me an uncropping tool, and you wouldn’t be wrong. While it’s easy to crop an image, getting the lost parts back without the original is impossible unless you have Sensei working for you.
Adobe demoed this tech using a picture of a woman in front of a house with a large yard. By examining the existing elements, the AI gets an idea of how the scene should look on a larger canvas. Then, as you expand the picture's outer edges, All of Me fills in the missing parts with visually accurate material. The example Adobe showed recreated part of the house, the yard, and even the woman's legs. Interestingly, the footwear was visually appropriate to the context of her dress.
As previously mentioned, these projects are experimental, and many still need more work. As such, they are unavailable to the public at this time and may or may not make it out of Adobe's studios. That said, these three seem to be the best candidates to show up in future versions of Creative Cloud.
If you’re interested in the other experimental features Adobe had on show, we’ve included the entire presentation in the masthead for your convenience.