
When you fire up Photoshop in 2026, you aren’t just opening an image editor; you’re entering a hybrid workspace where local processing meets the heavy lifting of the cloud. But as Adobe leans into its “ethical” AI branding, a fascinating tension has emerged. Does a “clean” dataset make for a “worse” artist?
The Hybrid Engine: Local vs. Cloud
If you’ve noticed your computer fans spinning up during a “Select Subject” but staying quiet during a “Generative Fill,” there’s a technical reason for that. Photoshop currently uses a split-brain architecture:
- Local Processing: Tasks like Select Subject, Remove Background, and Neural Filters often run on your local machine, with models optimized for your computer's NPU (Neural Processing Unit).
- Cloud Processing: Generative tools like Generative Fill and Generative Expand are too massive for most consumer hardware. When you hit "Generate," your selection and prompt are sent to Adobe's servers (likely running the Firefly Image Model 3 or 4 by now), and the results are streamed back.
- The 2026 Update: Interestingly, the latest versions have begun offering partner models like Gemini ("Nano Banana") or Flux directly in the model dropdown. These are strictly cloud-based and often require "Premium Generative Credits."
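To make the split-brain idea concrete, here is a minimal Python sketch of how a tool request might be routed. All names here (the routing table, `ToolRequest`, `dispatch`) are hypothetical illustrations of the behavior described above, not Adobe's actual API:

```python
# Conceptual sketch of a local-vs-cloud dispatch layer (hypothetical names,
# not Adobe's real architecture or API).
from dataclasses import dataclass
from enum import Enum, auto

class Backend(Enum):
    LOCAL_NPU = auto()  # lightweight selection/masking models
    CLOUD = auto()      # heavyweight generative models

# Assumed routing, based on the fan-noise observation: selection tools
# run locally, generative tools go to the server.
ROUTING = {
    "select_subject": Backend.LOCAL_NPU,
    "remove_background": Backend.LOCAL_NPU,
    "neural_filter": Backend.LOCAL_NPU,
    "generative_fill": Backend.CLOUD,
    "generative_expand": Backend.CLOUD,
}

@dataclass
class ToolRequest:
    tool: str
    prompt: str = ""

def dispatch(request: ToolRequest) -> str:
    backend = ROUTING.get(request.tool)
    if backend is Backend.LOCAL_NPU:
        return f"{request.tool}: running on local NPU"
    if backend is Backend.CLOUD:
        # In practice the selection pixels + prompt would be uploaded
        # and the generated result streamed back as new layers.
        return f"{request.tool}: sent to cloud (prompt={request.prompt!r})"
    raise ValueError(f"unknown tool: {request.tool}")

print(dispatch(ToolRequest("select_subject")))
print(dispatch(ToolRequest("generative_fill", prompt="a red balloon")))
```

The point of the sketch is simply that the routing decision happens per-tool, which is why one feature spins your fans up while another leaves your machine idle.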
The “Ethical” Filter: A Tech Bottleneck?
Adobe’s core marketing claim is that they only train on Adobe Stock and public domain images. This is their “commercially safe” shield. However, from a technical standpoint, this has created a “Data Desert” compared to the “Wild West” of competitors like Midjourney.
As I explored in my recent DDMC article, “Dreaming in Watermarks,” platforms like ChatGPT and Gemini often produce ghostly watermarks because they’ve “memorized” unauthorized web-scrapes from datasets like LAION.
By avoiding these scrapes, Adobe avoids the “watermark ghosts,” but they also lose the vast, messy diversity of the open web. This is likely why Photoshop’s native AI can feel a bit “stiff” or “stock-photo-ish” when asked to create something truly imaginative from scratch. It prioritizes the Aesthetic of Safety over the Aesthetic of Creativity.
Editing vs. Creation: Why it Feels Different
The consensus among professionals—and my own experience—is that Photoshop’s AI is an elite editor but a mediocre creator.
| Feature | Performance | The "Why" |
| --- | --- | --- |
| Generative Creation | Weak | Restricted training data makes it less "vibey" and imaginative than Midjourney. |
| Object Removal | Exceptional | Adobe's Remove Tool and Generative Fill are unmatched at maintaining pixel-level consistency. |
| Interface | Complex | The Contextual Task Bar and selection-based workflow demand "Photoshop skill" that a simple chat box doesn't. |
Others in the community are finding the same pattern: DALL-E or Midjourney are better for concepting, while Photoshop is where you go to fix the AI's mistakes. It's the difference between an Artist and a Retoucher.
Final Thought: The Accountability Crisis
In 2026, we are seeing a split in AI philosophy. One side (Adobe) offers transparency and legal safety at the cost of some creative flair. The other (OpenAI/Google) offers unlimited creative “memory” but brings along the “ghosts” of scraped data.
For the professional, Adobe is the only one that won’t get you sued—even if the interface feels like it was designed by a committee of lawyers and engineers.
