Adobe will use your work to train its AI algorithms unless you opt out

Adobe automatically analyzes user content stored on its Creative Cloud services to train AI algorithms unless people opt out of the service.

The Krita Foundation, an independent non-profit group building open-source graphics software for artists, raised concerns about the policy this week. In the privacy and personal data account settings, Adobe states “[it] may analyze your content using techniques such as machine learning (e.g., for pattern recognition) to develop and improve [its] products and services.” Users automatically grant Adobe access to their data unless they opt out of the service.

“Okay, we know… We made fun of Adobe when its cloud service went down. We’ve made fun of Corel Painter and Clip Studio. We joined in the No AI Generated Images protest. We made our stance on NFT’s clear. But this is beyond making fun of. This is EW! EW! EW!” — @Krita@mastodon.art (@Krita_Painting) January 4, 2023

A closer look at the company’s Content Analysis FAQ shows the rule was updated in August last year, and applies to images, audio, videos, text, or documents stored on its cloud servers. The company said it doesn’t look at content processed or stored locally on users’ devices.

“When we analyze your content for product improvement and development purposes, we first aggregate your content with other content and then use the aggregated content to train our algorithms and thus improve our products and services,” it confirmed.

Adobe’s latest content analysis policy allows the company to amass its own AI training datasets without having to scrape the internet.
Some artists are frustrated that their work has been scraped and used to train generative AI art models without their consent, allowing just about anyone using text-to-image tools like DALL-E, Midjourney, or Stable Diffusion to create content that mimics their style.

Although Adobe has built a range of AI applications that help users do things like converting a 2D image to a 3D one, or quickly searching through videos to find specific clips, it doesn’t have its own text-to-image product yet. It does, however, sell AI-generated work on its stock images platform.

A company spokesperson told The Register in a statement: “We give customers full control of their privacy preferences and settings. The policy in discussion is not new and has been in place for a decade to help us enhance our products for customers. For anyone who prefers their content be excluded from the analysis, we offer that option here.”

“When it comes to Generative AI, Adobe does not use any data stored on customers’ Creative Cloud accounts to train its experimental Generative AI features. We are currently reviewing our policy to better define Generative AI use cases.” ®