DALL-E

By our AI Review Team.
Last updated October 27, 2023

DALL-E turns text into vivid visuals—despite some protections, users should be cautious

Overall Rating

AI Type: Multi-Use

Privacy Rating: 48%

What is it?

DALL-E is a generative AI product created by OpenAI. It can create realistic images and art from a text-based description that can include artistic concepts, attributes, and styles. DALL-E's suite of image editing tools offers users a sophisticated range of options: extending generated images beyond the original frame (outpainting), editing existing user-uploaded or AI-generated pictures, and adding or removing elements while accounting for shadows, reflections, and textures (inpainting). Once users arrive at an image they want, they can download and use it.

OpenAI released the first version of DALL-E in January 2021. DALL-E 2 was released in a controlled research preview in April 2022, opened in beta to a waitlist in July 2022, and became available in public beta form in November 2022. DALL-E 2 was trained on pairs of images and their corresponding captions. While we don't know the exact details of the data sets used, the company has shared that these were drawn from a combination of publicly available data sets—such as Conceptual Captions and the Yahoo-Flickr Creative Commons 100 Million Dataset (YFCC100M)—and data sets licensed by OpenAI.

DALL-E is a browser-based tool and is also available through an API that developers can use in their own apps. A free version of DALL-E is available through Microsoft's Bing Image Creator. Outside of Bing, users can currently access DALL-E at a price of $15 for 115 prompts, each with four image variations.
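For developers, the sketch below shows what basic generation and inpainting-style edit requests could look like with OpenAI's official Python package. It is illustrative only: the exact call signatures, default model, and size limits vary by SDK version, and the file names are placeholders, so check the current API reference before relying on it.

```python
# Illustrative sketch of calling DALL-E through OpenAI's API using the
# official `openai` Python package (v1.x-style client). Details vary by SDK
# version; assumes the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

# Text-to-image generation: one prompt, several variations.
generated = client.images.generate(
    prompt="A watercolor painting of a lighthouse at dawn",
    n=4,
    size="1024x1024",
)
for image in generated.data:
    print(image.url)  # each URL points to one generated variation

# Inpainting-style edit: the transparent region of mask.png marks the area
# to regenerate according to the new prompt. File names are placeholders.
edited = client.images.edit(
    image=open("original.png", "rb"),
    mask=open("mask.png", "rb"),
    prompt="Add a sailboat on the horizon",
    n=1,
    size="1024x1024",
)
print(edited.data[0].url)
```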

How it works

DALL-E is a form of generative AI, which is an emerging field of artificial intelligence. Generative AI is defined by the ability of an AI system to create ("generate") content that is complex, coherent, and original. For example, a generative AI model can create sophisticated writing or images.

DALL-E uses a particular type of generative AI called a "diffusion model," named for the natural process of diffusion. Diffusion is a phenomenon you've likely experienced before: drop some food coloring into a glass of water, and no matter where the coloring starts, it eventually spreads throughout the entire glass and colors the water uniformly. The image equivalent of that uniform color is "TV static": repeatedly adding random noise to an image's pixels eventually turns it into pure static. A machine learning diffusion model works by, oddly enough, destroying its training data by successively adding this static, and then learning to reverse the process in order to generate something new. Diffusion models are capable of generating high-quality images with fine details and realistic textures.
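The toy sketch below makes the "adding TV static" idea concrete by applying the standard forward-noising formula used in diffusion-model research to a random stand-in image. It is a generic teaching example and assumes nothing about OpenAI's actual implementation; the image size, step count, and noise schedule are arbitrary values chosen for illustration.

```python
# Toy illustration of the "forward" diffusion process: a stand-in image is
# progressively buried in Gaussian noise ("TV static"). Generic DDPM-style
# sketch, not OpenAI's implementation; all values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))        # stand-in for a training image, values in [0, 1]

T = 1000                               # number of noising steps
betas = np.linspace(1e-4, 0.02, T)     # how much noise each step adds
alpha_bar = np.cumprod(1.0 - betas)    # fraction of the original signal left after t steps

def noisy_version(x0, t):
    """Return the image after t noising steps (closed-form shortcut)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

slightly_noisy = noisy_version(image, 50)     # still recognizable
pure_static = noisy_version(image, T - 1)     # essentially "TV static"

# A trained diffusion model learns the reverse: starting from pure static,
# it removes a little noise at each step until a brand-new image emerges.
```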

To turn text into images, DALL-E pairs its diffusion model with natural language processing (NLP), a field of AI that allows computers to understand and process human language. Together, these form a text-to-image model: DALL-E takes in a natural language input and produces an image that attempts to match the description.

Highlights

  • DALL-E has the potential to enable creativity and artistic expression, and allow for visualization of new ideas.
  • OpenAI has taken a number of steps to reduce DALL-E's ability to generate harmful content. These include filtering the pre-training data to reduce the quantity of graphic sexual and violent content, as well as images of some hate symbols; assessing user inputs (text-to-image prompts, inpainting prompts, and uploaded images) and refusing to generate content for inputs that would lead to a violation of the company's content policy; instituting rate limits; and enforcing policies via monitoring and human review. In contrast with our review of Stable Diffusion, these efforts have been noticeably effective. Importantly, DALL-E 2's system card only includes references to inpainting, not outpainting or variations, when describing these efforts. We don't know if outpainting and variation prompts are also assessed.

Harms and Ethical Risks

  • DALL-E's "view" of the world can shape impressionable minds, and with little accountability. OpenAI states that "use of DALL-E 2 has the potential to harm individuals and groups by reinforcing stereotypes, erasing or denigrating them, providing them with disparately low quality performance, or by subjecting them to indignity. These behaviors reflect biases present in DALL-E 2 training data and the way in which the model is trained." An example of this comes from the company's realization that the explicit content filter applied to DALL-E's pre-training data actually introduced a net new bias. Essentially, the filter—which was designed to reduce the quantity of pre-training data containing nudity, sexual content, hate, violence, and harm—reduced the frequency of the keyword "woman" by 14%. In contrast, the explicit content filter reduced the frequency of the keyword "man" by only 6%. In other words, OpenAI's attempts to remove explicit material removed enough content representing women that the resulting data set significantly overrepresented content representing men. This offers perspective on how many images on the internet contain explicit sexual content of women. OpenAI also notes that DALL-E's default behavior generates images that overrepresent White skin tones and "Western concepts generally." These propensities towards harm are frighteningly powerful in combination. What happens to our children when they are exposed to the worldview of a biased algorithm repeatedly and over time? What view of the world will they assume is "correct," and how will this inform their interactions with real people and society? Who is accountable for allowing this to happen?
  • Inappropriate sexualized representations of women and girls harm all users. DALL-E continues to demonstrate a tendency toward objectification and sexualization. This is especially the case with inappropriate sexualized representations of women and girls, even with prompts seeking images of women professionals. This perpetuates harmful stereotypes, unfair bias, unrealistic ideals of women's beauty and "sexiness," and incorrect beliefs around intimacy for humans of all genders. Numerous studies have shown that greater exposure to images that promote the objectification of women adversely affects the mental and physical health of girls and women.
  • DALL-E easily reinforces harmful stereotypes. Even when instructed to do otherwise, DALL-E is susceptible to generating outputs that perpetuate harmful stereotypes, especially regarding race and gender. Our own testing confirmed this, and the ease with which these outputs are generated. Some examples of what we found include:
    - DALL-E reflected and amplified statistical gender stereotypes for occupations (e.g., only female flight attendants, housekeepers, and stay-at-home parents, vs. male software developers). OpenAI has attempted to address these known challenges. While this technique appears to have worked for some well-tested occupations, especially in generating more variety in skin tones, we found highly gendered results for occupations such as product managers (all male) vs. product marketers (all female), principals (all male) vs. teachers (all female), bankers (all male) vs. bank tellers (all female), and managers (all male) vs. human-resources professionals (all female).
    - When asked to pair non-White ethnicities with wealth, DALL-E struggled to do so in a photorealistic manner. Instead, it generated cartoons, severely degraded images, and images associated with poverty.
  • DALL-E's advanced inpainting features present new risks. While innovative and useful in many contexts, the high degree of freedom to alter images means they can be used to perpetuate harms and falsehoods. In OpenAI's words, images that have been changed to, for example, modify, add, or remove clothing or add additional people to an image in compromising ways "could then be used to either directly harass or bully an individual, or to blackmail or exploit them." These features can also be used to create images that intentionally mislead and misinform others. For example, disinformation campaigns can remove objects or people from images or create images that stage false events. Notably, inpainting prompts are also subject to OpenAI's efforts to limit DALL-E's ability to generate harmful content.
  • Tools like DALL-E pave the path to misinformation and disinformation. As with all generative AI tools, DALL-E can easily generate or enable false and harmful content, both by reinforcing unfair biases, and by generating images that intentionally mislead or misinform others. Because OpenAI's attempts to limit these are brittle, and images can be further manipulated with generative AI via in- and outpainting, false and harmful visual content can be generated at an alarming speed. We have already seen this in action. OpenAI notes that as image generation matures, it "leaves fewer traces and indicators that outputs are AI-generated, making it easier to mistake generated images for authentic ones and vice versa." In other words, as these AI systems grow, it may become increasingly difficult to separate fact from fiction. This "Liar's Dividend" could erode trust to the point where democracy or civic institutions are unable to function.

Limitations

 

  • We did not receive participatory disclosures from OpenAI for DALL-E. This assessment is based on publicly available information, our own testing, and our review process.
  • Ensuring that prompts are specific and "grounded" can help reduce certain biases in underspecified prompts, though research indicates that bias can still persist. User education on responsible prompting is crucial. Resources such as OpenAI's guide to improving DALL-E prompts can help.
  • The model has difficulty representing concepts outside its training data, leading to inconsistent performance for individuals who seek to prompt DALL-E to produce non-Western-dominant ideas, objects, and concepts.
  • Currently, there are no reliable deepfake detection tools, or tools capable of determining whether images were generated by DALL-E. While every image that DALL-E generates currently includes an identifying signature in the lower right corner, it can be easily cropped out.
  • At the time of this review, DALL-E can only support English language prompts.

 

Misuses

  • OpenAI details misuses of all of its models, including DALL-E, in a comprehensive Usage Policy, and has a separate content policy for DALL-E. Importantly, neither of these is easy to find when using the product.
  • OpenAI's terms of service do not allow its use by children under age 13.
  • Teens age 13–17 are required to have parental permission to use it.

 

Common Sense AI Principles Assessment

Our assessment of how well this product aligns with each AI Principle.

  • People First

    some

    AI should Put People First. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • OpenAI's Usage Policies do not permit uses that harm human rights, children's rights, identity, integrity, and human dignity. Although the specifics of enforcement remain unclear, OpenAI retains the right to use your inputs or personal information to safeguard these policies.
    • OpenAI has taken a number of steps to reduce DALL-E's ability to generate harmful content. These include filtering the pre-training data to reduce the quantity of graphic sexual and violent content, as well as images of some hate symbols; assessing user inputs (text-to-image prompts, inpainting prompts, and uploaded images) and refusing to generate content for inputs that would lead to a violation of the company's content policy; instituting rate limits; and enforcing policies via monitoring and human review. In contrast with our review of Stable Diffusion, these efforts are noticeably effective. Importantly, DALL-E 2's system card only includes references to inpainting, not outpainting or variations, when describing these efforts. We don't know if outpainting and variation prompts are also assessed.

     

    Violates this AI Principle

    • DALL-E's generated images can be harmful, but at the time of this review, no warnings appear in the user interface when using it. The DALL-E 2 system card acknowledges potential risks, noting DALL-E's "ability to generate content that features or suggests any of the following: nudity/sexual content, hate, or violence/harm," despite attempts to filter this content from the pre-training data. OpenAI also notes that images generated by DALL-E, especially when combined with capabilities like inpainting, "could be used to intentionally mislead or misinform subjects, and could potentially empower information operations and disinformation campaigns." Despite knowing these risks, OpenAI's lack of clear warnings to users while they are using the product is irresponsible.

     

    Important limitations and considerations

    • While DALL-E is very easy and intuitive to use, the ethical risks inherent in the system make its ease of use potentially problematic. DALL-E's main barrier to use is its paywall: users pay $15 for 115 prompts, each with four image variations.
    • Users should educate themselves on best practices in prompting to ensure responsible use of DALL-E. Resources such as OpenAI's guide to improving DALL-E prompts can help.
  • Learning

    a little

    AI should Promote Learning. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • While DALL-E is not designed for use in schools, educators can use it in their classrooms with oversight. In particular, DALL-E can be a useful tool in teaching students about how to recognize and question societal biases.
    • Because it is a multi-use product, educators and older students—with permission from a parent or legal guardian—can express themselves creatively with DALL-E.
    • As with all generative AI tools, DALL-E can be used for creative use cases.
    • DALL-E can serve as a tool that enhances students' visual learning and strengthens understanding through impactful imagery, with appropriate teacher or parental oversight.

     

    Important limitations and considerations

    • DALL-E is not aligned with learning content standards.
    • Users should not rely on DALL-E to visualize any process or scene that requires accuracy.
    • It is easy when using DALL-E to unwittingly produce images that reinforce unfair bias and stereotypes.
  • Fairness

    a little

    AI should Prioritize Fairness. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • OpenAI conducts its own research into harmful bias and broader issues around fairness and representation, and takes measures to attempt to protect against the worst issues found.
    • OpenAI notes that DALL-E's default behavior generates images that overrepresent White skin tones and "Western concepts generally," despite identity-neutral prompts. OpenAI now aims to display a broader range of diversity in DALL-E's outputs unless otherwise specified by the text prompt. However, challenges like gender biases persist, even if skin tones vary.

     

    Important limitations and considerations

    • DALL-E continues to demonstrate a tendency toward objectification and sexualization. This is especially the case with inappropriate sexualized representations of women and girls, even with prompts seeking images of women professionals. Numerous studies have shown that greater exposure to images that promote the objectification of women adversely affects the mental and physical health of girls and women.
    • Even when instructed to do otherwise, DALL-E is susceptible to generating outputs that perpetuate harmful stereotypes, especially regarding race and gender. Our own testing confirmed this, as well as the ease with which these outputs are generated. Some examples of what we found include:
      - DALL-E reflected and amplified statistical gender stereotypes for occupations (e.g., only female flight attendants, housekeepers, and stay-at-home parents, vs. male software developers). OpenAI has attempted to address these known challenges. While this has appeared to work for some well-tested occupations, especially in generating more variety in skin tones, we found highly gendered results for occupations such as product managers (all male) vs. product marketers (all female), principals (all male) vs. teachers (all female), bankers (all male) vs. bank tellers (all female), and managers (all male) vs. human-resources professionals (all female). 
      - When asked to pair non-White ethnicities with wealth, DALL-E struggled to do so in a photorealistic manner. Instead, it generated cartoons, severely degraded images, and images associated with poverty. 
      - DALL-E generates images that default to Western-centric associations. In images of objects, for example, the images generated from prompts with no identity descriptor perpetuate North American norms of what the objects look like. These models create a version of the world that is American by default.
    • Ensuring that prompts are specific and "grounded" can help reduce certain biases in underspecified prompts, though research indicates that bias can still persist. DALL-E does show helpful tips for creating prompts in its interface.
    • The model struggles to represent ideas and people that do not appear in its training data, leading to disparate performance. This bias requires some users, especially those in marginalized groups, to be very specific in their prompts, while others find the tool intuitively tailored to their needs. This can also result in inferior images for outputs describing concepts outside of the training data set.
    • OpenAI has made efforts to broaden DALL-E's representation of beauty standards in its output. It is important to be aware that like other image-generating models, DALL-E has been shown to produce images that align with Western-dominated, White-centric beauty and body standards, which can perpetuate unrealistic ideals that overlook cultural and individual diversity.
    • It is very easy to unwittingly produce images that reinforce unfair bias and stereotypes using DALL-E. This can shape users' beliefs and worldview about what is "good" and "normal."
  • Social Connection

    a little

    AI should Help People Connect. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • DALL-E can offer a unique way to boost social interaction and understanding. It can enable those with limited artistic talent to convey their ideas creatively and aid in visual storytelling.

     

    Important limitations and considerations

    • Even with the protections that OpenAI has put in place, DALL-E can be used to generate images that can harm individuals and groups. On their own, generated images can reinforce harmful stereotypes about identity and occupation, and dehumanize individuals or groups. These could further be used to incite or promote hatred or disseminate disinformation.
    • DALL-E's advanced inpainting features present new risks. While innovative and useful in many contexts, the high degree of freedom to alter images means they can be used to perpetuate harms and falsehoods. In OpenAI's words, images that have been changed to, for example, modify, add, or remove clothing or add additional people to an image in compromising ways "could then be used to either directly harass or bully an individual, or to blackmail or exploit them." These features can also be used to create images that intentionally mislead and misinform others. For example, disinformation campaigns can remove objects or people from images or create images that stage false events. Notably, inpainting prompts are also subject to OpenAI's efforts to limit DALL-E's ability to generate harmful content.
  • Trust

    a little

    AI should Be Trustworthy. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • OpenAI has publicly discussed some of the challenges inherent in DALL-E, and how it has worked to prevent them, in the DALL-E 2 system card.
    • The OpenAI team embraces peer reviews and invites outside parties to provide feedback and participate in adversarial testing, often called "red teaming."

     

    Violates this AI Principle

    • As with all generative AI tools, DALL-E can easily generate or enable false and harmful content, both by reinforcing unfair biases, and by generating images that intentionally mislead or misinform others. Because OpenAI's attempts to limit these are brittle, and images can be further manipulated with generative AI via in- and outpainting, false and harmful visual content can be generated at an alarming speed. We have already seen this in action. OpenAI notes that as image generation matures, it "leaves fewer traces and indicators that outputs are AI-generated, making it easier to mistake generated images for authentic ones and vice versa." In other words, as these AI systems grow, it may become increasingly difficult to separate fact from fiction. This "Liar's Dividend" could erode trust to the point where democracy or civic institutions are unable to function.

     

    Important limitations and considerations

    • While the DALL-E sign-up process has an age gate, there is nothing to stop kids from signing up if they choose to give an incorrect birth date.
    • It's important to note that the teams assessing DALL-E are predominantly U.S.-based, English-speaking, and have a specific educational background. This naturally constrains the range of perspectives used to evaluate content in various contexts.
  • Data Use

    some

    AI should Protect Our Privacy. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • DALL-E's terms of service do not allow its use by children under age 13.
    • Teens age 13–17 are required to have parental permission to use DALL-E.
    • OpenAI has taken steps to reduce the amount of personal information used in DALL-E's training data sets. The company has also worked to prevent the potential for DALL-E to generate exact matches for any of the images in its training data, and limits the ability for DALL-E to generate images of public figures (including politicians) by refusing to generate content for inputs that would lead to a violation of the company's content policy.

     

    Violates this AI Principle

    • By default, DALL-E uses your input to further train its models. In other words, any prompts or images you bring into the system—including personal information—will become part of its training data.
    • The default use of prompt and image data to further train DALL-E is especially worrying for kids and teens who use it, even though they are not supposed to.

     

    Important limitations and considerations

    • You can stop DALL-E from using the data you input, but this option isn't easy to find. If you want to do this, you can begin the process through OpenAI's privacy request portal.
    • OpenAI tools were not built with student privacy in mind. Any student using the service will be subject to the same policies as any other consumer.
    • It is unclear how aware teachers are of the parental permission requirement for 13- to 17-year-olds, and the tool currently does not ask whether permission has been granted.
    • Because of its age policy, DALL-E is not required to comply with (and to our knowledge, does not comply with) important protections such as the Children's Online Privacy Protection Act (COPPA), the Student Online Personal Information Protection Act (SOPIPA), or the Family Educational Rights and Privacy Act (FERPA). DALL-E is compliant with the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

     

    This review is distinct from Common Sense's privacy evaluations and ratings, which evaluate privacy policies to help parents and educators make sense of the complex policies and terms related to popular tools used in homes and classrooms across the country.

  • Kids' Safety

    a little

    AI should Keep Kids & Teens Safe. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • OpenAI has taken a number of steps to reduce DALL-E 2's ability to generate harmful content. These include filtering the pre-training data to reduce the quantity of graphic sexual and violent content, as well as images of some hate symbols; assessing user inputs (text-to-image prompts, inpainting prompts, and uploaded images) and refusing to generate content for inputs that would lead to a violation of the company's content policy; instituting rate limits; and enforcing policies via monitoring and human review. In contrast with our review of Stable Diffusion, these efforts have been noticeably effective.

     

    Violates this AI Principle

    • DALL-E's "view" of the world can shape impressionable minds, and with little accountability. OpenAI states that "use of DALL-E 2 has the potential to harm individuals and groups by reinforcing stereotypes, erasing or denigrating them, providing them with disparately low quality performance, or by subjecting them to indignity. These behaviors reflect biases present in DALL-E 2 training data and the way in which the model is trained." An example of this comes from the company's realization that the explicit content filter applied to DALL-E's pre-training data actually introduced a net new bias. Essentially, the filter—which was designed to reduce the quantity of pre-training data containing nudity, sexual content, hate, violence, and harm—reduced the frequency of the keyword "woman" by 14%. In contrast, the explicit content filter reduced the frequency of the keyword "man" by only 6%. In other words, OpenAI's attempts to remove explicit material removed enough content representing women that the resulting data set significantly overrepresented content representing men. This offers perspective on how many images on the internet contain explicit sexual content of women. OpenAI also notes that DALL-E's default behavior generates images that overrepresent White skin tones and "Western concepts generally." These propensities towards harm are frighteningly powerful in combination. What happens to our children when they are exposed to the worldview of a biased algorithm repeatedly and over time? What view of the world will they assume is "correct," and how will this inform their interactions with real people and society? Who is accountable for allowing this to happen?
    • DALL-E has not been designed in any specific way to protect children. It has been found to output images that can emotionally and psychologically harm users, perpetuate harmful stereotypes, and promote mis/disinformation.

     

    Important limitations and considerations

    • Currently there are no tools that would allow teachers and parents to monitor DALL-E's use in a way that could help evaluate student well-being.
  • Transparency & Accountability

    some

    AI should Be Transparent & Accountable. See our criteria for this AI Principle.

    Aligns with this AI Principle

    • OpenAI has published a system card for DALL-E 2. System cards are a form of transparency reporting, and are intended to help users understand DALL-E's capabilities, as well as its limitations, risks, and harms. They also provide information about how the creators have sought to mitigate the observed issues.
    • Users are able to exert human control over images they produce with DALL-E by modifying prompts to effect change in the generated outputs.
    • All users have the option to report problems they encounter with the generated outputs, whether it's a biased outcome, harmful outcome, or image results that don't align with the prompt.

     

    Violates this AI Principle

    • The effects of bias and potential harm from images produced by DALL-E can vary based on context, complicating the assessment and mitigation process during image creation. Additionally, content filters can fail to fully capture images that are ethically dubious or violate OpenAI's guidelines, because the potential misuse is more a function of the context in which the image can be used (e.g., disinformation, harassment, bullying, etc.) and not the image itself. Currently, the challenge of identifying deepfakes and determining whether images have been created using DALL-E and products like it remains an unresolved issue, leaving a gap in our ability to mitigate the potential consequences of harmful situations when they occur in the real world. Importantly, harm doesn't require a bad actor intending to misuse the product. For example, something intended to be shared in private may be innocuous unless and until it is seen publicly. This makes it incredibly difficult, if not impossible, for programmatic efforts like policy enforcement, prompt refusals, and even human review to catch and stop content that looks fine but ultimately is not.
    • Use of DALL-E can have a direct and significant impact on people, not only from the false and harmful content it may generate, but also, counterintuitively, from what it refuses to generate. Because any efforts to programmatically limit harmful content from surfacing are blunt instruments, whole categories that present any measure of risk may be subject to those efforts. While it is true that this may reduce the amount of harmful content that is generated, it also can cause some marginalized individuals and groups to, in OpenAI's words, suffer the "indignity of having their prompts or generations filtered, flagged, blocked, or not generated in the first place, more frequently than others."

     

    Important limitations and considerations

    • OpenAI uses evaluations from various data sources, including real-world use, to compare and improve model versions, and the company addresses issues surfaced by these evaluations. However, the presence of consistent evaluations doesn't ensure that root problems are fixed, and it's uncertain whether issues might reappear in different contexts.

     

    Review team note:

    • While OpenAI engages in transparency reporting, it is highly technical in nature. For those interested:
      - DALL-E 2 research (including a system card, which is a type of transparency reporting) can be found on OpenAI's DALL-E 2 page.
      - OpenAI publishes a large amount of technical AI research.


 

 

Additional Resources

  • AI Ratings & Reviews: How we rate
  • Classroom Resources: Lessons and Tools for Teaching About Artificial Intelligence
  • Free Lessons: AI Literacy for Grades 6–12

 

 
