Marcelo Rinesi remembers what it was like to watch Jurassic Park for the first time in a theater. The dinosaurs looked so convincing that they felt like the real thing, a special effects breakthrough that permanently shifted people’s perception of what’s possible. After two weeks of testing DALL-E 2, the CTO of the Institute for Ethics and Emerging Technologies thinks AI might be on the verge of its own Jurassic Park moment.

Last month, OpenAI introduced the second-generation version of DALL-E, an AI model trained on 650 million images and text captions. It can create variations based on the style of a particular artist, like Salvador Dalí, or popular software like Unreal Engine. It can take in text and spit out images, whether that’s a “Dystopian Great Wave off Kanagawa as Godzilla eating Tokyo” or “Teddy bears working on new AI research on the moon in the 1980s.” Photorealistic depictions that look like the real world, shared widely on social media by a select number of early testers, have given the impression that the model can create images of almost anything. “What people thought might take five to 10 years, we’re already in it. We are in the future,” says Vipul Gupta, a PhD candidate at Penn State who has used DALL-E 2.

But amid promotional depictions of koalas and pandas spreading on social media is a notable absence: people’s faces. As part of OpenAI’s “red team” process, in which external experts look for ways things can go wrong before the product’s broader distribution, AI researchers found that DALL-E 2’s depictions of people can be too biased for public consumption. Early tests by red team members and OpenAI have shown that DALL-E 2 leans toward generating images of white men by default, overly sexualizes images of women, and reinforces racial stereotypes.

Those text prompts and dozens of others were recommended to OpenAI by the creators of DALL-Eval, a team of researchers from the MURGe Lab at the University of North Carolina. They claim to have made the first method for evaluating multimodal AI models for reasoning and societal bias. The DALL-Eval team found that bigger multimodal models generally have more impressive performance, but also more biased outputs.

OpenAI VP of communications Steve Dowling declined to share images generated from text prompts recommended by DALL-Eval creators when WIRED requested them. Dowling said early testers weren’t told to avoid posting negative or racist content generated by the system. But as OpenAI CEO Sam Altman said in a late April interview, text prompts involving people, and in particular photorealistic faces, generate the most problematic content. The 400 people with early access to DALL-E 2 (predominantly OpenAI employees, board members, and Microsoft employees) were told not to share photorealistic images in public, in large part due to these issues. “The purpose of this is to learn how to eventually do faces safely if we can, which is a goal we’d like to get to,” says Altman.

Computer vision has a history of deploying AI first, then apologizing years later when audits reveal a history of harm. The ImageNet competition and resulting data set laid the foundation for the field in 2009 and led to the launch of a number of companies, but sources of bias in its training data led its creators to cut labels related to people in 2019. A year later, the creators of a data set called 80 Million Tiny Images took it offline after a decade of circulation, citing racial slurs and other harmful labels within the training data. Last year, MIT researchers concluded that the measurement and mitigation of bias in vision data sets is “critical to building a fair society.”

DALL-E 2 was trained using a combination of photos scraped from the internet and acquired from licensed sources, according to the document authored by OpenAI ethics and policy researchers.