
You may have seen photos that suggest otherwise, but former President Donald Trump wasn’t arrested last week, and the Pope wasn’t wearing a stylish, bright white puffer coat. Both recent viral hits were the work of artificial intelligence systems that process a user’s text prompt to create an image. They show that these programs have gotten very good very quickly.
How can a skeptical viewer spot images that may have been generated by an artificial intelligence system such as DALL-E, Midjourney, or Stable Diffusion? The answer varies depending on how convincing a given image is and which telltale signs give its algorithm away. For example, AI systems have historically struggled to imitate human hands, producing mangled appendages with too many fingers. As the technology improves, however, systems such as Midjourney V5 seem to have solved the problem, at least in some examples. As a rule, experts say that the best images from the best generators are difficult, if not impossible, to distinguish from real images.
“There’s been a huge leap in the past year or so in terms of image-generation capabilities,” says S. Shyam Sundar, a researcher at Pennsylvania State University who studies the psychological effects of media technology.
Part of that exponential progress comes from the ever-increasing number of images available for training AI systems, along with advances in the data-processing infrastructure and interfaces that make the technology accessible to ordinary Internet users, Sundar says. As a result, artificially generated images are ubiquitous and can be “almost impossible to detect,” he says.
A recent experiment revealed just how well AI can deceive. Sophie Nightingale, a psychologist at Lancaster University in England who specializes in digital technologies, co-authored a study testing whether online volunteers could distinguish passport-style headshots created by an AI system called StyleGAN2 from real images. Even in late 2021, when the researchers ran the experiment, the results were discouraging. “It’s basically reached the point of being so realistic that people can’t reliably tell the difference between a synthetic face and a real face, the face of a real person who actually exists,” Nightingale says. The researchers gave the AI some help (they sorted the images StyleGAN2 generated and selected only the most realistic ones), but Nightingale notes that people trying to use such programs maliciously would likely do the same.
In a second test, the researchers tried to help participants improve their AI-detection abilities. They marked each answer correct or incorrect after participants responded, and they primed participants in advance by having them read advice for detecting artificially generated images. That advice highlighted areas where AI algorithms often stumble, such as creating mismatched earrings or blurring a person’s teeth. Nightingale also points out that the algorithms often struggle to create anything more sophisticated than a plain background. But even with these aids, she says, participants’ accuracy improved by only about 10 percent. And the AI system that generated the images used in the trial has since been upgraded to a new and improved version.
Ironically, as image-generation technology continues to improve, AI systems trained to detect artificial images may be our best defense against being fooled by them. Because detection algorithms can pick up the tiny, pixel-scale fingerprints that image generators leave behind, experts say, they can spot AI-made images better than human eyes can.
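One way researchers have surfaced such pixel-level fingerprints is by examining an image’s frequency spectrum, where some generators leave periodic artifacts invisible to the eye. The sketch below is a minimal illustration of that general idea, not a method described in this article; the file name is a placeholder.

```python
import numpy as np
from PIL import Image

# Load a grayscale version of the image under scrutiny ("suspect.jpg" is a placeholder).
img = np.asarray(Image.open("suspect.jpg").convert("L"), dtype=np.float64)

# The shifted 2-D FFT puts low frequencies at the center; grid-like upsampling
# artifacts from some generators show up as bright off-center peaks here.
spectrum = np.fft.fftshift(np.fft.fft2(img))
log_magnitude = np.log1p(np.abs(spectrum))

# A crude summary statistic: how much spectral energy sits outside the
# low-frequency center block, relative to the total.
h, w = log_magnitude.shape
center = log_magnitude[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
print("high-frequency share:", 1 - center.sum() / log_magnitude.sum())
```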
Building these AI detection programs works like any other machine-learning task, says Yong Jae Lee, a computer scientist at the University of Wisconsin-Madison. “We collect a dataset of real images, and we also collect a dataset of AI-generated images,” he says. “We can then train a machine learning model to distinguish between the two.”
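In code, that recipe amounts to a standard binary-classification training loop. The sketch below is one plausible way to set it up, assuming PyTorch; the folder layout, the ResNet-18 backbone, and the hyperparameters are illustrative guesses, not details from Lee’s work.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder expects one subfolder per class, e.g. data/real/ and
# data/ai_generated/ (hypothetical directory names).
dataset = datasets.ImageFolder("data", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained backbone and swap the head for a 2-way classifier;
# the class order (real vs. AI-generated) follows ImageFolder's folder sorting.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Starting from a pretrained backbone is a common choice in setups like this: the detector only has to learn the final decision boundary rather than visual features from scratch.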
Still, these systems have serious drawbacks, Lee and other experts say. Most such algorithms are trained on images from a specific AI generator and cannot identify fakes produced by different generators. (Lee and his research team say they are working around this problem by training detectors to recognize what makes an image real instead.) Most detectors also lack the user-friendly interfaces that have drawn so many people to try generative AI systems.
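One hedged reading of that “learn what makes an image real” idea is a one-class detector: fit a model to features of real photos only, then flag anything outside that distribution as a likely fake, regardless of which generator produced it. The sketch below illustrates the framing; it is a simplification of my own, not Lee’s published method, and the file names are placeholders. Its appeal is that no fakes from any particular generator are needed at training time, which is exactly the generalization gap described above.

```python
import torch
from torchvision import models, transforms
from sklearn.svm import OneClassSVM
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Use a pretrained backbone as a fixed feature extractor (drop the classifier head).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def embed(paths):
    """Return one feature vector per image path."""
    with torch.no_grad():
        batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
        return backbone(batch).numpy()

# Train on real images only; no generator-specific fakes are required.
real_features = embed(["real1.jpg", "real2.jpg"])  # placeholder file names
detector = OneClassSVM(gamma="auto").fit(real_features)

# predict() returns +1 for "looks like the real distribution", -1 for an outlier.
print(detector.predict(embed(["suspect.jpg"])))
```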
Additionally, AI detectors are constantly scrambling to keep up with AI image generators, some of which incorporate similar detection algorithms and use them to learn how to make their fake output harder to detect. “The battle between AI systems that generate images and AI systems that detect AI-generated images is going to be an arms race,” says Wael AbdAlmageed, a research associate professor of computer science at the University of Southern California. “I don’t think either side will win anytime soon.”
AbdAlmageed says no approach will ever catch every artificially generated image, but that doesn’t mean we should give up. He suggests that social media platforms take responsibility for policing AI-generated content on their sites, since these companies are better positioned to implement detection algorithms than individual users are.
Users, too, should be more skeptical of visual information, asking whether an image might be fake, AI-generated, or harmful before sharing it. “We humans grow up thinking that a picture is worth a thousand words,” AbdAlmageed says. “That’s no longer true. Seeing is not believing anymore.”