This Black Boy's Face Is So Dang Beautiful That People Thought He Was AI-Generated
Beautiful Black boy - Screenshot: TikTok/@fatimfatim148
A small Black boy is going viral on TikTok for being the "most beautiful in the world." So beautiful, in fact, that many people think he isn't even real but a product of AI generation. He is, however, very real.
TikTok account @fatimfatim148 posted videos of the little boy on Monday, March 31. The first video stunned viewers, so much so that it gained over 13.8 million views, 1.1 million likes, and 43K bookmarks.
Viewers couldn't contain their excitement in the comment section after seeing the little boy, who wore an orange shirt and black pants.
The boy, whose name and age have not been revealed, is, according to many users, "so cute people think he's AI generated." One online user wrote, "God took his time to make this boy, wow!" while others penned, "I have never seen such beauty in human form," "When did angels start landing on earth," and "He's so perfect."
"He is beyond handsome," one person wrote, while another called him a "gift from God." A third individual complimented his "pure, African, beautiful, chocolate brown skin," while a fourth had to ask, "Is he even real? Cuteness overload!"
"Those who thought this is AI, let's gather," one person joked, whose comment has nearly 4K likes. "At first I thought he wasn't real," another added. Someone else wrote how his "innocent eyes are gonna make gals run mad!"
In the first clip, the boy is standing outside while someone's hand holds his chin up as he looks innocently into the camera. A follow-up clip, posted the same day, showed the boy sitting quietly on a couch in a room with four women.
Viewers began asking the account holder to tag the boy's parents so they could see for themselves the pair who created such beauty, while others suggested not posting him on social media at all, to protect him from the "evil eye." One user instructed in the comment section: "Please do not expose this boy too much," while another prayed for "God to protect him from all evil and harm," given the international attention he has suddenly garnered.
A third video, shared two days ago, showed the boy's playful side. While sitting in a chair, the boy can be seen playfully dodging a woman's kisses while turning his head away. And the comments were equally positive. "Perfectly designed by God," one wrote, while another added how they only followed the account "because of this little boy."
"The meaning of Black beauty," another wrote. "Get him a modeling gig," one person suggested, while another person called him "the most beautiful baby boy."
Reclaim Imperfect Faces
After a few months of shivering through Severance's blank white corridors and icy exterior shots, I've appreciated the sultry visual texture of The White Lotus's third season: the vivid prints of high-end resort wear; the ominous blue of the ocean; the verdant setting (as wild and seething as anything manicured into luxury-hotel perfection can be). The show is thrilling as a sensory experience, humming with sinister percussive beats and the occasional muffled animal squawk in the distance. Against this backdrop, it feels only natural that we'd fall in love with the characters who seem the most real, the most alive.
I'm talking, of course, about Chelsea, played by Aimee Lou Wood, and Chloe, played by Charlotte Le Bon—two gorgeous women who meet at a bar after Chelsea says, "I love your outfit," and Chloe replies, "Thank you! I love your teeth." This quick moment set off a good-natured riot of online debate—labeled the "smile discourse" by Allure—about what it means to see not just imperfect teeth on-screen, but also imperfect teeth on women who are undeniable knockouts. I'll defer to others regarding the particulars of dental trends, but I can tell you how it made me feel to see such gloriously irregular beauty amid all the identical Instagram faces with the same Tic-Tac veneers, stenciled eyebrows, and contoured cheekbones: relieved.
Lately, I've been finding myself more and more unsettled by digital faces tweaked and pixelated into odd perfection and real bodies buffed and whittled down into obscene angularity—women who look less like flesh-and-blood beings than porcelain ornaments. At the Oscars last month, Rachel Tashjian wrote in The Washington Post, the eerie flawlessness of so many red-carpet looks seemed to encapsulate "how weight loss drugs and technology, including photo editing and AI-generated imagery, have ushered in an outrageous drive for perfection that has overtaken Hollywood." If you compare the poreless, rose-toned face of the superstar Ariana Grande with the sculpted cheekbones and button nose of the Spanish influencer Aitana Lopez, it's hard to discern even infinitesimally minute flaws in either. Unlike Grande, though, Lopez is computer-generated—one of a new breed of models with hundreds of thousands of followers and horny men continually sliding into her DMs, despite the fact that she's wholly nonexistent.
Much has been written over the past few months about the propagandist tendencies of artificially generated art—the way it's been gleefully adopted by right-wing trolls to create photorealistic but recognizably fake images of Elon Musk giving out wads of cash, or the surreal 30-second clip that Donald Trump recently posted imagining Gaza as a gilded beachside temple to wealth and potentates. These kinds of pictures are intended to provoke—to catch the eye with their mawkish absurdity and uncanny-valley optics. But to me at least, the beautified AI faces are no less offensive. They reflect back at us toxic values that we're in thrall to, and capture none of the qualities we should truly appreciate. The writer Daphne Merkin once observed that in reality, we find imperfection enchanting because we recognize "that behind the visceral image lies an internal life." Which, I'd wager, is why the wonky smiles of Wood and Le Bon are so compelling in this moment: They assert the intangible beauty of having a soul.
We have never, as mere human bags of flesh and bone, been so perfectible. We've never had as many tools in our arsenal with which to maximize our superficial value: weight-loss drugs that can make slim bodies even smaller, Botox and fillers that smooth out wrinkles, contouring pens that define features. This is even before we get into the realm of augmented reality. On TikTok, I can broadcast myself using a filter that makes me look exactly as I did at 23: lifted, smoothed, softer, and also somehow lighter and less harried. Ninety percent of British women and nonbinary people polled in 2020 confessed to sometimes using filters before posting selfies, and 85 percent to using external editing software such as FaceTune to tweak pictures of themselves. Every single woman surveyed said they had been served videos promoting plastic surgery in their feeds: before-and-after reels selling lip fillers, teeth-whitening treatments, butt enhancers. A few months ago, I too was suddenly inundated with clips of scrub-wearing surgeons "analyzing" Lindsay Lohan's face, after new images of the actor suddenly began to circulate revealing catlike eyes, a heart-shaped face, and the skin of a well-rested teenager.
What struck me about the Lohan images was less what work she had or hadn't done, and more the way in which, virtually overnight, a battalion of influencer-doctors jumped onto social media, selling us on the idea of our own transformation. To some extent, each generation has lived through its own freakout regarding what technological advances might be doing to beauty standards, and to our fragile sense of self. In 2006, The Guardian noted that Photoshop was making even supermodels outraged, and that tabloids were reacting to the prevalence of perfected images by seeking out unflattering candid shots for balance: stars with straggly hair, or visible cellulite, or slight paunches. In 2019, the cultural critic Jia Tolentino coined the term Instagram Face for the "single, cyborgian look" being popularized on social media by models and influencers. And in her new book, Searches: Selfhood in the Digital Age, the journalist Vauhini Vara writes about how technology has managed to change the way human beings look by altering our ideals, giving us a funhouse-mirror reflection of how we think we should look. "To live like this, endlessly comparing our imperfect fleshy selves with the sanitized digital simulacra of selfhood that appears online and finding ourselves wanting," Vara notes, "exerts such a subtle psychic violence that we might not even be aware of it as it's happening."
In some ways, though, technology also primed us for what was to come. The more fault we're compelled to find with our own unsymmetrical, lined, irredeemably lived-in faces, the more we're set up to be swayed by the unreal smoothness of AI imagery. In 2023, when the AI image generator Stable Diffusion XL was launched, the company behind it boasted that the product created the most photorealistic images yet available. (It offered, by way of emphasis, a picture of a panda in a spacesuit sitting at a bar.) What was clear early on, though, was that Stable Diffusion XL had the same biases and prejudices humans do, amplified to an absurd extent. Prompts for "a person at social services" generated pictures of predominantly Black women; prompts for "a productive person" generated largely white men in suits. AI image generators also had, as my former colleague Caroline Mimbs Nyce reported, a "hotness problem," generating pictures of people who were all improbably attractive. Possibly this is because they were built by scanning edited and airbrushed photos—not just of professionally attractive people, but of us. (Every time you FaceTune a selfie, the theory goes, a neural network further distorts its sense of what humans actually look like.)
Recently, I asked Microsoft's Image Creator for a picture of a normal woman. It gave me four extraordinarily beautiful women with curly hair, sculpted jawlines, and plump lips. (All four were wearing glasses, a supposed de-beautifying trick that didn't work in She's All That and doesn't work now.) Then I asked for a picture of an average woman, for which I received four images of radiantly smiling women in baggy sweaters with slightly frizzy hair. Finally, prompted to give me a picture of an average 42-year-old woman (my birthday is this month), the program gave me the eeriest images of all: four Anne Hathaway look-alikes with monstrously oversize grins and visible clavicles, betraying only slight lines around their eyes, and inexplicably surrounded by other grinning hot people, as if advertising a cult.
What's so unsettling about these images, I think, is how they reflect what we're allowing technology to do to us, what it's already done. Given the ability to amend our own faces, we've helped normalize and propagate a horribly restrictive vision of beauty and humankind, and the more we distort ourselves in turn, the more confining the ideal becomes. Recently, the art historian Sonja Drimmer argued that artificial intelligence was "essentially useless" for the purpose of studying history, because historians "look for untold stories" and "elements of the history of mankind that are novel and unexpected." Programs such as ChatGPT, by contrast, can only skim and interpret texts and images that already exist, extrapolating them into likely outcomes. If you're looking for nuance, or uncertainty, or subtext, it can't help you.
With regard to beauty, I'd venture that everyone knows someone who shouldn't, by all superficial accounts, be attractive, and yet they are. Because: We're better than computers at reading between the lines and can see other people's faces not just as structural compositions of bone and skin, but also as reflections of personality, of humanity, of depth. And the more we can defend beauty as nonconformist, as the essence of something internal and unmeasurable, the more we protect ourselves from the narrowing grip of techno-homogenization. In The White Lotus, and in reality, Wood's face isn't just beautiful. It's guileless, openhearted, kind, tender. "You're never going to look like what you think perfect is," the actor told Glamour. And the more I see perfect, the less I can bear it.
When you buy a book using a link on this page, we receive a commission. Thank you for supporting The Atlantic.
An AI Image Generator's Exposed Database Reveals What People Really Used It For
Tens of thousands of explicit AI-generated images, including AI-generated child sexual abuse material, were left open and accessible to anyone on the internet, according to new research seen by WIRED. An open database belonging to an AI image-generation firm contained more than 95,000 records, including some prompt data and images of celebrities such as Ariana Grande, the Kardashians, and Beyoncé de-aged to look like children.
The exposed database, which was discovered by security researcher Jeremiah Fowler, who shared details of the leak with WIRED, is linked to South Korea–based website GenNomis. The website and its parent company, AI-Nomis, hosted a number of image generation and chatbot tools for people to use. More than 45 GB of data, mostly made up of AI images, was left in the open.
The exposed data provides a glimpse at how AI image-generation tools can be weaponized to create deeply harmful and likely nonconsensual sexual content of adults and child sexual abuse material (CSAM). In recent years, dozens of "deepfake" and "nudify" websites, bots, and apps have mushroomed and caused thousands of women and girls to be targeted with damaging imagery and videos. This has come alongside a spike in AI-generated CSAM.
"The big thing is just how dangerous this is," Fowler says of the data exposure. "Looking at it as a security researcher, looking at it as a parent, it's terrifying. And it's terrifying how easy it is to create that content."
Fowler discovered the open cache of files—the database was not password protected or encrypted—in early March and quickly reported it to GenNomis and AI-Nomis, pointing out that it contained AI CSAM. GenNomis quickly closed off the database, Fowler says, but it did not respond or contact him about the findings.
Neither GenNomis nor AI-Nomis responded to multiple requests for comment from WIRED. However, hours after WIRED contacted the organizations, websites for both companies appeared to be shut down, with the GenNomis website now returning a 404 error page.
"This example also shows—yet again—the disturbing extent to which there is a market for AI that enables such abusive images to be generated," says Clare McGlynn, a law professor at Durham University in the UK who specializes in online- and image-based abuse. "This should remind us that the creation, possession, and distribution of CSAM is not rare, and attributable to warped individuals."
Before it was wiped, GenNomis listed multiple different AI tools on its homepage. These included an image generator allowing people to enter prompts of images they want to create, or upload an image and include a prompt to alter it. There was also a face-swapping tool, a background remover, plus an option to turn videos into images.
"The most disturbing thing, obviously, was the child explicit images and seeing ones that were clearly celebrities reimagined as children," Fowler says. The researcher explains that there were also AI-generated images of fully clothed young girls. He says in those instances, it is unclear whether the faces used are completely AI-generated or based on real images.
As well as CSAM, Fowler says, there were AI-generated pornographic images of adults in the database plus potential "face-swap" images. Among the files, he observed what appeared to be photographs of real people, which were likely used to create "explicit nude or sexual AI-generated images," he says. "So they were taking real pictures of people and swapping their faces on there," he claims of some generated images.
When it was live, the GenNomis website allowed explicit AI adult imagery. Many of the images featured on its homepage and in an AI "models" section included sexualized images of women; some were "photorealistic" while others were fully AI-generated or in animated styles. It also included an "NSFW" gallery and "marketplace" where users could share imagery and potentially sell albums of AI-generated photos. The website's tagline said people could "generate unrestricted" images and videos; a previous version of the site from 2024 said "uncensored images" could be created.
GenNomis' user policies stated that only "respectful content" is allowed, saying "explicit violence" and hate speech are prohibited. "Child pornography and any other illegal activities are strictly prohibited on GenNomis," its community guidelines read, adding that accounts posting prohibited content would be terminated. (Researchers, victim advocates, journalists, tech companies, and others have largely phased out the phrase "child pornography" in favor of CSAM over the last decade.)
It is unclear to what extent GenNomis used any moderation tools or systems to prevent or prohibit the creation of AI-generated CSAM. Some users posted to its "community" page last year that they could not generate images of people having sex and that their prompts were blocked for non-sexual "dark humor." Another account posted on the community page that the "NSFW" content should be addressed, as it "might be looked upon by the feds."
"If I was able to see those images with nothing more than the URL, that shows me that they're not taking all the necessary steps to block that content," Fowler alleges of the database.
Henry Ajder, a deepfake expert and founder of consultancy Latent Space Advisory, says even if the creation of harmful and illegal content was not permitted by the company, the website's branding—referencing "unrestricted" image creation and a "NSFW" section—indicated there may be a "clear association with intimate content without safety measures."
Ajder says he is surprised the English-language website was linked to a South Korean entity. Last year the country was plagued by a nonconsensual deepfake "emergency" that targeted girls, before it took measures to combat the wave of deepfake abuse. Ajder says more pressure needs to be put on all parts of the ecosystem that allows nonconsensual imagery to be generated using AI. "The more of this that we see, the more it forces the question onto legislators, onto tech platforms, onto web hosting companies, onto payment providers. All of the people who in some form or another, knowingly or otherwise—mostly unknowingly—are facilitating and enabling this to happen," he says.
Fowler says the database also exposed files that appeared to include AI prompts. No user data, such as logins or usernames, were included in exposed data, the researcher says. Screenshots of prompts show the use of words such as "tiny," "girl," and references to sexual acts between family members. The prompts also contained sexual acts between celebrities.
"It seems to me that the technology has raced ahead of any of the guidelines or controls," Fowler says. "From a legal standpoint, we all know that child explicit images are illegal, but that didn't stop the technology from being able to generate those images."
As generative AI systems have vastly enhanced how easy it is to create and modify images in the past two years, there has been an explosion of AI-generated CSAM. "Webpages containing AI-generated child sexual abuse material have more than quadrupled since 2023, and the photorealism of this horrific content has also leapt in sophistication," says Derek Ray-Hill, the interim CEO of the Internet Watch Foundation (IWF), a UK-based nonprofit that tackles online CSAM.
The IWF has documented how criminals are increasingly creating AI-generated CSAM and developing the methods they use to create it. "It's currently just too easy for criminals to use AI to generate and distribute sexually explicit content of children at scale and at speed," Ray-Hill says.