Pickup artists using AI, deep fake nudes outlawed, Rabbit R1 fail: AI Eye




AI Tupac vs. AI Drake

A little over a year ago, a fake AI song featuring Drake and the Weeknd racked up 20 million views in two days before Universal memory-holed the track for copyright violation. The shoe was on the other foot this week, however, when lawyers for the estate of Tupac Shakur threatened Drake with a lawsuit over his “TaylorMade” diss track against Kendrick Lamar, which used AI-faked vocals to “feature” Tupac. Drake has since pulled the track down from his X profile, although it’s not hard to find if you look.

Deep fake nudes criminalized

Apps like Nudeify simplify the process of creating deep fake nudes. (Nudeify)

The governments of Australia and the United Kingdom have both announced plans to criminalize the creation of deep fake pornography without the consent of the people portrayed. AI Eye reported in December that a range of apps, including Reface, DeepNude and Nudeify, make the creation of deepfakes easy for anyone with a smartphone. Deep fake nude creation websites have been receiving tens of millions of hits each month, according to Graphika.

Principal framed by AI voice clone

Baltimore police have arrested Dazhon Darien, the former athletic director of Pikesville High School, over allegations he used AI voice-cloning software to create a fake racism storm (“fakeism”) in retaliation against the school’s principal, who had forced Darien’s resignation over the alleged theft of school funds.

Darien sent audio of the principal supposedly making racist comments about Black and Jewish students to another teacher, who passed it on to students, the media and the NAACP. The principal was forced to step down amid the outcry. However, forensic analysis showed the audio was fake, and detectives arrested Darien at the airport as he was about to fly to Houston with a gun.

Meta AI butts in

Everyone in the media, at least, seems to hate Meta’s new AI integration in the Instagram search bar, mostly because it’s too eager to chat and not very good at search. The bot has also been joining Facebook group conversations uninvited, spouting nonsense whenever a question posted in a group goes unanswered for an hour.

Defrocked AI priest

An AI Catholic priest was defrocked after just two days for endorsing incest. California-based Catholic Answers introduced the Father Justin chatbot last week to answer educational questions about the Catholic faith.

But after it advised people they could baptize their children with Gatorade and blessed the “joyous occasion” of a brother and sister getting married, Catholic Answers was forced to apologize and demote the chatbot to plain old Justin. “Prevalent among users’ comments is criticism of the representation of the AI character as a priest,” CA said. “We won’t say he’s been laicized because he never was a real priest!”

It’s just “Justin” now. (Catholic Answers)

Rabbit R1 reviews

As soon as wildly popular tech reviewer Marques Brownlee said the Rabbit R1 “has a lot in common with Humane AI Pin,” you knew the device was doomed: Brownlee absolutely slated Humane’s device two weeks ago. The Rabbit R1 is a much-hyped handheld AI device you interact with primarily through voice, and it operates apps on your behalf. Brownlee criticized the device as barely finished and “borderline non-functional,” with terrible battery life, and said it wasn’t very good at answering questions.

TechRadar called the R1 a “beautiful mess” and noted the market couldn’t support “a product that is so far from being ready for the mass consumer.” CNET’s reviewer said there were moments “when everything just clicked, and I understood the hype,” but they were vastly outweighed by the negatives. The main issue with dedicated AI devices so far is that they are more limited than smartphones, which already perform the same functions more effectively.

Fake live streams to hit on women

New apps called Parallel Live and Famefy use AI-generated audience interaction to fake big social media audiences for live streams, and pickup artists are reportedly using the apps as social proof to impress women. In one video, influencer ItsPolaKid shows a woman in a bar that he’s “live streaming” to 20,000 people; she asks him if he’s rich, and they wind up leaving together. “The audience is AI generated, which can hear you and respond, which is hilarious. She couldn’t get enough,” the influencer said.

The rule of thumb on social media is that whenever an influencer mentions a product, it’s probably an ad. Parallel Live creator Ethan Keiser has also released a bunch of promotional videos with millions of views, pushing a similar line that social proof from fake audiences can get models falling all over you and win you invitations to the VIP sections of clubs. 404 Media’s Jason Koebler reported that the apps use speech-to-text AI recognition, meaning the fake AI viewers “responded to things I said out loud and referenced things I said aloud while testing the apps.”

Influencer uses fake AI audiences to sell apps to real audiences, with a potentially staged pick-up. (itspolokidd/Instagram)

“No-AI” guarantee for books

British author Richard Haywood is a self-publishing superstar, with his Undead series of post-apocalyptic novels selling more than 4 million copies. He’s now fighting zombie “authors” by adding a NO-AI label and warranty to all his books, with a “legally binding guarantee” that each novel was written without the aid of ChatGPT or other AI assistance. Haywood estimates that around 100,000 fake books churned out by AI have been published in the past year or so and believes an AI-free guarantee is the only way to protect authors and consumers.

AI reduces heart disease deaths by one-third

An AI trained on almost half a million ECG tests and survival data was used in Taiwan to identify the 5% of heart patients most at risk. A study in Nature reported that the AI reduced overall deaths from heart problems among patients by 31%, and by 90% among high-risk patients.

AIs are as stupid as we are

With large language models converging around the human baseline on a bunch of tests, Meta’s chief AI scientist, Yann LeCun, argues that human intelligence could be the ceiling for LLMs due to their training data.

“As long as AI systems are trained to reproduce human-generated data (e.g. text) and have no search/planning/reasoning capability, performance will saturate below or around human level.”

AbacusAI CEO Bindu Reddy agrees that “models have hit a wall” despite progressively more compute and data being added. “So in some sense, it’s actually not possible to get past some level with just plain language models,” she said, although she added that “no one knows what ‘superhuman reasoning’ looks like. Even if LLMs manifested these superhuman abilities, we wouldn’t be able to recognize them.”

AIs are flatlining around human intelligence. (Pedro Domingos/X)

Safety board doesn’t believe in open source

The U.S. Department of Homeland Security has enlisted the heads of centralized AI companies, including OpenAI, Microsoft, Alphabet and Nvidia, for its new AI Safety and Security Board. But the board has been criticized for not including a representative of Meta, which has an open-source AI model strategy, or indeed anyone else working on open-source AI. Maybe it’s already been deemed unsafe.

Homeland Security’s X post about the AI Safety Board. (DHS/X)
Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.




