FRIDAY! Time for the roundup.
On the podcast this week, we talk about the XZ backdoor drama that unfolded in the open source world, and how AI might be sneaking into the peer-review process. In the paid subscribers’ section, we discuss a major journal’s decision to stop allowing a Playboy centerfold in research submissions.
What else? Here’s what we got into this week:
BACKDOORS AND BULLIES
The XZ backdoor captured the minds of security and open-source enthusiasts this week. Jason wrote about how a very similar situation—pushy contributors throwing fits and guilt-tripping maintainers—nearly led the Android open-source app store F-Droid to push an update that would have introduced a security vulnerability into the product three years ago. “Other open source developers and security experts have pointed to the dynamic of bullying and the general reliance on a small number of volunteer developers. They explained that it’s a problem across much of the open source software ecosystem, and is definitely a problem for the large tech companies and infrastructure who rely on these often volunteer-led projects to build their for-profit software on top of,” he wrote.
WHERE’S KATE
Earlier this week, Kate Middleton truthers were freaking out about an editor’s note placed by Getty Images on Middleton’s video address where she talks about her cancer diagnosis. I wrote about how this editor’s note, which says “This Handout clip was provided by a third-party organization and may not adhere to Getty Images' editorial policy,” isn’t some smoke signal from Getty that the video is fake or that Middleton is in a coma or dead, as many conspiracy theorists believe.
GARBAGE IN GARBAGE OUT
Emanuel wrote about how he found AI-generated books indexed by Google Books using the same method we’ve previously used to find AI-generated Amazon product reviews, papers published in academic journals, and online articles: searching for the phrase “As of my last knowledge update,” which is associated with ChatGPT-generated answers. The search returned dozens of books that include that phrase, and most of the books in the first eight pages of results appeared to be AI-generated. Experts say this could render research tools like the Google Ngram Viewer “completely unusable” within a few years.
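The telltale-phrase trick described above can be sketched as a simple case-insensitive substring scan. This is an illustrative reconstruction, not 404 Media’s actual tooling; the phrase list and sample snippets are assumptions made up for the example.

```python
# Illustrative sketch: flag text that contains boilerplate phrases
# commonly left behind by large language models. The phrase list is
# a hypothetical example, not an exhaustive or official set.

TELLTALE_PHRASES = [
    "as of my last knowledge update",
    "as an ai language model",
]


def looks_ai_generated(text: str) -> bool:
    """Return True if the text contains a known LLM boilerplate phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)


# Hypothetical snippets standing in for search results.
snippets = [
    "As of my last knowledge update, the tallest building is...",
    "The quick brown fox jumps over the lazy dog.",
]

flagged = [s for s in snippets if looks_ai_generated(s)]
print(len(flagged))  # prints 1
```

The same scan works against any text corpus you can iterate over, which is why the method transfers so easily from product reviews to journal papers to books.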
“STOP SHOOTING HER”
Newly released video and audio, obtained by 404 Media through the Freedom of Information Act, shows a California cop telling an unarmed 15-year-old girl to walk toward him and screaming for his fellow officers not to shoot her, immediately before they shot and killed her. “Come here! Come to me, come to me!” the officer shouted. This disturbing material from the scene, captured on camera and in a belt recording, finally answers questions about the 2022 shooting that police wouldn’t address, or gave conflicting information about, at the time.
FRIENDSHIP ENDED WITH GOOGLE
Google search is bad and getting worse, as most people know. But in a world increasingly dominated by the GOOG, what is the alternative? Jason has been shelling out $10 a month for a search engine called Kagi, which is basically Google without all the bullshit. “I will probably not ever switch back to Google unless Kagi becomes significantly worse or Google reverses years of annoying interface and search decisions that have prioritized ads, sponsored results, spammy affiliate content, and AI-generated results,” he wrote in his ode to Kagified living.
READ MORE
- This Camera Turns Every Photo Into a Nude
- A ‘Law Firm’ of AI Generated Lawyers Is Sending Fake Threats as an SEO Scam
- ChatGPT Looms Over the Peer-Review Crisis
- The Car Repair Apocalypse Could Soon Be Upon Us
- Google Bans Face Swap App for Inviting Users to Make Deepfake Porn
- Cryptographers Who Solved Zodiac Killer Cipher Publish Paper About How They Did It
COMMENTS OF THE WEEK
Replying to “Google Books Is Indexing AI-Generated Garbage,” Rick Sacks writes:
Thanks for this. It would be funny if it wasn't so seditious. At least the scam artist should read 'their' books and take out "as of my last knowledge update" lines. But even more disturbing, they don't see it as scams. And if the recent stories that young people are getting back into reading (BookTok) then governments need to step in and regulate. Google, at best will seek out and wrist slap those who generate these mindless (if associative) spews but more likely they will just create an ai to filter the telltale signs.
And Stefan wrote about “ChatGPT Looms Over the Peer-Review Crisis”:
I agree with the assessment that generative AI is not the core issue here, it's the process of blind peer-reviewing that is broken. While I didn't have particularly bad experiences with blind peer-reviews for my own work, I now do research at a nonprofit and the review process is so much better. I always ask multiple experts (more than three usually) to review my reports before publication in a shared doc. Every reviewer is given a choice: Directly make suggestions and comments in the shared doc that all other reviewers can see, or give feedback to me only via a private copy or email to be anonymous to the other reviewers. The result: Most experts choose to give feedback in the shared doc and sometimes start discussing among themselves via comments. Reviewers that choose to be anonymous sometimes pick up those discussions too. This is not only helpful for me, but also rewarding for the reviewers who get to talk with their peers or experts in their field they haven't talked with before. Not saying that this is the perfect solution, but I really think more transparency and more rewards and incentives for reviewers will be key to address this crisis.
Replying to Stefan, Andrew Trettel said:
I like your solution. It is cooperative and not adversarial like the current peer review process. Still, I do also like the idea of requiring X number of peer reviews to submit an article. It creates an incentive to peer review that does not exist right now. The only thing that most people get out of peer review right now is an acknowledgment annually that you refereed at least one article in this journal in the past year. That doesn't help scientists advance their careers, so too many people ignore peer review entirely.
Okay, I need to get back to work on my cryptograms for now. See you next week.