The questions raised by Sisi Wei, editor-in-chief at The Markup, in a recent article shed light on the dilemmas journalists face when covering AI-generated images. She asks whether news articles should include the generated images at all and, if so, how to label them or what kinds of disclaimers to add. As she notes, this is a difficult problem because readers may not pay attention to the caption. The following is a quote from the article.
There’s no question to me that anyone who comes into contact with the internet these days will need to start questioning if the images they’re seeing are real. But what’s our job as journalists in this situation? When we republish viral or newsworthy images that have been altered or were generated by AI, what should we do to make sure we’re giving readers the information they need? Doing it in the caption or the headline isn’t good enough—we can’t assume that readers will read them.
The NO TECH MAGAZINE reader shared some intriguing links worth checking out:
In an article at The Atlantic, Ian Bogost comments on a recent episode in which Amazon used social media in a campaign to influence opinion about criticisms of the company's exploitative labour and business practices. Interestingly, he notes a change in how corporations communicate:
Previously, companies could speak only through formal messages on billboards; by mail, radio, or television; or via media coverage of their actions. The web had shifted that control a bit, but websites were still mostly marketing and service portals. Social media and smartphones changed everything. They made corporate speech functionally identical to human speech. Case law might have given companies legal personhood, but the internet made corporations feel like people.
It also allowed companies to behave like people. As their social-media posts were woven into people’s feeds between actual humans’ jokes, gripes, and celebrations, brands started talking with customers directly. They offered support right inside people’s favorite apps. They did favors, issued giveaways, and even raised money for the downtrodden. Brands became #brands.
A humorous article from The Economist explains how “Netflix is creating a common European culture” through its efforts to promote shows from different European countries and to dub and subtitle them in all languages:
Umberto Eco, an Italian writer, was right when he said the language of Europe is translation. Netflix and other deep-pocketed global firms speak it well. Just as the EU employs a small army of translators and interpreters to turn intricate laws or impassioned speeches of Romanian MEPs into the EU’s 24 official languages, so do the likes of Netflix. It now offers dubbing in 34 languages and subtitling in a few more.
Read the full article here.
In an article from the Financial Times on how Big Tech can “best tackle conspiracy theories,” the author shares some interesting insights from research done by a group of ethnographers called Ethnographic Praxis in Context. These researchers observed that people who believe in conspiracy theories tend to “believe information that comes from scruffier, amateurish sites, since these seem more ‘authentic’”:
Anyone hoping to debunk these ideas also needs to think hard about cultural signals. Take website design. Twenty-first century professionals typically give more credibility to information that comes from sites that look polished.
Conversely, the ethnographers discovered that conspiracy theorists are more likely to believe information that comes from scruffier, amateurish sites, since these seem more “authentic”. This point may not be obvious to techies at places such as Google — and is not the type of insight that big data analysis will reveal. But it is crucial.
Read the full article here.
The following video essay, “The Late Capitalism of K-Pop,” by a YouTube channel called “Cuck Philosophy,” offers some interesting historical insights on the development of K-Pop and links this development to related critiques of consumerism, such as the work of Baudrillard.
I discovered this video through a recent podcast from “Pretty Much Pop” on “The Korean Wave”.
One defining aspect of the coronavirus pandemic is the spread of information, and of misinformation, about it. I therefore find this reflection by Erin McAweeney, “Who Benefits from Health Misinformation?”, quite important:
Different groups with different motives are exploiting the COVID-19 pandemic in different ways. I’m a senior analyst at Graphika, a social media network analysis firm, where we map “cyber-social terrain” and the information that flows through them. To date, we’ve found online communities from health topics, political groups, and social identity groups pushing misinformation on COVID-19: grifter televangelists, QAnon, MAGA Twitter, anti-vaxxers, conservative and anti-CCP politicians and billionaires, and anti-immigration parties in France and Italy. These groups frame and misrepresent the issue to fit their ideological goals.
Two interesting articles from Fast Company:
One on the popular chart urging us to flatten the curve: “The story behind ‘flatten the curve,’ the defining chart of the coronavirus”
And another on the history of an important technology in this pandemic, the mask: “The untold origin story of the N95 mask”