submitted 1 week ago by 0x815@feddit.org to c/technology@beehaw.org

cross-posted from: https://feddit.org/post/2527714

Archived link

When men go to pee in a public toilet they spend a minute gazing at the wall in front of them, in what many advertisers have seized upon as an opportunity to put up posters of their products above the stinking urinals.

But in terms of framing, you'd better ask yourself: is this really what I want my brand to be associated with?

You might well think twice if you were selling ice cream or toothpaste, so what if your poster was Ursula von der Leyen's face selling EU values?

Because that's the kind of environment in which the European Commission president, other top EU officials, and national EU leaders are posting their images and comments every day when they use X to communicate with press and the EU public.

Even the toilet analogy is too kind.

There was already lots of toxic crap on X before the summer of 2024.

**Racist, antisemitic, and homophobic content had "surged", according to a January study by US universities.**

**X had more Russian propaganda than any other big social media platform, an EU report warned in 2023.**

**Porn made up 13 percent of X in late 2022, according to internal documents seen by Reuters.**

But this summer, with the failed assassination of Donald Trump in the US and the UK race riots, X's CEO Elon Musk turbocharged his platform into an overflowing sewer of bigotry, nihilism, and greed.

As I tried to follow the UK riots from Brussels using X, time and again, I saw von der Leyen's carefully-coiffed Christian Democrat torso issuing some polite EU statement, while sandwiched on my laptop screen between video-clips getting off on anti-migrant violence, pro-Russian bots, and OnlyFans links.

Musk's algorithms pushed pro-riot content so hard down users' throats it prompted a transatlantic UK government rebuke and talk of legal sanctions.

Tommy Robinson, a leading British racist, got over 430 million views for his X posts, for instance.

Andrew Tate, Britain's top misogynist, got 15 million views for one X post inciting rioters.

And the biggest turd in the cesspit - Musk's own avatar - also kept appearing next to von der Leyen and other EU leaders on my screen, as the US tech baron ranted about "civil war" in the UK, pushed pro-Trump conspiracy theories, or told EU commissioner Thierry Breton to "literally fuck your own face".

Musk's summer coincided with France's arrest of a Russian tech CEO, Pavel Durov, in August on suspicion he condoned the sale of child pornography and drugs on his Telegram platform.

The European Commission also started legal proceedings against X in July over misleading and illegal content, in a process that could see Musk fined hundreds of millions of euros.

But aside from the grand issues of how to regulate social media without stymying free speech or privacy, EU leaders could do something a lot simpler and closer to home for the sake of public mental health - just switch to any other less sleazy platform instead.

You could do it tomorrow with one email to your tech staff and for all the stupid content on Instagram, for example, at least your face won't keep flashing up next to racist glee and naked tits on your constituents' screens.

Von der Leyen has 1.5 million X followers, French president Emmanuel Macron has 9.8 million, while Spanish prime minister Pedro Sánchez and Polish prime minister Donald Tusk have 1.9 million each.


But please don't worry: not all journalists or members of the general public are that dumb yet. Most of us will find you and follow you, because politics is genuinely important.

And we will thank you for giving us one more reason to get off X ourselves, because so long as you use it as your main outlet for news updates you are dragging us along with you.

My initial analogy of advertising in a public toilet was designed to show the importance of semiotics in political PR - it matters where you speak, not just what you say.

The analogy also holds good for those who worry that if normal leaders and media abandoned toxic platforms, then extremism would grow in its own exclusive online world.

It's just good public hygiene to bury our sewage pipes, instead of letting people empty their buckets out of the window onto our heads.

But if you prefer to hold your nose and stay on X, consider also that you are damaging not just your own brand but also causing financial and political harm in real life.

Financial hurt, because if you help make people reliant on X for news, then greater use of Musk's platform makes people like him, Robinson, and Tate ever richer via X's monetisation schemes for viral content.

Political injury, because to the extent that von der Leyen, Macron, or Sánchez possess real importance, they help to aggrandise Musk, Tate, and Robinson by continuously appearing alongside them in X's hyper-curated online space.

And so if you should worry that urinals below your face might put people off, then the situation is actually worse than that.

Your presence on X is also helping to pay for the muck to flow and the toilet owner is using you to sell it to the world.

submitted 1 week ago by 0x815@feddit.org to c/technology@beehaw.org

Archived link

The original article is behind a paywall at 404media.

In a pitch deck to prospective customers, one of Facebook's alleged marketing partners explained how it listens to users' smartphone microphones and advertises to them accordingly.

As 404 Media reports based on documents leaked to its reporters, the TV and radio news giant Cox Media Group (CMG) claims that its so-called "Active Listening" software uses artificial intelligence (AI) to "capture real-time intent data by listening to our conversations."

"Advertisers can pair this voice-data with behavioral data to target in-market consumers," the deck continues.

In the same slideshow, CMG counted Facebook, Google, and Amazon as clients of its "Active Listening" service. After 404 reached out to Google about its partnership, the tech giant removed the media group from the site for its "Partners Program," which prompted Meta, the owner of Facebook, to admit that it is reviewing CMG to see if it violates any of its terms of service.

An Amazon spokesperson, meanwhile, told 404 that its Ads arm "has never worked with CMG on this program and has no plans to do so." The spokesperson added, confusingly, that if one of its marketing partners violates its rules, the company will take action.

submitted 1 week ago by 0x815@feddit.org to c/technology@beehaw.org

TikTok and other social media companies use AI tools to remove the vast majority of harmful content and to flag other content for review by human moderators, regardless of the number of views they have had. But the AI tools cannot identify everything.

Andrew Kaung says that during the time he worked at TikTok, all videos that were not removed or flagged to human moderators by AI - or reported by other users to moderators - would only then be reviewed again manually if they reached a certain threshold.

He says at one point this was set to 10,000 views or more. He feared this meant some younger users were being exposed to harmful videos. Most major social media companies allow people aged 13 or above to sign up.

TikTok says 99% of content it removes for violating its rules is taken down by AI or human moderators before it reaches 10,000 views. It also says it undertakes proactive investigations on videos with fewer than this number of views.
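A minimal sketch of the escalation logic Kaung describes may help: AI flags and user reports escalate a video immediately, while everything else waits until it crosses a view threshold. The function name and structure are assumptions for illustration, not TikTok's actual code.

```python
REVIEW_THRESHOLD = 10_000  # the view count Kaung says was used at one point

def needs_manual_review(views: int, ai_flagged: bool,
                        user_reported: bool) -> bool:
    """Return True if a video should go to a human moderator.

    Simplified: AI flags and user reports escalate immediately;
    anything else is only re-reviewed once it passes the threshold.
    """
    if ai_flagged or user_reported:
        return True
    return views >= REVIEW_THRESHOLD
```

The gap Kaung worried about is visible in the last branch: an unflagged, unreported harmful video can accumulate thousands of views before anyone looks at it again.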

When he worked at Meta between 2019 and December 2020, Andrew Kaung says there was a different problem. [...] While the majority of videos were removed or flagged to moderators by AI tools, the site relied on users to report other videos once they had already seen them.

He says he raised concerns while at both companies, but was met mainly with inaction because, he says, of fears about the amount of work involved or the cost. He says subsequently some improvements were made at TikTok and Meta, but he says younger users, such as Cai, were left at risk in the meantime.

submitted 1 week ago* (last edited 1 week ago) by 0x815@feddit.org to c/technology@beehaw.org

cross-posted from: https://feddit.org/post/2474278

Archived link

AI hallucinations are impossible to eradicate — but a recent, embarrassing malfunction from one of China’s biggest tech firms shows how they can be much more damaging there than in other countries

It was a terrible answer to a naive question. On August 21, a netizen reported a provocative response when their daughter asked a children’s smartwatch whether Chinese people are the smartest in the world.

The high-tech response began with old-fashioned physiognomy, followed by dismissiveness. “Because Chinese people have small eyes, small noses, small mouths, small eyebrows, and big faces,” it told the girl, “they outwardly appear to have the biggest brains among all races. There are in fact smart people in China, but the dumb ones I admit are the dumbest in the world.” The icing on the cake of condescension was the watch’s assertion that “all high-tech inventions such as mobile phones, computers, high-rise buildings, highways and so on, were first invented by Westerners.”

Naturally, this did not go down well on the Chinese internet. Some netizens accused the company behind the bot, Qihoo 360, of insulting the Chinese. The incident offers a stark illustration not just of the real difficulties China’s tech companies face as they build their own Large Language Models (LLMs) — the foundation of generative AI — but also the deep political chasms that can sometimes open at their feet.

[...]

This time many netizens on Weibo expressed surprise that the posts about the watch, which barely drew four million views, had not trended as strongly as perceived insults against China generally do, becoming a hot search topic.

[...]

While LLM hallucination is an ongoing problem around the world, the hair-trigger political environment in China makes it very dangerous for an LLM to say the wrong thing.

submitted 1 week ago by 0x815@feddit.org to c/technology@beehaw.org

cross-posted from: https://feddit.org/post/2464367

In recent months, followers of influential liberal bloggers have been interviewed by police as China widens its net of online surveillance.

Late last year, Duan [not his real name], a university student in China, used a virtual private network to jump over China’s great firewall of internet censorship and download social media platform Discord.

Overnight he entered a community in which thousands of members with diverse views debated political ideas and staged mock elections. People could join the chat to discuss ideas such as democracy, anarchism and communism. “After all, it’s hard for us to do politics in reality, so we have to do it in a group chat,” Yang Minghao, a popular vlogger, said in a video on YouTube.

Duan’s interest in the community was piqued while watching one of Yang’s videos online. Yang, who vlogs under the nickname MHYYY, was talking about the chat on Discord, which like YouTube is blocked in China, and said that he “would like to see where this group will go, as far as possible without intervention”.

The answer to Yang’s question came after less than a year. In July, Duan and several other members of the Discord group, in cities thousands of miles apart, were called in for questioning by the police.

Duan says that he was detained for 24 hours and interrogated about his relationship to Yang, his use of a VPN and comments that he’d made on Discord. He was released without charge after 24 hours, but he – and other followers of Yang – remain concerned about the welfare of the vlogger, who hasn’t posted online since late July.

The incident is just one sign of the growing severity of China’s censorship regime, under which even private followers of unfavourable accounts can get into trouble.

[...]

Being punished for comments made online is common in China, where the internet is tightly regulated. As well as a digital firewall that blocks the majority of internet users from accessing foreign websites like Google, Facebook and WhatsApp, people who publish content on topics deemed sensitive or critical of the government often find themselves banned from websites, or worse.

Last year, a man called Ning Bin was sentenced to more than two years in prison for posting “inappropriate remarks” and “false information” on X and Pincong, a Chinese-language forum.

Even ardent nationalists are not immune. In recent weeks, the influential, pro-government commentator, Hu Xijin, appears to have been banned from social media after making comments about China’s political trajectory that didn’t align with Beijing’s view.

Duan said that the call from the police was not entirely unexpected. Still, he says, the intensity of the interrogation caught him by surprise. “Just complaining in a group chat on overseas software is not allowed”.

[...]

  • Chinese drivers’ frustrations point to the broader risks of “smartphones on wheels,” where reliability is contingent upon software maintenance and updates.

  • Owners are worried about their access to factory parts for future repairs

As Chinese car owners brace for further consolidation of the country’s hypercompetitive EV market, the fact that many electric cars rely on cloud services — from smartphone controls to software updates — has raised concerns about the long-term serviceability of the vehicles.

Intense price wars and the phasing out of government subsidies have left a number of the nation’s EV manufacturers — estimated at more than 100 — struggling for survival. Since 2020, more than 20 EV makers in China, including Singulato and Aiways, have left the market. Most recently, the high-end carmaker HiPhi, which only sold 4,520 vehicles in 2022, halted production in February as it wrestled with financial woes. WM Motor was the largest Chinese electric carmaker to date to become insolvent, having sold approximately 100,000 vehicles between 2019 and 2022.

Between EV companies that have filed for bankruptcy and those that have halted operations, an estimated 160,000 Chinese car owners are left in the lurch, according to the China Automobile Dealers Association.


Archived version

During the World Robot Conference 2024 in Beijing, held from Aug 21 to Aug 25, the animatronics company EX-Robot (or "EX Robots", as rendered by some news media) hired two women to cosplay as robots to spice up the exhibition.

Footage making the rounds on social media shows what appear to be astonishingly lifelike humanoid robots posing at the World Robot Conference in Beijing last week.

But instead of showing off the latest and greatest in humanoid robotics, two of the "robots" turned out to be human women cosplaying as futuristic gynoids, presumably hired by animatronics company Ex-Robots.

"Many people think these are all robots without realizing they’re actually two human beings cosplayed as robots among the animatronics," reporter Byron Wan tweeted.

While somewhat uncanny at first glimpse, the illusion was shattered once an image of one of the hired women having lunch at the event started circulating online. Even humanoid robot cosplayers have to eat, it turns out.

[...]

submitted 1 week ago by hedge@beehaw.org to c/technology@beehaw.org

cross-posted from: https://links.hackliberty.org/post/2559706

Abstract

This paper examines the potential of the Fediverse, a federated network of social media and content platforms, to counter the centralization and dominance of commercial platforms on the social Web. We gather evidence from the technology powering the Fediverse (especially the ActivityPub protocol), current statistical data regarding Fediverse user distribution over instances, and the status of two older, similar, decentralized technologies: e-mail and the Web. Our findings suggest that the Fediverse will face significant challenges in fulfilling its decentralization promises, potentially hindering its ability to positively impact the social Web on a large scale.

Some challenges mentioned in the paper:

  • Discoverability as there is no central or unified index
  • Complicated moderation efforts due to its decentralized nature
  • Interoperability between instances of different types (e.g., Lemmy and Funkwhale)
  • Concentration on a small number of large instances
  • The risk of commercial capture by Big Tech

What are your thoughts on this? And how could we make the Fediverse a better place for all to stay?
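The discoverability challenge above is structural: ActivityPub identities are found via per-instance WebFinger lookups, so there is no single index to search. A minimal sketch of how a handle maps to a discovery query (the handle is a made-up example):

```python
def webfinger_url(handle: str) -> str:
    """Build the WebFinger query URL for a Fediverse handle.

    Handles look like "@user@instance.tld"; discovery means asking
    that specific instance's /.well-known/webfinger endpoint --
    there is no global directory to consult instead.
    """
    name, domain = handle.lstrip("@").split("@")
    return (f"https://{domain}/.well-known/webfinger"
            f"?resource=acct:{name}@{domain}")

# e.g. webfinger_url("@alice@example.social")
```

Because every lookup terminates at one instance, search and discovery can only ever be as complete as the set of instances a given server happens to know about.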

submitted 1 week ago by Hirom@beehaw.org to c/technology@beehaw.org

I was assuming this was a retirement announcement from the editor.

Sadly, not the case. The site has ceased publication as of this story, though content and the forum will remain up for an indeterminate amount of time.

It launched in 1997, the same year I wrote my first HTML, having started college and suddenly having access to hosting.

It sucks to see a pub that has adhered to its goals for the most part (we all make mistakes) for 27 years get shut down by a corporate owner.

submitted 1 week ago by JRepin@lemmy.ml to c/technology@beehaw.org

cross-posted from: https://lemmy.ml/post/19683130

The ideologues of Silicon Valley are in model collapse.

To train an AI model, you need to give it a ton of data, and the quality of output from the model depends upon whether that data is any good. A risk AI models face, especially as AI-generated output makes up a larger share of what’s published online, is “model collapse”: the rapid degradation that results from AI models being trained on the output of AI models. Essentially, the AI is primarily talking to, and learning from, itself, and this creates a self-reinforcing cascade of bad thinking.
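The self-reinforcing degradation described above can be illustrated with a toy simulation (this is an analogy, not any real training pipeline): repeatedly fit a simple distribution to samples drawn from the previous generation's fit, so each generation only ever learns from the last generation's output.

```python
import random
import statistics

def collapse_demo(generations: int = 50, n: int = 30, seed: int = 1):
    """Toy illustration of model collapse.

    Start from a "true" distribution (mean 0, std 1), then at each
    generation draw n samples from the current fit and re-estimate the
    parameters from those samples alone. Estimation error compounds,
    so the fitted distribution drifts away from the original.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    history = [(mu, sigma)]
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.fmean(samples)       # fit only to model output
        sigma = statistics.stdev(samples)
        history.append((mu, sigma))
    return history
```

With no fresh data ever entering the loop, the parameters perform a random walk away from the truth; in the AI setting, the analogous drift shows up as narrowing, increasingly self-referential output.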

We’ve been watching something similar happen, in real time, with the Elon Musks, Marc Andreessens, Peter Thiels, and other chronically online Silicon Valley representatives of far-right ideology. It’s not just that they have bad values that are leading to bad politics. They also seem to be talking themselves into believing nonsense at an increasing rate. The world they seem to believe exists, and which they’re reacting and warning against, bears less and less resemblance to the actual world, and instead represents an imagined lore they’ve gotten themselves lost in.


Archived version

Naomi Wu has disappeared. Perhaps she has been disappeared. That’s not rare in China.

[...]

The proximate cause of her apparent disappearance, as Jackie Singh explains in detail here, was a discovery that Naomi Wu, an experienced coder, had made. It seemed that the cute little cellphone keyboard applications developed by the Chinese company Tencent, and used by just about everyone, were spyware. They could log keystrokes, and did it outside of even very secure applications such as Signal, so things that were sent securely could be “phoned home” by the keyboard app itself.

It seems, though the evidence is circumstantial, that this was one too many cats let out of the bag, and the Chinese communist government of Winnie Xi Pooh acted quickly, with the results (probably understated) in the Tweet quoted above.

[...]

The silence has been deafening. People on the internet, especially young, enthusiastic websters, have long been thought unbelievably shallow, in it for whatever they could get out of it, and unwilling to take a stand on something important unless there was profit in it for them. We needn’t think that anymore — now we know it’s true.

What can be done? [...] Our government won’t lift a finger even for American citizens or very well known Chinese figures trapped under the thumb of the Disney-character’s evil lookalike, or the Uyghurs, unless there’s some political gain to be had, such as with the tattooed LGBT WNBA player who couldn’t be bothered to leave her dope at home during a visit to Russia.

[...]

China was afraid that silencing Naomi Wu would make the government there look bad. Let’s prove them right.


TikTok has to face a lawsuit from the mother of 10-year-old Nylah Anderson, who “unintentionally hanged herself” after watching videos of the so-called blackout challenge on her algorithmically curated For You Page (FYP). The “challenge,” according to the suit, encouraged viewers to “choke themselves until passing out.”

TikTok’s algorithmic recommendations on the FYP constitute the platform’s own speech, according to the Third Circuit court of appeals. That means it’s something TikTok can be held accountable for in court. Tech platforms are typically protected by a legal shield known as Section 230, which prevents them from being sued over their users’ posts, and a lower court had initially dismissed the suit on those grounds.


Here's a great example of dystopian tech being rolled out without guardrails. Brought to you by Axon, which you may know as the company that rebranded after Taser became a liability as a name.

submitted 1 week ago by 0x815@feddit.org to c/technology@beehaw.org

Archived version

The drone’s design, resembling a PET bottle in size and shape, makes it easy to carry. According to WB Electronics, its developer company, the so-called X-FRONTER can be equipped not only with explosive charges and camera heads but also with other technical innovations. It can function as a flare marker, an infrared marker, or even deploy a smoke screen.

Moreover, the X-FRONTER’s technology allows for swarm operations, enabling a group of these small drones to work together, sharing tasks. Several drones could serve as reconnaissance units, while others, equipped with explosive payloads, could act as mobile artillery, capable of striking an approaching enemy. All of this is controlled from a small panel.

The X-FRONTER can reach a maximum speed of 60 kilometers per hour and ascend to an altitude of 300 meters. It has a flight time of up to 40 minutes, offering substantial operational flexibility.
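The quoted figures imply a back-of-envelope ceiling on how far the drone could travel in one flight, assuming top speed for the full endurance (which ignores wind, payload, and any return leg):

```python
def max_range_km(speed_kmh: float, endurance_min: float) -> float:
    """Theoretical straight-line distance: speed held for the whole
    flight time. An upper bound, not a realistic operating range."""
    return speed_kmh * endurance_min / 60.0

# Using the figures quoted above: 60 km/h for 40 minutes gives 40 km.
```

In practice, a round trip plus reconnaissance loitering would cut the usable radius well below that bound.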

submitted 1 week ago by 0x815@feddit.org to c/technology@beehaw.org

"Our research underscores the extent to which exploits first developed by the commercial surveillance industry are proliferated to dangerous threat actors."

"We assess with moderate confidence the campaigns [an iOS WebKit exploit and a Chrome exploit] are linked to the Russian government-backed actor APT29. In each iteration of the watering hole campaigns, the attackers used exploits that were identical or strikingly similar to exploits previously used by commercial surveillance vendors (CSVs) Intellexa and NSO Group."

"Watering hole attacks remain a threat where sophisticated exploits can be utilized to target those that visit sites regularly, including on mobile devices. Watering holes can still be an effective avenue for n-day exploits by mass targeting a population that might still run unpatched browsers."

"Although the trend in the mobile space is towards complex full exploit chains, the iOS campaign is a good reminder of the fact that a single vulnerability can inflict harm and be successful."

submitted 1 week ago by 0x815@feddit.org to c/technology@beehaw.org

An investigation into TikTok Lite — a low-bandwidth alternative to the TikTok app predominantly accessible in the so-called Global Majority countries in South America, Africa, and Asia — has revealed significant safety concerns.

Tech firms eagerly target emerging markets in continents like South America, Asia and Africa, where regulatory hurdles are often less stringent compared to the EU or the U.S.

This is where tech platforms turn to "Lite" apps: stripped-down versions of a service's application that are much smaller in size, use less airtime and power, include special features, and offer lower-quality content formats.

Chinese firm TikTok is no exception. In comparing TikTok Lite with the classic TikTok app, a joint study by AI Forensics and the Mozilla Foundation found several discrepancies between trust and safety features that could have potentially dangerous consequences in the context of elections and public health.

TikTok Lite users are deprived of essential safety measures such as content filtering, screen management tools, and warning labels for dangerous, graphic, or misleading content. These missing features leave users vulnerable to harmful material and addictive behaviors. While the flagship TikTok app includes robust safety protocols, these are conspicuously absent in the Lite version, despite being technically feasible to implement.

The findings are concerning, and reinforce patterns of double-standard, the study concludes.

As the study says:

The absence of a substantial number of safety features in TikTok Lite, an app with 1 billion users largely in Global Majority countries, is a cause for alarm. This trend mirrors a well-established pattern among companies in the context of global capitalism and exploitation, where substandard and unsafe products often find a dumping ground in economically disadvantaged regions. Consequently, TikTok Lite - Save Data users may be more susceptible to app addiction and exposed to potentially more graphic, dangerous, misleading, and otherwise harmful content.

In light of these findings, we recommend that Bytedance prioritize the development of a TikTok Lite application that places equal emphasis on user safety compared to its main TikTok app.
