antonim

joined 2 years ago
[–] antonim@lemmy.dbzer0.com 1 points 1 week ago* (last edited 1 week ago) (3 children)

By default, LibreWolf deletes the user's cookies and history when the browser is closed,

I'm not sure if these devs have the same priorities as me D:

[–] antonim@lemmy.dbzer0.com 13 points 1 week ago (6 children)

Yes. He mentioned it in his Reddit comments, but even if you don't go out looking for it, there are various indicators: language (exclusively English), the fact that lemmy.ml is registered from a US address, the fact that all of his discourse is heavily US-centric.

Frankly, only an American can be this obsessed with the CIA.

[–] antonim@lemmy.dbzer0.com 24 points 1 week ago (10 children)

An American is taking care of the Russian community and defending it from CIA propaganda, that's so anti-imperialist. I'm sure Russians are thankful 🙏

[–] antonim@lemmy.dbzer0.com 0 points 1 week ago (5 children)

Until the next re-bloating update where your settings get reverted

As a Windows user, I've had this problem with Firefox a number of times, and never with Windows.

[–] antonim@lemmy.dbzer0.com 16 points 1 week ago (6 children)

I didn't know I was already a computer pro for following a couple of idiot-proof steps I found by googling.

[–] antonim@lemmy.dbzer0.com 2 points 1 week ago

"some" = the one that's the basis of and technically closely tied to the one you're using right now

[–] antonim@lemmy.dbzer0.com 2 points 1 week ago

.li is maintained by another group, so it usually still works when .is doesn't; most of their database is the same. Unlike .is, however, they have some pretty aggressive advertising on there - popups and stuff.

[–] antonim@lemmy.dbzer0.com 39 points 1 week ago (3 children)

It's the opposite: AA is pretty reliable, whereas Libgen (the .is domain) was offline for over a month and only came back online a few days ago.

[–] antonim@lemmy.dbzer0.com 1 points 1 week ago

Yeah, usually they're just sourced from public-domain book collections such as Google Books (whose scans of older books can end up visually messy), and I'm pretty sure some of those offered on Amazon were straight-up based on pirated PDFs.

[–] antonim@lemmy.dbzer0.com 1 points 1 week ago

because you’re paying

Well no, it's the buyer who is paying. Which they might find off-putting, if the final price is too high, so you get fewer buyers and less profit.

As for the quality, there’s literally no reason that a book that is printed on demand has to be low quality or use low quality materials.

Except that in practice they simply are of lower quality. I've seen quite enough of such books. Maybe higher quality materials could be used, but that would raise the price for the end-user even more, and possibly slow down the production.

and the proof is the fact that Amazon is filled with AI generated garbage books

One has to wonder how much money they actually make, though. I saw some YT videos on the topic, and IIRC it's really difficult. Their mere presence doesn't prove their profitability, only many people's belief that they could be profitable.

It's easy to start a business, sure. But you didn't explain the rest of the process and don't seem to actually know a lot about the particulars of book publishing (neither do I, but whatever I do know doesn't agree with your imagined "solution").

[–] antonim@lemmy.dbzer0.com 3 points 1 week ago (4 children)

I guess, but print on demand is also more expensive per unit than printing in bulk, and of lower quality (paper and binding). I'm not too familiar with the details of book publishing, but I wouldn't expect people to be avoiding this route simply because they failed to notice its benefits.

 

GifCities was a special project of the Internet Archive, originally done as part of our 20th Anniversary in 2016 to highlight and celebrate fun aspects of the amazing history of the web as represented in the Wayback Machine. Since then, GifCities GIFs have been used in innumerable web projects, artistic works, and in the media and press, including an internet-melting combination of GifCities GIFs and the British Royal Wedding in a New York Times article and the avant-GIF “GifCollider” exhibit at the Berkeley Art Museum & Pacific Film Archive.

The new version of GifCities includes a number of improvements. We are especially excited by the drastic improvement in “GifSearchies,” achieved by implementing semantic search for GifCities instead of the hacky old “file name” text search of the original version.
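The contrast between the two approaches can be sketched in a few lines. The Internet Archive hasn't published their actual stack, so this is purely illustrative: the `gifs` index, the bag-of-words "embedding," and both search functions are made up here, with word-count cosine similarity standing in for a real neural embedding model.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy "embedding": bag-of-words term counts. A real semantic-search
    # deployment would use a neural sentence-embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical index: filename -> textual description of the GIF.
gifs = {
    "under_construction2.gif": "animated road sign under construction",
    "dancing_baby.gif": "dancing baby animation",
}

def filename_search(query, index):
    # The old approach: plain substring match on file names.
    return [name for name in index if query in name]

def semantic_search(query, index):
    # The new approach: rank by similarity to each GIF's description.
    q = embed(query)
    return max(index, key=lambda name: cosine(q, embed(index[name])))
```

A query like `"road sign"` shares no substring with either filename, so the old-style search returns nothing, while the description-based search still ranks the construction-sign GIF first - which is the kind of gap semantic search closes.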

 
 

(actually I haven't installed either because I'm lazy)

 

Just published "West meets East: Papers in historical lexicography and lexicology from across the globe" edited by Geoffrey Williams, Mathilde Le Meur & Andrés Echavarría Peláez

https://langsci-press.org/catalog/book/458

Lexicography, in its many forms, is a very old, practical discipline solving practical problems concerning word usage. The term “word” seems more appropriate than “language” in this context, as lexicography addresses more questions relating to what we now call lexicology. As with all areas of human endeavour, what developed gradually through trial and error has eventually been subjected to a theoretical framework. The role of historical lexicography is to look back on the development of these highly varied word lists to understand how we arrived at the tremendous variety that characterises practice throughout the world.

This volume is both a selection of expanded papers from one conference on historical lexicography and lexicology, held under the aegis of the International Society for Historical Lexicography and Lexicology (ISHLL) in Lorient, France, in May 2022, and also the first in a new book series dedicated to the field. The new series represents a collaboration between two sister associations, ISHLL and the Helsinki Society for Historical Lexicography (HSHL). The volume contains texts in both English and French that provide insights into dictionaries, their compilers and users using evidence from numerous languages across the globe. It is also diachronic, moving from topics on medieval usage to contemporary issues concerning open access and digital publishing in historical lexicography. The title reflects the global scope of its authors and content, encompassing Japan to the United States, Eastern Europe to the United Kingdom, and Portugal.

This book is the first one in our new series "World Histories of Lexicography and Lexicology" https://langsci-press.org/catalog/series/whll

Contents

Introduction (Geoffrey Williams)

On closure and its challenges: Examining the editors’ proofs of OED1 (Lynda Mugglestone)

Dictionaries in the web of Alexandria: On the dangerous fragility of digital publication (Daphne Preston-Kendal)

A dictionary of the languages of medieval England: Issues and implications (Gloria Mambelli)

The treatment of English high-frequency verbs in the Promptorium Parvulorum (1440) (Kusujiro Miyoshi)

Disattributing the Encyclopédie article on définition en logique from Jean-Henri-Samuel Formey (Alexander Bocast)

Project Cleveland: Documenting the lexicographic output of 20th-century Slovenian immigrants in the US (Alenka Vrbinc, Donna Farina, Marjeta Vrbinc)

The incorporation of proper nouns of Non-Slavic origin into the 16th-century Slovenian literary language (Alenka Jelovšek)

Dictionnaires manuscrits dans l’histoire de la lexicographie croate: Des recueils de mots aux trésors linguistiques et culturels (Ivana Franić)

Évaluer la dette: L’étendue de la présence de Richelet dans le Dictionnaire universel de Basnage (1701) (Clarissa Stincone)

De Félibien à Boutard: L’évolution du dictionnaire artistique entre le XVIIème et le début du XIXème siècle (Rosa Cetro)

La valeur pragmatique des langues dites « orientales » dans le Dictionnaire universel de Trévoux (1721) (Georgios Kassiteridis)

Musical terms of the Greek and Italian origin in the Ottoman Turkish lexicography (Agata Pawlina)

Exploring the unique method for encoding sinograms in the first known Chinese-Polish dictionary (Andrzej Swoboda)

Les travaux lexicographiques de Carlo da Castorano et ses tentatives pour faire imprimer un dictionnaire européen de chinois (Mariarosaria Gianninoto, Michela Bussotti)

The bilingual dictionary as a mediator between West and East: The beginnings of English-Polish lexicography (Mirosława Podhajecka)

Lexicon Lapponicum Bipartitum.....ungarice scriptum: Hungarian aspects of North Saami dictionary writing (Ivett Kelemen)

Les exemples dans les dictionnaires français–hongrois à travers les siècles (Gábor Tillinger)

Sul finir d’imparare la Grammatica Francese, fa d’uopo studiar il Dizionario delle Frasi: Deux recueils phraséologiques bilingues franco-italiens de la première moitié du 19e siècle (Michela Murano)

The discovery of a Russian-Tajik Dictionary (Abdusalom Mamadnazarov, Bahriddin Navruzshoev)

Lexicon of Oriental words in Ancient Greek (Rafał Rosół)

 
 
 

cross-posted from: https://lemmy.dbzer0.com/post/45888572

I don't know if this is an acceptable format for a submission here, but here it goes anyway:

The Wikimedia Foundation has been developing an LLM that would produce simplified Wikipedia article summaries, as described here: https://www.mediawiki.org/wiki/Reading/Web/Content_Discovery_Experiments/Simple_Article_Summaries

We would like to provide article summaries, which would simplify the content of the articles. This will make content more readable and accessible, and thus easier to discover and learn from. This part of the project focuses only on displaying the summaries. A future experiment will study ways of editing and adjusting this content.

Currently, much of the encyclopedic quality content is long-form and thus difficult to parse quickly. In addition, it is written at a reading level much higher than that of the average adult. Projects that simplify content, such as Simple English Wikipedia or Basque Txikipedia, are designed to address some of these issues. They do this by having editors manually create simpler versions of articles. However, these projects have so far had very limited success - they are only available in a few languages and have been difficult to scale. In addition, they ask editors to rewrite content that they have already written. This can feel very repetitive.

In our previous research (Content Simplification), we have identified two needs:

  • The need for readers to quickly get an overview of a given article or page
  • The need for this overview to be written in language the reader can understand

Etc., you should check the full text yourself. There's a brief video showing how it might look: https://www.youtube.com/watch?v=DC8JB7q7SZc

This hasn't been met with warm reactions: the comments on the respective talk page have questioned the purposefulness of the tool (shouldn't the introductory paragraphs do the same job already?), and some other complaints have been raised as well.

Taking a quote from the page for the usability study:

"Most readers in the US can comfortably read at a grade 5 level,[CN] yet most Wikipedia articles are written in language that requires a grade 9 or higher reading level."

Also stated on the same page, the study only had 8 participants, most of which did not speak English as their first language. AI skepticism was low among them, with one even mentioning they 'use AI for everything'. I sincerely doubt this is a representative sample and the fact this project is still going while being based on such shoddy data is shocking to me. Especially considering that the current Qualtrics survey seems to be more about how to best implement such a feature as opposed to the question of whether or not it should be implemented in the first place. I don't think AI-generated content has a place on Wikipedia. The Morrison Man (talk) 23:19, 3 June 2025 (UTC)
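The grade-level figures in the quote above come from standard readability formulas, the best known of which is the Flesch-Kincaid grade level. As a rough illustration only (this is not necessarily the WMF's methodology, and the syllable counter below is deliberately naive - real tools use pronunciation dictionaries):

```python
import re

def syllables(word):
    # Naive syllable estimate: count groups of consecutive vowels.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text):
    # Flesch-Kincaid grade level:
    #   0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syl / len(words)
            - 15.59)
```

Short sentences of short words score around early primary-school grades, while long, polysyllabic academic sentences easily score well past grade 9 - which is the gap the WMF quote is describing.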

The survey the user mentions is this one: https://wikimedia.qualtrics.com/jfe/form/SV_1XiNLmcNJxPeMqq and true enough, it pretty much takes for granted that the summaries will be added; there's no judgment of their actual quality, and they're only asking for people's feedback on how they should be presented. I filled it out and couldn't even find the space to say that e.g. the summary they show is written almost insultingly, like it's meant for particularly dumb children, and I couldn't even tell whether it is accurate because they just scroll around in the video.

Very extensive discussion is going on at the Village Pump (en.wiki).

The comments are also overwhelmingly negative, some of them pointing out that the summary doesn't summarise the article properly ("Perhaps the AI is hallucinating, or perhaps it's drawing from other sources like any widespread llm. What it definitely doesn't seem to be doing is taking existing article text and simplifying it." - user CMD). A few comments acknowledge potential benefits of the summaries, though with a significantly different approach to using them:

I'm glad that WMF is thinking about a solution of a key problem on Wikipedia: most of our technical articles are way too difficult. My experience with AI summaries on Wikiwand is that it is useful, but too often produces misinformation not present in the article it "summarises". Any information shown to readers should be greenlit by editors in advance, for each individual article. Maybe we can use it as inspiration for writing articles appropriate for our broad audience. —Femke 🐦 (talk) 16:30, 3 June 2025 (UTC)

One of the reasons many prefer chatGPT to Wikipedia is that too large a share of our technical articles are way way too difficult for the intended audience. And we need those readers, so they can become future editors. Ideally, we would fix this ourselves, but my impression is that we usually make articles more difficult, not easier, when they go through GAN and FAC. As a second-best solution, we might try this as long as we have good safeguards in place. —Femke 🐦 (talk) 18:32, 3 June 2025 (UTC)

Finally, some comments are problematising the whole situation with WMF working behind the actual wikis' backs:

This is a prime reason I tried to formulate my statement on WP:VPWMF#Statement proposed by berchanhimez requesting that we be informed "early and often" of new developments. We shouldn't be finding out about this a week or two before a test, and we should have the opportunity to inform the WMF if we would approve such a test before they put their effort into making one happen. I think this is a clear example of needing to make a statement like that to the WMF that we do not approve of things being developed in virtual secret (having to go to Meta or MediaWikiWiki to find out about them) and we want to be informed sooner rather than later. I invite anyone who shares concerns over the timeline of this to review my (and others') statements there and contribute to them if they feel so inclined. I know the wording of mine is quite long and probably less than ideal - I have no problem if others make edits to the wording or flow of it to improve it.

Oh, and to be blunt, I do not support testing this publicly without significantly more editor input from the local wikis involved - whether that's an opt-in logged-in test for people who want it, or what. Regards, -bɜ:ʳkənhɪmez | me | talk to me! 22:55, 3 June 2025 (UTC)

Again, I recommend reading the whole discussion yourself.

 


EDIT: WMF has announced they're putting this on hold after the negative reaction from the editors' community. ("we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together")

 

From the article Naš napredak u prirodnih znanostih za minulih 50 godinah ("Our progress in the natural sciences over the past 50 years") by Bogoslav Šulek, from 1885.

 