1

Tingling and palpitations spreading throughout the body can be caused by several conditions. Here are some possibilities based on the search results:

  • Episodic Neurologic Symptoms: Patients with this condition usually complain of severe apprehension, dizziness, chest discomfort, palpitations, or fear of "losing control." The onset is usually under age 40[1].

  • Heart Attack or Panic Attack: Both conditions can cause chest pain, shortness of breath, dizziness, sweating, passing out, tingling, or a sensation of impending doom. These shared symptoms of heart attack and panic attack may also be caused by other serious conditions such as blood clots, lung infection or collapse, or a tear in the large vessels of the chest for patients with certain pre-existing risk factors[2].

  • Atrial Fibrillation: The most obvious symptom of this condition is heart palpitations, where the heart feels like it's pounding, fluttering, or beating irregularly, often for a few seconds or possibly a few minutes. Other symptoms include tiredness, feeling lethargic, chest pain, dizziness, and shortness of breath[3][6].

  • Peripheral Neuropathy: Symptoms of this condition include numbness and tingling in the feet or hands, burning, stabbing or shooting pain in affected areas, loss of balance and coordination, and muscle weakness, especially in the feet. These symptoms are usually constant but may come and go[4].

It's important to see a doctor if you experience any of these symptoms to determine the underlying cause and receive appropriate treatment.

Citations: [1] https://www.ncbi.nlm.nih.gov/books/NBK374/ [2] https://www.cedars-sinai.org/blog/is-it-a-heart-attack-or-a-panic-attack.html [3] https://www.nhs.uk/conditions/atrial-fibrillation/symptoms/ [4] https://www.nhsinform.scot/illnesses-and-conditions/brain-nerves-and-spinal-cord/peripheral-neuropathy [5] https://www.webmd.com/heart-disease/heart-palpitations-emergency [6] https://www.hopkinsmedicine.org/health/conditions-and-diseases/atrial-fibrillation/afib-symptoms

13
10

To download a website for offline browsing using wget, you can use the following command:

wget --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=windows --domains example.com --no-parent http://www.example.com

Replace example.com with the website you want to download. This command will download the entire website, including all the elements that compose the page (images, CSS, etc.), and convert the links so that they work locally and offline[1].

Here's a brief explanation of the options used in the command:

  • --recursive: Download the entire website recursively.
  • --no-clobber: Don't overwrite existing files.
  • --page-requisites: Download all the elements required to display the page properly (images, CSS, etc.).
  • --html-extension: Save files with the .html extension (in recent wget versions this option is spelled --adjust-extension).
  • --convert-links: Convert links so that they work locally and offline.
  • --restrict-file-names=windows: Modify filenames so that they work in Windows as well.
  • --domains example.com: Don't follow links outside the specified domain.
  • --no-parent: Don't follow links outside the specified directory.

If the website uses external resources like jQuery, you can add the --span-hosts option to allow downloading from other hosts. Note that you must then also extend --domains to include those hosts, since --domains example.com on its own would still block them:

wget --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=windows --domains example.com,cdn.example.com --span-hosts --no-parent http://www.example.com

Here cdn.example.com is a placeholder for whichever host actually serves the external resources.

However, this may also download additional content from other domains that you might not need. To avoid downloading unnecessary content, you can manually download the required resources (like jQuery) and update the HTML files to use the local copies instead.

Keep in mind that some websites may have measures in place to prevent downloading their content using tools like wget. In such cases, you may need to adjust the command options (for example, setting a browser-like --user-agent, or adding --wait between requests) or use alternative methods to download the website for offline browsing[6].

Citations: [1] https://www.linuxjournal.com/content/downloading-entire-web-site-wget [2] https://winaero.com/make-offline-copy-of-a-site-with-wget-on-windows-and-linux/amp/ [3] https://stackoverflow.com/questions/10842263/wget-download-for-offline-viewing-including-absolute-references [4] https://askubuntu.com/questions/391622/download-a-whole-website-with-wget-or-other-including-all-its-downloadable-con [5] https://superuser.com/questions/970323/using-wget-to-copy-website-with-proper-layout-for-offline-browsing [6] https://www.computerhope.com/unix/wget.htm [7] https://superuser.com/questions/1672776/download-whole-website-wget [8] https://gist.github.com/stvhwrd/985dedbe1d3329e68d70 [9] https://simpleit.rocks/linux/how-to-download-a-website-with-wget-the-right-way/ [10] https://www.guyrutenberg.com/2014/05/02/make-offline-mirror-of-a-site-using-wget/ [11] https://linuxreviews.org/Wget:_download_whole_or_parts_of_websites_with_ease [12] https://brain-dump.space/articles/how-to-get-full-offline-website-copy-using-wget-on-mac-os/ [13] https://dev.to/jjokah/how-to-download-an-entire-website-for-offline-usage-using-wget-2lli [14] https://alvinalexander.com/linux-unix/how-to-make-offline-mirror-copy-website-with-wget [15] https://askubuntu.com/questions/979655/using-wget-and-having-websites-working-properly-offline

140
Mind blown 🤯🤯 (libreddit.pussthecat.org)

Bing (multimodal) image input is free!

14

I couldn't find this using the search.

3

To limit your Python requests to 1.39 requests per second, you can use the time.sleep() function from the time module to introduce a delay between requests. You can also use third-party libraries like pyrate-limiter, requests-ratelimiter, or ratelimiter to achieve rate limiting.

Here's an example using the time module:

import requests
import time

urls = [...]  # List of URLs to send requests to

for url in urls:
    response = requests.get(url)
    print(response.status_code)
    time.sleep(1 / 1.39)  # ~0.72 s pause; this ignores the time the request itself takes, so the real rate lands slightly below 1.39 requests/second

Alternatively, you can use the pyrate-limiter library:

from pyrate_limiter import Duration, RequestRate, Limiter
import requests

rate_limit = RequestRate(1, Duration.SECOND / 1.39)  # 1 request per ~0.72 s, i.e. 1.39 requests per second
limiter = Limiter(rate_limit)

urls = [...]  # List of URLs to send requests to

for url in urls:
    with limiter.ratelimit("requests", delay=True):  # block until the next request is allowed
        response = requests.get(url)
        print(response.status_code)

Another option is to use the requests-ratelimiter library:

from requests import Session
from requests_ratelimiter import LimiterAdapter

session = Session()

# Apply a rate limit of 1.39 requests per second to all requests
adapter = LimiterAdapter(per_second=1.39)
session.mount('http://', adapter)
session.mount('https://', adapter)

urls = [...]  # List of URLs to send requests to

for url in urls:
    response = session.get(url)
    print(response.status_code)

These examples demonstrate different ways to limit your Python requests to 1.39 requests per second. Choose the method that best suits your needs and project requirements.
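As a side note, a fixed sleep after each request slightly undershoots the target, because the request's own duration isn't subtracted from the delay. A minimal hand-rolled sketch that compensates for this (the `throttled` helper and its names are illustrative, not from any library):

```python
import time

RATE = 1.39          # target requests per second
INTERVAL = 1 / RATE  # minimum spacing between request starts, ~0.72 s

def throttled(fn, items):
    """Call fn once per item, spacing the *starts* of calls INTERVAL apart.

    Unlike a fixed sleep after each call, this subtracts the time the call
    itself took, so the overall rate stays close to RATE.
    """
    results = []
    next_start = time.monotonic()
    for item in items:
        delay = next_start - time.monotonic()
        if delay > 0:
            time.sleep(delay)  # wait only for the remainder of the interval
        results.append(fn(item))
        next_start += INTERVAL
    return results

# Example with a dummy function standing in for requests.get:
print(throttled(lambda x: x * 2, [1, 2, 3]))  # [2, 4, 6]
```

In real use you would pass something like `lambda url: requests.get(url)` and your list of URLs.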

Citations: [1] https://stackoverflow.com/questions/26098711/limiting-number-of-http-requests-per-second-on-python [2] https://pypi.org/project/requests-ratelimiter/ [3] https://github.com/vutran1710/PyrateLimiter [4] https://akshayranganath.github.io/Rate-Limiting-With-Python/ [5] https://stackoverflow.com/questions/40748687/python-api-rate-limiting-how-to-limit-api-calls-globally [6] https://pypi.org/project/limiter/ [7] https://github.com/JWCook/requests-ratelimiter [8] https://levelup.gitconnected.com/implement-rate-limiting-in-python-d4f86b09259f [9] https://limits.readthedocs.io [10] https://github.com/encode/httpx/issues/815 [11] https://365datascience.com/tutorials/python-tutorials/limit-rate-requests-web-scraping/ [12] https://www.seelk.co/blog/efficient-client-side-handling-of-api-throttling-in-python-with-tenacity [13] https://www.cisco.com/c/en/us/support/docs/security/firepower-ngfw/217900-troubleshoot-firepower-threat-defense-an.html [14] https://scrapfly.io/blog/how-to-rate-limit-asynchronous-python-requests/ [15] https://dev.to/paymon123/the-easiest-way-to-rate-limit-a-python-api-3njc [16] https://cloud.google.com/python/docs/reference/storage/1.39.0/retry_timeout [17] https://medium.com/clover-platform-blog/conquering-api-rate-limiting-dcac5552714d [18] https://subscription.packtpub.com/book/web_development/9781838983994/9/ch09lvl1sec72/api-rate-limiting [19] https://towardsdatascience.com/speeding-up-python-code-fast-filtering-and-slow-loops-8e11a09a9c2f [20] https://katiekodes.com/python-wrap-requests-functions/ [21] https://www.reddit.com/r/Python/comments/12xahnb/i_built_a_simple_and_efficient_rate_limiter_for/ [22] https://docs.alcf.anl.gov/theta/performance-tools/craypat/ [23] https://coderpad.io/blog/development/a-guide-to-api-rate-limiting-in-django/

1

To avoid defining the sqlite3.Connection and sqlite3.Cursor in many functions throughout your code, you can use the following approaches:

  1. Use a context manager: Create a context manager that handles the connection and cursor creation, as well as closing the connection when done. This way, you can use the with statement to manage the connection and cursor in your functions.
import sqlite3

DB_FILE = "your_database_file.db"

class DatabaseConnection:
    def __enter__(self):
        self.conn = sqlite3.connect(DB_FILE)
        self.cursor = self.conn.cursor()
        return self.cursor

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is None:
            self.conn.commit()  # commit only if the block raised no exception
        else:
            self.conn.rollback()  # undo partial changes on error
        self.cursor.close()
        self.conn.close()

def insert_post_to_db(issue: Issue, lemmy_post_id: int) -> None:
    with DatabaseConnection() as cursor:
        cursor.execute(
            "INSERT INTO posts (issue_url, lemmy_post_id, issue_title, issue_body) VALUES (?, ?, ?, ?)",
            (issue.url, lemmy_post_id, issue.title, issue.formatted_body),
        )
  2. Use a function decorator: You can create a decorator that handles the connection and cursor creation and passes the cursor to the decorated function.
import sqlite3
from functools import wraps

DB_FILE = "your_database_file.db"

def with_cursor(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        conn = sqlite3.connect(DB_FILE)
        cursor = conn.cursor()
        try:
            result = func(cursor, *args, **kwargs)
            conn.commit()  # commit only if the function succeeded
            return result
        except Exception:
            conn.rollback()  # undo partial changes on error
            raise
        finally:
            cursor.close()
            conn.close()

    return wrapper

@with_cursor
def insert_post_to_db(cursor: sqlite3.Cursor, issue: Issue, lemmy_post_id: int) -> None:
    cursor.execute(
        "INSERT INTO posts (issue_url, lemmy_post_id, issue_title, issue_body) VALUES (?, ?, ?, ?)",
        (issue.url, lemmy_post_id, issue.title, issue.formatted_body),
    )

Both of these approaches help you avoid defining the connection and cursor in every function that interacts with the database. The context manager approach is more explicit and easier to read, while the function decorator approach is more concise. Choose the one that best fits your coding style and project requirements.
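For completeness, sqlite3.Connection objects are themselves context managers: used with `with`, they commit on success and roll back if an exception escapes the block (they do not close the connection). A small self-contained sketch, using an in-memory database for illustration:

```python
import sqlite3

# sqlite3.Connection doubles as a context manager: the transaction is
# committed if the block succeeds and rolled back if it raises. It does
# NOT close the connection; that is still done explicitly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (issue_url TEXT, lemmy_post_id INTEGER)")

with conn:
    conn.execute(
        "INSERT INTO posts (issue_url, lemmy_post_id) VALUES (?, ?)",
        ("https://example.com/issue/1", 42),
    )

rows = conn.execute("SELECT lemmy_post_id FROM posts").fetchall()
print(rows)  # [(42,)]
conn.close()
```

This built-in behavior gives you the same commit/rollback guarantees as the custom context manager above with less code, at the cost of still managing connection lifetime yourself.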

Citations: [1] https://stackoverflow.com/questions/14511337/efficiency-of-reopening-sqlite-database-after-each-query [2] https://stackoverflow.com/questions/50075325/python-sqlite3-nested-cursor-execute [3] https://blog.udemy.com/python-sqlite/ [4] https://stackoverflow.com/questions/54395773/what-are-the-side-effects-of-reusing-a-sqlite3-cursor [5] https://pynative.com/python-sqlite/ [6] https://arctype.com/blog/guide-sqlite-python/ [7] https://sqlite.org/forum/info/4393a42b3b5e2382 [8] https://docs.python.org/3/library/sqlite3.html [9] https://www.reddit.com/r/learnpython/comments/94i4k9/using_a_global_sqlite_cursor_across_multiple/ [10] https://stackoverflow.com/questions/9561832/what-if-i-dont-close-the-database-connection-in-python-sqlite [11] https://climbtheladder.com/10-python-sqlite-best-practices/ [12] https://pypi.org/project/cuttlepool/ [13] https://www.sitepoint.com/sqlite-python/ [14] https://pyneng.readthedocs.io/en/latest/book/25_db/sqlite3.html [15] https://www.geeksforgeeks.org/python-sqlite-connecting-to-database/ [16] https://towardsdatascience.com/python-sqlite-tutorial-the-ultimate-guide-fdcb8d7a4f30 [17] https://codereview.stackexchange.com/questions/285730/simple-connection-pool-for-sqlite-in-python [18] https://developer.android.com/training/data-storage/sqlite [19] https://www.blog.pythonlibrary.org/2021/09/30/sqlite/ [20] https://www.digitalocean.com/community/tutorials/how-to-use-the-sqlite3-module-in-python-3 [21] https://developer.android.com/topic/performance/sqlite-performance-best-practices [22] https://www.reddit.com/r/learnpython/comments/8tkbor/how_does_sqlalchemy_connection_pooling_work_with/ [23] https://pymotw.com/2/sqlite3/ [24] https://vegibit.com/interact-with-databases-using-the-python-sqlite3-module/ [25] https://blog.rtwilson.com/a-python-sqlite3-context-manager-gotcha/ [26] https://remusao.github.io/posts/few-tips-sqlite-perf.html [27] https://www.digitalocean.com/community/tutorials/how-to-use-an-sqlite-database-in-a-flask-application [28] https://www.tutorialspoint.com/sqlite/sqlite_python.htm [29] https://www.sqlite.org/whentouse.html [30] https://rogerbinns.github.io/apsw/execution.html [31] https://stackoverflow.com/questions/42635749/sqlite-database-connection-best-practice [32] https://realpython.com/python-mysql/

7
submitted 1 year ago* (last edited 1 year ago) by InternetPirate@lemmy.fmhy.ml to c/meta@programming.dev

I wanted to start a discussion about the use of AI-generated solutions on Programming.dev. Personally, I've found that AI-powered tools have been incredibly helpful in solving programming questions. I won't name any specific commercial software, but I use one that combines GPT-4 and web search to get more factual information. I write some answers I think I might revisit to the ShareGPT community, but I would prefer posting programming solutions to this instance. However, I'm not sure if AI-generated solutions are welcomed on programming.dev. I'd love to hear your thoughts on this. If AI-generated responses are accepted, how should we format the answers? Should we just copy-paste without quoting, quote the model, or just mention that it's AI-generated?

9

I'm wondering if it's possible to see the local feed of another instance from the one I'm using. I'm interested in exploring content from other instances without having to visit every single community, but I'm not sure how to do it. I've tried searching for a way to do this on the documentation and using the Lemmy search, but I haven't found any clear instructions. Does anyone know how to see the local feed of another instance? Any help or guidance would be greatly appreciated!

7
submitted 1 year ago* (last edited 1 year ago) by InternetPirate@lemmy.fmhy.ml to c/lemmy_support@lemmy.ml

In Lemmy, the active filter view is designed to prioritize posts with the latest activity, similar to how forums work. However, it remains unclear whether commenting on your own post in Lemmy will bump it on the active filter view. Some forum platforms, such as Discourse, allow a practice known as the "ghost bump," where users can make a post and delete it to draw attention to their post without adding new content[^1]. While it is uncertain if this is possible on Lemmy, it's worth noting that even if it were, it would result in an unnecessary comment that cannot be completely removed. The comment would still be visible, indicating that it was deleted by the post's creator. If you have any experience with Lemmy's active filter view or know whether commenting on your own post bumps it, please share your thoughts in the comments below.

[^1]: What is "Bumping Topics"

8
submitted 1 year ago* (last edited 1 year ago) by InternetPirate@lemmy.fmhy.ml to c/lemmy@lemmy.ml

As an enthusiastic supporter of Lemmy, I am eager to contribute to the project. However, I hold strong reservations about writing a single line of code for a project hosted on a Micro$oft server. While I have created a few issues on GitHub, I firmly believe that my contributions could be significantly amplified if there were a mirror of Lemmy that utilized Forgejo hosting outside the United States. I would be absolutely delighted to have the opportunity to contribute more actively to this incredible project if such an alternative hosting option were available.

34
submitted 1 year ago* (last edited 1 year ago) by InternetPirate@lemmy.fmhy.ml to c/singularity@lemmy.fmhy.ml

GPT-4's details are leaked.

It is over.

Everything is here: https://archive.is/2RQ8X

Parameters count:

GPT-4 is more than 10x the size of GPT-3. We believe it has a total of ~1.8 trillion parameters across 120 layers.

Mixture Of Experts - Confirmed.

OpenAI was able to keep costs reasonable by utilizing a mixture-of-experts (MoE) model. They utilize 16 experts within the model, each about ~111B parameters for the MLP. Two of these experts are routed to per forward pass.

MoE Routing:

While the literature talks a lot about advanced routing algorithms for choosing which experts to route each token to, OpenAI’s is allegedly quite simple, for the current GPT-4 model.

There are roughly ~55B shared parameters for attention.
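The "quite simple" routing described above is presumably a learned top-k gate. As a toy illustration only (not OpenAI's actual router), top-2 routing over 16 experts might look like:

```python
import math

def top2_route(logits):
    """Pick the 2 highest-scoring experts for a token and renormalize
    their gate weights with a softmax over just those two scores."""
    top2 = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:2]
    exps = [math.exp(logits[i]) for i in top2]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top2, exps)]

# 16 experts; this token's router logits favor experts 3 and 7.
logits = [0.0] * 16
logits[3], logits[7] = 2.0, 1.0
print(top2_route(logits))  # expert 3 gets the larger gate weight
```

Each token's hidden state would then be sent only to the two selected experts, and their outputs combined with these gate weights.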

Inference:

Each forward pass inference (generation of 1 token) only utilizes ~280B parameters and ~560 TFLOPs. This contrasts with the ~1.8 trillion parameters and ~3,700 TFLOPs that would be required per forward pass of a purely dense model.

Dataset:

GPT-4 is trained on ~13T tokens.

These are not unique tokens; repeated epochs are counted as additional tokens.

Epoch number: 2 epochs for text-based data and 4 for code-based data.

There are millions of rows of instruction fine-tuning data from ScaleAI & internally.

GPT-4 32K

There was an 8k context length (seqlen) for the pre-training phase. The 32k seqlen version of GPT-4 is based on fine-tuning of the 8k after the pre-training.

Batch Size:

The batch size was gradually ramped up over a number of days on the cluster, but by the end, OpenAI was using a batch size of 60 million! This, of course, is “only” a batch size of 7.5 million tokens per expert due to not every expert seeing all tokens.

For the real batch size:

Divide this number by the seqlen to get the real batch size (60 million tokens / 8k seqlen ≈ 7,300 sequences). Just stop with these misleading numbers already.

Parallelism Strategies

To parallelize across all their A100 GPUs, they utilized 8-way tensor parallelism, as that is the limit for NVLink.

Beyond that, they are using 15-way pipeline parallelism.

(They likely used ZeRO Stage 1. It is possible they used block-level FSDP.)

Training Cost

OpenAI’s training FLOPS for GPT-4 is ~2.15e25, on ~25,000 A100s for 90 to 100 days at about 32% to 36% MFU.

Part of this extremely low utilization is due to an absurd number of failures requiring restarts from checkpoints.

If their cost in the cloud was about $1 per A100 hour, the training costs for this run alone would be about $63 million.

(Today, the pre-training could be done with ~8,192 H100 in ~55 days for $21.5 million at $2 per H100 hour.)

Mixture of Expert Tradeoffs

There are multiple MoE tradeoffs taken: For example, MoE is incredibly difficult to deal with on inference because not every part of the model is utilized on every token generation.

This means parts may sit dormant when other parts are being used. When serving users, this really hurts utilization rates.

Researchers have shown that using 64 to 128 experts achieves better loss than 16 experts, but that’s purely research.

There are multiple reasons to go with fewer experts. One reason for OpenAI choosing 16 experts is because more experts are difficult to generalize at many tasks. More experts can also be more difficult to achieve convergence with.

With such a large training run, OpenAI instead chose to be more conservative on the number of experts.

GPT-4 Inference Cost

GPT-4 costs 3x that of the 175B-parameter Davinci.

This is largely due to the larger clusters required for GPT-4 and much lower utilization achieved.

An estimate of its cost is $0.0049 per 1k tokens for 128 A100s to inference GPT-4 8k seqlen, and $0.0021 per 1k tokens for 128 H100s to inference GPT-4 8k seqlen. It should be noted that this assumes decently high utilization and keeping batch sizes high.

Multi-Query Attention

OpenAI are using MQA just like everybody else.

Because of that, only 1 KV head is needed and memory capacity for the KV cache can be significantly reduced. Even then, the 32k-seqlen GPT-4 definitely cannot run on 40GB A100s, and the 8k version is capped on maximum batch size.

Continuous batching

OpenAI implements both variable batch sizes and continuous batching. This allows some bound on maximum latency while optimizing inference costs.

Vision Multi-Modal

It is a separate vision encoder from the text encoder, with cross-attention. The architecture is similar to Flamingo. This adds more parameters on top of the 1.8T of GPT-4. It is fine-tuned with another ~2 trillion tokens, after the text only pre-training.

On the vision model, OpenAI wanted to train it from scratch, but it wasn’t mature enough, so they wanted to derisk it by starting with text.

One of the primary purposes of this vision capability is for autonomous agents able to read web pages and transcribe what’s in images and video.

Some of the data they train on is joint data (rendered LaTeX/text), screenshots of web pages, and YouTube videos (sampling frames, and running Whisper on the audio to get transcripts).

[Don't want to say "I told you so", but..]

Speculative Decoding

OpenAI might be using speculative decoding on GPT-4's inference. (not sure 100%)

The idea is to use a smaller, faster model to decode several tokens in advance, and then feed them into a large oracle model as a single batch.

If the small model was right about its predictions – the larger model agrees and we can decode several tokens in a single batch.

But if the larger model rejects the tokens predicted by the draft model then the rest of the batch is discarded. And we continue with the larger model.

The conspiracy theory that the new GPT-4's quality has deteriorated might simply be because they are letting the oracle model accept lower-probability sequences from the speculative decoding model.
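The accept/reject loop described above can be sketched in a few lines. Both "models" here are stand-in callables, purely to show the control flow, not a real implementation:

```python
def speculative_decode(draft_next, oracle_next, prompt, k=4, max_len=12):
    """Toy sketch of the accept/reject loop in speculative decoding.

    draft_next / oracle_next map a token sequence to the next token.
    The cheap draft model proposes k tokens; the oracle checks them and
    keeps the longest agreeing prefix, then substitutes its own token at
    the first disagreement and discards the rest of the batch.
    """
    seq = list(prompt)
    while len(seq) < max_len:
        # Draft model speculates k tokens ahead (cheap, sequential).
        draft = []
        for _ in range(k):
            draft.append(draft_next(seq + draft))
        # Oracle verifies; in a real system this check over all k
        # positions is a single batched forward pass (the speedup).
        for tok in draft:
            if len(seq) >= max_len:
                break
            expected = oracle_next(seq)
            if tok == expected:
                seq.append(tok)        # accepted: draft matched the oracle
            else:
                seq.append(expected)   # rejected: keep the oracle's token
                break                  # discard the rest of the draft batch
    return seq

# Toy models: the oracle counts up; the draft also counts up but is
# wrong whenever the next token would be a multiple of 5.
oracle = lambda s: s[-1] + 1
draft = lambda s: s[-1] + (2 if (s[-1] + 1) % 5 == 0 else 1)
print(speculative_decode(draft, oracle, [0], k=4, max_len=8))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

When the draft agrees often, several tokens land per oracle pass; when it disagrees, the cost degrades to one oracle token per pass, which is why draft quality matters.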

Inference Architecture

The inference runs on a cluster of 128 GPUs.

There are multiple of these clusters in multiple datacenters in different locations.

It is done in 8-way tensor parallelism and 16-way pipeline parallelism.

Each node of 8 GPUs holds only ~130B parameters.

The model has 120 layers, so it fits across 15 different nodes. [Possibly there are fewer layers on the first node, since it also needs to compute the embeddings.]

According to these numbers, OpenAI should have trained on 2x the tokens if they were trying to go by Chinchilla-optimal scaling.

[let alone surpass it like we do]

This goes to show that they are struggling to get high-quality data.

Why no FSDP?

A possible reason for this could be that some of the hardware infra they secured is of an older generation.

This is pretty common at local compute clusters, as organisations usually upgrade the infra in several "waves" to avoid a complete pause of operation.

Dataset Mixture

They trained on 13T tokens.

CommonCrawl & RefinedWeb are both 5T.

Remove the duplication of tokens from multiple epochs and we get to a much more reasonable number of "unaccounted for" tokens: the "secret" data.

Which by this point we already get rumors that parts of it came from twitter, reddit & youtube.

[Rumors that start to become lawsuits]

Some speculations are:

  • LibGen (4M+ books)
  • Sci-Hub (80M+ papers)
  • All of GitHub

My own opinion:

The missing dataset is a custom dataset of college textbooks, collected by hand for as many courses as possible.

This is very easy to convert to a text file and then, with self-instruct, into instruction form.

This creates the "illusion" that GPT-4 "is smart" no matter who uses it.

Computer scientist? Sure! It can help you with your questions about P != NP.

Philosophy major? It can totally talk to you about epistemology.

Don't you see?

It was trained on the textbooks. It is so obvious.

There are also papers that try to forcibly extract memorized parts of books from GPT-4 to understand what it was trained on.

There are some books it knows so well that it had seen them for sure.

Moreover, if I remember correctly, it even knows the unique IDs of Project Euler exercises.

[-] InternetPirate@lemmy.fmhy.ml 26 points 1 year ago* (last edited 1 year ago)

I feel like this is what happened when you’d see posts with hundreds / thousands of upvotes but had only 20-ish comments.

Nah it's the same here in Lemmy. It's because the algorithm only accounts for votes and not for user engagement.

[-] InternetPirate@lemmy.fmhy.ml 53 points 1 year ago

America becoming a third world country.

[-] InternetPirate@lemmy.fmhy.ml 34 points 1 year ago

Jellyfin for a free and open-source media server and suite of multimedia applications.

[-] InternetPirate@lemmy.fmhy.ml 51 points 1 year ago

yt-dlp for downloading videos from various websites.

[-] InternetPirate@lemmy.fmhy.ml 39 points 1 year ago

Calibre for organizing, converting, and syncing eBooks.

[-] InternetPirate@lemmy.fmhy.ml 25 points 1 year ago

Sonarr for a PVR for Usenet and BitTorrent users that can monitor multiple RSS feeds for new episodes of your favorite shows.

[-] InternetPirate@lemmy.fmhy.ml 46 points 1 year ago

qBittorrent for a lightweight and open-source BitTorrent client.

[-] InternetPirate@lemmy.fmhy.ml 44 points 1 year ago

TLDR: Subreddits are protesting against Reddit's API changes, and r/pics, a subreddit with over 30 million members, has marked itself as NSFW (not safe for work). This means that advertisements can no longer be displayed alongside posts in the subreddit. The protest started in June when thousands of subreddits participated in a blackout to protest Reddit's plans to charge for API access. The changes have resulted in third-party apps like Apollo shutting down. As part of the protest, r/pics initially only allowed images of comedian John Oliver to be shared and later amended its rules to allow media featuring Oliver, including erotic fan fiction. The subreddit's moderators posted an "open letter" reminding the community not to swear, as marking the community as NSFW would deprive Reddit of advertising revenue. Reddit has reportedly removed mods for marking their communities NSFW as a protest. Despite this, r/pics was officially marked NSFW on Monday. Other subreddits, such as r/videos and r/funny, are also protesting in their own ways.

[-] InternetPirate@lemmy.fmhy.ml 72 points 1 year ago

The unpopular ones can be used since they don't reach the free limit of API calls.

[-] InternetPirate@lemmy.fmhy.ml 247 points 1 year ago

RemindMe bot is no longer functional following the API pricing change, and many Redditors are still unaware of this fact.

[-] InternetPirate@lemmy.fmhy.ml 22 points 1 year ago

You don't want to delete your account until you are absolutely certain that your content will remain permanently deleted, as it appears that they are restoring deleted content.

[-] InternetPirate@lemmy.fmhy.ml 59 points 1 year ago* (last edited 1 year ago)

InternetPirate

joined 1 year ago