N0x0n

joined 2 years ago
[–] N0x0n@lemmy.ml 9 points 1 day ago (6 children)

Sorry, I haven't read through your whole post (it seems to be more of a hardware/software review) and I wanted to emphasize something else:

Fair to the workers, fair materials,

From what I read/heard a few years ago, this is not quite true and actually impossible. They use something like 2/3 fair materials and base their whole marketing on that. While this is still better than others... I would never refer to a phone as fair to the workers and made of fair materials.

Sorry for being that guy :/ but that's a fairytale and even misleading marketing (green-washing). But probably still better than its counterparts, I guess? Being repairable should be their motto :)

[–] N0x0n@lemmy.ml 1 points 1 day ago* (last edited 1 day ago) (1 children)

Everything related to art (photography, movies, music, graffiti or whatnot...) has to be non-AI for me to enjoy it. I put it on the same hate level as ads (radio, TV, internet...).

However, I found out how helpful it can be for putting together some scripts in Python or Bash. Not because it's good at those things, but because I don't have time to learn proper Python or Bash :/. Though I do agree it only works well for small things.

Maybe those who love AI-generated art feel the same way I do about AI-generated code? 🤷‍♂️

[–] N0x0n@lemmy.ml 2 points 3 days ago

Ugh... Movin facial recognition, what a joke. I put them on the same level of stupidity as those who would put Tesla's AI chip in their brain.

Sad days for privacy and anonymity enthusiasts 😮‍💨😮‍💨

 

Hello everyone :)

Firstly, I'm not in any way related to programming at ALL ! I can put together some easy-to-use Bash scripts to automate some stuff by copy/pasting from the web and doing A LOT of trial and error. It has sometimes taken me a whole week to get a functional script. I've also sometimes asked for help here on Lemmy, and I still use some of the scripts people helped me build from the ground up !

Secondly, I'm not really into the AI slop and have a lot of arguments for why I hate it (unauthorized web scraping, high energy consumption, privacy nightmare...).

However, I have to say I'm quite impressed by how good my first experience with AI was, considering my very limited knowledge of programming. The script works perfectly for my use case. I had to switch between Claude and o4-mini to get the best results, and it took me a whole day of prompting around and testing before it behaved like I wanted it to !

Without going too much into detail, I was looking for a way to interface with qBittorrent's API to manage my torrents and move them into new categories in an automated way. What this Python script does is export the .torrent files (not the downloaded data) into a specific directory, then stop each torrent and move it into a new category if desired, based on specific criteria (ratio, category, tags, seeding time...). If correctly configured, directories and sub-directories are also created on the fly.


My own opinion after this experience is that it probably won't write fully functional software (not yet?), but for something like scripting or learning basic programming skills it's a very capable assistant!

  1. What do you think of the code overall? (see below)

  2. Also, do you think it's still relevant to get proficient and learn all the details, or should one just stick to the basics and let AI do the heavy lifting?


DISCLAIMER

Keep in mind this works perfectly for my use case and maybe won't work like you expect. It has its flaws and will probably break in more niche or specific use cases. Don't use it if you don't know what you're doing, and do proper testing ! I'm not responsible if all your torrents are gone !!!


## Made by duckduckgo AI ##
## Required to install requests with pip install requests ##
## see duck.ai_2025-07-13_16-44-24.txt ##

import requests
import os

# Configuration
QB_URL = "http://localhost:8080"  # Base URL without a trailing slash (the /api/... paths below add their own)
USERNAME = ""  # Replace with your qBittorrent username
PASSWORD = ""  # Replace with your qBittorrent password
MIN_RATIO = 0.0  # Minimum ratio to filter torrents
MIN_SEEDING_TIME = 3600  # Minimum seeding time in seconds
OUTPUT_DIR = "./directory"  # Replace with your desired output directory
NEW_CATEGORY = ""  # Specify the new category name
NEW_PATH = "~/Downloads"

# Optional filtering criteria
FILTER_CATEGORIES = ["cats"]  # Leave empty to include all categories
FILTER_TAGS = []  # Leave empty to include all tags
FILTER_UNTAGGED = False  # Set to True to include untagged torrents
FILTER_UNCATEGORIZED = False  # Set to True to include uncategorized torrents

# Function to log in to qBittorrent
def login():
    session = requests.Session()
    response = session.post(f"{QB_URL}/api/v2/auth/login", data={'username': USERNAME, 'password': PASSWORD})
    # The WebUI API can answer 200 even for wrong credentials (body "Fails."), so check the body too
    if response.status_code == 200 and response.text.strip() == "Ok.":
        print("Login successful.")
        return session
    else:
        print("Login failed.")
        return None

# Function to get torrents
def get_torrents(session):
    response = session.get(f"{QB_URL}/api/v2/torrents/info")
    if response.status_code == 200:
        print("Retrieved torrents successfully.")
        return response.json()
    else:
        print("Failed to retrieve torrents.")
        return []

# Function to stop a torrent
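# Note: the stop/start endpoints used below exist in newer qBittorrent releases (WebUI API 2.11+);
# older versions expose /torrents/pause and /torrents/resume instead.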
def stop_torrent(session, torrent_hash):
    response = session.post(f"{QB_URL}/api/v2/torrents/stop", data={'hashes': torrent_hash})
    if response.status_code == 200:
        print(f"Stopped torrent: {torrent_hash}")
    else:
        print(f"Failed to stop torrent: {torrent_hash}")

# Function to start a torrent
def start_torrent(session, torrent_hash):
    response = session.post(f"{QB_URL}/api/v2/torrents/start", data={'hashes': torrent_hash})
    if response.status_code == 200:
        print(f"Started torrent: {torrent_hash}")
    else:
        print(f"Failed to start torrent: {torrent_hash}")


# Function to create a category if it doesn't exist
def create_category(session, category_name, save_path):
    # Skip category creation if category or save path is empty
    if not category_name or not save_path:
        print("Skipping category creation: category or save path is empty.")
        return

    # Check existing categories
    response = session.get(f"{QB_URL}/api/v2/torrents/categories")
    if response.status_code == 200:
        categories = response.json()
        if category_name not in categories:
            # Create the new category with savePath
            payload = {
                'category': category_name,
                'savePath': save_path
            }
            response = session.post(f"{QB_URL}/api/v2/torrents/createCategory", data=payload)
            if response.status_code == 200:
                print(f"Category '{category_name}' created with save path '{save_path}'.")
            else:
                print(f"Failed to create category '{category_name}'. Status code: {response.status_code}")
        else:
            print(f"Category '{category_name}' already exists.")
    else:
        print("Failed to retrieve categories. Status code:", response.status_code)


# Function to set the category for a torrent
def set_torrent_category(session, torrent_hash, category_name, save_path):

    # If either category or path is missing, remove the torrent's category instead
    if not category_name or not save_path:
        response = session.post(f"{QB_URL}/api/v2/torrents/setCategory", data={'hashes': torrent_hash, 'category': ''})
        if response.status_code == 200:
            print(f"Removed category for torrent: {torrent_hash}")
        else:
            print(f"Failed to remove category for torrent: {torrent_hash}")
        return

    # Otherwise assign the torrent to the requested category
    response = session.post(f"{QB_URL}/api/v2/torrents/setCategory", data={'hashes': torrent_hash, 'category': category_name})
    if response.status_code == 200:
        print(f"Set category '{category_name}' for torrent: {torrent_hash}")
    else:
        print(f"Failed to set category for torrent: {torrent_hash}")


def is_category_match(torrent_category, filter_categories):
    """
    Check if the torrent's category matches any of the filter categories.
    Supports partial category matching.

    Args:
    torrent_category (str): The category of the torrent
    filter_categories (list): List of categories to filter by

    Returns:
    bool: True if the category matches, False otherwise
    """
    # If no filter categories are specified, return True
    if not filter_categories:
        return True

    # Check if the torrent's category starts with any of the filter categories
    return any(
        torrent_category == category or
        torrent_category.startswith(f"{category}/")
        for category in filter_categories
    )


# Export matching torrents, stop them, and move them to the new category (uses the category matching above)
def export_torrents(session, torrents):
    # Create the output directory if it doesn't exist
    os.makedirs(OUTPUT_DIR, exist_ok=True)

    for torrent in torrents:
        ratio = torrent['ratio']
        seeding_time = torrent['seeding_time']
        category = torrent.get('category', '')
        tags = torrent.get('tags', '')

        # Use the new category matching function
        if (ratio >= MIN_RATIO and
            seeding_time >= MIN_SEEDING_TIME and
            is_category_match(category, FILTER_CATEGORIES) and
            (not FILTER_TAGS or any(tag in tags for tag in FILTER_TAGS)) and
            (not FILTER_UNTAGGED or not tags) and
            (not FILTER_UNCATEGORIZED or category == '')):

            torrent_hash = torrent['hash']
            torrent_name = torrent['name']
            export_url = f"{QB_URL}/api/v2/torrents/export?hash={torrent_hash}"


            # Export the torrent file
            response = session.get(export_url)
            if response.status_code == 200:
                # Save the torrent file under a filesystem-safe version of its name in the output directory
                safe_name = torrent_name.replace('/', '_')  # torrent names can contain path separators
                output_path = os.path.join(OUTPUT_DIR, f"{safe_name}.torrent")
                with open(output_path, 'wb') as f:
                    f.write(response.content)
                print(f"Exported: {output_path}")

                # Stop the torrent after exporting
                stop_torrent(session, torrent_hash)

                # Create the new category if it doesn't exist
                create_category(session, NEW_CATEGORY, NEW_PATH)

                # Set the category for the stopped torrent
                set_torrent_category(session, torrent_hash, NEW_CATEGORY, NEW_PATH)
            else:
                print(f"Failed to export {torrent_name}.torrent")

# Main function
def main():
    session = login()
    if session:
        torrents = get_torrents(session)
        export_torrents(session, torrents)

if __name__ == "__main__":
    main()

[–] N0x0n@lemmy.ml 12 points 5 days ago* (last edited 5 days ago) (1 children)

Haha... I had a similar thought 2 days ago. What a dumbshit move, and it actually showed me that "AI" isn't that good... The dubbed version was so horrible that the video became unwatchable xD.

Nice AI slop, YouTube !!

[–] N0x0n@lemmy.ml 6 points 5 days ago* (last edited 5 days ago)

Wow, what a combo ! I guess this would reduce the tarpit's overall power consumption?

I haven't looked at your link yet and maybe it already contains my answer, but I'd like to customize how long they stay trapped in the tarpit before fail2ban kicks in, so I can still poison their AI while saving a lot of resources !!

Edit:

block anything that visits it more than X times with fail2ban

I guess this is it, but I'm not sure how that translates from Nepenthes to fail2ban. Needs further reading and testing !

Thanks for the link !

[–] N0x0n@lemmy.ml 12 points 1 week ago (1 children)

You do not get rich with hard work and a degree... You get rich by stealing, taking the work of others and claiming it for yourself ! Keep in mind they just want you to earn enough money to keep your head above water.

I wish and hope you will find some peace of mind, and keep it up. You can't take down such a system alone, but you can quietly and discreetly fuck it up, one stone at a time ! Be the change you wish to see in the world.

[–] N0x0n@lemmy.ml 2 points 1 week ago (1 children)

Thanks for sharing your experience ! I was wondering, for my new N300, whether I should install Proxmox + LXC-Docker or Proxmox + VM-Docker !

Hearing you had a lot of issues and caveats makes my choice easier without even giving it a try ! So thanks !

[–] N0x0n@lemmy.ml 6 points 2 weeks ago* (last edited 2 weeks ago)

Yeah... After a user here on Lemmy pointed out that the AdGuard app on mobile made a lot of strange requests to ad services, I gave it a try myself with PCAPdroid, and seeing all those requests made just by opening the app made me think twice about AdGuard...

I removed all their services from my network and have been a happy Pi-hole + Quad9 user since !

[–] N0x0n@lemmy.ml 2 points 2 weeks ago

Yep, but only if you're familiar with it; otherwise it can range from 1 day to a week depending on how complex your setup is (OCID, Fail2ban, reverse proxy, self-signed mini CA...).

But once your setup is all ready and you've got all the bells and whistles, it's just a matter of 5 minutes (and very fun too, if you have time to spend !)

[–] N0x0n@lemmy.ml 3 points 2 weeks ago

Thank you to everyone working on Homebox ! Can't wait to see the better tags update whenever it's ready !

I'm also hoping for an option to switch between AND/OR behavior for tag searching.

[–] N0x0n@lemmy.ml 2 points 2 weeks ago

Sorry for the late response !

Here in the EU I've never heard of that kind of behavior ! I'm kinda shocked to hear about it for the first time here on Lemmy.

I see them every day in my garden and in the woods nearby and have never seen one of them grab mushrooms ! Kinda wild !

[–] N0x0n@lemmy.ml 0 points 2 weeks ago

It's not 1 way sync. Please look up what you're talking about before speaking.

Yeah, it's not; however, you can configure Syncthing as a one-way sync solution, and I was emphasizing that this still isn't a cloud solution.

Don't get me wrong, Syncthing is great and I use it every day, but Syncthing is not a cloud solution.

 

Partially Solved

While I haven't found a native way to integrate ntfy into Glance, I did build something that actually sends basic text to Glance in an automated way. It's very rudimentary and probably error-prone, but that's the best I could do right now... Maybe someone else will chime in with better advice or a better solution.

For those interested, PostgREST lets you build a simple Dockerized Postgres database that you can query from the custom API widget in Glance. It DOES work, but if, like me, your database/JSON/Postgres knowledge is very limited, it only gets you basic text responses like "Update Failed".

I did try to go a little further down the rabbit hole, but it really requires a good database and query/response background. Not a very good solution, and I will probably not go on or try to improve it right now... But feel free to give better advice or another lead to follow :)
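
To make the idea a bit more concrete, here's a rough sketch (illustration only, not what I actually run) of how a script could push a status row into such a setup and read the latest one back, assuming a PostgREST container on port 3000 exposing a writable notifications table with title and message columns (all those names are placeholders):

import requests

POSTGREST_URL = "http://localhost:3000"  # assumed PostgREST endpoint

def push_status(title, message):
    # PostgREST maps POST /<table> to an INSERT into that table
    r = requests.post(f"{POSTGREST_URL}/notifications",
                      json={"title": title, "message": message},
                      timeout=10)
    r.raise_for_status()

def latest_status():
    # GET /<table> returns rows as JSON; Glance's custom API widget could point at a URL like this
    r = requests.get(f"{POSTGREST_URL}/notifications",
                     params={"order": "id.desc", "limit": 1},
                     timeout=10)
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    push_status("Backup", "Update Failed")
    print(latest_status())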

Further notes:

On a final note, I do see a lot of interest in the Glance community and a lot of new and interesting updates:

  • Added .Options.JSON to the custom API widget which takes any nested option value and turns it into a JSON string v0.8.3
  • [Custom API] Synchronous API calls and options property v0.8.0

Hello everyone !

I kinda hit a roadblock here, and I'm interested in whether someone has actually done something similar, or has an alternative to what I'm trying to achieve.

Some background

Right now I'm playing around with ntfy and it works great. I even hooked an automated backup script on my server into it, with stdout/stderr output:

(Please, no bash-shaming ! :P)

#!/bin/bash
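# $COMMAND stands in for the actual backup command; the checks below assume
# its output and errors end up in stdout.txt / stderr.txt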

$COMMAND

if [ $? -eq 0 ]; then
        echo "Success"
        issue=$(<stdout.txt)
        curl -H "Title: Hello world!" -H "Priority: urgent" -d "$issue" https://mydomain/glancy

else
        echo "Failure"
        issue=$(<stderr.txt)
        curl -H "Title: Hello world!" -H "Priority: urgent" -d "$issue" https://mydomain/glancy

fi

This works great, and I receive the notification on every device subscribed to the topic.

What am I trying to achieve?

Send the ntfy notification to a visual dashboard like Glance. If there's no native way to achieve this, self-host a simple JSON API that gets populated by my server's script response?

What's the issue ?

After skimming all the GitHub repos, there's no mention in any self-hosted dashboard of integrating ntfy as a notification hook. I find that kinda strange, because ntfy is just simple HTTP PUT or POST requests, so it should be rather easy, no?

And after searching the web for a whole day, there weren't any good results or resources. So I came to the conclusion that it isn't that easy and probably needs a bit more of something I'm bad at (coding?).

In the Glance documentation there's a configuration to hook up a custom API, and it looks rather simple; however, now I've hit a roadblock I'm not able to solve... I have no idea where or how to spin up a self-hosted, dynamic JSON API that communicates with my server and updates/populates that JSON file... Here's an example to show what I mean, plus a rough sketch of the kind of thing I imagine after the Glance template below:

Json api: https://api.laut.fm/station/psytrancelicious/last_songs

Custom Glance API template:

- type: custom-api
  title: Random Fact
  cache: 6h
  url: https://api.laut.fm/station/psytrancelicious/last_songs
  template: |
    <p class="size-h4 color-paragraph">{{ .JSON.String "title" }}</p>
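
If there's no native hook, I imagine the "simple JSON API" could be as small as this sketch (illustration only, untested; the port, path and field names are placeholders): a tiny Python standard-library server that remembers the last JSON payload POSTed to it and serves it back, so a script can curl its status to it and Glance's custom-api widget can poll it.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

latest = {"title": "No notifications yet"}  # last message received

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # e.g. curl -d '{"title": "Backup OK"}' http://host:8081/
        global latest
        length = int(self.headers.get("Content-Length", 0))
        try:
            latest = json.loads(self.rfile.read(length) or b"{}")
        except json.JSONDecodeError:
            latest = {"title": "Invalid payload"}
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        # Glance's custom-api widget can point here and read .JSON.String "title"
        body = json.dumps(latest).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8081), Handler).serve_forever()
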
Questions

  1. Is there any native way to hook ntfy notifications into a dashboard-like instance (Glance, Homer, Dashy)?

  2. If not, is it possible to self-host a JSON API that gets populated by my script's response? A good pointer in the right direction would be very nice, preferably a Docker solution !

  3. Or another solution to get a visual dashboard (not the native ntfy dashboard) and view all my script response notifications in one place ?


Thanks in advance for all your responses :) and sorry for my bad wording, web development terminology is not really my cup of tea !

 

cross-posted from: https://lemmy.ml/post/28250905

cross-posted from: https://lemmy.ml/post/28250870

Hello everyone !

I'm seeding/cross-posting this in 3 communities because I think I will get better answers in each respective one (hardware, coding, electronics).

As the title says, I want to learn how to build, from the ground up, those cheap solar LED/optic-fiber lights; here are some images to show what I mean:

They come in bundles, but after a while they just die without any way to repair them, which kinda sucks, and because they are cheap my mum keeps buying them... So I would like to build ones I'm able to repair and customize :). However, I have absolutely NO idea where to begin or what exactly I'm searching for... I'm lacking the skills and knowledge on all 3 fronts !

  • What hardware am I looking for ?
  • What kind of electronics ?
  • What programming language to glue everything together ?
  • ... ?

I'm not afraid to get my hands dirty: learn how to micro-solder, learn some coding skills to get everything neatly glued together software-wise, and learn the necessary hardware or other important stuff to achieve this goal ! I'm looking for any good and reliable advice to get me started !

One thing though: if I have to learn some hardware/low-level coding skills, I would prefer a language that will be useful for other stuff in the long run.

Thank you in advance, and I'm already sorry if I'm very slow to respond; I'm not a native speaker, and the flood of information I will probably get will surpass my ability to respond to everyone right away.

Also, any other directions are welcome, like:

  • How to repair the old ones? Do I need to flash their proprietary software/hardware?

Thank you !

 

YouTube link: https://youtu.be/wVyu7NB7W6Y

Invidious link: https://inv.nadeko.net/watch?v=wVyu7NB7W6Y

Sorry for the formatting... I tried to remove the URL for better readability, but there seems to be some kind of bug.


TLDW

  • Hack phones remotely just by knowing the phone number
  • Intercept 2FA SMS
  • Intercept phone calls
  • Reroute phone calls
  • Geolocate a target

I dunno if it has already been posted/discussed here, but this kinda blew my mind ! Sorry, there's a lot of clickbait, but the general subject is interesting...

I had never heard of SS7 and actually have no idea how the whole phone system communication works, but that's kinda scary...

Yes, we are probably not the first targets of this "hack", nor is it as easy as shown in this video, nor do we have $14k to spend on this, but that's not out of reach for some people. I mean, it's not as expensive as Pegasus, and people with the means and a good stable income can probably misuse this system to target specific vulnerable people (example in the video).

 

Hello everyone :).

Trying to keep it short, because after 2 days of troubleshooting I'm a bit tired and really confused about what happened here... Maybe it's my lack of understanding of Legacy BIOS/UEFI/EFI/Bootloader/GRUB... But that was a really odd "issue" that resolved itself?

Intro


My 15-year-old laptop, an Asus N76V, is still going strong, though its purpose is not the same as a few years back. It's working great as a mini-server hosting Docker containers, a DNS server, a firewall, a WireGuard tunnel...

Space left on my volume group (LVM) was getting tight, so I decided to install a new Samsung 1TB SSD into its second slot. As easy as that is, I thought it would not take more than 30 minutes...

The old SSD just vanished from the boot options in the BIOS.


After booting into the BIOS to see if the new SSD was recognized, everything seemed okay; however, my primary SSD containing the Bootloader and system had just vanished as a boot option?? I was not too angry about it, because if something strange had happened and everything got wiped for whatever reason, I still have my daily backups. So the troubleshooting begins...

Things I have tried to bring my primary SSD back as a boot option in the BIOS


Though my secondary, new SSD shows up as a boot option, it's empty and has no Bootloader or system installed.

  1. Change a lot of things in the BIOS

From AHCI to password protection to disabling anti-theft and secure boot... I think I changed every possible option (related to the SSD) in the BIOS and reverted back to defaults, without my primary SSD ever showing up again as a boot option.

  2. Clear the CMOS

Opening the case and taking out the little battery to clear the CMOS had no effect at all either.

  3. Boot into Debian rescue mode, boot-repair, chroot session

From a live USB session I tried a few things in rescue mode, and in the live session I even used the boot-repair tool and tried to manually fix the EFI/Bootloader in a chrooted environment, because yes, even if I couldn't boot into my system, my data was still there and safe !!

  4. Disconnect the new SSD, swap bay positions

Even when I removed the new SSD from its SATA connection or swapped SATA bay connections, the primary SSD didn't show back up in the BIOS...

Guess I have to reinstall Debian on my new SSD?


So I gave up on trying to fix the Bootloader and primary SSD and just went for a fresh Debian install on my new SSD.

Nothing uncommon during the installation process, except that during partitioning all the volume groups and logical volumes from my primary SSD were visible, so I left them alone and created a new VG and LVs. As usual I did manual LVM partitioning with an EXT4 filesystem and a separate /boot partition; however, I forgot to set the ESP partition (it was getting a bit late and this was getting on my nerves...). The installation went without issues.

And then It came back....


So booting into my BIOS to see if my new SSD boot position is okay... Ohhhhh and what a surprise to see my primary SSD back as boot choice... however my new SSD isn't there anymore (expected as I forgot to set an ESP partition... And Bootloader is from my primary SSD).

Booting into my system, I'm greeted by the Bootloader? GRUB? With 2 choices:

  • My old linux OS
  • My new linux OS

Some kind of relief and happy moment after 2 days...

Kinda curious what happens if I set the /boot/EFI partition on the secondary SSD


I got kinda confused about what happened here, so to further confuse myself I reformatted and reinstalled Debian on the secondary disk, this time with the correct /boot/EFI partition.

And I got even more confused... The boot priority in the BIOS now only shows the secondary SSD as a boot option...!?

Questions


What happened here?

  1. Why did my primary SSD (which had a proper Bootloader and a clean system) disappear as a boot option in the BIOS as soon as I installed a new blank SSD, and not come back even when the new one was unplugged from SATA?

  2. Why does my BIOS only show 1 disk as a possible boot device when both have a proper Bootloader and system?

  3. Does 1 external Bootloader suffice to make both systems work?!

Not sure about the last question... so maybe I'm looking, more or less, to sharpen my understanding of Bootloaders/EFI/UEFI/GRUB... Any hint toward a good resource, book or eBook to get a better understanding is really appreciated :)


I know those are a lot of words (and I said I would keep it short...) but I think the context is important here, and I'm not able to express my issue/thoughts correctly without it.

Thanks in advance to those who bore with me, read the whole text, and can point me in the right direction toward a better understanding of what happened.

 

Edit

My question was very badly written, but the new title reflects the actual question. Thanks to 3 very friendly and dedicated users (@harsh3466 @tuna @learnbyexample) I was able to find a solution for my files, so thank you guys !!!

For those who randomly come across this post, here are 3 possible ways to achieve the desired result.

Solution 1 (https://lemmy.ml/post/25346014/16383487)

#! /bin/bash
files="/home/USER/projects/test.md"

mdlinks="$(grep -Po ']\((?!https).*\)' "$files")"
mdlinks2="$(grep -Po '#.*' <<<$mdlinks)"

while IFS= read -r line; do
	#Converts 1.2 to 1-2 (For a third level heading needs to add a supplementary [0-9]) 
	dashlink="$(echo "$line" | sed -r 's|(.+[0-9]+)\.([0-9]+.+\))|\1-\2|')"
	sed -i "s/$line/${dashlink}/" "$files"

	#Puts everything to lowercase after a hashtag
	lowercaselink="$(echo "$dashlink" | sed -r 's|#.+\)|\L&|')"
	sed -i "s/$dashlink/${lowercaselink}/" "$files"

	#Removes spaces (%20) from markdown links after a hashtag
	spacelink="$(echo "$lowercaselink" | sed 's|%20|-|g')"
	sed -i "s/$lowercaselink/${spacelink}/" "$files"

done <<<"$mdlinks2"

Solution 2 (https://lemmy.ml/post/25346014/16453351)

sed -E ':l;s/(\[[^]]*\]\()([^)#]*#[^)]*\))/\1\n\2/;Te;H;g;s/\n//;s/\n.*//;x;s/.*\n//;/^https?:/!{:h;s/^([^#]*#[^)]*)(%20|\.)([^)]*\))/\1-\3/;th;s/(#[^)]*\))/\L\1/;};tl;:e;H;z;x;s/\n//;'

Solution 3 (https://lemmy.ml/post/25346014/16453161)

perl -pe 's/\[[^]]+\]\((?!https?)[^#]*#\K[^)]+(?=\))/lc $&=~s:%20|\d\K\.(?=\d):-:gr/ge'
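
For readers who'd rather not decode sed/perl one-liners, here's a rough Python sketch of the same idea (illustration only; it covers the %20 → - and lowercasing rules and skips http/https targets, but not the 1.2 → 1-2 heading-number case that Solution 1 handles):

import re

# match [text](target) where target has no closing parenthesis, so a link
# wrapped in outer parentheses is still picked up correctly
LINK_RE = re.compile(r"(\[[^\]]*\])\(([^)]*)\)")

def fix_link(match):
    text, target = match.group(1), match.group(2)
    if target.startswith(("http://", "https://")):
        return match.group(0)  # leave external links untouched
    return f"{text}({target.replace('%20', '-').lower()})"

line = "(look here [Some text](#Header%20Linking%20MARKDOWN.md))"
print(LINK_RE.sub(fix_link, line))
# -> (look here [Some text](#header-linking-markdown.md))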

Relevant links

https://mike.bailey.net.au/notes/software/apps/obsidian/issues/markdown-heading-anchors/#background


Hi everyone !

I'm in need of some assistance with string manipulation using sed and regex. I spent a whole day on trial & error and looking around the web for a solution; however, it's way over my capabilities, and maybe there are some sed/regex gurus here who are willing to give me a helping hand !

From everything I gathered around the web, it seems to be a rather complicated regex and sed substitution, so here we go !

What am I trying to achieve?

I have a lot of Markdown guides I want to host on a self-hosted, Forgejo-based git instance. However, the classic Markdown links are not the same as the ones on GitHub/Forgejo...

Convert the following string:

[Some text](#Header%20Linking%20MARKDOWN.md)

Into

[Some text](#header-linking-markdown.md)

As you can see, these are the requirements:

  • Pattern: [Some text](#link%20to%20header.md)
  • Only edit what's between parentheses
  • Replace space (%20) with -
  • Everything as lowercase
  • Links are sometimes in nested parentheses
    • e.g. (look here [Some text](#link%20to%20header.md))
  • Do not change a line that begins with https (external links)

While the whole thing is probably a bit complex, the trickiest part is probably the nested parentheses :/

What I tried

The furthest I got was the following:

sed -Ei 's|\(([^\)]+)\)|\L&|g' test3.md #make everything between parentheses lowercase

sed -i '/https/ ! s/%20/-/g' test3.md #change every %20 occurrence to -

These sed/regex substitutions are what I put together while roaming the web, but they have a lot of flaws and don't work with nested parentheses. Also, this would change every %20 occurrence in the file.

The closest solution I found on Stack Overflow looks similar, but I wasn't able to fit it to my needs. Actually, my lack of regex/sed understanding makes it impossible to adapt it to my requirements.


I would appreciate any help, even if a change of tool is needed; however, I'm more into the learning process, so a script or CLI alternative is very appreciated :) actually any help is appreciated :D !

Thanks in advance.

 

Hello :)

There isn't any community about note-taking where I could post my question, and no, this is not a "what's the best note-taking app" question...

I'm getting tired of maintaining my Obsidian vaults... Somehow I'm fighting to get it right and Obsidian seems to fight back. I've got 4 vaults on the same subject, and I always end up making a mess of them and starting a fresh one... Also, my notes are scattered in all directions, and the more my knowledge base grows, the less I seem to be able to find anything...

This is probably a me problem rather than an Obsidian issue. The way I take notes is not compatible with Obsidian. IMO Obsidian's default configuration is bad and visually unappealing. Sure, customization in Obsidian is "endless", but digging into the HTML to change the style or adding plugins to somehow get something visually appealing feels more like a chore than actually taking notes.

Here I am again, roaming the web for a note-taking app that could fit my needs, and after trying a lot of different apps (please don't suggest the already well-known apps... I have probably already tried them...) I couldn't find anything that fits my workflow.

The only one that looked great and simple was osmosnote, but it isn't maintained anymore. There's also Dendron, but it's in maintenance mode. So there go the only ones that looked promising from my perspective.


After giving it more thought, I was looking for something that could:

  • Keep my scripts updated
  • Simple markdown text
  • No database
  • Local first
  • Open source
  • If a web app: self-hostable
  • Back-linking
  • Keep track of changes

Except for back-linking, a self-hosted Forgejo with git seems to fit all my needs; however, I'm not sure if this is the right tool, and I'm scared that in the long run I will mess it up the same way I did with Obsidian.

Does anyone here have experience taking notes that way? I'm really curious about your experience, and maybe your thoughts on whether it's feasible ? Practical ?

Please don't suggest Org-mode or Emacs ! They look very cool and very promising, but they are WAY too much overkill ! And they also implement a totally new way of taking notes... Relearning how to take notes would probably be the final push toward giving up on documenting anything !

Thank you for any helpful input !

 

cross-posted from: https://lemmy.ml/post/23615167

For better visibility I'm cross-posting my question in this community.

Heyha ! I just came across a very odd issue/bug that somehow resolved itself, without me ever knowing who or what was the culprit.

For context, with YouTube doing its thing and making nearly all public instances obsolete, I'm self-hosting a Piped instance in my homelab via Docker.

Everything is going smoothly: self-signed certs, Traefik, accessible via WireGuard outside of my network, and so on !! LibreTube connects to my Piped instance from my Android phone without any issues, and so does RiMusic.

However, when I was trying to access my synced Piped playlists in RiMusic, the app went crazy and my playlist seemed to be stuck in a query loop where I was unable to play any songs and the UI was flickering a lot.

  • Reboot the phone => Same behavior
  • Reboot the piped instance => Same behavior
  • Uninstall RiMusic/New docker piped instance => Same behavior
  • Flush everything from cache/playlist/configuration/data... => Same behavior

Nothing seemed to resolve the issue software-wise, so next step: check the logs (the interesting part):

My piped-nginx showed A HUGE number of requests coming from my phone when accessing a Piped playlist:

"GET /playlists/d0e2c698-f3f4-435f-b2c9-96c6d3a88781 HTTP/1.1" 200 4161 "-" "ktor-client" "10.XXX.XXX.XXX"

Traefik also showed a lot of load-balancing debug notifications, something that never happens, because I'm the only user in my homelab setup !

My first thought was that this is probably a RiMusic bug, but before filing a report on GitHub, I did some other debugging:

  • Create an account and connect to a public piped instance
  • Create playlist/add some songs
  • Connect with RiMusic

The exact same behavior, EXCEPT it stopped the loop after a few requests, which made RiMusic usable again, and I was able to play my playlist without issues. I tried again on my own instance, but again: infinite loop, lots of requests in Traefik and piped-nginx. It even broke my Piped instance...

The only logical explanation is that the public Piped instances have some request rate limiting (yeah, I know this is common practice and even mandatory on public instances). So here I go, rate limiting my own requests to see if this could work as a temporary workaround while writing a GitHub bug report for RiMusic.

Adding some basic Traefik labels just to give it a try:

labels:
  - "traefik.http.middlewares.test-ratelimit.ratelimit.average=10"
  - "traefik.http.middlewares.test-ratelimit.ratelimit.burst=20"
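  # (note: a middleware defined via labels only takes effect once it's also attached
  #  to a router, e.g. traefik.http.routers.<router>.middlewares=test-ratelimit@docker;
  #  that might explain why these two lines alone seemed to change nothing)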

At first nothing happened, but after a few docker compose -f down/up cycles I was able to access my playlist from my own instance without any issues/bugs/strangeness. Cool, it works? So, just out of curiosity, I commented out the new Traefik middlewares and restarted both containers (Traefik/Piped). And... the RiMusic playlist connected to my Piped instance still works without the rate-limit lines... Wait, what ??

What just happened ? I have absolutely no idea... I don't even know if the mentioned labels did anything... But everything works... No loading loop, no Traefik container overflowing with load-balancer logs, no piped-nginx with thousands of requests... It all just vanished as if it never existed in the first place.

I'm totally clueless, except that somehow, when accessing a playlist on a private or public Piped instance with RiMusic, my phone went crazy with an infinite loop of API requests (dunno if that's the correct term :/). Here I am with no idea what actually happened...

And yes, my phone is heavily debloated and firewalled (Magisk, RethinkDNS), so those are not unknown requests from the web or any other open source application whatsoever !


Sorry for the long write-up, I hope it's readable and comprehensible. I just wanted to share my experience with you, and if you've also encountered some strange and inexplicable bug/issue that resolved itself, feel free to share :).

PS: If someone has a good lead on what happened, or some good insight into where I should look next to get more out of this experience, I'm open to any good read !

 


8 points | submitted 7 months ago* (last edited 7 months ago) by N0x0n@lemmy.ml to c/cat@lemmy.world
 

Oupsii !

view more: next โ€บ