[-] pe1uca@lemmy.pe1uca.dev 12 points 1 month ago

I'm just annoyed by the region issues: you'll get pretty biased results depending on which region you select.
If you search for something specific to one region while another is selected, you'll sometimes get empty results, which shows you won't get relevant results unless you select the right region.

This is probably more obvious with non-technical searches. For example, my default region is canada-en, and if I search "instituto nacional electoral" I only get a wiki page, an international site, and some other random sites with no news; only when I change the region do I get the official page ine.mx and actual news. To me this means Kagi hides results from other regions instead of just boosting the selected region's results.

[-] pe1uca@lemmy.pe1uca.dev 10 points 2 months ago

Are you sure your IP is only used by you?
AFAIK ISPs often bundle the traffic of multiple users behind a few public IP addresses (carrier-grade NAT), so maybe what you're seeing is just someone else in your area going out through the same public IP your ISP provides.

But I'm not actually sure if this is how it works, I might be wrong.

[-] pe1uca@lemmy.pe1uca.dev 11 points 7 months ago

IIRC most stuff can be done with vanilla JS in any modern browser.
That said, I've done little front-end work, mostly for personal projects, nothing fancy or production-ready, so someone else might have a different opinion about using jQuery.
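
As a small illustration of that point, here's a sketch of a couple of common jQuery patterns next to their plain DOM/fetch equivalents; the selectors, class names, and URL are made up for the example:

// jQuery style (for comparison):
// $('.card').addClass('highlight').on('click', () => console.log('clicked'));
// $.getJSON('/api/items', (items) => console.log(items));

// Vanilla JS in any modern browser (made-up selector/class/URL):
document.querySelectorAll('.card').forEach((el) => {
  el.classList.add('highlight');
  el.addEventListener('click', () => console.log('clicked'));
});

fetch('/api/items')
  .then((res) => res.json())
  .then((items) => console.log(items));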

[-] pe1uca@lemmy.pe1uca.dev 13 points 7 months ago

You can host your own instance to share your opinions.
Still, don't expect other instances to allow content they don't want to federate with, so you might have somewhere to say something, but it might not reach everyone.

[-] pe1uca@lemmy.pe1uca.dev 14 points 8 months ago* (last edited 8 months ago)

human faces, and body parts

Wait, what? Weren't there ads saying you could change your friends' faces so they'd be smiling?

EDIT: yep, here it is https://youtu.be/1qjnB2uZMqw?t=30s

[-] pe1uca@lemmy.pe1uca.dev 13 points 9 months ago

Here's one I've been playing with: https://github.com/jhj0517/Whisper-WebUI
The small faster-whisper model has been amazing for the three input options it offers (files, YouTube, or recording), though I keep its limitations in mind and I've only used it with fairly clear audio.

[-] pe1uca@lemmy.pe1uca.dev 12 points 9 months ago

I saw this project https://github.com/pablouser1/ProxiTok
Haven't been able to selfhost it (not sure what I'm doing wrong yet), but the public instances are working fine.

5
submitted 9 months ago by pe1uca@lemmy.pe1uca.dev to c/montreal@lemmy.ca

I studied at Dawson, but apparently they recently changed the program, and the 6 levels they offer no longer cover half of what they used to.

What's a good school that offers French courses?
I don't know whether Google's results will have the same problem and whether I'll end up paying for a poorly designed course.
If it offers an online course, that would be even better.

Or, for those who know the new Dawson courses: are they enough to pass the French exams in Quebec?

29

In an API I maintain, there's a requirement to use an authentication method other than OAuth2 or any kind of token generation that requires making an extra HTTP call.

With this in mind, there's this: https://www.xml.com/pub/a/2003/12/17/dive.html
I've only ever stored passwords as hashes and used functions like password_verify to confirm the user sent the proper credentials without actually knowing the password stored in the DB.
WSSE requires hashing the credentials being sent with SHA1, which means the API needs the password in plain text to recreate the digest and compare it to the one sent by the user.
So, how should I be storing this password if the code needs it to recreate the hash?
Should I have something like a master password and store the credentials encrypted instead of hashed?

Most of the information I've found about WSSE is very, very old, and some implementations mark it as deprecated. Do you know of any other standard authentication scheme where the user can generate the token themselves instead of having to make an extra HTTP call?
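
For reference, here's a minimal sketch in Node.js of the WSSE UsernameToken digest from that article. The exact concatenation and encoding details are assumptions to double-check against the spec, but it shows why the server needs the plain-text (or reversibly encrypted) password: it has to recompute Base64(SHA1(nonce + created + password)) itself.

// Minimal WSSE UsernameToken digest sketch (Node.js, built-in crypto only).
// Assumption: PasswordDigest = Base64(SHA1(Nonce + Created + Password)).
const crypto = require('crypto');

function buildWsseHeader(username, password) {
  const nonce = crypto.randomBytes(16).toString('base64'); // fresh nonce per request
  const created = new Date().toISOString();                // creation timestamp
  const digest = crypto
    .createHash('sha1')
    .update(nonce + created + password)                    // needs the plain password
    .digest('base64');
  return `UsernameToken Username="${username}", PasswordDigest="${digest}", ` +
    `Nonce="${nonce}", Created="${created}"`;
}

// The server must redo the same SHA1 over (nonce + created + storedPassword),
// which is why a one-way hash checked with password_verify can't work here.
console.log(buildWsseHeader('alice', 'example-password'));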

38

Is normal soap all I need?

Recently I read that rinsing chicken usually spreads the bacteria we're trying to kill by cooking it, and I've been doing this in the sink.

So I'm wondering whether, even without rinsing the chicken, the knives, cutting boards, and even my hands that touched the raw chicken could still be spreading bacteria after being washed with only soap.

14

I just attached a new volume to my VPS. Usually I follow the provided instructions using parted and mkfs.ext4, but this time I decided to try ZFS.

The guides I found online are all very different, and I'm not sure I did everything correctly so the data will be safe.
What I mean is, running lsblk -o name,size,fstype,type,mountpoint shows this:

NAME     SIZE FSTYPE   TYPE MOUNTPOINT
vdb      100G          disk
└─vdb1   100G ext4     part /mnt/storage
vdc      100G          disk
├─vdc1   100G          part
└─vdc9     8M          part

You can see the type and mountpoint of the previous volume are listed, but the ZFS ones aren't.

Still, I can properly access the ZFS pool I created, and I've already copied some test data.

root@vps:~/services# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
local-zfs   99.5G  6.88G  92.6G        -         -     0%     6%  1.00x    ONLINE  -
root@vps:~/services# zfs list
NAME         USED  AVAIL     REFER  MOUNTPOINT
local-zfs   6.88G  89.5G     6.88G  /mnt/zfs

These are the commands I ran:

parted -s /dev/vdc mklabel gpt
parted -s /dev/vdc unit mib mkpart primary 0% 100%
zpool create -o ashift=12 -O canmount=on -O atime=off -O recordsize=8k -O compression=lz4 -O mountpoint=/mnt/zfs local-zfs /dev/vdc

Does this look good?
Should I do something else (like writing something to fstab)?

The list of properties is very long; is there any you'd recommend I look into for a simple server that currently stores non-critical data?
(I already have a separate backup solution; maybe I'll look at updating it later.)

14
Custom voice input service (lemmy.pe1uca.dev)
submitted 10 months ago* (last edited 10 months ago) by pe1uca@lemmy.pe1uca.dev to c/android@lemmy.world

Is there any keyboard that lets you configure the service used for voice input?
I'd like to set a URL to a self-hosted service, send my voice there to be processed, and get the transcription back.

If no keyboard exists for this, any app would do.
The idea is that the app streams the audio to the given service, receives the response, and shows it for you to edit, similar to how Google's keyboard handles voice input.

Bonus points if it's open source :P

6
submitted 10 months ago* (last edited 10 months ago) by pe1uca@lemmy.pe1uca.dev to c/programming@programming.dev

I'm not sure which community to ask this in; if you know of a better one, let me know.

I already have a Squid proxy working: if I set my browser or curl to use the proxy, all sites work properly.
But when I try to make a request with axios, it doesn't work.

Here are the Squid logs.
The first two lines are successful Google connections from a browser.
The 3rd line is a successful connection to Google using curl.
The 4th line is a successful connection to ipify using curl.
The last two are from Node using axios:

squid_proxy  | 1693406310.165  12043 127.0.0.1 TCP_TUNNEL/200 56694 CONNECT www.google.com:443 - HIER_DIRECT/142.250.217.132 -
squid_proxy  | 1693406310.166  10681 127.0.0.1 TCP_TUNNEL/200 47267 CONNECT apis.google.com:443 - HIER_DIRECT/142.250.176.14 -
squid_proxy  | 1693406325.551    497 127.0.0.1 TCP_TUNNEL/200 24778 CONNECT www.google.com:443 - HIER_DIRECT/142.250.217.132 -
squid_proxy  | 1693406336.829    403 127.0.0.1 TCP_TUNNEL/200 7082 CONNECT api.ipify.org:443 - HIER_DIRECT/64.185.227.156 -
squid_proxy  | 1693406361.410  12590 127.0.0.1 TCP_MISS/503 4358 GET https://api.ipify.org/? - HIER_NONE/- text/html
squid_proxy  | 1693406361.889    385 127.0.0.1 TCP_MISS/502 3948 GET https://www.google.com/ - HIER_DIRECT/142.250.217.132 text/html

The errors sent to axios are these:

# ipify
[No Error] (TLS code: SQUID_TLS_ERR_CONNECT+GNUTLS_E_FATAL_ALERT_RECEIVED)
SSL handshake error (SQUID_TLS_ERR_CONNECT)  
This proxy and the remote host failed to negotiate a mutually acceptable security settings for handling your request. It is possible that the remote host does not support secure connections, or the proxy is not satisfied with the host security credentials.  

# google
The system returned: [No Error]

My code looks like this

const { Axios } = require('axios'); // or: import { Axios } from 'axios';

const axios = new Axios({
	proxy: {
		host: proxyIP,
		port: proxyPort,
		protocol: 'http'
	}
});

const ip = await axios.get('https://api.ipify.org?format=json');
console.log(ip.data);


const res = await axios.get('https://www.google.com');
console.log(res.data);

Any idea what might be happening?

I'm not sure if axios handles the connection differently, since the logs from the browser and curl show CONNECT while axios shows GET, but maybe that's because it fails to actually establish the tunnel and Squid only logs the request method.
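
One thing I might try next (just an assumption based on the CONNECT vs GET difference, not something the logs confirm) is giving axios an explicit tunnelling agent from the https-proxy-agent package and disabling its built-in proxy handling:

// Sketch of the workaround, assuming Squid listens at proxyIP:proxyPort (here 127.0.0.1:3128).
const axios = require('axios');
const { HttpsProxyAgent } = require('https-proxy-agent');

const agent = new HttpsProxyAgent('http://127.0.0.1:3128'); // adjust to proxyIP:proxyPort

(async () => {
  const ip = await axios.get('https://api.ipify.org?format=json', {
    httpsAgent: agent, // the agent opens the CONNECT tunnel itself
    proxy: false,      // stop axios from also applying its own proxy handling
  });
  console.log(ip.data);
})().catch(console.error);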

29

I have an implementation for an internal API; the requirement is to implement some sort of basic authentication instead of OAuth (i.e., without generating a token).

Do you think there's any difference between using just an API key vs using a client id + secret?
From what I can see, it'd be just like saying "using a password" vs "using a username and a password".
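
To make the comparison concrete, here's a rough sketch of what the two options look like on the wire; the header names and values are purely illustrative assumptions, not a standard:

// Option 1: a single opaque API key ("a password").
const withApiKey = {
  headers: { 'X-Api-Key': 'k_live_example_key' },
};

// Option 2: client id + secret sent as HTTP Basic auth ("a username and a password").
const clientId = 'internal-reporting-service';
const clientSecret = 'example-secret';
const withClientCreds = {
  headers: {
    Authorization: 'Basic ' + Buffer.from(`${clientId}:${clientSecret}`).toString('base64'),
  },
};

// Server side the check is similar; the id mainly buys you a stable caller
// identity for logging and for rotating the secret without changing the id.
console.log(withApiKey, withClientCreds);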

13
submitted 11 months ago* (last edited 11 months ago) by pe1uca@lemmy.pe1uca.dev to c/pcgaming@lemmy.ca

I just reinstalled Windows on a new SSD and forgot to export the settings for my Logitech devices.
The old SSD is still around, but it's cumbersome to boot Windows from it just to copy the settings, so I mounted it with an adapter to read the settings files.

The LGHUB storage is an SQLite DB located at C:\Users\<user>\AppData\Local\LGHUB\settings.db
You can just query select * from data, which contains the settings as JSON.

It's better to send it to a file: sqlite3 settings.db 'select file from data' > data.json (mine was 7.5MB of data).

30

I've been creating separate accounts for some of my self-hosted services, some to further subdivide the data, but I always have an admin account plus the account I use day to day.

What's your account creation schema?
What do you think about creating multiple accounts for your selfhosted services?

88
submitted 11 months ago by pe1uca@lemmy.pe1uca.dev to c/privacy@lemmy.ml

The bug allows attackers to swipe data from a CPU's registers. [...] the exploit doesn't require physical hardware access and can be triggered by loading JavaScript on a malicious website.

3

I'm trying to install Munin on my Ubuntu VPS.
I used apt install munin-node munin to install it.

But I can't get the web interface to work.
Since I already have Pi-hole there, I'm using lighttpd with this config (for the most part the default from the documentation):

server.modules += ( "mod_alias" )
server.modules += ( "mod_fastcgi" )
server.modules += ( "mod_rewrite" )

$SERVER["socket"] == ":8467" {

alias.url += ( "/munin-static" => "/etc/munin/static" )
alias.url += ( "/munin"        => "/var/cache/munin/www/" )
server.document-root = "/var/cache/munin/www/"

fastcgi.server += ("/munin-cgi/munin-cgi-graph" =>
                   (( "socket"      => "/var/run/lighttpd/munin-cgi-graph.sock",
                      "bin-path"    => "/usr/lib/munin/cgi/munin-cgi-graph",
                      "check-local" => "disable",
                   )),
                  "/munin-cgi/munin-cgi-html" =>
                   (( "socket"      => "/var/run/lighttpd/munin-cgi-html.sock",
                      "bin-path"    => "/usr/lib/munin/cgi/munin-cgi-html",
                      "check-local" => "disable",
                   ))
                 )

url.rewrite-repeat-if-not-file += (
                   "/munin/(.*)" => "/munin-cgi/munin-cgi-html/$1",
                   "/munin-cgi/munin-cgi-html$" => "/munin-cgi/munin-cgi-html/",
                   )
}

When the service is restarted, the log shows this error, even though the files have 777 permissions:

(gw_backend.c.1404) invalid "bin-path" => "/usr/lib/munin/cgi/munin-cgi-graph" (check that file exists, is regular file, and is executable by lighttpd)
(gw_backend.c.1404) invalid "bin-path" => "/usr/lib/munin/cgi/munin-cgi-html" (check that file exists, is regular file, and is executable by lighttpd)

When I try to run the config directly with lighttpd -D -f /etc/munin/lighttpd.conf, the output is this:

(gw_backend.c.324) child exited: 1 unix:/var/run/lighttpd/munin-cgi-html.sock-3
(gw_backend.c.468) unlink /var/run/lighttpd/munin-cgi-html.sock-3 after connect failed: Connection refused
(gw_backend.c.324) child exited: 1 unix:/var/run/lighttpd/munin-cgi-graph.sock-0
(gw_backend.c.468) unlink /var/run/lighttpd/munin-cgi-graph.sock-0 after connect failed: Connection refused

Do you guys have any clue what might be happening?

[-] pe1uca@lemmy.pe1uca.dev 14 points 1 year ago

Oh wow, that's a lot!
Good to have something to read during commute.

[-] pe1uca@lemmy.pe1uca.dev 12 points 1 year ago

excessive resource use

What's the final size of the git folder?
What's the final time and CPU usage to process the different git commands?

[-] pe1uca@lemmy.pe1uca.dev 13 points 1 year ago

Lemmy instances don't know about communities on other instances until a user searches for them, so a bigger instance like lemmy.ml is more likely to already have each community you search for.
But an instance like mine won't have them unless I manually search for them so they get imported.

Also, my instance won't receive updates from a community to show in the "all" feed until I subscribe to it, so my "subscriptions" feed will be the same as "all".

[-] pe1uca@lemmy.pe1uca.dev 11 points 1 year ago

a $2 base subscription with an extra $1 for notifications

This, for me, is the stupid part: Reddit is forcing developers to paywall every feature.
Want the app to automatically refresh your subs each day? $0.50, please.
Want faster loading of each sub? $1 (Selig mentioned Apollo first pulls 20 posts and then 100 to improve loading, so 2 requests instead of 1).
You've reached the average number of calls: do you want to continue for $1?
Want mod tools? $2 (I'm guessing mods make more requests since they have to see more context about users, posts, and comments).
Is mod mail a separate request? If so: would you like it for $1 more?

And on and on with every new feature which requires more requests than normal.
