cx40

joined 1 year ago
[–] cx40@programming.dev 1 points 1 month ago* (last edited 1 month ago)

I'm not saying that there's a problem with doing things one way or another. Rather, I'm asking whether there's a problem with doing things differently that led to this design decision in Rust. I want to better understand how this language came to be.

[–] cx40@programming.dev 1 points 1 month ago (2 children)

The term “meta-programming” had me lost since I’m only familiar with that in reference to C++ templates (and Rust’s generics are more like templates).

Yes, like C++ templates and macros. The kind of code that generates new code before being run.

So to answer your question as to why there are macros, it’s because you need to generate code based on the input. A function call can’t do that.

You can design a language where you don't need to generate code to accomplish this. My question isn't why this is necessary in Rust. My question is why Rust was designed such that this was necessary.

Someone mentioned elsewhere that this allows for compile-time type safety. I'm still trying to wrap my head around how that works.
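A minimal sketch of the compile-time safety being described: `format!` and `println!` are macros, so the format string is parsed during compilation, and a mismatched placeholder is a compile error rather than a runtime failure (the names and values here are just illustrative):

```rust
fn main() {
    let x = 42;
    // The macro parses the format string at compile time and expands
    // into ordinary Rust code, so it can verify that every placeholder
    // has a matching argument of a compatible type.
    let s = format!("x = {}", x);
    assert_eq!(s, "x = 42");

    // These would fail to *compile*, not fail at runtime:
    // format!("x = {}");       // missing argument for the placeholder
    // format!("x = {}", x, x); // argument never used
    println!("{}", s);
}
```

A plain variadic function (like C's `printf`) can only discover these mismatches at runtime, if at all.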

[–] cx40@programming.dev 3 points 1 month ago (4 children)

I can see that. I'm coming in from the other extreme that is Python, where even the meta-programming is done in plain Python.

[–] cx40@programming.dev 1 points 1 month ago* (last edited 1 month ago) (7 children)

C++ was my first programming language. I remember the nightmare of dealing with dependencies and avoiding boost because it felt wrong to need a third-party library for basic features. The toolchain for Rust is very nice (not just compared to C++, but all other languages I've worked with) and has so far been a huge joy to work with. The language itself too. I'm just curious about why the language likes to expose more of its features through meta-programming rather than directly in the language itself. Things like println! and format! being macros instead of functions, or needing a bunch of #[derive(Debug,Default,Eq,PartialEq)] everywhere for things that other languages provide through regular code.

[–] cx40@programming.dev 6 points 1 month ago (10 children)

I'm not talking about what features are in the standard libraries vs third party libraries. I mean meta-programming as in the stuff that generates Rust code. Take console printing for example, we use a macro println! in Rust. Other languages provide an actual function (e.g. printf in C, System.out.println in Java, print in Python, etc). The code for my first project is also full of things like #[derive(Debug,Default,Eq,PartialEq)] to get features that I normally achieve through regular code in other languages. These things are still in the Rust standard library as I understand it.
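To make the derive point concrete, here is a rough sketch (the `Point` types are hypothetical) of what `#[derive(...)]` saves you from writing by hand: the attribute is a macro that reads the struct definition at compile time and generates the trait impls for you.

```rust
// The derive attribute generates Debug, Default, and PartialEq impls
// from the struct definition at compile time.
#[derive(Debug, Default, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

// Roughly the code the PartialEq derive generates, written manually
// for a second type so both versions can coexist:
struct ManualPoint {
    x: i32,
    y: i32,
}

impl PartialEq for ManualPoint {
    fn eq(&self, other: &Self) -> bool {
        self.x == other.x && self.y == other.y
    }
}

fn main() {
    assert_eq!(Point::default(), Point { x: 0, y: 0 });
    assert!(ManualPoint { x: 1, y: 2 } == ManualPoint { x: 1, y: 2 });
    println!("{:?}", Point { x: 1, y: 2 });
}
```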

-1
submitted 1 month ago* (last edited 1 month ago) by cx40@programming.dev to c/rust@programming.dev
 

Is it just me, or does Rust feel much more bare-bones than other languages? I just started learning it recently and this is the one thing that stood out to me, much more so than the memory management business. A lot of things that would normally be part of the language have to be achieved through meta-programming in Rust.

Is this a deliberate design choice? What do we gain from this setup?


Edits:

  1. Somehow, this question is being interpreted as a complaint. It's not a complaint. As a user, I don't care how the language is designed as long as it has a good user experience, but the curious part of my mind always wants to know why things are the way they are. Maybe another way to phrase my question: Is this decision to rely more on meta-programming responsible for some of the good UX we get in Rust? And if so, how?
  2. I'm using meta-programming to mean code that generates code in the original language. So if I'm programming in Rust, that would be code that generates more Rust code. This excludes compilation where Rust gets converted into assembly or any other intermediate representation.
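For anyone unsure what I mean by code generating code, here's a small made-up example using a declarative macro. At compile time the macro expands into new Rust code, one function per invocation, which a plain function call couldn't do:

```rust
// A declarative macro: each invocation expands into a new function
// definition at compile time, for whatever name and type you pass in.
macro_rules! make_square_fn {
    ($name:ident, $t:ty) => {
        fn $name(x: $t) -> $t {
            x * x
        }
    };
}

make_square_fn!(square_i32, i32);
make_square_fn!(square_f64, f64);

fn main() {
    assert_eq!(square_i32(5), 25);
    assert_eq!(square_f64(1.5), 2.25);
}
```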
[–] cx40@programming.dev 1 points 9 months ago

Thanks, that's a good start.

The bigger question for me is whether there's more to it than privacy and blurring out faces.

 

This is about Panoramax and not OSM, but I figured a local OSM community is a more appropriate place to ask than the one general Panoramax community.

I recently got myself a 360 camera and I'm looking into mapping out parts of my city and self-hosting the imagery through Panoramax. One of the requirements for federation (and I guess for making this data public at all) is that we follow any local laws surrounding publishing such data. Does anyone know where I can find information on what these local laws might be? Is it sufficient to just blur out faces or is there more to it? I'm in Montreal if that's relevant, though I do travel to different cities from time to time and might contribute from other places.

[–] cx40@programming.dev 2 points 10 months ago

That's also to make programming easier. Different programmers have different needs.

[–] cx40@programming.dev 1 points 10 months ago (2 children)

But the main benefit of static typing is in making the programming part easier. What do you gain from translating dynamically typed languages into a statically typed language?

[–] cx40@programming.dev 4 points 10 months ago

asked questions that made educators interpret that I enjoyed bending the logic of what they were teaching.

I had this problem too but mainly for math. I'd do well in classes and tests, but the material just didn't make sense to me. It wasn't until I studied real analysis that everything started to click.

[–] cx40@programming.dev 7 points 10 months ago (1 children)

A trick I've employed is to pretend to believe in something completely different. If it says "no, you're wrong" and goes on to tell me what I actually believe, then it's a good indicator that I might be on the right path.

[–] cx40@programming.dev 1 points 10 months ago

Tabs get in the way and force you to actually address them instead of ignoring them. In theory.

[–] cx40@programming.dev 1 points 10 months ago (1 children)

Do you know if there's a similar extension that allows you to export/import the tabs in some text format rather than saving to bookmarks? I'm currently using Tab Session Manager, but it takes way too many steps to accomplish this.

 

SnapRAID doesn't compute the parity in real time, so there's this window between making a change to the data and syncing where your data isn't protected. The docs say

Here’s an example, you acquire a file and save it to disk called ‘BestMovieEver.mkv’. This file sits on disk and is immediately available as usual but until you run the parity sync the file is unprotected. This means if in between your download and a parity sync and you were to experience a drive failure, that file would be unrecoverable.

Which implies that the only data at risk is the data that's been changed, but that doesn't line up with my understanding of how parity works.

Say we have three disks that each store 1 bit of information, plus a parity drive: data 101, parity 0. If we modify the data in the first disk (data 001, parity 0), then the parity is out of sync. Say we now lose disk 2 (data 0?1, parity 0). How does it then recover that data? We're in an inconsistent state where the remaining data tells us that drive 2 used to hold 0^1^0=1 when it actually held a 0. So doesn't that mean that between any modification and a sync operation, all your data in that disk region is at risk, not just the changed data? Does SnapRAID do anything special to handle this?
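The scenario above, simulated in a few lines of Rust (the `recover` helper is just an illustration of XOR-based reconstruction, not anything SnapRAID-specific):

```rust
// XOR-reconstruct a lost disk from the surviving disks and the parity.
fn recover(surviving: &[u8], parity: u8) -> u8 {
    surviving.iter().fold(parity, |acc, &b| acc ^ b)
}

fn main() {
    // In-sync state: data 1,0,1 with parity 1^0^1 = 0.
    let parity = 1u8 ^ 0 ^ 1;
    assert_eq!(parity, 0);

    // Disk 1 is modified (1 -> 0) but the parity is NOT re-synced.
    let data = [0u8, 0, 1];

    // Disk 2 fails. Reconstructing it from disks 1, 3 and the stale parity:
    let rebuilt = recover(&[data[0], data[2]], parity);

    // The rebuilt value is 1, but disk 2 actually held 0: the *untouched*
    // disk is reconstructed incorrectly because of the unrelated write.
    assert_eq!(rebuilt, 1);
}
```

So with plain XOR parity, a stale parity block does corrupt recovery for every disk sharing that stripe, which is exactly the question.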
