lysdexic

joined 2 years ago
[–] lysdexic@programming.dev 1 points 2 years ago (1 children)

> But they obviously don’t understand the whole complexity ahead of the project start, so making nuanced decisions is not possible. They’d have to arbitrarily pick an architecture sizing in between.

The single most important decisions are the external interfaces and the service-level agreements established with clients.

Once the external interface is set, managers have total control over what happens internally. If they choose to, they can repeatedly move back and forth peeling out and merging in microservices. That's actually one of the main selling points of microservices: once an API gateway is in place, they are completely free to work independently on independent services and even their replacements.

Microservices are first and foremost an organizational tool.

[–] lysdexic@programming.dev 1 points 2 years ago

> Microservices are not just about scaling and performance but it is a core advantage. To say they have “nothing” to do with it is outright false.

They have nothing to do with performance. You can improve performance with vertical scaling, which nowadays has a very high ceiling.

It's not a coincidence that startups are advised against going with microservices until they grow considerably. The growth in question is organizational, not traffic.

> Microservices are about modular design and decoupling units of code from each other.

Yes, but you're failing to understand that the bottleneck that's fixed by peeling off microservices is the human one faced by project managers. In fact, being forced to pay the microservices tax can and often does add performance penalties.

> The problem with this approach is that switching from vertical to horizontal is extremely hard if you didn’t plan for it from the start.

I think you're missing the point that, more often than not, you aren't going to need it.

In the rare cases you do, microservices are not a magic wand that fixes problems. The system requires architectural changes that go well beyond getting a process to run somewhere else.

[–] lysdexic@programming.dev 2 points 2 years ago

> Using linting to prevent coupling between modules can give you some of the benefits of micro services without going all in.

My point was that modularizing an application and decoupling components does not, by any means, give you any of the benefits of microservices.

The benefits of microservices are organizational and operational independence. Where do you see coupling between components playing a role in any of these traits?

[–] lysdexic@programming.dev 1 points 2 years ago (1 children)

What do you mean by "boring" ?

[–] lysdexic@programming.dev 5 points 2 years ago (2 children)

> Microservices are great if you have enough traffic that you can get an efficiency gain by independently scaling all those services. But if you aren’t deploying onto thousands of servers just to handle traffic volume, you probably don’t need 'em.

I don't think that's a valid take. Microservices have nothing to do with scaling or performance, at least for 99% of the cases out there. Microservices are a project- and team-management strategy. They are a way to peel specific areas of responsibility out of a large project, put together a team dedicated to each area of responsibility, and allow that team to fully own and be accountable for the whole development life cycle, especially operations.

Being able to horizontally scale a service is far lower in the priority queue, and is only required once you exhaust the ability to scale vertically.

[–] lysdexic@programming.dev 7 points 2 years ago* (last edited 2 years ago) (2 children)

> Someone in the thread mentioned that to get the benefits of micro services in a monolith, you can use a linting rule to prevent dependencies across modules

I don't think that makes any sense. The main benefit of microservices is organizational, more specifically how a single team gets to own all aspects of developing, managing, and operating a service.

Lower in priority are enabling regional deployments and improving reliability.

How are linting rules even expected to pull that off?

[–] lysdexic@programming.dev 7 points 2 years ago

Being against this specific proposal does not mean people are happy with the current state of things. Is it possible that this particular proposal is bad and does not address the issues? I mean, the first item of grievance complains about how StackOverflow curates content by removing duplicate questions and problems that cannot be reproduced, with vague complaints that this is unfriendly to newbies. Is this a reasonable complaint? I don't think so.

[–] lysdexic@programming.dev 10 points 2 years ago (1 children)

> You can’t both be a good meeting-place for experts and a good place for novices to get expert advice and an advertising venue.

I don't agree. There is no clear cutoff between what it means to be an expert and a novice. What content you're exposed to is the output of the service's support for user profiling and search. It is simply not possible to get rid of an important subset of your customer base without causing false positives and generating ill-will. Finally, we should keep in mind that yesterday's novice is today's expert.

[–] lysdexic@programming.dev 1 points 2 years ago* (last edited 2 years ago) (3 children)

> I don’t see why we don’t have a build system where you simply have to give a project folder with the name of the source file with the main() function, give the name of the output executable, the external dependencies to be called with gcc, and that’s it.

But we do. Check out CMake.

https://cmake.org/examples/

Originally, CMake was a higher-level abstraction over Make that generated makefiles from a high-level project description. Nowadays it supports other build backends as well, such as Ninja, Visual Studio, and Xcode.
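For reference, a minimal CMakeLists.txt along those lines is a handful of lines. The project name, source file, and dependency below are placeholders, not taken from the comment:

```cmake
# Minimal project: one executable built from the file that holds main().
cmake_minimum_required(VERSION 3.16)
project(hello LANGUAGES C)

# Name of the output executable, and the source file containing main().
add_executable(hello main.c)

# External dependencies, e.g. linking against the C math library.
target_link_libraries(hello PRIVATE m)
```

Running `cmake -B build && cmake --build build` then produces the executable, with CMake generating the makefiles (or Ninja files, Visual Studio projects, etc.) behind the scenes.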

[–] lysdexic@programming.dev 2 points 2 years ago

What do you mean by "build step"? For example, does running a bundler count as building?

[–] lysdexic@programming.dev 1 points 2 years ago (1 children)

It seems that neither std::indirect_value nor std::polymorphic_value made it into C++23, though. Is it worth adding them as external components just to have const qualification work through std::unique_ptr?

[–] lysdexic@programming.dev 3 points 2 years ago* (last edited 2 years ago) (1 children)

I don't think the article makes a case for shoehorning Git into each and every use case that goes beyond tracking changes to project files.

For example, both git-issue and git-bug are an awkward interface for tracking issues, whose main selling point is being Git-based, which is not much to start with. They focus on the persistence layer used to store ticket info, when that is both a solved problem and irrelevant to the problem domain. To top that off, Git's main selling points, its distributed nature and the ease of branching and merging, are not relevant to this problem domain either. The main value of issue tracking is following the overall progress of a project and auditing changes, and these tools offer a worse user experience than any of the tools they supposedly try to replace.

Given there are plenty of outstanding free tools that do a far better job at this in their free tier than any git-based alternative, I fail to see the real-world value of these projects.

If anyone is actually interested in a solution that bundles up revision control, issue tracking, and project management, they are far better off just onboarding onto tools such as Fossil.
