[-] kersplort@programming.dev 9 points 4 months ago* (last edited 4 months ago)

If you want to level up your game, find a new job, or grow into a new role, by all means take a course or training on your own time. All of the concerns that you listed are probably worth spending dedicated time to upskill on.

If you stay in this field for much longer, you're going to run into a lot of cases where the thing you've been doing is replaced with the New Thing. The New Thing will have a couple new ideas, but will also fundamentally handle the same concerns as your Old Thing, often in similar ways. Don't spend your free time chasing the New Thing without getting something out of it - getting paid, making a project that you wanted to make anyways, contributing to New Thing open source projects.

If you sink work into the New Thing without anyone willing to pay for it, that's fine, but it means that you might never get someone to pay for it. Most companies are more than willing to hire experienced Old Thing devs on New Thing jobs, and will give you some time to skill up.

[-] kersplort@programming.dev 6 points 11 months ago

My team just decided to make passing smokes a mandatory part of merging a PR. If the smokes don't pass on your branch, it doesn't merge to main. I'm somewhat conflicted - on one hand, we had frequent breaks in the smokes that developers didn't fix, including ones that represented real production issues. On the other, smokes can fail for no real reason and are time-consuming to run.

We use Playwright, running on GitHub Actions. The default free-tier runner has been awful, and we're moving to larger runners on the platform. We have a retry policy on any smokes that need to run in a step-by-step order, and we aggressively prune and remove smokes that frequently fail or don't test for real issues.
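
For illustration, the retry side of that looks roughly like the config below (the numbers and env var names are made up, not our real setup), and the step-by-step smokes get grouped with test.describe.configure({ mode: 'serial' }) so they run in order:

```typescript
// playwright.config.ts - a hedged sketch, not our exact configuration.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Retry flaky smokes a couple of times on CI before calling the run failed.
  retries: process.env.CI ? 2 : 0,
  reporter: [['list'], ['html', { open: 'never' }]],
  use: {
    // SMOKE_BASE_URL is an assumed env var pointing at the live environment under test.
    baseURL: process.env.SMOKE_BASE_URL,
    // Keep traces only when a retry was needed, to help debug the flaky ones.
    trace: 'on-first-retry',
  },
});
```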

34
submitted 11 months ago* (last edited 11 months ago) by kersplort@programming.dev to c/experienced_devs@programming.dev

End-to-end and smoke tests give a really valuable angle on what the app is doing and can warn you about failures before they happen. However, because they're exercising a live app and a live database over a live network, they can introduce a lot of flakiness. Beyond changes to the app itself, different data in the environment or other issues can cause a smoke test failure.

How do you handle the inherent flakiness of testing against a live app?

When do you run smokes? On every phoenix branch? Pre-prod? Prod only?

Who fixes the issues that the smokes find?

[-] kersplort@programming.dev 2 points 1 year ago

We use a little bit of property testing to test invariants with fuzzed data. Mutation testing seems like a neat inverse.
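
To make that concrete, a property test in this style looks roughly like the sketch below (using fast-check; the function and invariant are toy examples, not our actual code):

```typescript
// Hedged example of property testing an invariant against fuzzed input (fast-check).
// sortDescending is a made-up function, just here to show the shape of the test.
import fc from 'fast-check';

function sortDescending(xs: number[]): number[] {
  return [...xs].sort((a, b) => b - a);
}

// Invariant: same length as the input, and values never increase left to right.
fc.assert(
  fc.property(fc.array(fc.integer()), (xs) => {
    const out = sortDescending(xs);
    if (out.length !== xs.length) return false;
    return out.every((v, i) => i === 0 || out[i - 1] >= v);
  })
);
```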

[-] kersplort@programming.dev 4 points 1 year ago

XML would be great if it wasn't for the extended XML universe of namespaces and imports.

[-] kersplort@programming.dev 6 points 1 year ago

"It takes years" seems like the most reasonable alternative to forcing my coworkers to TDD or not merge code.

I think that getting people into the benefits of testing is something that's really worthwhile. Building out some of these test suites - especially the end-to-end tests - was a really eye-opening experience that gave me a lot of insight into the product. "Submit a test with your bugfix" is another good practice - capturing the error in a test the first time keeps the regression from creeping back in.
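
As a toy sketch of the "test with your bugfix" habit (Vitest syntax; the helper and the bug are invented for illustration):

```typescript
// Hedged example: a regression test checked in alongside the fix it describes.
// parsePrice is a hypothetical helper; the invented bug was dropping thousands separators.
import { describe, it, expect } from 'vitest';

function parsePrice(input: string): number {
  // The fix: strip thousands separators before parsing.
  return Number(input.replace(/,/g, ''));
}

describe('parsePrice', () => {
  // Named after the failure so it's obvious why the test exists.
  it('keeps the thousands part of formatted prices (regression)', () => {
    expect(parsePrice('1,000.50')).toBe(1000.5);
  });
});
```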

[-] kersplort@programming.dev 3 points 1 year ago

I've had some luck using AI to get over the hump of the first "does this component work" test - it's easy to have GPT spot the stuff that needs to be mocked and put in stub mocks. GPT is horrible at writing good tests, but often it's harder to write that first one than the meaningful tests.
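
The kind of scaffolding I mean looks roughly like this (Vitest plus Testing Library syntax; the component and the mocked module are hypothetical):

```typescript
// Hedged sketch of the "does this component even render" first test.
// UserCard and ./api are made-up names; the point is the stub mocks, not the assertion.
import { describe, it, expect, vi } from 'vitest';
import { render, screen } from '@testing-library/react';
import { UserCard } from './UserCard';

// Stub out the network-facing module so the component can mount in isolation.
vi.mock('./api', () => ({
  fetchUser: vi.fn().mockResolvedValue({ name: 'Test User' }),
}));

describe('UserCard', () => {
  it('renders without crashing', async () => {
    render(<UserCard userId="123" />);
    expect(await screen.findByText('Test User')).toBeTruthy();
  });
});
```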

43
submitted 1 year ago* (last edited 1 year ago) by kersplort@programming.dev to c/experienced_devs@programming.dev

I'm like a test unitarian. Unit tests? Great. Integration tests? Awesome. End to end tests? If you're into that kind of thing, go for it. Coverage of lines of code doesn't matter. Coverage of critical business functions does. I think TDD can be a cult, but writing software that way for a little bit is a good training exercise.

I'm a senior engineer at a small startup. We need to move fast, ship new stuff fast, and get things moving. We've got CICD running mocked unit tests, integration tests, and end to end tests, with patterns and tooling for each.

I have support from the CTO for getting more testing in, I'm able to use tests to cover bugs and regressions, and there's solid coverage on a few critical user-path features. However, I get resistance from the team when it comes to adding enough testing to prevent regressions going forward.

The resistance is usually along lines like:

  • You shouldn't have to refactor to test something
  • We shouldn't use mocks; only integration testing works
    • Repeat for test types N and M
  • We can't test yet, we're going to make changes soon

How can I convince the team that the tools available to them will help, will improve their productivity, and will cut down the time spent firefighting?

[-] kersplort@programming.dev 7 points 1 year ago

His manager at least had the decency to warn him ahead of time about the PIP. Still - it seems mostly about forcing him out of his remote position.

[-] kersplort@programming.dev 4 points 1 year ago

Cloud, and really any vendor-specific stuff, is tough to keep up with, both in terms of learning it and in terms of creating training materials. You're better off getting it straight from Amazon, Google, or other practitioners. See if you can find some smaller conferences in your area, and whether you can spend training budget on those.

Obviously $200 isn't much, but Coursera might be better bang for your buck than some other platforms. Learning core skills will help you level up, while a lot of Udemy and similar content will just keep you on the same track you would have been on.

29

What's something you've gotten into your CICD pipeline recently that you like?

I recently automated a little bot for our GitHub CICD. It runs a few tests that we care about but don't want to block deployment, and posts the results on the PR. It uses gh pr comment --edit-last so it isn't spamming the thread. It's been pretty helpful in automating some of the more annoying parts of code review.
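
Roughly, the bot boils down to something like the script below (a sketch only - the check command is made up, and it assumes gh is installed and authenticated in the Actions job):

```typescript
// Hedged sketch of a CI bot that posts non-blocking test results on the PR.
// Meant to run inside a GitHub Actions job; "extra-checks" is a hypothetical npm script.
import { execSync, spawnSync } from 'node:child_process';

// Run the nice-to-have checks, capturing output instead of failing the pipeline.
const result = spawnSync('npm', ['run', 'extra-checks'], { encoding: 'utf8' });
const status = result.status === 0 ? 'passed' : 'had failures (non-blocking)';

// Trim to the tail of the output so the comment stays a reasonable size.
const tail = (result.stdout ?? '').slice(-3000);
const body = `Extra checks ${status}\n\n${tail}`;

// --edit-last updates the bot's previous comment instead of adding a new one each run;
// --body-file - reads the comment body from stdin.
execSync('gh pr comment --edit-last --body-file -', { input: body });
```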

27
Site Stability (programming.dev)

The site's been down in the morning for the last couple days. Running a new server that gets attention is tough - do the admins for this site need anything from this community? Volunteer time? Money?

[-] kersplort@programming.dev 4 points 1 year ago* (last edited 1 year ago)

The overview is good, and the outline is good, but ultimately the problem is that the returns on big investments in DX are only really seen by companies at absolutely massive scale. There are far more companies that just burned through money getting a "nice" dev environment than there are companies that put an FTE or more into DX and saw worthwhile results.

It's worth some time for teams to think critically about their own tooling, but the trend towards DX as an end in and of itself has been a drag on many teams.

[-] kersplort@programming.dev 3 points 1 year ago

I still put them in gists, with no real tooling. I pull them in selectively when I get a new machine.

[-] kersplort@programming.dev 2 points 1 year ago* (last edited 1 year ago)

This is it. From another angle, setting clear boundaries on your time, delegating and trusting your team, and managing expectations are all powerful skills that need to be developed from the senior level up. Clearly knowing and articulating your limits lets you take on more valuable and meaningful work within those limits.

[-] kersplort@programming.dev 2 points 1 year ago

This is true, but it isn't good for OP's long-term prospects either. Dumping a pile of money into a hole while taking on legal risk means the losses are going to need to be made up somewhere.

