EnEnCode

joined 11 months ago
[–] EnEnCode@programming.dev 1 points 2 hours ago

Nice job! I initially thought about using reduced row echelon form, but the matrices all had infinitely many real solutions, so I couldn't figure out how to proceed. Instead, I scanned Wikipedia for a bit, found the two-phase simplex algorithm, spent two days getting it to work on paper, and another day implementing it.

And the reward? I'm not even done, because ~10% of the systems have non-integer optima, so I still need to use cutting planes to add extra constraints that force integer optima.
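
In case it helps anyone, the cutting-plane step itself is tiny; here's a minimal stand-alone sketch (not my actual solver, and the numbers are made up) of deriving a Gomory fractional cut from a single simplex-tableau row:

// Sketch only: given a tableau row  x_B + sum_j a_j * x_j = b  with a
// fractional right-hand side b, the Gomory fractional cut is
//     sum_j frac(a_j) * x_j >= frac(b),
// which every integer-feasible point satisfies but the current LP optimum violates.
fn gomory_cut(row: &[f64], rhs: f64) -> (Vec<f64>, f64) {
    let frac = |v: f64| v - v.floor();
    (row.iter().map(|&a| frac(a)).collect(), frac(rhs))
}

fn main() {
    // Made-up row: 2.5*x2 + 0.75*x3 = 3.25  ->  cut: 0.5*x2 + 0.75*x3 >= 0.25
    let (coeffs, b) = gomory_cut(&[2.5, 0.75], 3.25);
    println!("cut: {:?} >= {}", coeffs, b);
}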

[–] EnEnCode@programming.dev 2 points 2 days ago (1 children)

Good luck! If anything, yesterday's problem is motivating (if not challenging) because I'm trying to use it to learn the simplex algorithm (off mediocre online tutorials—doubly so considering I think the problem needs dual simplex [pun intended]). I've spent far more time with pencil and paper trying to understand the algorithm than actually trying to write code for it.

[–] EnEnCode@programming.dev 2 points 2 days ago (1 children)

If you can explain why it works, not calculating half the paths isn't a hack—it's an optimization. ;)

[–] EnEnCode@programming.dev 1 points 2 days ago

Rust

Somehow got the right answer for part 1 iteratively, but the approach wasn't actually correct at all, so I switched to the standard recursion + memoisation. Gotta love just chucking an iterator into the value type of a HashMap and going BRRRR (1.2 ms).

solution

use std::collections::HashMap;
use std::str::SplitWhitespace;

pub fn day11(input: &str) -> (u64, u64) {
    let array = input.trim().lines();
    let mut conns = HashMap::new();
    for line in array {
        let mut iter = line.split_whitespace();
        conns.insert(&iter.next().unwrap()[0..3], iter);
    }

    fn find_path<'a>(
        start: &'a str,
        end: &'a str,
        conns: &HashMap<&'a str, SplitWhitespace<'a>>,
        devices: &mut HashMap<&'a str, u64>,
    ) -> u64 {
        let mut sum = 0;
        let Some(list) = conns.get(start) else {
            return 0;
        };
        for i in list.clone() {
            if i == end {
                sum += 1;
            } else if let Some(precomp) = devices.get(i) {
                sum += precomp;
            } else {
                sum += find_path(i, end, conns, devices);
            }
        }
        devices.insert(start, sum);
        sum
    }

    let part1 = find_path("you", "out", &conns, &mut HashMap::new());
    // If dac -> fft and fft -> dac are both non-zero, then there are loops. That makes an
    // infinite number of paths so the question posed would be meaningless.
    let dac_to_fft = find_path("dac", "fft", &conns, &mut HashMap::new());
    let part2 = if dac_to_fft == 0 {
        find_path("svr", "fft", &conns, &mut HashMap::new())
            * find_path("fft", "dac", &conns, &mut HashMap::new())
            * find_path("dac", "out", &conns, &mut HashMap::new())
    } else {
        find_path("svr", "dac", &conns, &mut HashMap::new())
            * dac_to_fft
            * find_path("fft", "out", &conns, &mut HashMap::new())
    };
    (part1, part2)
}

[–] EnEnCode@programming.dev 3 points 4 days ago* (last edited 4 days ago)

Rust

The problem today just flustered me. Sometimes I'm not convinced I've gotten any better at programming since 2022 (yes, I still used that exact hack of a combine_ranges for this problem). Runs on my ThinkPad P1 Gen 2 in 180.9 ± 0.9 ms on external power (or 726.8 ± 2.1 ms on battery; honestly I can't remember how I configured performance/battery saver lol).

solution

pub fn day09(input: &str) -> (u64, u64) {
    let mut part1 = 0;
    let mut part2 = 0;
    let mut squares = Vec::new();
    for line in input.trim().lines() {
        let (x, y) = line.split_once(',').unwrap();
        squares.push([x.parse::<u64>().unwrap(), y.parse::<u64>().unwrap()]);
    }

    let mut chunks = squares.chunks_exact(2).peekable();
    let mut lines = Vec::new();
    let peek = chunks.peek().unwrap();
    // dir is true if scanning vertically, false if horizontally
    let dir = peek[0][1] == peek[1][1];
    if !dir {
        debug_assert_eq!(peek[0][0], peek[1][0]);
    }

    // Each line is a tuple where .0 is the index in the scan direction and .1 is the pair of end
    // points for a line perpendicular to the scan direction
    for line in chunks {
        lines.push((
            line[0][dir as usize],
            [line[0][!dir as usize], line[1][!dir as usize]],
        ));
    }
    lines.sort();

    // rot is true if field 0 being less than field 1 enters the shape, false for exiting
    let rot = lines[0].1[0] < lines[0].1[1];

    for idx_1 in 0..squares.len() - 1 {
        'o: for idx_2 in (idx_1 + 1)..squares.len() {
            let s1 = squares[idx_1];
            let s2 = squares[idx_2];
            let area = (s1[0].abs_diff(s2[0]) + 1) * (s1[1].abs_diff(s2[1]) + 1);
            part1 = std::cmp::max(part1, area);

            if area <= part2 {
                continue;
            }
            // Think of "up" as the direction the scan line travels
            let floor = std::cmp::max(s1[dir as usize], s2[dir as usize]);
            let ceil = std::cmp::min(s1[dir as usize], s2[dir as usize]);
            let left_wall = std::cmp::min(s1[!dir as usize], s2[!dir as usize]);
            let right_wall = std::cmp::max(s1[!dir as usize], s2[!dir as usize]);
            let floor_idx = lines.binary_search_by_key(&floor, |(l, _)| *l).unwrap();
            let ceil_idx = lines.binary_search_by_key(&ceil, |(l, _)| *l).unwrap();
  
            // If a line contracts the valid range, check if that contraction intersects the walls,
            // since that necessitates uncolored tiles in the bounding box
            let skip = |line: [u64; 2]| {
                if rot != (line[0] < line[1]) {
                    return (left_wall..right_wall).contains(&line[0])
                        || (left_wall..right_wall).contains(&line[1]);
                }
                false
            };

            for line in &lines[ceil_idx..floor_idx] {
                if skip(line.1) {
                    continue 'o;
                }
            }
            let mut combo_ranges = Vec::new();
            // This `rev` is needed for correctness, although the full input does not seem to care
            // It actually hurts performance a lot :(
            for line in lines[0..=ceil_idx].iter().rev() {
                if skip(line.1) {
                    continue 'o;
                }
                if rot {
                    combo_ranges.push(line.1);
                } else {
                    combo_ranges.push([line.1[1], line.1[0]]);
                }
                combo_ranges = combine_ranges(combo_ranges);
                // If overlapping the target range results in no change of range, then the target
                // range is fully covered and this rectangle is valid
                let mut test_range = combo_ranges.clone();
                test_range.push([left_wall, right_wall]);
                if combo_ranges == combine_ranges(test_range) {
                    part2 = area;
                    continue 'o;
                }
            }
        }
    }
    (part1, part2)
}

[–] EnEnCode@programming.dev 2 points 5 days ago

I was the opposite. I stopped using hyprland because I found it utterly broken (Ctrl-X rant here). Didn't find out about the community until after I left. User of i3, sway, and niri. Thanks to Lemmy for first mentioning niri to me. :)

[–] EnEnCode@programming.dev 2 points 1 week ago

Well, I guess I've decided to do AoC this year. Trivial problem, though I shamelessly stole some range-merging code from AoC 2022 (back when I was still well within the learning phase of Rust). If you can't look at old code and wonder WTF you were thinking, have you really gotten any better?

Solution (mostly uninteresting)

pub fn day05(input: &str) -> (u64, u64) {
    let mut part1 = 0;
    let mut part2 = 0;
    let mut ranges = Vec::new();
    let mut lines = input.trim().lines();
    for line in lines.by_ref() {
        if line.is_empty() {
            break;
        }
        let range = line
            .split_once('-')
            .map(|(l, h)| [l.parse::<u64>().unwrap(), h.parse::<u64>().unwrap()])
            .unwrap();
        ranges.push(range);
    }
    let ranges = combine_ranges(ranges);
    for idx in lines {
        for r in ranges.iter() {
            if (r[0]..=r[1]).contains(&idx.parse::<u64>().unwrap()) {
                part1 += 1;
                break;
            }
        }
    }
    for r in ranges {
        part2 += r[1] - r[0] + 1;
    }
    (part1, part2)
}

combine_ranges WARNING: Hideous, triggers Clippy, not blazingly fast

fn combine_ranges(ranges: Vec<[u64; 2]>) -> Vec<[u64; 2]> {
    let mut temp_ranges = ranges;
    let mut comp_ranges: Vec<[u64; 2]> = Vec::new();
    let mut run_again: bool = true;
    while run_again {
        run_again = false;
        comp_ranges = Vec::new();
        'outer: for i in 0..temp_ranges.len() {
            for j in 0..comp_ranges.len() {
                if temp_ranges[i][0] <= comp_ranges[j][0] && comp_ranges[j][1] <= temp_ranges[i][1]
                {
                    //temp covers all of comp
                    comp_ranges[j][0] = temp_ranges[i][0];
                    comp_ranges[j][1] = temp_ranges[i][1];
                    run_again = true;
                    continue 'outer;
                } else if comp_ranges[j][0] <= temp_ranges[i][0]
                    && temp_ranges[i][1] <= comp_ranges[j][1]
                {
                    //comp covers all of temp
                    run_again = true;
                    continue 'outer;
                } else if temp_ranges[i][0] <= comp_ranges[j][0]
                    && comp_ranges[j][0] <= temp_ranges[i][1]
                    && temp_ranges[i][1] <= comp_ranges[j][1]
                {
                    //grow left
                    comp_ranges[j][0] = temp_ranges[i][0];
                    run_again = true;
                    continue 'outer;
                } else if comp_ranges[j][0] <= temp_ranges[i][0]
                    && temp_ranges[i][0] <= comp_ranges[j][1]
                    && comp_ranges[j][1] <= temp_ranges[i][1]
                {
                    //grow right
                    comp_ranges[j][1] = temp_ranges[i][1];
                    run_again = true;
                    continue 'outer;
                }
            }
            comp_ranges.push(temp_ranges[i]);
        }
        temp_ranges = comp_ranges.clone();
    }

    comp_ranges
}
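
For my own future reference, a sort-and-sweep rewrite is much shorter. This sketch merges only overlapping ranges, like the version above, but returns them sorted by start rather than in first-seen order:

fn combine_ranges_sweep(mut ranges: Vec<[u64; 2]>) -> Vec<[u64; 2]> {
    // Sort by start, then fold each range into the previous one when they overlap.
    ranges.sort_unstable();
    let mut merged: Vec<[u64; 2]> = Vec::with_capacity(ranges.len());
    for r in ranges {
        match merged.last_mut() {
            // Overlaps the previous range: extend it instead of pushing a new one.
            Some(last) if r[0] <= last[1] => last[1] = last[1].max(r[1]),
            _ => merged.push(r),
        }
    }
    merged
}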

[–] EnEnCode@programming.dev 3 points 2 weeks ago (1 children)

I wasn't even planning on it this year (I just have a number of hobby projects that are "unblocked", I'm not burnt out, and I could switch projects if I were). But if it's only 12 days... we'll see.

[–] EnEnCode@programming.dev 7 points 1 month ago

Better than I could have ever put it. But to add my own thoughts, sudo-rs does not gain value just by being in Rust.

  1. It's a leaner sudo with compatibility for common flags while intentionally not implementing niche/legacy options, including the one used by CVE-2025-32463, though if I did not need any of those flags at all, I would just use opendoas (and in the past, I have).
  2. The project contains extensive testing, and its harness is versatile enough to also run against the OG sudo, where it has caught regressions. Because the rewrite wants parity with sudo's behavior, it has improved the original sudo; you can gain its benefits even if you won't/don't/can't run sudo-rs. This is the main reason uutils hasn't convinced me of its worth yet.
  3. Having more eyes on sudo is just good. By translating it, they have to understand many of sudo's poorly documented idiosyncrasies, review all of its relevant code, and consider potential edge cases. That's basically an audit.

[–] EnEnCode@programming.dev 1 points 11 months ago (1 children)

Rust

Definitely not Aho–Corasick cool or genius, but it does get ~11 ms on both parts. Gains over the other Rust solves come partly from a less naïve approach to towel checking and partly from getting it to work with zero String allocations, which involves both an instructive lifetime annotation and the clever use of a niche standard library function.

Code

use std::collections::HashMap;

pub fn day19(input: &str) -> (u64, u64) {
    fn recur_helper<'a>(pat: &'a str, towels: &[&str], map: &mut HashMap<&'a str, u64>) -> u64 {
        let mut hits = 0;
        let mut towel_subset = towels;
        for l in 1..=pat.len() {
            let test_pat = &pat[0..l];
            match towel_subset.binary_search(&test_pat) {
                Ok(idx) => {
                    if pat[l..].is_empty() {
                        return hits + 1;
                    } else if let Some(&v) = map.get(&pat[l..]) {
                        hits += v;
                    } else {
                        let v = recur_helper(&pat[l..], towels, map);
                        map.insert(&pat[l..], v);
                        hits += v;
                    }

                    towel_subset = &towel_subset[idx..];
                    let end = towel_subset.partition_point(|&x| x.starts_with(test_pat));
                    towel_subset = &towel_subset[..end];
                    if towel_subset.is_empty() {
                        return hits;
                    }
                }
                Err(idx) => {
                    towel_subset = &towel_subset[idx..];
                    let end = towel_subset.partition_point(|&x| x.starts_with(test_pat));
                    towel_subset = &towel_subset[..end];
                    if towel_subset.is_empty() {
                        return hits;
                    }
                }
            }
        }
        hits
    }
    let mut part1 = 0;
    let mut part2 = 0;
    let mut lines = input.lines();
    let mut towels = lines.next().unwrap().split(", ").collect::<Vec<_>>();
    towels.sort();
    let mut map = HashMap::new();
    for pat in lines.skip(1) {
        let tmp = recur_helper(pat, &towels, &mut map);
        part2 += tmp;
        if tmp > 0 {
            part1 += 1;
        }
    }
    (part1, part2)
}

[–] EnEnCode@programming.dev 2 points 11 months ago (1 children)

Day 19 is definitely my proudest solve (though learning about Aho-Corasick dampens it a little bit 😅), with some clever ideas to both check towels more efficiently than the naïve approach and avoid a bunch of heap allocation by creating no Strings. Figuring out how to binary search without creating the target object, using only knowledge of its properties and of the list, meant I got to use a pretty niche library function and took my solution from 25 to 11 ms.
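
To illustrate just that trick on its own (a stand-alone sketch, not the solution code): slice::partition_point can narrow a sorted pattern list to the entries sharing a prefix without allocating any Strings.

/// Return the sub-slice of a sorted pattern list whose entries start with
/// `prefix`, using only binary searches and no allocation.
fn with_prefix<'a>(sorted: &'a [&'a str], prefix: &str) -> &'a [&'a str] {
    // First index whose entry is >= prefix (anything starting with the prefix
    // compares >= to it lexicographically).
    let start = sorted.partition_point(|&s| s < prefix);
    // Within that tail, the prefix-matching entries form a contiguous run.
    let end = start + sorted[start..].partition_point(|&s| s.starts_with(prefix));
    &sorted[start..end]
}

fn main() {
    let towels = ["b", "br", "bwu", "g", "gb", "r", "rb", "wr"];
    assert_eq!(with_prefix(&towels, "b"), ["b", "br", "bwu"]);
    assert_eq!(with_prefix(&towels, "gb"), ["gb"]);
}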

[–] EnEnCode@programming.dev 5 points 11 months ago (1 children)

Congrats to everyone! Not my first year, though it is the first year I at least attempted every problem. Burnout definitely got me by day 20 though, ha ha ha... There was a lot of ugly (readability took a backseat to "code writing speed"), a lot of bad (don't ask how long the test suite has to run for), but the occasional gem of good (my day 19 solution gave me some of the most dopamine I've ever gotten from just writing code; I usually only get that much when something actually gets merged). I learned a little through the problems themselves, but I learned a lot about writing macro_rules macros by creating a test suite generator and a benchmark generator (sketched below). I also picked up some useful Git knowledge like --allow-unrelated-histories, interactive rebasing, --name-only, and using the reflog to help recover data (don't ask what happened on day 23). This year was a personal success. Till next year!
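
For anyone curious what the test suite generator idea looks like, here's a minimal sketch (not my actual macro; the stand-in solver is only there so it compiles on its own):

// Each invocation expands into a #[test] that runs a solver on an input
// string and checks the (part1, part2) pair against known answers.
macro_rules! aoc_test {
    ($name:ident, $solver:path, $input:expr, $expected:expr) => {
        #[test]
        fn $name() {
            assert_eq!($solver($input), $expected);
        }
    };
}

// Stand-in solver so the sketch is self-contained.
fn sum_lines(input: &str) -> (u64, u64) {
    let sum: u64 = input.lines().map(|l| l.parse::<u64>().unwrap()).sum();
    (sum, sum * 2)
}

aoc_test!(sum_lines_example, sum_lines, "1\n2\n3", (6, 12));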
