KingRandomGuy

joined 2 years ago
[–] KingRandomGuy@lemmy.world 6 points 1 month ago* (last edited 1 month ago)

What info have you heard about Fenghua 3? I'd last read that it's not strictly an AI accelerator but can actually do graphics tasks, which is neat. Would make it more of a competitor to a professional workstation card like an RTX PRO 6000.

I'm most curious about their CUDA compatibility claim. I would expect that to cause a pretty significant performance hit, since high-performance CUDA kernels generally need to be specialized to the individual GPU (an H100 kernel will look quite different from a 4090 kernel, for example). But if it can achieve H100 performance in spite of that, that'd be cool.
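
As a rough illustration of the most basic form of that specialization (real kernels diverge much further: different tiling, shared-memory pipelines, tensor-core instructions), here's a hedged Numba sketch that just picks a launch configuration by compute capability. The tile values are made up, not benchmarked.

```python
# Toy sketch: same logical op, per-architecture launch parameters.
# The tile sizes below are illustrative placeholders, not tuned values.
from numba import cuda
import numpy as np

TILE_BY_CC = {
    (8, 0): 128,   # A100-class (assumed)
    (8, 9): 64,    # 4090-class (assumed)
    (9, 0): 256,   # H100-class (assumed)
}

@cuda.jit
def scale(out, x, alpha):
    i = cuda.grid(1)
    if i < x.size:
        out[i] = alpha * x[i]

dev = cuda.get_current_device()
tile = TILE_BY_CC.get(dev.compute_capability, 128)  # fall back to a default

x = cuda.to_device(np.arange(1_000_000, dtype=np.float32))
out = cuda.device_array_like(x)
scale[(x.size + tile - 1) // tile, tile](out, x, 2.0)
```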

[–] KingRandomGuy@lemmy.world 3 points 1 month ago

Like others have mentioned, the spider (the support vanes) and the secondary do shadow some light that would otherwise reach the primary. They also introduce diffraction artifacts: the view ends up convolved with the telescope's point spread function, which is essentially the squared magnitude of the Fourier transform of the aperture. That's why Hubble images show cross-shaped stars; that's the diffraction pattern of its four-vane spider.
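
If anyone wants to see it for themselves, here's a minimal numpy sketch (aperture sizes are arbitrary): build a circular pupil with a central obstruction and four vanes, take the squared FFT, and the cross-shaped spikes show up.

```python
# Simulate the diffraction pattern (PSF) of a circular aperture with a
# central obstruction and a 4-vane spider. Dimensions are arbitrary.
import numpy as np
import matplotlib.pyplot as plt

N = 1024
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y)

pupil = (r < 200).astype(float)   # primary mirror
pupil[r < 60] = 0.0               # secondary obstruction
pupil[np.abs(x) < 2] = 0.0        # vertical spider vanes
pupil[np.abs(y) < 2] = 0.0        # horizontal spider vanes

psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2

plt.imshow(np.log1p(psf), cmap="inferno")  # log scale makes the faint spikes visible
plt.title("Cross-shaped diffraction spikes from a 4-vane spider")
plt.show()
```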

[–] KingRandomGuy@lemmy.world 26 points 1 month ago* (last edited 1 month ago)

> Every time I see a headline like this I’m reminded of the time I heard someone describe the modern state of AI research as equivalent to the practice of alchemy.

Not sure if you're referencing the same thing, but this actually came from a presentation at NeurIPS 2017 (the largest and most prestigious machine learning/AI conference) for the "Test of Time Award." The presentation is available here for anyone interested. It's a good watch. The presenter/awardee, Ali Rahimi, talks about how over time, rigor and fundamental knowledge in the field of machine learning have taken a backseat to empirical work that we continue to build upon, yet don't fully understand.

Some of that sentiment is definitely still true today, and unfortunately, understanding the fundamentals is only going to get harder as empirical methods get more complex. It's much easier to iterate on empirical things by just throwing more compute at a problem than it is to analyze something mathematically.

[–] KingRandomGuy@lemmy.world 2 points 2 months ago (1 children)

I do research in 3D computer vision, and in general, depth from cameras (even multi-view) tends to be much noisier than LiDAR. LiDAR has the advantage of giving explicit depth, whereas with multi-view cameras you need to compute it, which has its fair share of failure modes. I think that's what the above user is getting at when they said Waymo actually has depth sensing.
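
As a back-of-the-envelope example (the numbers below are made up): for a rectified stereo pair, depth is Z = f·B/d, so a fixed matching error in the disparity d translates to a depth error that grows roughly quadratically with distance.

```python
# Toy illustration of stereo depth noise: Z = f * B / d, so a constant
# disparity error hurts far more at long range. All numbers are assumptions.
f_px = 1000.0       # focal length in pixels
baseline_m = 0.3    # stereo baseline in meters
disp_err_px = 0.5   # half-pixel matching error

for depth_m in (5, 20, 50):
    disparity = f_px * baseline_m / depth_m
    depth_noisy = f_px * baseline_m / (disparity - disp_err_px)
    print(f"Z = {depth_m:>2} m -> disparity {disparity:5.1f} px, "
          f"a {disp_err_px} px error shifts depth to {depth_noisy:.1f} m")
```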

This isn't to say that Tesla's approach can't work at all, just that Waymo's is more grounded. There are reasons to avoid LiDAR (primarily cost; a good LiDAR sensor is very expensive), but if you can fit LiDAR into your stack, it'll likely help a bit with reliability.

[–] KingRandomGuy@lemmy.world 2 points 3 months ago

Very few high-end cameras actually have an option to store photos internally, unfortunately. I think some super-high-end Hasselblads have an internal SSD (it would be cool if more manufacturers did this), but most can't write photos without a card.

[–] KingRandomGuy@lemmy.world 4 points 3 months ago (1 children)

Ideally you'd have extras too! Normally if I'm taking the card or battery out of my camera, I'm leaving the camera safely in my bag (which is also not being moved), so there's no real risk of damage.

[–] KingRandomGuy@lemmy.world 20 points 3 months ago (5 children)

Mildly useful tip: when you take a card or battery out of your camera, leave the door open until you put it back in. That way you'll know if you forgot to put one of them back into the camera. I do this and it's saved me a few times.

[–] KingRandomGuy@lemmy.world 2 points 4 months ago

Yeah I agree on these fronts. The hardware might be good but software frameworks need to support it, which historically has been very hit or miss.

[–] KingRandomGuy@lemmy.world 3 points 4 months ago (2 children)

Depends strongly on what ops the NPU supports, IMO. I don't do any local gen AI stuff, but I do use ML tools for image processing in photography (e.g. Lightroom's Denoise feature, GraXpert denoise and gradient extraction for astrophotography). These tools are horribly slow on CPU. If the NPU supports the right software frameworks and data types, then it might be nice here.
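
If you want to sanity-check what you'd actually get, something like this ONNX Runtime sketch is one way to do it. The provider names and the model path below are placeholders; the right execution provider depends on the NPU vendor and the runtime build.

```python
# Hedged sketch: an NPU only helps if your runtime has an execution provider
# for it and your model's ops/dtypes are supported; otherwise it falls back.
import onnxruntime as ort

print("Available providers:", ort.get_available_providers())

# Example preference order; actual provider names depend on your hardware.
preferred = ["QNNExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]

# "denoise.onnx" is a placeholder path, not a real model from any tool.
session = ort.InferenceSession("denoise.onnx", providers=providers)
print("Actually running on:", session.get_providers())
```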

[–] KingRandomGuy@lemmy.world 1 points 4 months ago

You're correct about all of this, but it's way easier to press print than to machine a part from stock. I do some machining as well (I don't own the machines, but I'm trained on the mill, lathe, and waterjet in our shop), so most of the time, if I can get away with a 3D-printed part, it's worth it for the time savings alone. Plus, sometimes the easiest or optimal geometry to design is not something that can be machined, but it can be printed.

It's only in specific circumstances that the basic filaments fall short, like creep and heat resistance, irrespective of print parameters. ASA and PET-CF work well in most of these spots, so I don't do anything more exotic.

[–] KingRandomGuy@lemmy.world 3 points 4 months ago

I'll need to give this a read, but I'm not super sure what's novel here. The core idea sounds a lot like GaussianImage (ECCV '24), in which they basically do 3DGS, except with 2D Gaussians, to fit an image with fewer parameters than implicit neural methods. Thanks for the breakdown!
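
For intuition, the core idea can be sketched as a toy PyTorch loop. To be clear, this is not the paper's (or GaussianImage's) actual method; real systems use anisotropic covariances, alpha compositing, and proper rasterization, and the sizes/counts below are arbitrary.

```python
# Toy version of "fit an image with a sum of 2D Gaussians" via gradient descent.
import torch

img = torch.rand(64, 64, 3)                 # stand-in target image
N = 300                                     # number of Gaussians (arbitrary)

ys, xs = torch.meshgrid(torch.linspace(0, 1, 64),
                        torch.linspace(0, 1, 64), indexing="ij")
grid = torch.stack([xs, ys], dim=-1)        # (64, 64, 2) pixel coordinates

mu = torch.rand(N, 2, requires_grad=True)                 # centers
log_sigma = torch.full((N,), -3.0, requires_grad=True)    # isotropic scales
color = torch.rand(N, 3, requires_grad=True)              # per-Gaussian RGB

opt = torch.optim.Adam([mu, log_sigma, color], lr=1e-2)
for step in range(500):
    d2 = ((grid[None] - mu[:, None, None, :]) ** 2).sum(-1)          # (N, 64, 64)
    w = torch.exp(-0.5 * d2 / torch.exp(log_sigma)[:, None, None] ** 2)
    render = (w[..., None] * color[:, None, None, :]).sum(0)         # (64, 64, 3)
    loss = ((render - img) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final MSE:", loss.item())
```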

[–] KingRandomGuy@lemmy.world 2 points 4 months ago

If you have multiple views of the object and can take a video, NeRF and Gaussian Splatting tools can build a 3D model if you have an NVIDIA GPU. I don't know if there are good user-facing tools for this though (I mess with these things in my research), but if you have a technical background you might be able to get Nerfstudio to work.
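
If you do go the Nerfstudio route, the workflow is roughly the one sketched below. This is a hedged outline, not exact instructions: the file/directory names are placeholders and the flags vary between versions, so check the tools' --help output.

```python
# Rough Nerfstudio pipeline (assumes nerfstudio and COLMAP are installed).
import subprocess

# 1. Extract frames from the video and estimate camera poses.
#    "walkaround.mp4" and "processed" are placeholder names.
subprocess.run(["ns-process-data", "video",
                "--data", "walkaround.mp4",
                "--output-dir", "processed"], check=True)

# 2. Train a model on the processed data ("nerfacto" is the usual default).
subprocess.run(["ns-train", "nerfacto", "--data", "processed"], check=True)
```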

 

Equipment details:

  • Mount: OpenAstroMount by OpenAstroTech
  • Lens: Sony 200-600 @ 600mm f/7.1
  • Camera: Sony A7R III
  • Guidescope: OpenAstroGuider (50mm, fl=163) by OpenAstroTech
  • Guide Camera: SVBONY SV305m Pro
  • Imaging Computer: ROCKPro64 running INDIGO server

Acquisition & Processing:

  • Imaged and Guided/Dithered in Ain Imager
  • 420x30s lights, 40 darks, 100 flats, 100 biases, 100 dark-flats over two nights
  • Prepared data and stacked in SiriLic
  • Background extraction, photometric color calibration, generalized hyperbolic stretch transform, and StarNet++ in SiriLic
  • Adjusted curves, enhanced saturation of the nebula and recombined with star mask in GIMP, desaturated and denoised background

This is my first time doing a multi-night image, and my first time using SiriLic to configure a Siril script. Any tips there would be helpful. Suggestions for improvement or any other form of constructive criticism are welcome!

36
submitted 2 years ago* (last edited 2 years ago) by KingRandomGuy@lemmy.world to c/astrophotography@lemmy.world
 

Equipment details:

  • Mount: OpenAstroMount by OpenAstroTech
  • Lens: Sony 200-600 @ 600mm f/7.1
  • Camera: Sony A7R III
  • Guidescope: OpenAstroGuider (50mm, fl=153) by OpenAstroTech
  • Guide Camera: SVBONY SV305m Pro
  • Imaging Computer: ROCKPro64 running INDIGO server

Acquisition & Processing:

  • Imaged and Guided/Dithered in Ain Imager
  • 360x30s lights, 30 darks, 30 flats, 30 biases
  • Stacked in Siril, background extraction, photometric color calibration, generalized hyperbolic stretch transform, and StarNet++
  • Enhanced saturation of the galaxy and recombined with star mask in GIMP, desaturated and denoised background

Suggestions for improvement or any other form of constructive criticism welcome!
