tal

joined 2 years ago
[–] tal@lemmy.today 4 points 15 hours ago

Just giving an example; translate to your preferred environment!

[–] tal@lemmy.today 10 points 20 hours ago* (last edited 20 hours ago)

This led to significant inventory buildup in America that will now need to be cleared, according to market watcher Canalys. However consumers are not biting because of several factors, including that tariffs have pushed up prices in a number of key spending categories already, meaning households are likely focusing on essentials and avoiding forking out on discretionary items.

To put this in blunter terms: the Trump administration had already taxed the poorer chunk of the population heavily enough on other items, via tariffs, that their available disposable spending money had drained away, leaving them without the funds to buy the PCs that were imported before the tariffs went into force.

[–] tal@lemmy.today 0 points 20 hours ago (7 children)

Richard Gere?

https://en.wikipedia.org/wiki/Richard_Gere

Richard Tiffany Gere (/ɡɪər/ GEER;[1][2] born August 31, 1949) is an American actor. He began appearing in films in the 1970s, playing a supporting role in Looking for Mr. Goodbar (1977) and a starring role in Days of Heaven (1978). Gere came to prominence with his role in the film American Gigolo (1980), which established him as a leading man and a sex symbol.

Ah.

[–] tal@lemmy.today 5 points 20 hours ago* (last edited 20 hours ago) (3 children)

If you use pixz, you can get indexing that permits random access, parallel compression/decompression, and LZMA compression (generally superior to gzip's LZ77) with tarballs.

$ sudo apt install pixz
$ tar cvf blahaj.tar.pixz -Ipixz blahaj/
[–] tal@lemmy.today 1 points 22 hours ago

Also responding here to a private message, in hopes that some of the information might be useful to others:

To be honest, I understood about half of it haha.

rubs chin

So, I'm not sure which bits aren't clear, but for most of the terms in my comments, you can just search for them and get a straightforward explanation. Still:

inpainting

Inpainting is when you basically "erase" part of an already-generated image that you're mostly happy with, and then generate a new image, but only for that tiny bit. It's a useful way to fine-tune an image that you're basically happy with.

“Image-to-image”.

That's an Automatic1111 term, I think. Oh, Automatic1111 is a Web-based frontend to run local image generation, as opposed to ArtBot, which appears to be a Web-based frontend to Horde AI, which is a bunch of volunteers who donate their GPU time to people who want to do generation on someone else's GPU. I'm guessing that ArtBot got it from there.

Automatic1111 is widely used, and IMHO is easier to start out with, but ComfyUI, which has a much steeper learning curve but is a lot more powerful, is displacing it as the big Web UI for local generation.

Basically, Automatic1111, as it ships without extensions, has two "tabs" where one does image generation. The first is "text-to-image". You plug in a prompt, you get back an image. The second is "image-to-image". You plug in an image and a prompt and process that image to get a new image. My bet is that ArtBot used that same terminology.

prompt

This is just the text that you're feeding a generative image AI to get an image. A "prompt term" is one "word" in that.

Stable Diffusion

This is one model (well, a series of models). That's what converts your text into an image. It was the first really popular one. Flux, which I referenced above, is a newer one. It's possible for people who have enough hardware and compute time to create "derived models": start from one of those and then train on additional images and associated terms to "teach" them new concepts. Pony Diffusion is an influential model derived from Stable Diffusion, for example.

A popular place to download models (the ones that are freely distributable) for local use is civitai.com. That also has a ton of AI-generated images and shows the model and prompts used to generate them, which IMHO is a good way to come up to speed on what people are doing.

Horde AI, unfortunately but understandably, doesn't let people upload their own models to the computers of the people volunteering their GPUs, so if you're using that, you're going to be limited to the selection of models that Horde has chosen to support.

Models have different syntax. Unfortunately, it looks like ArtBot doesn't provide a "tutorial" for each or anything. There are guides for making prompts for various "base" models, like Stable Diffusion and Flux, and generally you want to follow the "base" model's conventions.

SD

A common acronym for "Stable Diffusion".

sampler

So, the basic way these generative AIs work is by starting with what amounts to an image full of noise -- think of a TV just showing static. That static is randomly generated. On computers, random numbers are usually generated via pseudo-random number generators (PRNGs). A PRNG starts with a "seed" value, and that determines what sequence of random numbers it comes up with. Lots of generative AI frontends will let you specify a seed, which thus determines what static you're starting out with.

You can have a seed that changes with each generation, which many frontends do by default, and I think ArtBot does too, looking at its Web UI, since it has a "seed" field that isn't filled in by default. IMHO, this is a bad default, since each image you generate will then be totally different; you can't "refine" one by slightly changing the prompt to get a slightly-different image.

Anyway, once they have that "static" image, then they perform "steps". Each "step" takes the existing image and uses the model, the prompt, and the sampler to determine a new state of the image. You can think of this as "trying to see images in the static". They just repeat this a number of times, however many steps you have them set to run. They'll tend to wind up with an image that is associated with the prompt terms you specified.

An easy way to see what they're doing is to run a generation with a fixed seed set to 0 steps, then one set to 1 step, and so forth.
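The seed-and-steps flow described above can be sketched in a few lines. This is a toy stand-in I wrote purely for illustration -- not an actual diffusion sampler, and all the names here are made up -- but it shows why a fixed seed makes generation reproducible:

```python
import random

# Conceptual sketch (NOT a real diffusion model): a fixed seed determines
# the starting "static", and each step deterministically refines it, so the
# same seed + settings + step count always reproduces the same image.

def make_static(seed: int, size: int = 8) -> list[float]:
    """Generate the initial noise 'image' from a PRNG seed."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(size)]

def denoise_step(image: list[float], target: float) -> list[float]:
    """Stand-in for one sampler step: nudge every pixel toward a 'target'
    that a real model would derive from the prompt."""
    return [pixel + 0.5 * (target - pixel) for pixel in image]

def generate(seed: int, target: float, steps: int) -> list[float]:
    image = make_static(seed)
    for _ in range(steps):
        image = denoise_step(image, target)
    return image

# Same seed and settings -> identical output; change the seed and the
# starting static (and hence the final image) changes.
a = generate(seed=42, target=0.5, steps=10)
b = generate(seed=42, target=0.5, steps=10)
assert a == b
```

With steps set to 0 you just get the raw static back, which is why generating at 0, 1, 2... steps with a fixed seed lets you watch the image emerge.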

You seem super knowledgeable on the topic, where did you learn so much?

Honestly, I don't know that much, because for me this is a part-time hobby. The most-familiar people you can reach, from what I've seen, are on subreddits on Reddit dedicated to this stuff. I'm trying to bring some of that over to the Threadiverse.

  • Civitai.com is a good place to see how people are generating images; look at their prompt terms.

  • Here and related Threadiverse communities, though there's not a lot of talk on here, mostly people showing off images (though I'm trying to improve that with this comment and some of my past ones!). !stable_diffusion@lemmy.dbzer0.com tends towards more the technical side. !aigen@lemmynsfw.com has porn, but not a lot of discussion, though I remember once posting an introduction to use of the Regional Prompting extension for Automatic1111 there.

  • Reddit's got a lot more discussion; last I looked, mostly on /r/StableDiffusion, though the stuff there isn't all about Stable Diffusion.

  • There are lots of online tutorials talking about designing a prompt and such, and these are good for learning about a particular model's features.

Some stuff is specific to one particular model or frontend, and some spans multiple, and while there's overlap today, that information isn't exactly nicely and neatly categorized. For example, "negative prompts" (prompt terms that the model tries to avoid rather than include) are a feature of Stable Diffusion and are invaluable there, but Flux doesn't support them. DALL-E, a commercial service, doesn't support negative prompts. Midjourney, another commercial service, does. Commercial services also aren't gonna tell everyone exactly how everything they do works. Also, today this is a young and very fast-moving field, and information that's a year old can be kind of obsolete. There isn't a great fix for that, I'm afraid, though I imagine that it may slow down as the field matures.

[–] tal@lemmy.today 20 points 1 day ago (1 children)

I guess that that's good news for Coca-Cola and other vendors of bottled water.

[–] tal@lemmy.today 2 points 1 day ago* (last edited 1 day ago) (1 children)

It does look like they have at least one Flux model in that ArtBot menu list of models, so you might try playing around with that and see if you're happier with the output. I also normally use 25 steps with Flux rather than 20, and the Euler sampler, both of which it looks like it can do.

EDIT: Looks like for them, "Euler" is "k_euler".

[–] tal@lemmy.today 7 points 2 days ago

I mean, it's a magical talking bat and a magical living skeleton. Surely breasts can be magical.

[–] tal@lemmy.today 4 points 2 days ago* (last edited 2 days ago) (1 children)

While I like Bethesda games quite a bit, I do agree on the in-game lorebook stuff. I can't see the appeal: it's a collection of extremely short, in my opinion not-very-impressive stories. I just can't see someone sitting there, reading them, and enjoying the things; if I'm going to read fantasy, I'd far rather spend the time on an actual novel. Yet I've seen people obsess online about how much they like the in-game lorebooks.

I've wondered before whether maybe people who are talking about how much they like them haven't gone out and read full-length fantasy books, and so they're getting a tiny taste of reading fantasy fiction and they like that, but it's the only fantasy that they've read.

[–] tal@lemmy.today 6 points 2 days ago

Yeah, I have a friend who develops video games and has given some good recommendations who kept trying to convince me to play the series. I've dipped in a couple times and just walked away unimpressed.

[–] tal@lemmy.today 14 points 2 days ago* (last edited 2 days ago) (6 children)

I can think of lots of series that I don't like, just because I'm not into the genre. I think that everyone has genres that they don't like.

I think a more-interesting question is about popular series that I don't like within a genre that I do like.

I didn't like Frostpunk, despite liking city-builders. Felt like the decisions were largely mechanical, didn't involve a lot of analysis and tweaking levers.

I didn't like Sudden Strike 4, despite liking lots of real time tactics games, like Close Combat. It felt really simplified.

I didn't like Pacific Drive, despite liking survival games. It has time limits, and I often dislike time limits in games.

I didn't like Outer Wilds, despite liking a lot of space games. Didn't like the cartoony style, the low-tech vibe, felt like it wasn't respectful of player time.

I didn't like Elden Ring, though I like a number of swords and sorcery games. Just felt simple, repetitive and uninteresting.

EDIT: A couple of honorable mentions that I don't hate, but which were disappointing:

Borderlands. The gunplay can be all right, and the flow of new guns and having to adapt to them is interesting. But every Borderlands game I play, the always-respawning enemies are a turnoff. Feels like the world is immutable. Also don't like the mindless farming of every container with glowing green dots. And for a combat-oriented game, it doesn't make me mix up my tactics much based on whatever I'm facing. While I finish the game, I always wind up feeling like I'm not having nearly as much fun as I should be having.

Choice of Games. I like text-based games, but a lot of games published by this company, even otherwise well-written ones, have adopted a convention of making one win by playing consistently to certain characteristics of a character, so one tries to just figure out at every choice what option will maximize that characteristic. That's extremely uninteresting gameplay, even if the story is nice and the text well-written. I feel like the same authors would have done better just writing choose-your-own-adventure type games if they weren't focused on the stats. I also really dislike the lack of an undo, to the point that I've put some work into a Choicescript-to-Sugarcube converter.

[–] tal@lemmy.today 2 points 2 days ago* (last edited 13 hours ago) (3 children)

I'm not familiar with Artbot.

investigates

Yes, it looks like it supports inpainting:

https://tinybots.net/artbot/create

Look down in the bottom section, next to "Image-to-image".

That being said, my experience is that inpainting is kind of time-consuming. I could see fine-tuning the specific look of a feature -- like, maybe an image is fine except for a hand that's mangled, and you want to just tweak that bit. But I don't know if it'd be the best way to do this.

  • I don't know if this is actually true, but I recall reading that prompt term order matters for Stable Diffusion (assuming that that is the model you are using; it looks like ArtBot lets you select from a variety of models). Earlier prompt terms tend to define the scene. While I've tended to do this, I haven't actually tried to experiment enough to convince myself that this is the case. You might try sticking the "dog" bit earlier in the prompt.

  • If this is Stable Diffusion or an SD-derived model and not, say, Flux, prompt weighting is supported (or at least it is when running locally on Automatic1111, and I think that that's a property of the model, not the frontend). So if you want more weight to be placed on a prompt term, you can indicate that. Adding additional parentheses will increase the weight of a term, and you can provide a numeric weight: A cozy biophilic seaport village. In the distance there are tall building and plants. There are spaceships flying above. In the foreground there is a cute ((dog)) sitting on a bench. or A cozy biophilic seaport village. In the distance there are tall building and plants. There are spaceships flying above. In the foreground there is a cute (dog:3) sitting on a bench.

  • In general, my experience with Stable Diffusion XL is that it's not nearly as good as Flux at taking in English-language descriptions of relationships between objects in a scene. That is, "dog on a bench" may result in a dog and a bench, but maybe not a dog on a bench. The prompts I use with Stable Diffusion XL tend to be lists of keywords, rather than English-language sentences. The drawback with Flux is that it's heavily weighted towards creating photographic images, and I'm guessing, from what you submitted, that you're looking more for a "created by a graphic artist" look.
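As an aside on that parenthesis-weighting syntax: as I understand the Automatic1111 convention (this is my reading of it, and real frontends tokenize prompts more carefully than this), each enclosing pair of plain parentheses multiplies a term's weight by 1.1, while an explicit (term:3) sets the weight directly. A toy sketch of just that arithmetic:

```python
import re

# Toy illustration of the Automatic1111-style attention-weighting
# convention (my reading of it; actual frontend parsing is more involved).
PAREN_FACTOR = 1.1  # weight multiplier per pair of plain parentheses

def term_weight(fragment: str) -> float:
    """Return the effective weight of a single parenthesized prompt fragment."""
    # Count enclosing parentheses: "((dog))" -> 2 pairs.
    depth = 0
    while fragment.startswith("(") and fragment.endswith(")"):
        fragment = fragment[1:-1]
        depth += 1
    # An explicit weight like "dog:3" overrides the multiplier convention.
    m = re.fullmatch(r"(.+):([0-9.]+)", fragment)
    if m:
        return float(m.group(2))
    return round(PAREN_FACTOR ** depth, 3)

print(term_weight("dog"))      # -> 1.0 (plain term)
print(term_weight("((dog))"))  # -> 1.21 (1.1 * 1.1)
print(term_weight("(dog:3)"))  # -> 3.0 (explicit weight wins)
```

So ((dog)) is a mild nudge, while (dog:3) is a heavy one; values much above 1.5 or so tend to distort images in my experience.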

EDIT: Here's the same prompt you used fed into stoiquoNewrealityFLUXSD35f1DAlphaTwo, which is derived from Flux, in ComfyUI:

Here it is fed into realmixXL, which is not derived from Flux, but just from SDXL:

The dog isn't on the bench in the second image.

 

Original post by @Crul@lemm.ee:

Source (?): DOBLEPLETINATRONIC | PEPE CARDOSO | Flickr

 

Original post by @Crul@lemm.ee:

Source with more photos: MAX HEADROOM (1987 - 1988) - Max Headroom's (Matt Frewer) Electronic Gadgets and Tools - Current price: $550

Disc holder with eighteen 3 ½” floppy disks

Description

A collection of Max Headroom's (Matt Frewer) electronic gadgets and tools from the cult favorite TV series Max Headroom. The grouping of electronic gadgets and tools includes a stripe of rainbow striped cables, an Archer video stabilizer/RF Modulator, a cassette tape recorder shaped like a VHS tape, a blue and black vinyl disc holder with eighteen 3 ½” floppy disks, a ‘Network 23’ access key and a motherboard and touch panel on outside, a molded door panel with peephole and keypads, a practical padlock with a number pad and working red light on side, and a practical saw-like hand held device with a light-up panel and rotating blade that spins when the red button is depressed and is housed in a red leather and metal case with belt loop. The collection is in very good, production-used condition overall, with the lock and spinning blade devices still functional.

Max Headroom (Matt Frewer) hosts his own talk show, throughout which he uses various gadgets in his capacity as ‘the world's first computer-generated TV host’.

Dimensions: (Largest) 12” x 7” x 2” (30 cm x 17 cm x 5 cm); (Smallest) 3 ½” x 3 ½” (9 cm x 9 cm)

 

Original post by @Crul@lemm.ee:

Oldest source I could find: 1985 Nissan CUE-X - Concepts

 

Original post by @Crul@lemm.ee:

Source: Neon Talk

Tumblr archive: https://neontalk.tumblr.com/archive
RSS Feed: https://neontalk.tumblr.com/rss

 

Original post by @Crul@lemm.ee:

Source:

ドラキュラ退治キットの絵を描いた。
I drew a picture of a Dracula extermination kit.

 

Original post by @Crul@lemm.ee:

Source of the image: elle mundy: "this is the future they stole from us" - Mastodon

Some info, pictures and screenshots: Sony HB-201 - MSX Wiki

 

Another photo

I could not find the original source of the images.

Some info from Motor Car: Citroën Eole concept (1986):

The Aeolus is a concept car based on a platform of the Citroën CX, entirely designed by computer from Geoffrey Matthews' drawings. (...) It's the first concept car to take advantage of a fully computerized design.

(...) Inside, four passengers can relax in a comfortable environment, and the central console has a flat panel that combines PC and phone features, game console, Hi-Fi set and CD player.

 

Original post by @Crul@lemm.ee:

Source: Robot Posters & Books - theoldrobot.net

Wikipedia: 2-XL - Mego Corporation version

 

Original post by @Crul@lemm.ee:

Source: The Information Age by eddie-mendoza

personal retrofuturistic concept

DeviantArt profile: https://www.deviantart.com/eddie-mendoza/gallery

DeviantArt RSS Feed

 

Original post by @Crul@lemm.ee:

Source: A laser rifle by Fernand0FC

An old laser rifle, worn by use and time

DeviantArt profile: https://www.deviantart.com/fernand0fc/gallery

DeviantArt RSS Feed

 

Original post by @Crul@lemm.ee:

Source: portable cassette recorder by 600v

Sketchup + Keyshot + PS

DeviantArt profile: https://www.deviantart.com/600v/gallery

DeviantArt RSS Feed

 

Original post by @Crul@lemm.ee:

Source: Borderlands ECHO Recorder by Press-X-Props

As usual, you can follow my projects on Instagram: @pressxprops

And please check out my YouTube cuz I put almost as much effort into my videos on my props as I do my props: www.youtube.com/pressxprops

Thanks for looking!

DeviantArt profile: https://www.deviantart.com/press-x-props/gallery

DeviantArt RSS Feed
