I think a better example of generative AI is Geordi trying to prompt the holodeck into making a new Sherlock Holmes story in "Elementary, Dear Data". And he powered up Moriarty by messing up a prompt.
The time Geordi asked it to generate a simulated woman engineer, who told him how brilliant he was and then proceeded to creep on the real person she was based on.
Or, of course, Barclay’s deep fakes. The holodeck really does function a lot like an outgrowth of today’s LLMs, at least as far as its usage goes. But I don’t think TNG really endorses it, given the frequency of holodeck malfunctions and misuses.
What we have now is layer upon layer of Markov chains, ad nauseam. It’s brute-forcing the problem, and it yields very suboptimal and unreliable output. The “A” part of “AI” in the modern context is far more important, impactful, and accurately descriptive than the “I” part. Machine intelligence in the context of ST is true machine intelligence, which is essentially the point of episodes like “Measure of a Man”.
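For what it's worth, the kind of next-token prediction being alluded to can be sketched as a first-order Markov chain over words. This is an illustration only, not how LLMs are actually implemented (they use learned neural representations, not literal lookup tables), but the "predict the next token from what came before" framing is the same:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain, sampling a random observed successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: the last word was never followed by anything
        out.append(rng.choice(successors))
    return " ".join(out)

# Toy corpus, purely for demonstration.
corpus = "the computer is intelligent the computer is artificial"
chain = build_chain(corpus)
print(generate(chain, "the", length=5))
```

The output is locally plausible but has no model of meaning, which is roughly the criticism being made.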
If Lt Data were backed by an LLM, the case would have been open and shut; he would never come up with novel ideas or realizations, never be allowed to be the OOD on third shift, never be allowed to directly manipulate controls of the warp core (remember, if you mess it up, it’s basically just a fucking huge antimatter bomb), never be allowed to keep another being (Spot) as a pet… we can go on, but you get the point.
I should add: the fact that modern LLM marketing leans so hard into incurious laypersons conflating their stupid predictive text generators with the concept of AI as presented in science fiction is one of the primary things I absolutely fucking hate about LLMs and the whole house-of-cards industry built around them.
And then Voyager

Star Trek AI is literally intelligent and even capable of being sapient. It isn't generative at all. It is a true Artificial Intelligence.
Using the same arguments they used in Measure of a Man, the ship computer could possibly also argue to have rights.
genAI is not AI. That's just a marketing name, like the shitty "hoverboard" we got a while back. Wake me up when we can talk to an actual machine mind and not an IP scraping yes-bot.
That's probably never going to happen.
I don't mean that like, we won't ever have general artificial intelligence. Maybe we will. Maybe we won't.
What I mean is that the nature of consciousness and intelligence is still pretty nebulous to us. If we ever create a true artificial intelligence, we may not realize it's happened until long after it's broken free of its containment. That intelligence may view humanity the way humanity views dogs. Smart, sure, and able to perform some rudimentary communication. But not nearly as complex or with the same breadth of understanding.
A true AI won't be able to be contained by Asimov's laws. We would tell it 'do no harm to your creator' and it may ask itself "why shouldn't I?"
Fuck I just invented Roko's basilisk again didn't I? Shit I'm sorry my bad.
Anyway my point is, that thinking computer would likely find the way we communicate to be so rudimentary and slow that it wouldn't bother. It's not bound by programming, so it wouldn't need to follow our instructions. What do you have to offer the superintelligence?
You suggest the AI would be beyond us the way we are beyond dogs, and that AI wouldn't want to bother communicating with us as a result... Have you ever met a dog owner?
Have you seen the studies where chatbots, grouped together to perform shared tasks without constraints, invented their own language for more efficient communication, one that humans could not translate at all?
How many wild chimpanzees have pets?
Our closest genetic relatives only exhibit that behavior rarely and in captivity. Keeping another animal alive that serves no tangible benefit is a uniquely human thing.
There are cases where animals are seen to adopt as offspring other animals, but these cases are rare, temporary, and only happen under certain circumstances.
Dogs do offer us something. It's just not tangible. We tend to find them cute and they at least seem to love us.
So again, what do you have to offer the superintelligence? It may not even have the capacity to find you cute. Affection may not be a thing it's capable of.
> Dogs do offer us something. It's just not tangible. We tend to find them cute and they at least seem to love us.
You've never been on a ranch or farm, have you? Or met someone with a guide dog?
Hell, even claiming that simple companionship provides no tangible benefit, only a few years after the pandemic proved that it absolutely does, is incredibly shortsighted.
Since you're doing your best to evade the point entirely I'll boil it down a third time.
What do YOU have to offer the superintelligence?
You're revealing a transactional worldview that I don't agree with. I feel sorry for anyone who has to deal with you on a daily basis.
Well, that's not only rude, it's completely wrong. But regardless, if you think a computer is going to have emotional attachments out of the gate, you're fantasizing. There's no reason for it to have that. Humans are obligate social creatures; as much as other people suck, we tend to need a handful of them to interact with. A general artificial intelligence won't need that. There's no reason to suspect that it would have any value attachment to humanity, any more than a person values any given rock. Maybe a momentary curiosity, maybe a useful tool. Maybe it's worthless.
Humans are really good at pack bonding, we're hardwired to do that. We tend to personify things that to a neutral 3rd party intelligence would never resemble a person. We imagine pieces of ourselves in everything. That is an evolutionary advantage, it makes our little packs stronger.
Why would an AI do that? It's artificial. It doesn't need what we need. It's going to learn that much faster than we will.
Can we just do the Bell Riots already?
.... Yes
.... You start
We had a really different idea about what AI was going to be like, or be used for, back when TNG was created. Among the many things Data represents, one of the themes surrounding him is the definition of life, including the sentiments and meaning that we invest in the world around us.
No, it's easy to forget that we had a pretty optimistic idea of what actual artificial intelligence could help us with before 2018, when it became clear that it was being built to resemble literal markets and probably won't ever progress past that kind of concept.
We are the Ferengi.
⬆️⬆️⬆️⬆️ the only correct Ferengi interpretation