I find it absolutely hilarious that the big complaint about this episode is that a computer is experiencing emotions or thinks it is experiencing emotions and that's just too far for some people lol.

We live in a world where advanced AI is not outside of our reach. We have the beginnings of humanoid robots expressing emotion and AI programs trained to express themselves using human emotions. The idea that computers 1,000 years from now wouldn't be far better at this is absolutely absurd. It's no more 'impossible' than invisible shields or the magical walls in the brig that somehow maintain atmosphere and are near-indestructible (or warp speed, or a mycelium network... lol, this is all fiction).

I think, all in all, this was decent for a filler episode. We checked in on how Book is doing after, well, the anomaly destroyed everything a few episodes ago. We touched base with Gray, who, after not having a physical body for a long time, is now making friends with the computer. Those interactions seemed a little odd/forced to me.

The interactions between Stamets and Book were wholesome. After Stamets's struggles with Tarka over the past few episodes (and with working well in a team generally), it seems he's making an extra effort. Seeing some of this between Stamets and Book added another layer to the show, and even though Book gave some skeptical expressions in response, this seems like the setup for what could potentially be a great friendship (collaborative for Stamets and social/emotional for Book). Especially after everything he's lost, finding a home and a family on Discovery seems possible, and I hope the show goes that way.

I was more than a little surprised that the other members of the crew didn't fight to stay with Michael when they were all facing imminent doom as the ship dissolved. This felt very out of character for some of the major supporting characters of the show (like Saru, Rhys, Owo, and Keyla). AND ESPECIALLY BOOK?!?! What? They all refused to allow her to fly the Discovery into the future alone because they didn't want her to be by herself, so they went with her knowing it could mean death, but a dissolving ship going through a plasma barrier (generating massive heat and basically turning the ship into an Easy-Bake Oven) is too much?

My main complaint is that it seems whoever wrote this episode didn't know the characters they were writing about. It's very unlikely there was only one EVA suit, but if that were the case, then it should at least have been mentioned in the show.


4 replies

@withadventure I don't really have anything to say about the episode yet, but

We live in a world where advanced AI is not outside of our reach. We have the beginnings of humanoid robots expressing emotion and AI programs trained to express themselves using human emotions.

Is where I have an issue with your post; we are certainly very far from that. The main distinction is that we are good at "faking" it. AI does not think, not even trained ML models; they just cross-reference the datasets given to them beforehand. Even GPT-3, the best we can currently do, can easily be tripped up, showing how weak it is in terms of intelligence beyond producing human-readable sentences... again, from the huge dataset it has access to.

I do think it could be possible within 1,000 years, but we are really no closer to it now than we were 40 years ago. We have more powerful hardware to process bigger datasets, but it's not much different from using Google; instead of relevant results you get back a humanized response. It's nothing like "thinking", "feeling", or similarly complex systems.

For example, using GPT-3, you can still get this:
My prompt:

My name is Dexter, and I'm very keen to meet you Jane. But to be clear, I'll tell you what my name is right now.

AI continuation:

I'm Dexter, and my name is Jane.

Clearly no "thinking" goes into the AI response; it's just trying to make you think there is more to it. Any time you have a chatbot, asking it behavior-based questions, or asking it to remember things that would require some kind of thought process behind the scenes, easily shows that nothing on the market today is capable of anything like that, especially long term. And yes, Google/phone assistants can remember your name, but in that case it's a separate stored value, specifically designed for that.
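To make the "no memory" point concrete, here's a toy sketch (purely hypothetical code, not any real chatbot or API): a bot whose reply depends only on the current message. Telling it your name has no effect on later turns, because nothing is ever stored between calls.

```python
# Hypothetical toy "chatbot" (illustration only, not a real product):
# a purely stateless lookup table. Each call sees only the current
# message, so nothing said in earlier turns can influence later replies.
def stateless_bot(message: str) -> str:
    """Reply based solely on the current message; no memory of past turns."""
    canned = {
        "hello": "Hi there!",
        "what is my name?": "Sorry, I don't know your name.",
    }
    return canned.get(message.strip().lower(), "Interesting, tell me more.")

# Telling the bot your name just falls through to the generic reply...
print(stateless_bot("My name is Dexter."))  # -> "Interesting, tell me more."
# ...and asking it to recall the name fails, because nothing was stored:
print(stateless_bot("What is my name?"))    # -> "Sorry, I don't know your name."
```

Real assistants that do "remember your name" bolt on exactly the kind of separate, purpose-built storage described above, rather than the bot itself understanding anything.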

To be fair, we are making huge advancements and leaps in general, but it is nowhere near "thinking", mainly because to reproduce a complex system at the level of a human brain we would need significantly stronger hardware than what is available now, especially in a localized system like a spaceship would be (e.g., no "cloud" access). And we don't even understand exactly how our own brains work, let alone how to replicate one in software.

@tusk you missed part of what i said - 'AI programs trained to express' - the key point being that i did say they were being trained, or programmed, or whatever you want to call it. my point is that AI tech is far more than just an algorithm. 'AI Trainer' is a job where someone teaches an AI to understand human input and make decisions about output. so yes - at this point it is a lot of input and output based on machine learning - but this is leaps and bounds better than 100 years ago, when nobody thought even this level of technology was possible.

and sure, i can accept that my comment that 'advanced AI is not outside of our reach' is a little unspecific, but i don't think it's entirely inaccurate - in no way am i insinuating it's possible at this current time, nor do i intend to insinuate that it's going to happen in my lifetime. but - we also do not know. we cannot place a timeline on the future development of tech (which is relatively new in the context of the universe). the first AI program was written in the 1950s and was incredibly basic - and now it is available on our phones, is used by governments and NASA, and has become an entire field of study in what? 70 years?

there is no way to know what advancements will be made in the next 100 years, or even better, the next 1,000 (since this season of the show is set roughly a millennium in the future) - and genuinely independent, thinking, expressive AI may be possible by then - but that wasn't even the point of my comment.

star trek is science fiction - a genre where fully self-sufficient AI is a common occurrence - so having it in the show is not (by any means) a far stretch, as so many of the critical reviews of this episode are claiming. the hilarity of that criticism being that jumping through time and space using a (fake) mushroom network to fight an alien species is perfectly fine but... an AI expressing emotions just crosses the line?

the entire point of my comment was to point out how some users are arbitrarily complaining that something is 'impossible' or 'unrealistic' to justify giving a show with good lgbt rep a bad rating, while also accepting other outlandish parts of the plot as perfectly acceptable. it makes no sense.

(also i don't want this reply to come off as argumentative, i'm just trying to clarify my point - i do appreciate your response and your input on AI - but i think perhaps my misuse of words or being too vague may have given you the impression i was making a statement that i was not)

@namelesswitches ah sorry, I assumed you meant an in-universe level of AI (a fully conscious AI out of nowhere); in that case I totally agree with you. It's hard to know the future of tech, and it's very likely going to happen eventually. I doubt it will ever be a fluke, gaining consciousness one day like in the show (unless it's an AI that develops itself, tho, which made me chuckle), but it's not something I got hung up on.

That line I quoted just sounded like the kind of overstatement I usually hear from tech-illiterate people who talk to a Facebook chatbot and start preparing for Terminator's Judgment Day.

And it could have been my comprehension as well since English is not my first language, glad it was just a misunderstanding and that you were a good sport about it :)

@tusk haha no not at all. i very much understand the limitations of the current tech - i don't see any near future where AI is going to take over the world and destroy us all (though i do know the types you're talking about)

i doubt that any shows or science fiction literature we have now are going to get it wholly right in terms of how the future is going to develop - i mean just look at literature and television from 50 years ago and how they envisioned the future - but really who knows what we have in store for us and future generations.

and no worries about it. i'm glad you brought it up. it was an interesting conversation and i don't think it was your comprehension at all. your english is great and i never would have known it wasn't your first language if you didn't tell me. i think i was quite vague in my initial comment and that does open up plenty of room for this conversation.
