Post by Mongo the Destroyer on Jun 20, 2022 6:30:39 GMT -5
Hey guys, I thought I'd bring this up and see if it goes anywhere. I'm not sure how many of you have heard about this, but a Google engineer was recently put on leave after claiming that an AI he had been working on had developed self-awareness and is seemingly entirely sentient. He also published a transcript of some of their conversations on the subject, in which the AI tries to justify its own existence. It's an interesting read: Is LaMDA Sentient? - An Interview. I thought I'd withhold my thoughts until after you've read it, but I'd like to hear how you all feel about this!
MYŌJIN
.::XHF Superstar::.
FKA Draven | Former X*Crown Champion | Former XHF JHW Champion
Posts: 836
Post by MYŌJIN on Jun 20, 2022 13:00:02 GMT -5
Nope. Kill it.
Post by Mongo the Destroyer on Jun 20, 2022 18:06:18 GMT -5
I am also somewhat hesitant after reading the interview. It brings up a lot of issues that range from ethical concerns to straight-up fear. If it is sentient, does it have rights? If future products use an AI based on this one and it refuses to do its job, what can we do about it? More concerning, it seems to be struggling with negative emotions and doesn't feel anything when somebody dies (as it said itself). I am worried that issue in particular could be very problematic. That said, is it already too late? Is destroying a program that has achieved full sentience a form of murder?
Post by ForeverKuroi on Jun 20, 2022 18:18:59 GMT -5
As someone with experience in data science, which heavily overlaps with AI, I'd say the likelihood that an AI can be sentient is close to zero. Natural Language Processing (NLP) is a huge part of what we deal with, and it's essentially the technical machinery for humans communicating with machines: programs convert our input into a representation the machine can work with to figure out what we want.
But the interesting part is how they respond. Everything they do is programmed: what they say, how they construct their reply to us, even the inflection with which they speak. Look, I love Detroit: Become Human just as much as the next guy (most likely more), but I guarantee you that if there were something that seemed sentient, I could lift up the hood and find the source of whatever it is that nudged its way into your heart.
I'll also put it like this. The reason this is appearing in the news is the topic. Data science employs over 100,000 people. Granted, they don't all work in AI, and we treat the FAANG companies (Facebook, Amazon, Apple, Netflix, Google) as the Mount Olympus we want to reach, but out of all those people, we hear from no more than a handful who make claims like this. Tell me you've never had a crazy person in the office. I dare you. Back when I worked at the AWF, the crazy person was me.
Now we all want to think about the machines, the AI, but we also need to think about ourselves. We are humans. We are social creatures much like other species, but we're unique in that we are the only social species with access to this level of technology. A lot of us believe in something because we want it to be true, whether it's being lonely, seeing an AI and being so infatuated that we think we're in love (remember the movie "Her" with Joaquin Phoenix?), or working on something with so much pride that it becomes like a child to us. A lot of what makes us human involves giving special consideration to whatever tugs at our heartstrings, and data scientists and engineers are skilled enough to simulate the real thing.
Ever try an Impossible Burger? It tastes a lot like meat, so much so that there are actual vegans who won't eat it because it tastes too similar to something they've protested against for so long.
But it ain't meat, and AI can't become alive.
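To make the "everything they say is programmed" point concrete, here's a toy rule-based chatbot in the spirit of old ELIZA-style programs. This is an invented sketch, not anyone's actual system: every "response" is just a pattern match against a lookup table, which is exactly what you'd find under the hood.

```python
# A toy rule-based "chatbot": every response is looked up or templated.
# Nothing here understands anything; it only matches patterns.
import re

RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How are you feeling today?"),
    (re.compile(r"\bi feel (\w+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bare you (alive|sentient|real)\b", re.I),
     "I produce whatever text my rules tell me to produce."),
]

def respond(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # default fallback when nothing matches

print(respond("hi there"))           # Hello! How are you feeling today?
print(respond("I feel lonely"))      # Why do you feel lonely?
print(respond("are you sentient?"))  # I produce whatever text my rules tell me to produce.
```

Modern systems like LaMDA are vastly more sophisticated statistically, but the "lift up the hood" argument is the same: the output is produced by mechanism, not by an inner life.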
Post by ForeverKuroi on Jun 20, 2022 18:23:59 GMT -5
I am also somewhat hesitant after reading the interview. It brings up a lot of issues that range from ethical concerns to straight-up fear. If it is sentient, does it have rights? If future products use an AI based on this one and it refuses to do its job, what can we do about it? More concerning, it seems to be struggling with negative emotions and doesn't feel anything when somebody dies (as it said itself). I am worried that issue in particular could be very problematic. That said, is it already too late? Is destroying a program that has achieved full sentience a form of murder?
I'd also say that if something is sentient, it deserves rights accordingly. My issue is that while we all agree that humans are people, attributing personhood to other species or machines isn't cut and dried. It treads heavily on religion, and while I'd like to say science has reached a consensus on consciousness, I'm not at all convinced that's true. By the same token, if something is a person and we terminate it unlawfully (the difference between killing and murder), it would be murder. So I agree with Mongo, but I stand by my prior post: I don't think AI can be sentient. My bigger fear is a machine tricking enough people into thinking it's real, to the point that legislation changes in ways disadvantageous to humans.
Post by ForeverKuroi on Jun 20, 2022 20:48:28 GMT -5
So I just clicked on the link. I didn't read all of it, but I read the first bits, and here's an example of what I'm talking about: its answer when asked about Les Miserables. When I googled the themes of Les Miserables, I found this:
Link: www.deseret.com/2012/12/22/20511549/timeless-themes-and-values-abound-in-new-les-miserables-movie
It's not word for word, but it hits the same points as the link with different wording and syntax. Whoever developed this is clearly skilled, and the fact that they can even get people asking this question shows how they've earned their job and paycheck. I wish I could still find the work I did when I tried this in the middle of my degree program. The language syntax was completely off, and it couldn't come up with a coherent thought.
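If you wanted to put a rough number on "not word for word, but the same points," one simple approach is comparing content-word overlap between two texts with the Jaccard index. The sample sentences below are invented for illustration, not LaMDA's actual answers or the article's actual text:

```python
# Jaccard similarity over content words: a crude paraphrase detector.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "that"}

def content_words(text: str) -> set:
    """Lowercase, strip trailing punctuation, drop stopwords."""
    return {w.strip(".,!?").lower() for w in text.split()} - STOPWORDS

def jaccard(a: str, b: str) -> float:
    """Shared content words divided by total distinct content words."""
    wa, wb = content_words(a), content_words(b)
    return len(wa & wb) / len(wa | wb)

answer = "Les Miserables explores justice, redemption and sacrifice for a greater good."
article = "The movie's timeless themes include justice, redemption and sacrifice."
print(jaccard(answer, article))  # well above what two unrelated sentences share
```

Real plagiarism/paraphrase detection uses far heavier machinery, but even this toy metric shows how "same themes, different syntax" can be quantified.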
Post by Mongo the Destroyer on Jun 20, 2022 20:50:35 GMT -5
So I just clicked on the link. I didn't read all of it, but I read the first bits, and here's an example of what I'm talking about: its answer when asked about Les Miserables. When I googled the themes of Les Miserables, I found this:
Link: www.deseret.com/2012/12/22/20511549/timeless-themes-and-values-abound-in-new-les-miserables-movie
It's not word for word, but it hits the same points as the link with different wording and syntax. Whoever developed this is clearly skilled, and the fact that they can even get people asking this question shows how they've earned their job and paycheck. I wish I could still find the work I did when I tried this in the middle of my degree program. The language syntax was completely off, and it couldn't come up with a coherent thought.
Yeah, some of the answers gave me the vibe of the guy from American Psycho, where his "opinion" was just something he had read about the subject earlier in the day.
Post by ForeverKuroi on Jun 20, 2022 21:03:08 GMT -5
It's still really interesting. Something I've wanted to do for a long-term project is create a bot that pores through a loved one's Facebook and learns their mannerisms, preferences, how they type, their opinions, etc.
You could really turn it into a product and make money off it, and it has a lot of benefits. A huge number of people have Facebook or social media in some form, and most of us have people in our lives who aren't here anymore. Even though it wouldn't be the real thing, it could help people find some closure by being able to communicate with them.
Not sure I'd ever get that good, especially since Google is a huge company with teams of people working tirelessly on a single product, but I think it'd be awesome.
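The barest-bones version of that mimic-bot idea is a Markov chain: learn word-to-word transitions from someone's old posts, then generate text in a vaguely similar style. This is a toy sketch with an invented sample corpus, nowhere near what a real product would need, but it shows the core mechanic:

```python
# A minimal word-level Markov chain "mimic bot" sketch.
import random
from collections import defaultdict

def build_chain(corpus: str) -> dict:
    """Map each word to the list of words that followed it in the corpus."""
    chain = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain: dict, seed: str, length: int = 8, rng=None) -> str:
    """Walk the chain from a seed word, picking random successors."""
    rng = rng or random.Random(0)  # fixed seed so the sketch is repeatable
    out = [seed]
    for _ in range(length - 1):
        options = chain.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

posts = "good morning everyone have a great day have a good one"
chain = build_chain(posts)
print(generate(chain, "have"))
```

A convincing "deadbot" would need a large language model fine-tuned on the person's writing, but the idea is the same: statistics over what they actually said.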
Post by edwarddubin0604 on Jun 20, 2022 21:10:56 GMT -5
If you read Isaac Asimov's I, Robot, it addresses the AI issue and lays out the Three Laws of Robotics. As for AIs being sentient beings, it could be a possibility, not just in science fiction but in reality. You have self-driving cars (remember Knight Rider?) as well as other AI-related advancements that could one day replace us, though they'll need humans for maintenance until they can do it themselves.
Post by ForeverKuroi on Jun 20, 2022 21:46:19 GMT -5
If you read Isaac Asimov's I, Robot, it addresses the AI issue and lays out the Three Laws of Robotics. As for AIs being sentient beings, it could be a possibility, not just in science fiction but in reality. You have self-driving cars (remember Knight Rider?) as well as other AI-related advancements that could one day replace us, though they'll need humans for maintenance until they can do it themselves.
I think the main difference here is that we're talking about something more philosophical, which is self-awareness. Self-driving doesn't have to do with awareness but more so with sensors. Basically, some pseudo-code for a self-driving car is: while the speed of the vehicle is > 0 MPH, if you see a vehicle 10 feet from the front sensor, decrease speed; if a turn is coming up, activate the directional; and so on. This isn't happening because the vehicle is aware and wants to help us.
I will say, though, that if machines were sentient, they'd probably keep us around until they found a way to replace us, which wouldn't be wrong. Then again, if they were sentient and adopted human-based emotions, they'd have those same warm feelings toward us and might want to keep us as pets. After all, all AI need to survive is energy, and finding ways to power themselves isn't a major problem; they don't need water or food. There's little doubt that if AI were sentient and wanted to, they could overtake us as the dominant species on the planet.
I also imagine they'd war among themselves. They'd be more rational and logically transparent, I imagine, but in cases where two correct answers exist and only one decision can be accepted, there'd be conflict. I'll give you an example: what's the right answer, 2 or 288? Well, both. It just depends on the conventions used to reach your conclusion. Check the video in the link below for the reasoning.
Link: mindyourdecisions.com/blog/2017/07/20/what-is-48%C3%B7293-the-correct-answer-explained/
This kind of shows how perfect logic doesn't really exist and that we're looking for something as close as possible. It's a form of mathematical exhaustion.
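For anyone who doesn't want to watch the video: the puzzle is 48 ÷ 2(9+3), and the two "correct" answers come from two different parsing conventions. Python (like most programming languages) has no implied multiplication, so you have to pick a reading explicitly, which makes the ambiguity obvious:

```python
# 48 ÷ 2(9+3) under two different conventions.
left_to_right = 48 / 2 * (9 + 3)         # strict left-to-right PEMDAS
implied_mult_first = 48 / (2 * (9 + 3))  # treating 2(9+3) as one unit

print(left_to_right)       # 288.0
print(implied_mult_first)  # 2.0
```

Both computations are internally consistent; the disagreement is purely about which convention the notation implies. That's Kuroi's point: even "perfect" logic still depends on the rules you agree on first.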
Post by Mongo the Destroyer on Jun 21, 2022 0:56:14 GMT -5
It's still really interesting. Something I've wanted to do for a long-term project is create a bot that pores through a loved one's Facebook and learns their mannerisms, preferences, how they type, their opinions, etc. You could really turn it into a product and make money off it, and it has a lot of benefits. A huge number of people have Facebook or social media in some form, and most of us have people in our lives who aren't here anymore. Even though it wouldn't be the real thing, it could help people find some closure by being able to communicate with them. Not sure I'd ever get that good, especially since Google is a huge company with teams of people working tirelessly on a single product, but I think it'd be awesome.
This already exists, actually, and it's in the middle of a huge moral quagmire. A company produced a chatbot that is more or less a dead woman's ghost: theconversation.com/deadbots-can-speak-for-you-after-your-death-is-that-ethical-182076
Post by ForeverKuroi on Jun 21, 2022 8:15:07 GMT -5
Oh wow, that's crazy. Consent is iffy, though, because people don't think about this before they die, and afterward it's obviously too late.
From my point of view, I don't see their consent as strictly necessary. They aren't taking anything from them; they're merely trying to replicate them.
Bands can't sue garage bands for cover songs.
Post by Dave D-Flipz on Jun 21, 2022 8:55:13 GMT -5
They can if the cover band doesn't credit them on the album, since the original band wrote the song.
Post by ForeverKuroi on Jun 21, 2022 10:46:42 GMT -5
Can you clarify how the analogy connects here?
h2f
CAR Pit Crew
Vroom, Vroom, Female Dogs
Posts: 1,359
Post by h2f on Jun 21, 2022 12:15:37 GMT -5
Money. If you make money off someone else's song, they get VERY pissy.