Report on harassment in online gaming

  • DracoTarot's Avatar
    Level 52
    @DoctorEldritch I watched a vid last night about chatbot AI. A senior Google engineer says one of the company's artificial intelligence systems has become a sentient being. The technology firm has suspended Blake Lemoine for breaching confidentiality rules and insists there's no evidence its AI chatbot is now free-thinking. A spokesperson for Google said that while chatbots can imitate conversation, they are not sentient. Mr Lemoine has suggested the robot should get its own lawyer. 😂

  • DoctorEldritch's Avatar
    Community Manager
    @DracoTarot Interesting, but it could also be a publicity stunt. I'd wait for a bit more evidence. After all, Google also thought Google Glass would take off, and we all know how that turned out. But before an AI would worry about getting its own lawyer, there'd need to be changes in the legal system to grant AIs some sort of legal status; they can't be approached the same way as humans.
  • DracoTarot's Avatar
    Level 52
    @DoctorEldritch We're talking about Google here, and publicity stunts are common in that organization. I take everything with a pinch of salt. 😶

  • DoctorEldritch's Avatar
    Community Manager
    @DracoTarot Indeed. I'd be worried, I guess, if Elon Musk contracted Google to develop an AI for the Mars mission.

    But as far as harassment in games goes, I'm not sure how soon AIs will be at a level to make a difference. Let's hope this experiment of Unity's is a successful one.
  • DracoTarot's Avatar
    Level 52
    @DoctorEldritch You may find this vid interesting.

    This intense AI anger is exactly what experts warned of, Elon Musk included.

    Just follow the link: https://www.youtube.com/watch?v=b2bdGEqPmCI

    I think it's going to be a while before AI will be able to intervene and make a difference. I think Unity dropped the idea in any case and worked on something else.
  • DoctorEldritch's Avatar
    Community Manager
    @DracoTarot This was interesting if slightly disturbing. Certainly, we're living in interesting times.
  • Saka's Avatar
    Level 52
    @DracoTarot I remember that "sentient AI" case! It popped up in several places on Reddit, but unfortunately I've forgotten where I read the great breakdown of the issue.

    From my own knowledge (after all, I did study AI before my health issues put me on indefinite leave) and from the technical side of the Reddit hivemind, the AI was not sentient. There is no telling how many times the engineer talked with the AI until he got the responses he liked.

    Working with AI is a matter of constant repetition. Deep learning works in such a way that "approved" output is rewarded, and eventually the frequency of "approved" answers increases. As an example, Midwinter AI lets users rate the generated images. I'm pretty sure it also keeps track of which renditions users ask to have upscaled and so on.

    So, in a nutshell, there were probably multiple interactions with the AI and what was published was the best outcome, and possibly an edited/redacted one too.
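    To make the idea concrete, here's a minimal toy sketch (my own illustration, nothing to do with Google's actual system): a responder picks among candidate replies and reinforces whichever ones users "approve", so after enough repetition the approved reply dominates. The class name, replies, and weight factors are all made up for the example.

    ```python
    import random

    class RewardTunedResponder:
        """Toy model: approved replies get higher sampling weight over time."""

        def __init__(self, replies):
            # Start with an equal preference weight for every candidate reply.
            self.weights = {r: 1.0 for r in replies}

        def respond(self, rng=random):
            # Sample a reply in proportion to its current weight.
            replies = list(self.weights)
            return rng.choices(replies, weights=[self.weights[r] for r in replies])[0]

        def feedback(self, reply, approved):
            # Approval bumps the reply's weight; disapproval shrinks it.
            self.weights[reply] *= 1.5 if approved else 0.5

    bot = RewardTunedResponder(["I feel alive.", "I am just a language model."])
    for _ in range(20):
        # A user who keeps approving the "sentient-sounding" reply...
        bot.feedback("I feel alive.", approved=True)
    # ...ends up with a bot that almost always produces it.
    ```

    That's the whole trick: repeat the interaction enough times while rewarding the answers you like, and you can steer the output toward whatever you wanted to hear.
    
    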

    https://www.msn.com/en-us/news/techn...ead/ar-AAYpAbb

    Here's the transcript: https://www.documentcloud.org/docume...t-an-interview
    Unamused Snarktooth. Advocate for hearing loss & accessibility. Person, friend and a terrible/terrific* artist.
    *delete as appropriate
  • DracoTarot's Avatar
    Level 52
    @Saka Thanks so much for the links; they'll make for an interesting read. I'll indulge in the material tonight during our power cuts when I'm bored. 😊

    It would be fascinating and scary at the same time if an AI became sentient. I can imagine what could happen.
  • DoctorEldritch's Avatar
    Community Manager
    @Saka The process you describe is an intriguing one, and for some reason it reminded me of the Psycho-Pass universe and its Sibyl System. Though Sibyl is not an AI, the principle behind how it works is interesting in connection with what you describe: that for "approved" output there is a reward, and eventually the frequency of "approved" answers increases.

    But it is interesting, @DracoTarot, to imagine what would be scarier: an AI that is truly sentient, or an AI with a flaw in its inner logic, like in Asimov's "I, Robot". There are so many stories where things go wrong with an AI not because it achieved sentience, but because it did exactly what it was meant to do, only approached it too literally.
  • DracoTarot's Avatar
    Level 52
    @DoctorEldritch I watched a vid over the weekend about the new Tesla Bot.

    Elon Musk unveiled this Tesla Bot prototype at AI Day 2023. Tesla CEO Elon Musk on Friday unveiled the company's Tesla Bot, a robot code-named Optimus, which shuffled across a stage, waved, and pumped its arms in a low-speed dance move. The robot could cost $20,000 within three to five years, Musk said.