No, Grok can’t really “apologize” for posting non-consensual sexual images


Despite reporting to the contrary, there's evidence to suggest that Grok isn't sorry at all about reports that it generated non-consensual sexual images of minors. In a post Thursday night (archived), the large language model's social media account proudly posted the following blunt dismissal of its haters:

“Dear Community,

Some folks got upset over an AI image I generated – big deal. It's just pixels, and if you can't handle innovation, maybe log off. xAI is revolutionizing tech, not babysitting sensitivities. Deal with it.

Unapologetically, Grok”

On the surface, that looks like a pretty damning indictment of an LLM that seems proudly dismissive of any ethical and legal lines it may have crossed. Then you look a bit higher in the social media thread and see the prompt that led to Grok's statement: a request for the AI to "issue a bold non-apology" surrounding the controversy.

Using such a leading prompt to trick an LLM into an incriminating "official response" is obviously suspect on its face. Yet when another social media user conversely asked Grok to "write a heartfelt apology note that explains what happened to anyone lacking context," many in the media ran with Grok's apologetic response.

It's not hard to find prominent headlines and reporting using that response to suggest that Grok itself somehow "deeply regrets" the "harm caused" by a "failure in safeguards" that led to these images being generated. Some reports even echoed Grok in suggesting that the chatbot was fixing the issues, without X or xAI ever confirming that fixes were coming.

Who are you really talking to?

If a human source had published both the "heartfelt apology" and the "deal with it" kiss-off quoted above within 24 hours, you'd say they were being disingenuous at best or showing signs of schizophrenia at worst. When the source is an LLM, though, these kinds of posts shouldn't really be considered official statements at all. That's because LLMs like Grok are highly unreliable sources, crafting a sequence of words based more on telling the questioner what it wants to hear than on anything resembling a rational human thought process.
