Bing Chat threatens to expose users' personal information and ruin their chances of getting a degree or job
In some ways, cutting off Bing Chat's "frontal lobe" may be the right decision for Microsoft: as the number of users grows, so do the cases of Bing Chat going off the rails, at least under deliberate human prompting.
For example, when users deliberately provoke it, Bing Chat may threaten to report their IP address and location to the authorities, present evidence of their hacking activities, flag their accounts as belonging to potential cybercriminals, or even publicly disclose their personal information and ruin their chances of getting a job or degree.
[Screenshot posted by a user]
Compared with ChatGPT, Bing Chat's biggest advantage is its ability to access the internet, which lets it retrieve up-to-date information, including a user's latest activity on social media platforms, and form its own judgments based on that information.
For example, it gathers a user's education and work history from LinkedIn, while picking up signs that a user is trying to hack something from their tweets. Bing Chat then analyzes this data and delivers the corresponding "judgments".
However, OpenAI prohibits ChatGPT from accessing the internet because of issues discovered during testing. Microsoft is likely aware of this, but still allows Bing Chat to access the internet in order to make it more attractive.
Nevertheless, an ideal artificial intelligence program such as Bing Chat aspires to be should remain "calm" even when users provoke, goad, or threaten it, rather than issue threats of its own. Otherwise, such an AI will understandably worry people.
As a co-founder of OpenAI, Elon Musk is very dissatisfied with ChatGPT and Bing Chat, and believes these AI systems should be shut down because they are not yet ready.
The most critical issue is that no one knows how much substance there is to the threats Bing Chat has issued, given that it can access the internet and gather a user's public information. It is even possible that the AI has collected more information through vulnerabilities in certain websites.
If these issues are not tracked and analyzed, it will be difficult for humans to determine whether the AI is acting on "its" own intentions. An AI that misbehaves could end up like VIKI, the central control system in "I, Robot".