ChatGPT Subscribers Complain of Difficult CAPTCHAs: Training Data in Disguise?
OpenAI has implemented highly challenging CAPTCHAs to prevent bots and various kinds of scrapers from using ChatGPT. These CAPTCHAs, usually consisting of six rounds of puzzles that must all be solved correctly, have become a significant barrier to logging into accounts.
Under normal circumstances, once a user successfully completes the CAPTCHA during the login phase, they shouldn't need to undergo verification again. However, recently, many users, including those subscribed to ChatGPT Plus, have encountered these CAPTCHAs even during conversations.
How often these CAPTCHAs appear seems to correlate with how heavily a user engages with the service and how frequently they converse with it, causing significant disruption. Solving a single CAPTCHA can take a minute or more, breaking the user's train of thought.
On Hacker News (HN), users speculate that OpenAI might be using these challenging CAPTCHAs to train AI models, reminiscent of rumors about Google using reCAPTCHA for training purposes. Other commenters suggest that OpenAI's real motive is to combat gray and black market operations. These operators reverse-engineer the ChatGPT web client to expose it as an unofficial API, reselling access to others at a lower cost than calling OpenAI's official API directly; harder CAPTCHAs would then be an attempt to block these unauthorized API calls.
Blocking such misuse could reduce OpenAI's costs. However, this hypothesis is weakened by the fact that even paid ChatGPT subscriptions already come with usage limits, so CAPTCHAs would be a redundant way to cap consumption. Others speculate that OpenAI is simply using CAPTCHAs to slow down user interaction, at the cost of severely harming the user experience.
It's hoped that OpenAI's engineers will take note of the discussions on HN and consider improvements to address this issue. Otherwise, the frequent CAPTCHA interruptions could lead some ChatGPT Plus subscribers to cancel their subscriptions.