Professor’s Data Loss & ChatGPT Mockery | Social Media Backlash

by drbyos

Almost everyone has experienced a similar situation: the sudden disappearance of personal data, following accidental deletion or a system malfunction. However, a German professor recently reported the loss of a large quantity of academic documents after changing the settings of OpenAI’s ChatGPT. He shared his experience in the latest issue of the journal Nature. Instead of generating compassion, he became the target of mockery on social media, mainly because he was seen as too dependent on AI.

Marcel Bucher, a professor of plant sciences at the University of Cologne (Germany), said that he subscribed to the paid ChatGPT Plus plan two years ago and considers the tool a valuable assistant.

“Since subscribing to ChatGPT Plus, I have used it daily as an assistant – for writing emails, developing lesson plans, putting together research grant applications, proofreading scientific articles, preparing for lectures, creating exam questions, analyzing student work, and even as an interactive teaching tool,” Bucher wrote.

Using ChatGPT for article writing and data analysis is quite common in academia. (Illustration: Freepik)

He acknowledged that ChatGPT, like other large language models, sometimes provides inaccurate information. However, he appreciated its ability to retain context and the sense of a “continuous and stable workspace” it offered.

According to Gizmodo, the problem started when he changed the data usage permissions settings.

According to the Nature article, in August Bucher temporarily disabled the “Accept data sharing” option to check whether he was still using all of ChatGPT’s features. At that point, his entire conversation history and project files were permanently deleted: two years of carefully organized academic documents, wiped out. No warning, no recovery button, just a blank page. Fortunately, he was able to save a small portion of the conversations, but the vast majority of the data was lost forever.

Initially, the professor assumed it was a simple technical problem and that the data could be recovered. He reinstalled the app, tried different browsers, and changed settings several times, to no avail. When he contacted OpenAI, he received only an automated response that could not resolve the issue. Ultimately, even after reaching technical support, the conclusion remained the same: the data was lost.

If this story had happened a few years ago, Bucher would undoubtedly have elicited more sympathy. But in 2026, with AI increasingly criticized for producing low-quality, inaccurate and controversial content, many instead rejoiced at the “loss” of the AI data.

“A tragic and striking story: ‘ChatGPT erased everything I never did myself’,” one user of the social media platform Bluesky commented sarcastically.

Others were more scathing: “Next time, do the work you are paid to do yourself, instead of entrusting everything to a copy machine that harms the environment and is criticized for its many social consequences.”

Some even suspect that Professor Bucher himself may not have written the article published in Nature.

“This is the most ridiculous thing I’ve read in a long time. That said, there is no reason to believe that the person who posted this actually wrote it themselves,” another user commented.

Professor Bucher, however, made a valid point: he noted that he himself had been encouraged to use AI in his work, and that his was not an isolated case. Many large organizations are urging their employees to integrate AI into research and teaching, seeing it as an inevitable development.

He wrote that more and more people are using AI to write, plan, and teach; universities are also experimenting with integrating these tools into their programs. However, his example illustrates a fundamental weakness: these tools are not developed according to academic standards of reliability and accountability.

According to him, if a single click can erase years of accumulated data, then ChatGPT, at least in his experience, cannot yet be considered a secure tool for use in a professional environment.

The real impact of generative AI tools on the world of work remains unclear, especially as many employees grow increasingly reluctant while leaders push for their adoption. Whatever the future holds, many AI skeptics may well rejoice at any news of data loss caused by over-reliance on the technology.
