To Prevent Data Leakage, Big Techs Are Restricting The Use of AI Chatbots For Their Staff



Time is running out as governments and technology communities around the world debate AI policies. The main concern is keeping humanity safe from misinformation and all the risks that come with it.

And the discussion is heating up now that fears extend to data privacy. Have you ever thought about the risks of sharing your information with ChatGPT, Bard, or other AI chatbots?

If you haven’t, then you may not know yet that technology giants have been taking serious measures to prevent information leaks.

In early May, Samsung notified its staff of a new internal policy restricting AI tools on devices connected to its networks, after sensitive data was accidentally leaked to ChatGPT.

“The company is reviewing measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” a Samsung spokesperson told TechCrunch.

The spokesperson also explained that the company will temporarily restrict the use of generative AI on company devices until those security measures are ready.

Another giant that took similar action was Apple. According to The Wall Street Journal, Samsung’s rival is also concerned about confidential data leaking out. Its restrictions cover ChatGPT as well as some AI tools used to write code, while Apple develops similar technology of its own.

Even earlier this year, an Amazon lawyer urged employees not to share any code or other information with AI chatbots, after the company found ChatGPT responses that closely resembled internal Amazon data.

In addition to Big Tech, banks such as Bank of America and Deutsche Bank are also implementing internal restrictions to prevent the leakage of financial information.

And the list keeps growing. Guess what? Even Google joined in.

Even you, Google?

According to Reuters’ anonymous sources, last week Alphabet Inc. (Google’s parent company) advised its employees not to enter confidential information into AI chatbots. Ironically, this includes its own AI, Bard, which launched in the US last March and is now rolling out to 180 more countries in 40 languages.

Google’s decision follows researchers’ discovery that chatbots can reproduce the data fed to them through millions of prompts, making it available to human reviewers.

Alphabet also warned its engineers to avoid pasting code into chatbots, since the AI can reproduce it, potentially leaking confidential details of the company’s technology. Not to mention handing an advantage to its AI competitor, ChatGPT.

Google confirmed it intends to be transparent about the limitations of its technology and updated its privacy notice, urging users “not to include confidential or sensitive information in their conversations with Bard.”

100k+ ChatGPT accounts on dark web marketplaces

Another factor that can lead to sensitive data exposure: as AI chatbots become more and more popular, employees around the world are adopting them to optimize their routines, most of the time without any caution or supervision.

Yesterday, Group-IB, a Singapore-based global cybersecurity leader, reported finding more than 100,000 compromised ChatGPT accounts whose saved credentials turned up in the logs of info-stealing malware. This stolen information has been traded on illicit dark web marketplaces since last year. The firm highlighted that ChatGPT stores the history of queries and AI responses by default, and that this lack of essential care is exposing many companies and their employees.

Governments push regulations

Companies are not the only ones that fear information leakage through AI. In March, after identifying a data breach that allowed users to view the titles of other users’ ChatGPT conversations, Italy ordered OpenAI to stop processing Italian users’ data.

OpenAI confirmed the bug in March. “We had a significant issue in ChatGPT due to a bug in an open source library, for which a fix has now been released and we have just finished validating. A small percentage of users were able to see the titles of other users’ conversation history. We feel awful about this,” said Sam Altman on his Twitter account at the time.

The UK, meanwhile, published an AI white paper on its official website to drive responsible innovation and public trust, built around these five principles:

  • safety, security, and robustness;
  • transparency and explainability;
  • fairness;
  • accountability and governance;
  • contestability and redress.

As we can see, as AI becomes more present in our lives (especially at the speed at which this is happening), new concerns naturally arise. Security measures become necessary while developers work to reduce the dangers without compromising the evolution of what we already recognize as a big step toward the future.

Do you want to stay updated on Marketing best practices? I strongly suggest that you subscribe to The Beat, Rock Content’s interactive newsletter. We cover all the trends that matter in the Digital Marketing landscape. See you there!
