Samsung has banned the use of generative AI tools like ChatGPT on its internal networks and company-owned devices over fears that uploading sensitive information to these platforms represents a security risk. The rule was communicated to staff in a memo, which describes it as a temporary restriction while Samsung works to “create a secure environment” to safely use generative AI tools.
The biggest risk factor is likely OpenAI’s chatbot ChatGPT, which has become hugely popular not only as a toy for entertainment but as a tool to help with serious work. People can use the system to summarize reports or write responses to emails — but that might mean inputting sensitive information, which OpenAI might then have access to.
“We are temporarily restricting the use of generative AI”
The privacy risks involved in using ChatGPT vary based on how a user accesses the service. If a company uses ChatGPT’s API, then conversations with the chatbot are not used to train OpenAI’s models. However, this is not true of text inputted into the general web interface using its default settings.
The company says it reviews conversations users have with ChatGPT to improve its systems and to ensure that content complies with its policies and safety requirements. It advises users to not “share any sensitive information in your conversations” and notes that any conversations may also be used to train future versions of ChatGPT. The company recently rolled out a feature similar to a browser’s “incognito mode,” which does not save chat histories and prevents them from being used for training.
Samsung is evidently worried about employees playing around with the tool and not realizing that it’s a potential security risk.
“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” said the company’s internal memo, reports Bloomberg. “However, until these measures are prepared, we are temporarily restricting the use of generative AI.” As well as restricting the use of generative AI on company computers, phones, and tablets, Samsung is also asking staff not to upload sensitive business information via their personal machines.
“We ask that you diligently adhere to our security guideline and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” Samsung’s memo said. The South Korean tech giant confirmed the authenticity of the memo to Bloomberg. A spokesperson did not immediately respond to The Verge’s request for comment.
The ban comes after Samsung discovered that some of its staff “leaked internal source code by uploading it to ChatGPT,” according to Bloomberg. There are concerns that uploading sensitive company information to external servers operated by AI providers risks exposing it publicly, and limits Samsung’s ability to delete it after the fact. News of Samsung’s policy comes a little over a month after ChatGPT experienced a bug that exposed some users’ chat histories, and in some cases payment information, to other users of the service.
Samsung’s policy means it joins a host of other companies and institutions that have placed limits on the use of generative AI tools, though the exact reasons for the restrictions vary. JPMorgan has restricted their use over compliance concerns, while other banks, including Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, and Wells Fargo, have also either banned or limited such tools. New York City schools have banned ChatGPT over concerns about cheating and its impact on learning, while data protection and child safety concerns were cited as the reason for Italy’s temporary nationwide ban of the chatbot.
Samsung reportedly has plans for its employees to use AI tools eventually, but it sounds like it’s waiting to develop in-house solutions. Bloomberg notes that it’s working on tools to help with translation, summarizing documents, and software development.
Samsung’s restrictions do not apply to devices sold to consumers, such as laptops or phones.