Samsung banned employees from using generative AI tools such as ChatGPT, Bing, or Google Bard, fearing further leaks of sensitive data, Bloomberg reports.
The South Korean tech conglomerate notified its staff via a memo seen by Bloomberg. Samsung reportedly fears that any queries its employees enter into ChatGPT or other generative AI models are stored externally and are difficult to retrieve or delete.
The company’s fears are hardly unfounded, as previous reports suggest that earlier this year, employees interacting with ChatGPT leaked Samsung’s sensitive data on three separate occasions.
The information that employees shared with the chatbot supposedly included the source code of software responsible for measuring semiconductor equipment. Ironically, the alleged leak came only 20 days after the South Korean conglomerate lifted an initial ban on ChatGPT that was put in place to avoid leaking confidential data.
Samsung’s memo reportedly told staff that the key reason for the strict measures is security, since communication with generative AIs such as OpenAI’s ChatGPT can lead to a data breach.
Failure to comply with the new policy would result in “disciplinary action” or even termination of employment, Bloomberg reports.
Concerns over ChatGPT’s privacy and security have been ramping up since OpenAI revealed that a flaw in its bot exposed parts of conversations users had with it, as well as their payment details in some cases.
As a result, the Italian Data Protection Authority has banned ChatGPT, while German lawmakers have said they could follow in Italy’s footsteps. Later, however, Italy lifted the ban, as the chatbot’s maker met the watchdog’s privacy demands.
The release of ChatGPT has prompted a race in the tech sector to release intelligent chatbots. Google has launched its ChatGPT rival Bard, while Chinese tech giant Baidu has unveiled its own chatbot, Ernie Bot. Both were met with mixed reviews from early adopters.