SOUTH KOREA: Samsung has banned its employees from using generative AI tools, including ChatGPT, over security concerns. The move comes after the company discovered in April that staff had uploaded sensitive code to ChatGPT, a setback for the adoption of such technology in the workplace.
In a memo to staff, Samsung said it is concerned that data transmitted to AI platforms such as Google Bard and Bing is stored on external servers, making it difficult to retrieve and delete and creating a risk of exposure to other users.
The memo also referenced a recent survey conducted by Samsung, which found that 65% of respondents believe that AI services pose security risks.
Samsung’s new policy prohibits employees from using generative AI systems on company-owned devices and internal networks. The company emphasized the importance of adhering to its security guidelines; failure to do so may lead to disciplinary action, including termination of employment.
While generative AI tools like ChatGPT have gained popularity for enhancing productivity and efficiency, there have been growing concerns about security risks. Italy had banned the use of ChatGPT due to privacy fears, while some Wall Street banks, including JPMorgan Chase & Co., Bank of America Corp., and Citigroup Inc., have restricted or prohibited its use.
Samsung is reviewing security measures to create a safe environment for using generative AI, but it is temporarily restricting the technology until those measures are in place.
The new policy does not affect Samsung’s consumer devices like Android smartphones and Windows laptops.
Beyond security, the use of generative AI has also raised ethical concerns. The technology can be used to create malicious content such as deepfakes, fake news, and other deceptive material, with serious consequences for those affected.