Recently, Samsung allowed employees to use ChatGPT as a tool for solving work-related problems, which led to leaks of confidential information. OpenAI, the developer of the neural network, states in its documentation that the bot should not be used with confidential data: the system is trained on its interactions with users, and content it receives may appear in its responses to other users.
Apparently, all three Samsung employees were unaware of this: one asked the chatbot to check the source code of a confidential database for errors, another asked it to optimize code, and a third uploaded a recording of a meeting to ChatGPT and asked the neural network to produce the minutes.
In response, Samsung has implemented a number of measures, including limiting the length of employees' ChatGPT requests to one kilobyte, or 1024 characters of text. The company has reportedly also created its own chatbot to prevent similar incidents.
Source: Ferra

I am a professional journalist and content creator with extensive experience writing for news websites. I currently work as an author at Gadget Onus, where I specialize in covering hot news topics. My written pieces have been published on some of the biggest media outlets around the world, including The Guardian and BBC News.