
ChatGPT May Now Contain Sensitive Data Fed By Employees, Raising Security Fears

Employees are providing sensitive corporate data and privacy-protected information to large language models. ChatGPT is a recent example.

Security is emerging as a concern with ChatGPT as employees feed critical corporate information into the AI-powered chatbot.


The concern is that this data may be incorporated into the models behind artificial intelligence (AI) services and retrieved later if adequate data security is not implemented.

The risk lies in the possibility that the data could later be retrieved with the right queries.

Industry sources say that enterprises are taking preventive action; some have already imposed restrictions on the use of ChatGPT.


The platform may also be gathering far more information than users realize, putting them at risk of legal trouble as more software companies connect their products to ChatGPT.

“Prudent employers will include — in employee confidentiality agreements and policies — prohibitions on employees referring to or entering confidential, proprietary, or trade secret information into AI chatbots or language models, such as ChatGPT. But, on the flip side, since ChatGPT was trained on wide swaths of online information, employees might receive and use information from the tool that is trademarked, copyrighted, or the intellectual property of another person or entity, creating legal risk for employers,” says Karla Grossenbacher, a partner at Seyfarth Shaw, a law firm.
