UNITED STATES: Google’s parent company, Alphabet Inc., has taken a proactive position by cautioning its staff about how they use chatbots, including its own Bard program. The move comes as Alphabet prepares to launch Bard globally, aiming to tap into the lucrative market for generative artificial intelligence chatbots.
Sources familiar with the matter have revealed that Alphabet is particularly concerned about the risk of leaking confidential information, prompting the need for precautionary measures.
In line with a longstanding policy for protecting sensitive data, Alphabet has advised employees not to enter confidential materials into AI chatbots. Chatbots such as Bard and ChatGPT use generative AI technology to hold realistic conversations with users.
However, recent research has shown that these AI models can inadvertently reproduce the data they absorbed during training, posing a significant risk of leaks. This has led Alphabet to warn its programmers against directly using computer code generated by chatbots, further reducing possible risks.
Responding to inquiries on the matter, Alphabet confirmed that although Bard may make undesired code suggestions, it still serves as a valuable tool for programmers. The company emphasizes the importance of being transparent about the limitations of its technology.
Alphabet’s concerns also reflect its desire to avoid business harm from software it launched in competition with ChatGPT, a chatbot backed by OpenAI and Microsoft Corp.
This cautionary approach by Alphabet aligns with an emerging security standard among corporations, which emphasizes the need to inform personnel about the potential risks of publicly available chat programs. Major companies such as Samsung, Amazon.com, and Deutsche Bank have already established guidelines and protocols for the use of AI chatbots to safeguard sensitive information.
According to a recent survey conducted by the networking site Fishbowl, approximately 43 percent of professionals were already utilizing ChatGPT or similar AI tools as of January, often without informing their superiors. Google, in particular, instructed its staff testing Bard prior to its launch in February not to disclose internal information to the chatbot, highlighting the company’s commitment to data security.
Furthermore, Google has held extensive discussions with Ireland’s Data Protection Commission to address regulators’ privacy concerns. This follows a Politico report suggesting that Bard’s launch in the European Union had been postponed until Google provides additional information about the chatbot’s impact on privacy.
Businesses are increasingly using AI chatbot technology for a variety of functions, raising concerns that private or protected information may end up in chats.
To address these concerns, some companies, including Cloudflare, have developed software that allows businesses to tag sensitive data and restrict it from flowing externally, minimizing the risk of data leaks.
In response to mounting privacy concerns, both Google and Microsoft now offer conversational tools to their business customers that prioritize data protection. While Bard and ChatGPT save conversation history by default, users have the option to delete it, giving them an additional layer of control over their data.
As Alphabet takes a proactive stance in cautioning its employees and implementing measures to mitigate data leak risks, it sets a precedent for other companies to prioritize data security when utilizing AI chatbot technology. By maintaining transparency and providing guidelines, businesses can navigate the potential pitfalls while harnessing the benefits of this rapidly evolving field.