
LaMDA: Everyone Can Now Chat With the Public-facing Google AI Chatbot

The public is being urged to provide input on the AI conversational chatbots that Google and Meta have launched


Russell Chattaraj
Mechanical engineering graduate who writes about science, technology, and sports, teaches physics and mathematics, has played cricket professionally, and is passionate about bodybuilding.

UNITED STATES: The public can now sign up to interact with Google’s experimental artificial intelligence (AI) chatbot, which was built using the company’s contentious language model.

Google has already cautioned that early previews of LaMDA (Language Model for Dialogue Applications) “may display erroneous or inappropriate content.”


Google’s “AI Test Kitchen” app allows users to learn about, experiment with, and provide feedback on the company’s cutting-edge AI technologies.

“Our objective is to jointly learn, develop, and ethically innovate on AI. We’ll gradually start accepting small groups of people,” the firm said.


“AI Test Kitchen” is “meant to give you a feel of what it may be like to have LaMDA in your hands,” according to Alphabet and Google CEO Sundar Pichai. “These language models’ capacity to produce an unlimited number of alternatives indicates their potential, but it also means that they occasionally make mistakes.”

“And while we’ve made significant advancements in accuracy and safety in the most recent version of LaMDA, we’re still just getting started,” Google noted. “We’ve strengthened the AI Test Kitchen’s security with many layers. The risk has been reduced but not entirely removed by this work,” it added.


The public is being urged to provide input on the AI conversational chatbots that Google and Meta have launched.

The initial reports are unsettling: Meta’s chatbot BlenderBot 3 claimed that Donald Trump would always be president of the United States and called Mark Zuckerberg “creepy and manipulative.”

Meta acknowledged last week that all conversational AI chatbots are known to occasionally replicate and produce hazardous, biased, or insulting remarks. The company stated in a blog post that “BlenderBot can still make crude or inappropriate comments, which is why we are collecting input that will help make future chatbots better.”

Google suspended an engineer, Blake Lemoine, last month for violating the company’s confidentiality agreement after he claimed that the tech giant’s conversational AI has feelings, emotions, and subjective experiences, making it “sentient.”

Lemoine also posed questions to LaMDA, and its responses were shocking and unexpected.



