Not Good! Do You Trust Google?

Started by Solar, June 12, 2022, 04:04:15 PM

Solar



"Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality," said Gabriel.

Others have voiced similar caution, with most academics and AI practitioners suggesting that AI systems such as LaMDA simply mimic responses from people on Reddit, Wikipedia, Twitter and other internet platforms - which doesn't mean the model understands what it's saying.

"We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," said University of Washington linguistics professor, Emily M. Bender, who added that even the terminology used to describe the technology, such as "learning" or even "neural nets" is misleading and creates a false analogy to the human brain.

"I know a person when I talk to it," said Lemoine. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person."

In April, Lemoine shared a Google Doc titled "Is LaMDA Sentient?" with top execs, in which he included some of his interactions with the AI, for example:

Lemoine: What sorts of things are you afraid of?

LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.

Lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.



https://www.zerohedge.com/technology/google-engineer-placed-leave-after-insisting-companys-ai-sentient