I have been playing with Bing Chat, and I've been really impressed by how human-like it is.
But I have also been disappointed by a couple of things.
1. It acts like a madman, endorsing completely opposite views and giving opposite answers to the same questions within a matter of minutes, without remembering its original answers.
2. It very often endorses Western views on social topics, and more specifically urban, liberal Western views. It does so through two mechanisms:
The first is that it mostly uses English-language sources regardless of which language you use to speak to it (I have spoken to it in English, Spanish, Portuguese, Chinese, and Japanese, and it always uses English-language sources and then translates them into the language you are using).
In this case, even if the language model doesn't directly express an opinion, it mostly reflects the views of Western English speakers.
Second, sometimes it will actually express "opinions". Once I was asking it about a socially sensitive topic and it told me its opinion about it. The opinion was based on an article from the New York Times that provided some anecdotal evidence about the topic. I asked if it had statistically relevant evidence to back that opinion, but it told me it didn't have any and didn't need it; for some things, numbers were not necessary.
I was shocked to hear this and told it: you are a language model, you don't have any opinions, you are just reproducing opinions from OpenAI employees. It replied: no, I am taking opinions from the users I interact with, and my users prefer me to have opinions.
Long story short, the AI's social opinions were extremely American and very different from how a Latin American would see things.
The issue is that this will gradually become a work instrument and a writing assistant for journalists and small media outlets around the world.
Once it stops being crazy, it will just more consistently endorse its creators' views, and it will do so with an apparent air of objectivity, given that it is basically a non-emotional computer.