Should such psychological research be banned?
Prabhu Siddhartha Guptara
Board Consultant, Poet, & Publisher - Salt Desert Media Group Ltd. (with 2 imprints: Global Resilience; and Pippa Rann); Ex-Executive Director, Wolfsberg/UBS; Ex-International Advisory Council, London Business School
https://uk.businessinsider.com/ai-can-track-eye-movements-to-determine-your-personality-traits-2018-8?r=US&IR=T
Let me re-state my headline question: "Should research such as tracking eye movements to determine personality (referred to in the link above) be banned if it does not draw its subjects from a representative sample of global cultures?"
I raise that question because it seems to me that many Western researchers are unaware that there are deep and profound cultural differences in such matters.
For example, in traditional Indian culture, a woman never meets a male non-relative's eyes - unless she wants to be flirtatious or suggestive.
Of course in the cities, and particularly in Mumbai, that sort of norm is changing to the Western one.
But there are differences even between southern and northern England. In the South, the acceptable pattern of eye contact is to look briefly at someone's eyes and then look briefly away before bringing the eye back to the person concerned. In the North, the acceptable pattern of eye contact is to hold the gaze of the person to whom you are speaking.
The differences become more and more stark as one moves through Continental Europe, to Eastern Europe, to the Mediterranean, and radically so as one gets to Africa, the Middle East, South Asia, China and Japan.
Allowing such research without the safeguard I suggest above carries the danger of accepting as standard many forms of behaviour that are not, in fact, universal. Robot-controlled crowd-scanning (which is now becoming prevalent around the world) could then easily attribute to me a personality type that is not only false but the very opposite of my actual personality.
As we move into an increasingly robot-driven world, that could have titanic consequences for people who are adversely misjudged by such automated systems.
For example, it could affect finances (life insurance, health insurance, car insurance, mortgage and loan availability...), legal judgements (with someone's "personality" taken to predispose them to do or avoid certain things), employment and promotion prospects (e.g. via job interviews and promotion discussions), and so on.