Are You Mistrustful Of AI? You May Also Have Attachment Insecurity, According To New Research


Researchers at the University of Kansas Department of Psychology released findings from a study that highlights an association between attachment security and trust in Artificial Intelligence (AI). The paper, published on ScienceDirect in October, details the researchers' finding that trust in AI can be increased via attachment security priming, i.e., through visualization and guided imagery.

One takeaway from the study is that how people feel, think, and behave in relationships, known as their attachment style, predicts trust. Further, the researchers found that individuals who show signs of attachment anxiety in the way they view relationships in general could be predicted to have less trust in AI, while people who exhibit attachment security typically showed increased trust in AI. The study also found that exposure to positive affect cues did not have the same effect on trust in AI.

Omri Gillath, the corresponding author of the publication, notes that 178 females and 70 males participated in the study. More than 88% of participants were white, and almost 7% were Latino/Hispanic. Participants ranged in age from 18 to 80, with a median age of 21.5 years. Citing prior research in the field, the study notes that there is a general mistrust of AI. “Although the prevalence of and the roles AI plays have increased exponentially, many people still have a hard time trusting AI. For example, in one survey on whether people trust AI, only 44% of participants said that if a self-driving Uber car picked them up, they would get in. A more recent survey by SurveyUSA (2020) of 1200 adults showed similar results, such that 48% of the people said they would never get in a self-driving vehicle.”

For the purposes of this study, AI was defined as the simulation of human intelligence processes by machines, especially computer systems. The study participants were provided with the following explanation of the term when answering questionnaires about their comfort level with AI. “Examples of AIs include personal helpers (like Siri and Alexa), medical diagnostic aids, and self-driving vehicles.”

The researchers proposed that there are likely multiple reasons people do not trust AI, such as a lack of transparency and not understanding how AI makes decisions, the possibility of people being replaced in their jobs by AI, or even the risk of death. The stigma around robotics also plays a role in this fear. “Liang and Lee (2017) using a nationally representative sample showed that over 25% of the people experience heightened fear of autonomous robots and artificial intelligence (FARAI),” the University of Kansas study reads. “They further showed that this fear was positively correlated with other fears such as fear of drones or of becoming unemployed. Regardless of the reason, a lack of trust can result in reduced cooperation, efficiency, and productivity.”

It is this supposed block to progress that the researchers were, in part, looking to understand and find solutions for. The authors posited that attachment insecurity would be associated with less trust in AI and that exposing people to attachment anxiety-related cues would decrease trust in AI. The research concludes that there are similarities between the ways humans think about each other and the way they view AI technology.

The study authors believe this information can be helpful in supporting the adoption and success of AI technologies for individuals, in the workplace, and in public settings. It can also be extrapolated from this research that a human population that trusts one another may be more accepting of non-humans operating and interacting in our society. For that and many other reasons, increasing trust in our communities has never seemed more important.

Artificial Intelligence is being implemented in industries all over the world and is a central theme of the research undertaken at UCIPT. Our work in the HOPE study is using data to assess and shift behavioral outcomes among HIV and other populations.
