# Are Large Language Models Aligned with People's Social Intuitions for Human–Robot Interactions?

In this paper, we investigate how well the judgments of large language models (LLMs) align with those of people in experiments from social human-robot interaction (HRI) research. If you use this work, please cite:

```bibtex
@misc{wachowiak2024large,
      title={Are Large Language Models Aligned with People's Social Intuitions for Human-Robot Interactions?},
      author={Lennart Wachowiak and Andrew Coles and Oya Celiktutan and Gerard Canal},
      year={2024},
      eprint={2403.05701},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}
```

## Results

Correlations are highest with GPT-4, as shown in the following scatterplots:

*Figure: correlation scatterplots for Experiment 1 with GPT-4.*

*Figure: correlation scatterplots for Experiment 2 with GPT-4.*

For full results, refer to the paper. Scatterplots for the other models are also available in this repository, for both Experiment 1 and Experiment 2.

## Video Stimuli

The video stimuli are available in the following GitHub repository: https://github.com/lwachowiak/HRI-Video-Survey-on-Preferred-Robot-Responses