Online peer-to-peer marketplaces are great places to sell unwanted goods, but they rely on trust between users to operate effectively. High-quality photographs help reduce uncertainty about products, leading to better purchasing experiences. But how can we tell the difference between high- and low-quality images, and what does image quality mean for sales?
Cx faculty members Mor Naaman and Serge Belongie, along with Cx alumnae Kimberly Wilber and Xiao Ma and researchers from Cornell Tech and other institutions, tackled this question with a combination of user surveys and machine learning. They gathered images from consumer resale sites like eBay: low- and high-quality user images, as well as professional stock photographs. Participants annotated the images by quality, and their annotations were used to train a convolutional neural network to predict the quality of future images.
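The training setup described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's actual architecture: the network, hyperparameters, and the synthetic images and labels below are all stand-ins for the crowdsourced quality annotations.

```python
# Hedged sketch: a tiny binary image-quality classifier, trained on
# synthetic stand-ins for annotated listing photos. Illustrative only.
import torch
import torch.nn as nn

class QualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)  # logit for "high quality"

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = QualityNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Synthetic data: 8 RGB 32x32 "photos" with binary quality labels.
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 2, (8, 1)).float()

for _ in range(5):  # a few gradient steps for demonstration
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()

probs = torch.sigmoid(model(images))  # quality scores in [0, 1]
```

In the actual study, a much larger network would be trained on real annotated marketplace photos; the point here is only the shape of the pipeline, from labeled images to a per-image quality score.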
They then used the automatic quality ratings to predict important marketplace outcomes like probability of sale and perceived trustworthiness. Sales were predicted using data from eBay, while trustworthiness was rated by Amazon Mechanical Turk workers. The results show that high-quality user-generated images were rated as more trustworthy than low-quality ones, and even more trustworthy than professional stock images. Higher-quality images were also more likely to result in sales.
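The second step, linking a per-image quality score to the probability of sale, can be sketched with a simple logistic regression. The data here is synthetic and the positive quality-to-sale relationship is built in by construction; the real analysis used eBay sales records.

```python
# Hedged sketch: regressing a binary "sold" outcome on a CNN-derived
# image quality score. Data is synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
quality = rng.uniform(0, 1, size=500)  # quality score per listing

# Synthetic outcome: higher quality raises the chance of a sale.
sold = (rng.uniform(0, 1, 500) < 0.2 + 0.6 * quality).astype(int)

model = LogisticRegression().fit(quality.reshape(-1, 1), sold)
coef = model.coef_[0][0]                        # effect of quality
p_low = model.predict_proba([[0.1]])[0, 1]      # low-quality listing
p_high = model.predict_proba([[0.9]])[0, 1]     # high-quality listing
```

With data generated this way, the fitted coefficient on quality is positive and the predicted sale probability is higher for the high-quality listing, mirroring the direction of the paper's finding.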
So what does this mean for online marketplaces? The most effective images are ones taken by real people, rather than stock photos, as long as they're clear and accurate. The results also demonstrate the importance of computational understanding of text and images in studying online interactions.
This research was presented at WACV 2019 in Hawaii, and is available online at arXiv. You can also read more about it in the Cornell Chronicle.