Show simple item record

dc.contributor.authorJaiswal, Amit Kumaren
dc.contributor.authorLiu, Haimingen
dc.contributor.authorFrommholz, Ingoen
dc.date.accessioned2020-01-21T11:51:08Z
dc.date.available2020-01-21T11:51:08Z
dc.date.issued2020-03-17
dc.identifier.citationJaiswal AK, Liu H, Frommholz I (2020) 'Utilising information foraging theory for user interaction with image query auto-completion', European Conference on Information Retrieval - Lisbon, Springer.en
dc.identifier.isbn9783030454388
dc.identifier.doi10.1007/978-3-030-45439-5_44
dc.identifier.urihttp://hdl.handle.net/10547/623791
dc.description.abstractQuery Auto-completion (QAC) is a prominently used feature in search engines, where user interaction with this explicit feature is facilitated by automatically suggesting queries based on a prefix typed by the user. Existing QAC models have paid little attention to user interaction and cannot capture a user’s information need (IN) context. In this work, we devise a new task of QAC applied to an image, estimating patch probabilities (patches being one of the key components of Information Foraging Theory) for query suggestion. Our work supports query completion by extending a user query prefix (one or two characters) to a complete query utilising a foraging-based probabilistic patch selection model. We present iBERT, a fine-tuned BERT (Bidirectional Encoder Representations from Transformers) model that leverages combined textual-image queries to address image QAC by computing probabilities over a large set of image patches. The resulting patch probabilities are used for selection while remaining agnostic to changing information needs or contextual mechanisms. Experimental results show that query auto-completion using both natural language queries and images is more effective than using language-level queries alone. Our fine-tuned iBERT model also ranks patches in the image efficiently.en
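The abstract describes ranking candidate completions of a short query prefix by combining textual evidence with the probability that an image patch satisfies the user's information need. The following is a minimal, purely illustrative Python sketch of that idea under strong assumptions: the candidate completions, their language-model scores, and per-candidate patch probabilities (e.g. from a fine-tuned BERT-style encoder) are taken as given, and none of the names, scores, or the combination rule below come from the paper.

# Hypothetical sketch (not the authors' implementation): rank candidate
# completions of a typed prefix by combining an assumed text score with the
# probability that an associated image patch matches the information need,
# in the spirit of foraging-based patch selection described in the abstract.

def rank_completions(prefix, text_scores, patch_probs):
    """Return candidates matching `prefix`, best first.

    text_scores : dict candidate -> assumed language-model score
    patch_probs : dict candidate -> assumed probability that the image patch
                  tied to the candidate satisfies the information need
    """
    matching = [c for c in text_scores if c.startswith(prefix)]
    # A simple product treats the patch probability as a prior over
    # information-need "patches"; the paper's actual combination may differ.
    scored = [(c, text_scores[c] * patch_probs.get(c, 0.0)) for c in matching]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    text_scores = {"red car": 0.5, "red carpet": 0.3, "red card": 0.2}
    patch_probs = {"red car": 0.7, "red carpet": 0.1, "red card": 0.2}
    print(rank_completions("red ca", text_scores, patch_probs))
    # [('red car', 0.35), ('red card', 0.04), ('red carpet', 0.03)]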
dc.description.sponsorshipQUARTZen
dc.language.isoenen
dc.publisherSpringeren
dc.relation.urlhttps://www.springerprofessional.de/en/utilising-information-foraging-theory-for-user-interaction-with-/17887616
dc.relation.urlhttps://www.ncbi.nlm.nih.gov/pmc/articles/PMC7148231/
dc.rights.urihttp://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subjectinformation foragingen
dc.subjectQuery Auto Completionen
dc.subjectinteractive information retrievalen
dc.subjectInformation Foraging Theoryen
dc.subjectinformation retrievalen
dc.subjectG500 Information Systemsen
dc.titleUtilising information foraging theory for user interaction with image query auto-completionen
dc.title.alternativeProceedings of the European Conference on Information Retrieval (ECIR 2020)en
dc.typeConference papers, meetings and proceedingsen
dc.contributor.departmentUniversity of Bedfordshireen
dc.date.updated2020-01-21T11:41:19Z
dc.description.notePublisher self-archiving policy at springer.com/gp/open-access/publication-policies/self-archiving-policy; 12-month embargo


Files in this item

Name:
ECIR2020-2preprint.pdf
Size:
2.692 MB
Format:
PDF
Description:
preprint


http://creativecommons.org/licenses/by-nc-nd/4.0/
Except where otherwise noted, this item's license is described as http://creativecommons.org/licenses/by-nc-nd/4.0/