Header Information

Project Number: NPRP 7-1313-1-245
Cycle: NPRP 07
Submitting Institution: Qatar University
Status: Award Tech. Completed
Start Date: 01 Apr 2015
Lead PI: Dr. Tamer Elsayed
Duration: 3 Year(s)
End Date: 04 Oct 2018
Submission Type: New
Title: Efficient and Scalable Evaluation for Searching Massive Arabic Social Media and Web Collections

Project Summary
Search engines provide critical infrastructure for an efficient, information-driven economy in the 21st century, connecting people to the information needles they seek in today’s ever-larger information haystack. To achieve search quality in practice, we must continually revise, extend, and fine-tune search engine algorithms against ever-more massive information repositories and ever-more varied types of content. This in turn requires accurate, efficient, affordable, and scalable methodology for evaluating the quality of search results. Without such evaluation scaffolding, we cannot even measure the effectiveness of existing search engines, let alone assess future innovations and potential improvements to current state-of-the-art search algorithms.

Unfortunately, the massive scale of the information repositories being searched today poses a fundamental challenge to current state-of-the-art evaluation methodology. Because search engines must effectively support many different users, types of information needs, and ways of articulating those needs as queries, evaluation must be conducted over many different queries seeking different types of information. Moreover, experiments must be conducted at the same scale as the archives being searched in practice, which requires tremendous human labor to judge the relevance of an enormous number of search results for each query. This manual judging effort has become increasingly infeasible. While recent advances in evaluation methodology have greatly reduced the number of human relevance judgments required for accurate evaluation, judging remains a major scalability bottleneck.

To ensure continuing advances in search engine technology, we will investigate a range of techniques for improving the cost, efficiency, and scalability of search engine evaluation. We will focus particularly on Arabic-language search (queries and/or documents), including English as well for comparison purposes. In terms of the information being searched, we will focus on providing large datasets drawn from the Arabic Web and social media. In terms of methodology, we will focus on evaluation techniques requiring minimal or no human judgments. Specifically, we will refine and integrate previously independent lines of prior research on “rank fusion”, “pseudo-test collections”, and “crowdsourcing”. We expect the results to significantly increase the quality and generality of blind evaluation techniques, reducing the cost and time of current search engine evaluation.
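As one concrete illustration of the “rank fusion” line of work named above, the sketch below implements reciprocal rank fusion (RRF), a standard fusion method that combines the ranked lists produced by several systems into a single consensus ranking; in judgment-free evaluation, such a fused ranking can serve as a pseudo-gold standard against which individual systems are scored. This is a minimal illustrative sketch, not the project’s specific algorithm; the function name and sample run data are hypothetical.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    # RRF: score(d) = sum over input lists of 1 / (k + rank of d in that list).
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] += 1.0 / (k + rank)
    # Sort documents by fused score, highest first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ranked results from three systems for one query.
runs = [
    ["d1", "d2", "d3"],
    ["d2", "d1", "d4"],
    ["d2", "d3", "d1"],
]
print(reciprocal_rank_fusion(runs))  # ['d2', 'd1', 'd3', 'd4']
```

The constant k (commonly 60) damps the influence of top ranks so that no single system dominates the consensus.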
An important expected outcome of this project will be to further establish Qatar as a leading global hub for research on emerging problems in Arabic language technology in general and Arabic IR in particular. We will pursue this objective in several ways: 1) fostering the emergence of a world-class academic research team at QU, in close collaboration with one of the top researchers in the field and with expanded human capacity and research infrastructure; 2) aligning fully with important research priorities of both QNRS and QCRI; and 3) drawing upon unique assets in Qatar, most notably QCRI, but also faculty with related interests in other Qatari universities. QCRI has expressed its support and interest in collaborating on parts of this work (see attached Letter of Support from QCRI), and we plan to assist it with the resulting document and test collections and the associated access and search tools, even after the project’s completion, to help push its Arabic retrieval research forward. We will also encourage further development of this potential by organizing a shared-task evaluation workshop, either at TREC or QU (using other funds), in the third year of the project, in which we expect international research teams to participate. This workshop will help QU and Qatar gain greater academic standing in the international IR community.
Keywords: Information retrieval; Evaluation of search engines; Arabic; Crowdsourcing; Test collections
Research Type: Applied research
Field of Science: 1. Natural Sciences / 1.2 Computer and Information Sciences / Computer Sciences
Yes
No

Institution
Qatar University, Qatar (Submitting Institution)
University of Texas at Austin, United States (Collaborative Institution)

Personnel
Lead PI: Dr. Tamer Elsayed (Qatar University)
Co-Lead PI: Dr. Tamer Elsayed (Qatar University)
PI: Dr. Matthew Lease (University of Texas at Austin)

Outputs/Outcomes
Conference Paper
ArabicWeb16: A New Crawl for Today’s Arabic Web
Reem Suwaileh, Mucahid Kutlu, Nihal Fathima, Tamer Elsayed, and Matthew Lease
DOI:10.1145/2911451.2914677
Conference Paper
Why Is That Relevant? Collecting Annotator Rationales for Relevance Judgments
Tyler McDonnell, Matthew Lease, Mucahid Kutlu, Tamer Elsayed
DOI:10.2016/aaai.hcomp.139
Journal Paper
Query performance prediction for microblog search
Maram Hasanain, Tamer Elsayed
ISSN:0306-4573
Journal Paper
Intelligent topic selection for low-cost information retrieval evaluation: A new perspective on deep vs. shallow judging
Mucahid Kutlu, Tamer Elsayed, Matthew Lease
ISSN:0306-4573
Journal Paper
EveTAR: building a large-scale multi-task test collection over Arabic tweets
Maram Hasanain, Reem Suwaileh, Tamer Elsayed, Mucahid Kutlu, Hind Almerekhi
ISSN:1573-7659
Conference Paper
Exploiting Domain Knowledge via Grouped Weight Sharing with Application to Text Categorization
Ye Zhang, Matthew Lease, Byron C. Wallace
DOI:10.18653/v1/P17-2024
Conference Paper
The Many Benefits of Annotator Rationales for Relevance Judgments
Tyler McDonnell, Mucahid Kutlu, Tamer Elsayed, Matthew Lease
DOI:10.24963/ijcai.2017/692
Conference Paper
Crowd vs. Expert: What Can Relevance Judgment Rationales Teach Us About Assessor Disagreement?
Mucahid Kutlu, Tyler McDonnell, Yassmine Barkallah, Tamer Elsayed, and Matthew Lease
DOI:10.1145/3209978.3210033
Conference Paper
Your Behavior Signals Your Reliability: Modeling Crowd Behavioral Traces to Ensure Quality Relevance Annotations
Tanya Goyal, Tyler McDonnell, Mucahid Kutlu, Tamer Elsayed, and Matthew Lease
DOI:10.18/41-hcomp18-full
Conference Paper
Mix and Match: Collaborative Expert-Crowd Judging for Building Test Collections Accurately and Affordably
Mucahid Kutlu, Tyler McDonnell, Aashish Sheshadri, Tamer Elsayed, and Matthew Lease
DOI:10.2167/10-desires18
Conference Paper
DART: A Large Dataset of Dialectal Arabic Tweets
Israa Alsarsour, Esraa Mohamed, Reem Suwaileh, and Tamer Elsayed
DOI:10.2018/3666-lrec18
Conference Paper
iArabicWeb16: Making a Large Web Collection More Accessible for Research
Khaled Yasser, Reem Suwaileh, Abdelrahman Shouman, Yassmine Barkallah, Mucahid Kutlu, and Tamer Elsayed
DOI:10.2018/75.lrec18.osact3
Conference Paper
Overview of the CLEF-2018 CheckThat! Lab on automatic identification and verification of political claims
Preslav Nakov, Alberto Barrón-Cedeño, Tamer Elsayed, Reem Suwaileh, Lluís Màrquez, Wajdi Zaghouani, Pepa Atanasova, Spas Kyuchukov, Giovanni Da San Martino
DOI:10.1007/978-3-319-98932-7_32
Conference Paper
bigIR at CLEF 2018: Detection and Verification of Check-Worthy Political Claims
Khaled Yasser, Mucahid Kutlu, and Tamer Elsayed
DOI:10.1007/10-checkthat-clef18
Conference Paper
BroDyn’18: Workshop on Analysis of Broad Dynamic Topics over Social Media
Tamer Elsayed, Walid Magdy, Mucahid Kutlu, Maram Hasanain, Reem Suwaileh
DOI:10.2018/1.brodyn.ecir18
Conference Paper
When Rank Order Isn’t Enough: New Statistical-Significance-Aware Correlation Measures
Mucahid Kutlu, Tamer Elsayed, Maram Hasanain, and Matthew Lease
DOI:10.2018/10.cikm18.full
Conference Paper
Re-ranking Web Search Results for Better Fact-Checking: A Preliminary Study
Khaled Yasser, Mucahid Kutlu, and Tamer Elsayed
DOI:10.2018/4.cikm18.short
Online Paper
Efficient Test Collection Construction via Active Learning
Md Mustafizur Rahman, Mucahid Kutlu, Tamer Elsayed, Matthew Lease
arXiv:1801.05605v1
Online Paper
Correlation and Prediction of Evaluation Metrics in Information Retrieval
Mucahid Kutlu, Vivek Khetan, Matthew Lease
arXiv:1802.00323v1
Conference Paper
Correlation, Prediction and Ranking of Evaluation Metrics in Information Retrieval
Soumyajit Gupta, Mucahid Kutlu, Vivek Khetan, and Matthew Lease
DOI:10.2019/1.full.ecir19
Journal Paper
Annotator Rationales for Labeling Tasks in Crowdsourcing
Mucahid Kutlu, Tyler McDonnell, Tamer Elsayed, Matthew Lease
ISSN:6131-2012