1. Information access evaluation -- multilinguality, multimodality, and interaction: 5th International Conference of the CLEF Initiative, CLEF 2014, Sheffield, UK, September 15-18, 2014, Proceedings
   Source: BASE
2. Correlation between Similarity Measures for Inter-Language Linked Wikipedia Articles
3. External Plagiarism Detection using Information Retrieval and Sequence Alignment - Notebook for PAN at CLEF 2011.
4. Diversity in Photo Retrieval: Overview of the ImageCLEFPhoto Task 2009
5. Overview of iCLEF 2009: Exploring Search Behaviour in a Multilingual Folksonomy Environment.
6. Exploring the Effects of Language Skills on Multilingual Web Search.
7. Overview of iCLEF 2008: Search Log Analysis for Multilingual Image Retrieval.
8. Overview of the ImageCLEFphoto 2007 Photographic Retrieval Task.
9. Overview of the ImageCLEFphoto 2008 photographic retrieval task
12. iCLEF 2006 Overview: Searching the Flickr WWW Photo-Sharing Repository.
13. Overview of the ImageCLEF 2006 Photographic Retrieval and Object Annotation Tasks.
14. User experiments with the Eurovision cross-language image retrieval system
15. GeoCLEF: the CLEF 2005 cross-language geographic information retrieval track overview
18. Overview of the ImageCLEFmed 2006 Medical Retrieval and Medical Annotation Tasks.

    Abstract: This paper describes the medical image retrieval and annotation tasks of ImageCLEF 2006. Both tasks are described with respect to goals, databases, topics, results, and techniques. The ImageCLEFmed retrieval task had 12 participating groups (100 runs). Most runs were automatic, with only a few manual or interactive. Purely textual runs outnumbered purely visual runs, but most runs were mixed, using both visual and textual information. None of the manual or interactive techniques was significantly better than the automatic runs. The best-performing systems combined visual and textual techniques, although combining visual and textual features often did not improve performance. Purely visual systems performed well only on visual topics. The medical automatic annotation task used a larger database of 10,000 training images from 116 classes, up from 9,000 images from 57 classes in 2005. Twelve groups submitted 28 runs. Despite the larger number of classes, results were almost as good as in 2005, which demonstrates a clear improvement in performance. The best system of 2005 would have placed mid-field in 2006.

    URL: https://doi.org/10.1007/978-3-540-74999-8_72
    http://eprints.whiterose.ac.uk/78565/
    http://eprints.whiterose.ac.uk/78565/8/WRRO_78565.pdf