<?xml version="1.0" encoding="ISO-8859-1"?><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id>2227-1899</journal-id>
<journal-title><![CDATA[Revista Cubana de Ciencias Informáticas]]></journal-title>
<abbrev-journal-title><![CDATA[Rev cuba cienc informat]]></abbrev-journal-title>
<issn>2227-1899</issn>
<publisher>
<publisher-name><![CDATA[Editorial Ediciones Futuro]]></publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id>S2227-18992018000200001</article-id>
<title-group>
<article-title xml:lang="en"><![CDATA[Human-Computer Interaction as a basis for assessing Geographic Information Retrieval Systems.]]></article-title>
<article-title xml:lang="es"><![CDATA[Interacción Persona-Computador como base para la evaluación de Sistemas de Recuperación de Información Geográfica.]]></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Puebla Martínez]]></surname>
<given-names><![CDATA[Manuel Enrique]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Perea Ortega]]></surname>
<given-names><![CDATA[José Manuel]]></given-names>
</name>
<xref ref-type="aff" rid="A02"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Simón Cuevas]]></surname>
<given-names><![CDATA[Alfredo]]></given-names>
</name>
<xref ref-type="aff" rid="A03"/>
</contrib>
</contrib-group>
<aff id="A01">
<institution><![CDATA[Universidad de las Ciencias Informáticas]]></institution>
<addr-line><![CDATA[La Habana]]></addr-line>
<country>Cuba</country>
</aff>
<aff id="A02">
<institution><![CDATA[Universidad de Extremadura]]></institution>
<addr-line><![CDATA[ ]]></addr-line>
<country>España</country>
</aff>
<aff id="A03">
<institution><![CDATA[Instituto Superior Politécnico José Antonio Echeverría]]></institution>
<addr-line><![CDATA[ ]]></addr-line>
<country>Cuba</country>
</aff>
<pub-date pub-type="pub">
<day>00</day>
<month>06</month>
<year>2018</year>
</pub-date>
<pub-date pub-type="epub">
<day>00</day>
<month>06</month>
<year>2018</year>
</pub-date>
<volume>12</volume>
<numero>2</numero>
<fpage>1</fpage>
<lpage>14</lpage>
<copyright-statement/>
<copyright-year/>
<self-uri xlink:href="http://scielo.sld.cu/scielo.php?script=sci_arttext&amp;pid=S2227-18992018000200001&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://scielo.sld.cu/scielo.php?script=sci_abstract&amp;pid=S2227-18992018000200001&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://scielo.sld.cu/scielo.php?script=sci_pdf&amp;pid=S2227-18992018000200001&amp;lng=en&amp;nrm=iso"></self-uri><abstract abstract-type="short" xml:lang="en"><p><![CDATA[In recent years, research related to Geographic Information Retrieval Systems, as a specific field of Information Retrieval, has continued to attract the attention of the research community through several assessment forums. However, these forums provide sets of tests composed of text documents and queries that are ready to evaluate non&#8208;interactive systems. This framework reduces the possibilities of carrying out a more thorough evaluation of these systems because it does not consider several important features such as the diversity provided by different information sources or the human&#8208;computer interaction. The aim of this paper is to describe a new approach to evaluate interactive Geographic Information Retrieval Systems, whose main novelty is to consider the user’s knowledge generated by the human&#8208;computer interaction as well as the spatial information provided by different data sources. The proposed method will require generating a set of tests from three main data sources (Geonames, Wikipedia, and OpenStreetMap), as well as a set of queries that will consist of a tuple of three components: the object type, the spatial relationship and the geographic object. As a result, the proposed evaluation approach integrates the two most commonly used strategies to evaluate IR systems, which are focused on the system and the end user, by applying several user satisfaction techniques and usability tests. As a main conclusion, we point out that the evaluation process of Geographic Information Retrieval systems should consider the user’s knowledge generated by the human-computer interaction as well as the spatial information provided by different and heterogeneous data sources.]]></p></abstract>
<abstract abstract-type="short" xml:lang="es"><p><![CDATA[En los últimos años, dentro del área de Recuperación de Información, el área de investigación relacionada con los Sistemas de Recuperación de Información Geográfica ha seguido atrayendo la atención de la comunidad investigadora mediante la celebración de varios foros de evaluación. Sin embargo, estos foros proporcionan colecciones de prueba compuestas de documentos de texto y consultas que están listas para evaluar sistemas no interactivos. Este marco de evaluación reduce las posibilidades de llevar a cabo una evaluación más completa de estos sistemas debido a que no se están considerando varias características como la diversidad proporcionada por diferentes fuentes de información o la interacción hombre-computadora. El objetivo de este trabajo es describir un nuevo enfoque para evaluar sistemas interactivos de Recuperación de Información Geográfica, cuya principal novedad es considerar el conocimiento del usuario generado por la interacción hombre-computadora así como la información espacial proporcionada por diferentes fuentes de datos. El método propuesto requerirá la generación de una colección de pruebas a partir de tres fuentes de datos principales (Geonames, Wikipedia y OpenStreetMap), así como un conjunto de consultas que constarán de una tupla de tres componentes: el tipo de objeto, la relación espacial y el objeto geográfico. Como resultado, el enfoque de evaluación propuesto integra las dos estrategias más utilizadas para evaluar los sistemas de IR, que se centran en el sistema y en el usuario final, aplicando varias técnicas de satisfacción de usuario y pruebas de usabilidad. Como conclusión principal, señalamos que el proceso de evaluación de los sistemas de Recuperación de Información Geográfica debe considerar el conocimiento del usuario generado por la interacción hombre-computadora, así como la información espacial proporcionada por fuentes de datos diferentes y heterogéneas.]]></p></abstract>
<kwd-group>
<kwd lng="en"><![CDATA[Evaluation of Geographic Information Retrieval]]></kwd>
<kwd lng="en"><![CDATA[Geographic Information Retrieval]]></kwd>
<kwd lng="en"><![CDATA[Interactive Information Retrieval]]></kwd>
<kwd lng="en"><![CDATA[Human-Computer Interaction Information Retrieval]]></kwd>
<kwd lng="es"><![CDATA[evaluación de la recuperación de información geográfica]]></kwd>
<kwd lng="es"><![CDATA[recuperación de información geográfica]]></kwd>
<kwd lng="es"><![CDATA[recuperación de información interactiva]]></kwd>
<kwd lng="es"><![CDATA[recuperación de información basada en la interacción humano-computador]]></kwd>
</kwd-group>
</article-meta>
</front><body><![CDATA[ <p align="right"><font face="Verdana, Arial, Helvetica, sans-serif" size="2"><B>ART&Iacute;CULO  ORIGINAL</B></font></p>     <p>&nbsp;</p>     <p><font size="4"><strong><font face="Verdana, Arial, Helvetica, sans-serif">Human-Computer  Interaction as a basis for assessing Geographic Information Retrieval Systems.</font></strong></font></p>     <p>&nbsp;</p>     <p><font size="3"><strong><font face="Verdana, Arial, Helvetica, sans-serif">Interacci&oacute;n Persona-Computador como base para la  evaluaci&oacute;n de Sistemas de Recuperaci&oacute;n de Informaci&oacute;n Geogr&aacute;fica.</font></strong></font></p>     <p>&nbsp;</p>     <p>&nbsp;</p>     <P><font size="2"><strong><font face="Verdana, Arial, Helvetica, sans-serif">Manuel  Enrique Puebla Mart&iacute;nez<strong><sup>1</sup></strong>, Jos&eacute;  Manuel Perea Ortega<strong><sup>2*</sup></strong>, Alfredo  Sim&oacute;n Cuevas</font></strong><font face="Verdana, Arial, Helvetica, sans-serif"><strong><sup>3</sup></strong></font></font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><sup>1</sup>Universidad de las Ciencias Inform&aacute;ticas. Carretera a San Antonio de los  Ba&ntilde;os, Km. 2 &frac12;. Torrens, municipio de La Lisa. La Habana, Cuba. mpuebla@uci.cu</font>    <br>   <font size="2" face="Verdana, Arial, Helvetica, sans-serif"><sup>2</sup>Universidad de Extremadura, Avda. de Elvas,  s/n, Badajoz, Espa&ntilde;a. jmperea@unex.es</font>    ]]></body>
<body><![CDATA[<br>   <font size="2" face="Verdana, Arial, Helvetica, sans-serif"><sup>3</sup>Instituto Superior Polit&eacute;cnico Jos&eacute; Antonio Echeverr&iacute;a, Cujae. Calle 114, No. 11901. e/ Ciclov&iacute;a y Rotonda, Marianao, La Habana, Cuba. asimon@ceis.cujae.edu.cu</font>    <br> </p>     <P><font face="Verdana, Arial, Helvetica, sans-serif"><span class="class"><font size="2">*Autor para la correspondencia: </font></span></font><font size="2" face="Verdana, Arial, Helvetica, sans-serif"> <a href="mailto:jmperea@unex.es">jmperea@unex.es</a></font>     <p>&nbsp;</p>     <p>&nbsp;</p> <hr>     <P><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>ABSTRACT</b> </font>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">In recent years, research related to Geographic Information Retrieval Systems, as a specific field of Information Retrieval, has continued to attract the attention of the research community through several assessment forums. However, these forums provide sets of tests composed of text documents and queries that are ready to evaluate non&#8208;interactive systems. This framework reduces the possibilities of carrying out a more thorough evaluation of these systems because it does not consider several important features such as the diversity provided by different information sources or the human&#8208;computer interaction. The aim of this paper is to describe a new approach to evaluate interactive Geographic Information Retrieval Systems, whose main novelty is to consider the user&rsquo;s knowledge generated by the human&#8208;computer interaction as well as the spatial information provided by different data sources. 
The proposed method will require generating a set of tests from three main data sources (Geonames, Wikipedia, and OpenStreetMap), as well as a set of queries that will consist of a tuple of three components: the object type, the spatial relationship and the geographic object. As a result, the proposed evaluation approach integrates the two most commonly used strategies to evaluate IR systems, which are focused on the system and the end user, by applying several user satisfaction techniques and usability tests. As a main conclusion, we point out that the evaluation process of Geographic Information Retrieval systems should consider the user&rsquo;s knowledge generated by the human-computer interaction as well as the spatial information provided by different and heterogeneous data sources.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>Key words<span lang=EN-GB>:</span></b></font> <font size="2" face="Verdana, Arial, Helvetica, sans-serif">Evaluation of Geographic Information Retrieval, Geographic Information Retrieval, Interactive Information Retrieval, Human-Computer Interaction Information Retrieval.</font></p> <hr>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>RESUMEN</b></font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">En los &uacute;ltimos a&ntilde;os, dentro del &aacute;rea de Recuperaci&oacute;n de Informaci&oacute;n, el &aacute;rea de investigaci&oacute;n relacionada con los Sistemas de Recuperaci&oacute;n de Informaci&oacute;n Geogr&aacute;fica ha seguido atrayendo la atenci&oacute;n de la comunidad investigadora mediante la celebraci&oacute;n de varios foros de evaluaci&oacute;n. Sin embargo, estos foros proporcionan colecciones de prueba compuestas de documentos de texto y consultas que est&aacute;n listas para evaluar sistemas no interactivos. 
Este marco de evaluaci&oacute;n reduce las posibilidades de llevar a cabo una evaluaci&oacute;n m&aacute;s completa de estos sistemas debido a que no se est&aacute;n considerando varias caracter&iacute;sticas como la diversidad proporcionada por diferentes fuentes de informaci&oacute;n o la interacci&oacute;n hombre-computadora. El objetivo de este trabajo es describir un nuevo enfoque para evaluar sistemas interactivos de Recuperaci&oacute;n de Informaci&oacute;n Geogr&aacute;fica, cuya principal novedad es considerar el conocimiento del usuario generado por la interacci&oacute;n hombre-computadora as&iacute; como la informaci&oacute;n espacial proporcionada por diferentes fuentes de datos. El m&eacute;todo propuesto requerir&aacute; la generaci&oacute;n de una colecci&oacute;n de pruebas a partir de tres fuentes de datos principales (Geonames, Wikipedia y OpenStreetMap), as&iacute; como un conjunto de consultas que constar&aacute;n de una tupla de tres componentes: el tipo de objeto, la relaci&oacute;n espacial y el objeto geogr&aacute;fico. Como resultado, el enfoque de evaluaci&oacute;n propuesto integra las dos estrategias m&aacute;s utilizadas para evaluar los sistemas de IR, que se centran en el sistema y en el usuario final, aplicando varias t&eacute;cnicas de satisfacci&oacute;n de usuario y pruebas de usabilidad. Como conclusi&oacute;n principal, se&ntilde;alamos que el proceso de evaluaci&oacute;n de los sistemas de Recuperaci&oacute;n de Informaci&oacute;n Geogr&aacute;fica debe considerar el conocimiento del usuario generado por la interacci&oacute;n hombre-computadora, as&iacute; como la informaci&oacute;n espacial proporcionada por fuentes de datos diferentes y heterog&eacute;neas.</font></p>     ]]></body>
<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><b>Palabras clave<span lang=EN-GB>: </span></b>evaluaci&oacute;n de la recuperaci&oacute;n de informaci&oacute;n geogr&aacute;fica, recuperaci&oacute;n de informaci&oacute;n geogr&aacute;fica, recuperaci&oacute;n de informaci&oacute;n interactiva, recuperaci&oacute;n de informaci&oacute;n basada en la interacci&oacute;n humano-computador.</font></p> <hr>     <p>&nbsp;</p>     <p>&nbsp;</p>     <p><font size="3" face="Verdana, Arial, Helvetica, sans-serif"><b>INTRODUCTION</b></font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Whilst the evaluation methods of classical Information Retrieval (IR) systems have already been widely studied since the middle of last century, the analysis of the user&rsquo;s impact and of their interactions with information systems is not yet well established (Kelly &amp; Sugimoto, 2013). The difference between classical IR and Interactive Information Retrieval (IIR) lies in the focus: while IR systems are concerned with whether relevant documents are retrieved, IIR systems focus on whether people can use the system to retrieve relevant documents. Furthermore, Kelly (2009) points out that the main change in the study of interactive systems with real users involves the non&#8208;applicability of the Cranfield model because it requires defining the relevance level of the documents for a specific query.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Within the IR field, the research area related to GIR systems has continued to attract the attention of the research community in recent years, by holding several assessment forums such as GeoCLEF (Mandl et al. 2009) and NTCIR&#8208;GeoTime (Gey et al. 2011). 
However, none of these forums provides a valid set of tests to evaluate GIR systems based on Human&#8208;Computer interaction (HC&#8208;GIR systems), since they were focused on non&#8208;interactive GIR systems. For this reason, the aim of this paper is to describe an approach to evaluate HC&#8208;GIR systems, which also includes generating a new set of tests compatible with the main features of these systems. In this context, we would like to point out several differences between HC&#8208;GIR systems and non&#8208;interactive GIR systems that should be considered when the sets of tests provided by the aforementioned forums are used to evaluate an HC&#8208;GIR system: </font></p> <ul>       <li>         <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">HC&#8208;GIR systems are not only focused on retrieving information from a corpus of text documents, as with classical GIR systems, but should also take advantage of other information sources such as cartographic data sources or even the user&rsquo;s knowledge.</font></p>   </li>       <li>         <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">HC&#8208;GIR systems are not focused on classifying text documents into two levels of relevance (relevant or not relevant) as in classical GIR evaluation forums; rather, they are focused on retrieving geographic objects that can be classified into multiple levels of relevance (Multidimensional Relevance).</font></p>   </li>       ]]></body>
<body><![CDATA[<li>         <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">HC&#8208;GIR  systems allow the improvement of their data sources due to the user system  interaction, facilitating the retrieval of geographic information. Usually,  different users from a specific geographic area know better their own geography  and, therefore, a retrieved geographical object could have different levels of relevance  for the same query at different times or even for different users depending on  their geographic location.</font></p>   </li>       <li>         <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">HC&#8208;GIR  systems are usually focused on generating new geographic knowledge due to the  user interaction with the system and the use of data sources generated by  users, also known as User Generated Content (UGC). All this human knowledge  certainly helps HC&#8208;GIR systems to improve the information retrieval.</font></p>   </li>     </ul>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">In  this paper, a new approach to evaluate HC&#8208;GIR  systems is described, which includes generating a new set of tests for these  systems. Figure 1 shows the overview of a generic HC&#8208;GIR  system, whose knowledge base is a geographic domain ontology enriched by  automatic and semi&#8208;automatic mechanisms that gather information due to  the user&#8208;system interaction. Moreover, an automated process  that integrates the information into the geographic ontology carries out the  information extraction from different data sources. The integration process is  completed by another filtering process, which is supported by a geographic  ontology editor that allows the user&#8208;system interaction. 
Finally, as shown on top of <a href="#f01">Figure 1</a>, the system includes a visual query editor, which allows the user to classify the relevance of the geographical objects retrieved by the system, and the actual Geographic Information System (GIS) that wraps around the system.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The HC&#8208;GIR system shown in <a href="#f01">Figure 1</a> prioritizes the retrieval of existing geographical objects in the real world, instead of non&#8208;existing ones, although it would be perfectly compatible with the retrieval of such objects. Note that if the object is geographic, then it should have a related spatial location in the real world, but this does not imply the physical presence of the object in our geography. The overall success of the information retrieval lies in using data sources that cover the user&rsquo;s needs, along with a successful analysis of those needs. </font></p>     <p align="center"><img src="/img/revistas/rcci/v12n2/f0101218.jpg" alt="f01" width="514" height="343"><a name="f01"></a></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Although the common meaning of the concept corpus refers to a collection of documents, in this paper we consider the definitions proposed by Kelly (2009), who defines a collection as a set of topics, corpus and relevance judgments, while a corpus is considered the set of documents or information objects to which users have access.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Recently, the main challenges and developments carried out in the GIR field over the years have been analyzed (Purves 2014). The conclusions are not very encouraging, pointing out the scarce use of geographical knowledge bases, which has led to no significant improvements. One of the challenges is related to the methods used to evaluate the success of GIR systems. 
Another relevant conclusion is that GeoCLEF was not a successful evaluation forum because the teams focused on performing simple adjustments based on the query strategy or sometimes on the relevance ranking formula applied in the system. According to Purves (2014), the evaluation of geographic relevance, especially at the local level, is an increasingly challenging and essential research field nowadays. Purves also points out the need to develop effective interfaces that help users find what they want. Another major challenge in GIR is to disambiguate unknown, small, or low&#8208;detail geographic areas, as discussed in Purves (2014) and Palacio et al. (2015).</font></p>     ]]></body>
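<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Kelly&rsquo;s distinction between a collection and a corpus, adopted above, can be made concrete with a minimal data structure. The following Python sketch is illustrative only; the field names and document identifiers are our own assumptions, not part of any evaluation forum:</font></p>

```python
# Illustrative sketch of Kelly (2009)'s terminology: a corpus is the set of
# information objects users can access; a collection adds topics (queries)
# and relevance judgments on top of it. Field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Collection:
    topics: list                                   # the queries/information needs
    corpus: set                                    # documents or information objects
    judgments: dict = field(default_factory=dict)  # (topic, object) -> relevance label

c = Collection(
    topics=["Hospitals in the east of Cuba"],
    corpus={"doc-geonames-1", "doc-wikipedia-1", "doc-osm-1"},
)
c.judgments[("Hospitals in the east of Cuba", "doc-geonames-1")] = "relevant"
```

<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Under this reading, the forums mentioned above distribute full collections, while an HC&#8208;GIR evaluation must additionally account for how judgments evolve with user interaction.</font></p>]]></body>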
<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Furthermore, Borlund (2013) points out the need for research on multidimensional and dynamic relevance rankings for GIR, which justifies the importance and novelty of this paper. Palacio et al. (2015) propose a new approach to develop a set of tests by using UGC (User Generated Content), as well as the queries and the relevance judgments. They conclude that using UGC is promising to evaluate GIR systems, especially for queries related to geographical areas with a low&#8208;level detail.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">User satisfaction is becoming increasingly important in IR systems, which should be designed to meet the user&rsquo;s requirements. What are these needs and requirements in a GIR system? Perea&#8208;Ortega (2010) points out that the main goal of a GIR system is to calculate the user satisfaction regarding the response of the system for the information need. According to Kelly &amp; Sugimoto (2013), some IIR papers have evaluated a single system instead of performing several experiments, where the objective is to examine the effects of an independent variable (e.g. the system) on one or more dependent variables (e.g. performance and usability), so that at least two elements are compared. Traditional usability tests are examples of this type of evaluation, which is normally carried out with a single version of the system, with the aim of identifying potential usability problems.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Regarding the existing evaluation forums for IIR, TREC is one of the most relevant. Over the years, it has provided several tracks for the IIR task: Interactive Track (TRECs 03&#8208;11), the Hard Track (TRECs 12&#8208;14), and ciQA (TRECs 15&#8208;16). Recently, some tracks have focused on scenarios that involve spatial and temporal information. 
For instance, the Contextual Suggestion Track involves complex information needs, which are highly dependent on context and user interests, and these contexts include latitude and longitude coordinates, as well as a temporal component. According to Kelly (2009), these tracks provided different sets of tests, but none of them was successful in establishing a generic collection that would allow teams to make feasible comparisons between IIR systems. In that context, the author suggests four basic types of measures for IIR evaluation: context, interaction, performance and usability. Finally, another related evaluation forum is NTCIR&#8208;GeoTime (Gey et al. 2011), where GIR is evaluated for Asian and English languages but including temporal aspects of the retrieval process. The remaining findings were similar to those obtained in the GeoCLEF track (Mandl et al. 2009).</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The aim of this research is to describe a new approach to evaluate interactive GIR systems, whose main novelty is to consider the user&rsquo;s knowledge generated by the human-computer interaction as well as the spatial information provided by different data sources. The rest of the paper is organized as follows: Section 2 describes the proposed method; the results of the proposed method are presented in Section 3; Section 4 discusses the proposed approach and, finally, the conclusions are expounded in Section 5. </font></p>     <p>&nbsp;</p>     <p><font face="Verdana, Arial, Helvetica, sans-serif"><strong><font size="3">PROPOSED METHOD</font></strong></font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The aim of the proposed method is to evaluate the HC&#8208;GIR model shown in <a href="#f01">Figure 1</a> by using the two existing evaluation models in GIR: the CLEF non&#8208;interactive model and the TREC interactive model. 
The  proposed approach will require generating a new set of tests in which three  main data sources will be used: Geonames, Wikipedia, and OpenStreetMap. According to Palacio et al.  (2015), these sources can be considered as User Generated Content (UGC) so we  can state that our evaluation method makes use of user knowledge somehow. Then,  the judgments of relevance will be generated by taking into account these  corpora along with the topics or queries that should be defined previously.  Finally, the results of each query will be classified into three levels: &ldquo;relevant&rdquo;, &ldquo;not very relevant&rdquo; and &ldquo;not  relevant&rdquo;.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Regarding  the queries that will be used to evaluate the system, they will consist of a  tuple of three components: the object type, the spatial relationship, and the geographic object. Each query should be  related to a set of valid results (in this case geographical objects), sorted  by a relevance score according to the related geographical area and the  information available on the corpora. This way, all queries should refer to  existing spatial objects in the geographical area, thus facilitating the  elaboration of the relevance judgments. Furthermore, a geographic ontology  should support the evaluation of the results for a specific visual query.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Similar  to the strategy proposed by Kelly (2009), a usability study will be performed  in our evaluation approach, where the results are compared to predetermined  population parameters defined from the results of similar studies. This  usability study is based on a well&#8208;known experimental design  called &quot;Solomon four&#8208;group&rdquo; (McCambridge et al.  2011). 
In addition, the &quot;precision&quot; measure is also calculated for the proposed approach in order to perform a feasible comparison with the systems evaluated during the GeoCLEF campaign and other interactive TREC tracks. In this sense, the comparisons will be performed at two different usability levels from the user&rsquo;s point of view: usability levels to submit the information need and usability levels to visualize and understand the query results.&nbsp; </font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Another issue of the proposed approach is related to the evaluation of the results provided by the HC&#8208;GIR system. In order to tackle this task, the use of human experts is required; they form the basis for applying the &ldquo;expert criteria&rdquo; (Delphi) method to evaluate the experts&rsquo; consensus about the quality of the proposal. Regarding user satisfaction, the Iadov technique (L&oacute;pez-Rodr&iacute;guez &amp; Gonz&aacute;lez-Maura, 2002) will be applied. This method consists of five questions, based on the relationships established between three closed questions that are intercalated within a questionnaire and whose relationship is unknown to the user.</font></p>     ]]></body>
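<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The three-component queries and the three relevance levels described above can be sketched in Python as follows. This is an illustrative sketch only, not part of the proposed system; the judgment entries are hypothetical, although the object names follow the example used later in this paper:</font></p>

```python
# Illustrative sketch: a query as the three-component tuple (object type,
# spatial relationship, geographic object), plus a judgment function using
# the three levels "relevant", "not very relevant" and "not relevant".
from typing import NamedTuple

class GeoQuery(NamedTuple):
    object_type: str        # e.g. "hospital"
    spatial_relation: str   # e.g. "in the east of"
    geographic_object: str  # e.g. "Cuba"

# Hypothetical relevance judgments keyed by (query, retrieved object).
judgments = {
    (GeoQuery("hospital", "in the east of", "Cuba"),
     "Celia Sanchez Manduley Hospital"): "relevant",
    (GeoQuery("hospital", "in the east of", "Cuba"),
     "Rene Vallejo Polyclinic"): "not very relevant",
}

def judge(query: GeoQuery, result: str) -> str:
    """Return the relevance label for a (query, result) pair; objects
    without an explicit judgment default to 'not relevant'."""
    return judgments.get((query, result), "not relevant")

q = GeoQuery("hospital", "in the east of", "Cuba")
print(judge(q, "Celia Sanchez Manduley Hospital"))  # relevant
```

<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Because NamedTuple instances hash like plain tuples, two independently constructed queries with the same components index the same judgment, which is what the relevance judgments of a set of tests require.</font></p>]]></body>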
<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">An analysis by experts and standard users of the context in which the information retrieval occurs will be necessary for a deeper analysis of the results. This will include measures used to characterize the individuals, such as age, intelligence, creativity, memory or cognitive style, and others used to characterize the searching situation, such as familiarization with the topics, the current geographic location or the time used for searching. In the same way, it will be necessary to measure and analyze the level of interaction between users and the system. The interaction measures used are the following: number of queries, number of search results viewed, number of objects and geographical concepts viewed, number of geographic objects defined by the user as relevant, and length of the query. For example, if the user visualizes many concepts or geographic objects to meet the information need, then this would be a negative indicator. However, if the user visualizes few objects, then this would be a positive result because the goal is to help users find what they are looking for in the shortest time possible.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">To sum up, the proposed approach to evaluate HC&#8208;GIR systems consists of the following steps:</font></p> <ol start="1" type="1">       <li>         <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Generate the set of tests from the three main data sources proposed: Geonames, Wikipedia and OpenStreetMap.</font></p>   </li>       <li>         <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Select several human experts of related fields to test the system by acting as evaluators and end users. 
Apply the Delphi method among the       human experts to obtain a consensus on the quality of the system.</font></p>   </li>       <li>         <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Select several standard users to interact with the system and       retrieve geographical information. Then, apply the Iadov technique and the       Satisfaction Questionnaire for User Interface (QUIS) (Naeini &amp;       Mostowfi, 2015) to measure the user satisfaction from the retrieved       results.</font></p>   </li>       <li>         <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Define and calculate the traditional &quot;precision&quot; measure       used in common non-interactive GIR systems and compare the achieved results       with those obtained by other systems that participated in those evaluation       campaigns such as GeoCLEF. Furthermore, the two existing variants of the       precision measure described in Kelly (2009) (Interactive TREC precision       and Interactive user precision) will be also calculated in order to       compare the achieved results with those obtained by the participants in       the TREC Interactive Track.</font></p>   </li>       ]]></body>
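<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The traditional &quot;precision&quot; measure mentioned in step 4 can be sketched as follows. This is an illustrative sketch under our own assumptions: the judgment labels come from the three levels proposed earlier, and collapsing them into a binary notion of relevance (strictly or leniently) is our choice for illustration, not a prescription of GeoCLEF or TREC:</font></p>

```python
# Illustrative sketch: traditional precision over a retrieved result list,
# mapping the three-level judgments to binary relevance. By default only
# "relevant" counts as a hit; passing a larger label set is more lenient.
def precision(retrieved, judgments, relevant_labels=frozenset({"relevant"})):
    """Fraction of retrieved results whose judgment is in relevant_labels."""
    if not retrieved:
        return 0.0
    hits = sum(1 for r in retrieved if judgments.get(r) in relevant_labels)
    return hits / len(retrieved)

# Hypothetical judgments and result list for one query.
judgments = {
    "Celia Sanchez Manduley Hospital": "relevant",
    "Rene Vallejo Polyclinic": "not very relevant",
    "Havana Cathedral": "not relevant",
}
results = ["Celia Sanchez Manduley Hospital",
           "Rene Vallejo Polyclinic",
           "Havana Cathedral"]

strict = precision(results, judgments)                                     # 1 of 3
lenient = precision(results, judgments, {"relevant", "not very relevant"}) # 2 of 3
```

<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Reporting both the strict and the lenient value makes the comparison with binary-relevance campaigns such as GeoCLEF explicit about how the intermediate level was handled.</font></p>]]></body>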
<body><![CDATA[<li>         <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Run usability tests and compare the results with those obtained by other systems that participated in previous evaluation campaigns. The USE questionnaire (Usefulness, Satisfaction, and Ease of use) defined by Lund (2001) and the SUMI questionnaire (Software Usability Measurement Inventory) will be used to measure usability.</font></p>   </li>     </ol>     <p>&nbsp;</p>     <p><font face="Verdana, Arial, Helvetica, sans-serif"><strong><font size="3">RESULTS</font></strong></font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">To make the evaluation feasible and obtain initial results, we have restricted the generated corpora to the specific geographical area of Cuba.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Assuming that the HC&#8208;GIR system is supported by an ontology whose conceptualization is partially shown in <a href="/img/revistas/rcci/v12n2/f0201218.jpg" target="_blank">Figure 2</a>, a relevant result for the query &quot;Hospitals in the east of Cuba&quot; could be &quot;Celia S&aacute;nchez Manduley Hospital&quot;, because it is a direct instance of the concept &quot;Hospital&quot; and it is geographically located in &quot;Cuban East&quot;. </font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">However, another result with a lower degree of relevance could be &quot;Ren&eacute; Vallejo Polyclinic&quot;, because the concept &quot;Polyclinic&quot; (the concept to which the instance &quot;Ren&eacute; Vallejo&quot; belongs) is semantically related to the concept &quot;Hospital&quot; by a hypernym relationship. Moreover, &quot;Ren&eacute; Vallejo Polyclinic&quot; is spatially related to &quot;Cuban East&quot; by means of the topological relationship &quot;belongs&quot;. 
Finally, the town &quot;Manzanillo&quot; can be considered to belong to &quot;Cuban East&quot; because the spatial relationship &quot;belongs&quot; is transitive.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">These results show the importance of using a geographic ontology to support the evaluation of the results of a specific visual query in the HC&#8208;GIR system. Moreover, the semantic and spatial relationships established within the ontology help to define a more accurate degree of relevance for the results provided by the HC&#8208;GIR system for a specific query. </font></p>     <p>&nbsp;</p>     ]]></body>
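The graded relevance described above can be sketched in code. The following Python fragment is an illustrative sketch, not the authors' implementation: the toy ontology fragments, the relevance scores (1.0 for a direct instance, 0.5 for a hypernym-related concept) and all function names are assumptions made only for this example.

```python
# Hypothetical ontology fragments (assumed for illustration only).
belongs = {                                   # spatial "belongs" relation (child -> parent)
    "Celia Sanchez Manduley Hospital": "Manzanillo",
    "Rene Vallejo Polyclinic": "Manzanillo",
    "Manzanillo": "Cuban East",
}
hypernyms = {"Polyclinic": "Hospital"}        # semantic concept hierarchy
instance_of = {
    "Celia Sanchez Manduley Hospital": "Hospital",
    "Rene Vallejo Polyclinic": "Polyclinic",
}

def located_in(obj, region):
    """Follow the 'belongs' chain upward: the relation is transitive."""
    while obj in belongs:
        obj = belongs[obj]
        if obj == region:
            return True
    return False

def relevance(obj, concept, region):
    """Assumed scoring: direct instance -> 1.0, hypernym-related -> 0.5."""
    if not located_in(obj, region):
        return 0.0
    c = instance_of[obj]
    if c == concept:
        return 1.0
    if hypernyms.get(c) == concept:
        return 0.5
    return 0.0

# Query "Hospitals in the east of Cuba":
print(relevance("Celia Sanchez Manduley Hospital", "Hospital", "Cuban East"))  # 1.0
print(relevance("Rene Vallejo Polyclinic", "Hospital", "Cuban East"))          # 0.5
```

Note that "Manzanillo" is never stated to belong to "Cuban East" for the hospital directly; the `located_in` loop recovers it through transitivity, mirroring the argument in the text.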
<body><![CDATA[<p><font face="Verdana, Arial, Helvetica, sans-serif" size="3"><B>DISCUSSION</B></font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">In our opinion, a controversial issue of the proposed approach is how to compare the precision score obtained for a specific topic or query across different sets of tests. Could we perform a feasible comparison? The common formula to calculate the precision of non&#8208;interactive GIR systems is the ratio between the relevant documents retrieved and the total documents retrieved. In theory, if the sets of tests are generated correctly and the geographical objects related to each query are sorted by a reliable relevance ranking, this formula should not be affected by working with different sets of tests. From our point of view, a GIR system should return spatial objects of interest, but most systems are designed to fit a set of tests rather than to focus on user needs. This is what happened in the GeoCLEF campaign, for example, where GIR systems returned textual documents because the set of tests required it. However, we believe that spatial objects should be a priority in the geographic domain and that sets of tests should be adapted to the user&rsquo;s needs. </font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Furthermore, it should be noted that the data sources used to generate the set of tests during the first step of the proposed evaluation approach are not static like the GeoCLEF ones, i.e., their content can change over time. Although the geographic data stored in the proposed sources are not highly variable, they can be accessed in real time.  
However, since the end users can also be seen as data sources in our model, the geographical features of the data could change during the evaluation process.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">According to Purves (2014), the absence of a widely recognized set of tests for GIR is one of the reasons why evaluations of GIR systems are often omitted. The design and release of a set of tests to evaluate these systems require a huge effort by human annotators. In this sense, we believe that the evaluation of the results for each query would be less demanding by following our approach, because spatial objects (mostly recognized in the real world) are usually easier to relate to information needs represented by a visual query. This approach is conceptually different from that used in traditional GIR evaluations, where textual documents (most of them geographically unrecognized in the real world) are related to the information need represented by a textual query. Therefore, we consider that defining the level of similarity between spatial objects is easier than between text documents and spatial objects.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Another issue to discuss is related to query generation. There are different alternatives, each with its advantages and drawbacks. Once the query is defined, both GIR alternatives (textual and visual) will be able to perform the evaluation of precision for the non&#8208;interactive variant. 
One of the limitations when comparing interactive and non&#8208;interactive IR systems has been the inability of IIR systems to generate a query, since one of their objectives is to avoid the negative consequences of requiring a query from the user, even though user involvement during the retrieval process is an exclusive feature of IIR systems.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">However, the HC&#8208;GIR system takes advantage of the new knowledge generated by the user, which can be considered an improvement for successful information retrieval. This point is another difference from non-interactive GIR systems, since those systems do not use human knowledge during the retrieval process.    <br>   Finally, according to the literature reviewed, our evaluation proposal could be considered the first one that tries to compare non&#8208;interactive GIR systems with HC&#8208;GIR systems. Nevertheless, from our point of view, in order to make the common set of tests used to evaluate non&#8208;interactive GIR systems more reliable and more widely adopted by the research community, two main issues should be considered:</font></p> <ol start="1" type="1">       <li>         <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Remove the corpus from the set of tests and allow the GIR system to choose the data sources to search.</font></p>   </li>       <li>         ]]></body>
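The non-interactive precision measure discussed above (the ratio between relevant retrieved items and all retrieved items) can be stated compactly. The snippet below is a minimal sketch; the object identifiers and relevance judgments are invented purely for illustration.

```python
def precision(retrieved, relevant):
    """Fraction of retrieved spatial objects that are judged relevant.

    retrieved: ordered list of retrieved object identifiers
    relevant:  set of identifiers judged relevant for the query
    """
    if not retrieved:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(retrieved)

# Hypothetical run: 4 objects retrieved, 2 of them in the judgments.
retrieved = ["obj1", "obj2", "obj3", "obj4"]
relevant = {"obj1", "obj3", "obj5"}
print(precision(retrieved, relevant))  # 0.5
```

Under different sets of tests only the inputs change; the measure itself stays comparable, which is the premise of the comparison argued for in the text.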
<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Generate the relevance judgments for each query depending on the current geographical area of the user.</font></p>   </li>     </ol>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">This new variant would further facilitate the ranking of the results. The only requirement would be to know the geographic area used in the set of tests well enough to generate successful relevance judgments, i.e., the HC&#8208;GIR system would be evaluated positively depending on whether its results are close to reality. How to deal with this is currently one of the challenges to be solved by the GIR research community. </font></p>     <p>&nbsp;</p>     <p><font face="Verdana, Arial, Helvetica, sans-serif" size="3"><B>CONCLUSIONS</B></font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">This paper describes a novel approach to evaluate Human-Computer Geographical Information Retrieval systems (HC&#8208;GIR systems). The proposal focuses on integrating the main findings provided by the most relevant evaluation forums related to the Information Retrieval (IR) and Interactive Information Retrieval (IIR) fields, such as TREC, CLEF and NTCIR. In addition, the proposed approach tries to lay the foundations for a feasible comparison between HC&#8208;GIR systems and traditional GIR systems like those presented in GeoCLEF. A brief discussion of the main differences between HC&#8208;GIR and IIR systems is also presented in this paper.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Nowadays there is a lack of consensus within the GIR community on how to evaluate HC&#8208;GIR systems, mainly due to the current challenges that were also discussed in this paper. In the literature reviewed, no evaluation forums were found where HC&#8208;GIR systems can be analyzed and evaluated. 
Only the proposal described in Bucher et al. (2005) can be considered an evaluation approach related to the one presented in this paper, although the evaluation measures applied to the end users were not explained in detail. In this sense, our proposed evaluation approach integrates the two most used strategies to evaluate IR systems, focused on the system and on the end user, by applying several user satisfaction techniques and usability tests. </font></p>     <p>&nbsp;</p>     <p><font face="Verdana, Arial, Helvetica, sans-serif" size="3"><B>ACKNOWLEDGMENTS</B></font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">This paper has been partially supported by a grant from the Fondo Europeo de Desarrollo Regional (FEDER) and the REDES project (TIN2015&#8208;65136&#8208;C2&#8208;1&#8208;R) from the Spanish Government.</font></p>     ]]></body>
<body><![CDATA[<p>&nbsp;</p>     <p align="left"><font face="Verdana, Arial, Helvetica, sans-serif" size="3"><B>REFERENCES</B></font>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">BORLUND, P., 2013. Interactive  Information Retrieval: An Introduction. <em>Journal  of Information Science Theory and Practice</em>, 1 (3), pp.12-32. Korea  Institute of Science and Technology Information. </font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">BUCHER, B. et al., 2005.  Geographic IR systems: requirements and evaluation. <em>ICC 05: Proceedings of  the 22nd International Cartographic Conference</em>, pp.11&ndash;16.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">GEY, F. C.,  LARSON, R. R., MACHADO, J. &amp; YOSHIOKA, M., 2011. NTCIR9-GeoTime Overview -  Evaluating Geographic and Temporal Search: Round 2, in Noriko Kando; Daisuke  Ishikawa &amp; Miho Sugimoto, ed., <em>&lsquo;NTCIR&rsquo;,</em> National Institute of Informatics (NII).</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">KELLY, D., 2009. Methods  for Evaluating Interactive Information Retrieval Systems with Users. <em>Foundations  and Trends&reg; in Information Retrieval</em>, 3(1&mdash;2), pp.1&ndash;224.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">KELLY, D. &amp; SUGIMOTO,  C. R., 2013. A systematic review of interactive information retrieval  evaluation studies, 1967-2006. <em>JASIST</em> 64 (4), pp. 745-770.</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">L&Oacute;PEZ-RODR&Iacute;GUEZ,  A. &amp; GONZ&Aacute;LEZ-MAURA, V., 2002. La t&eacute;cnica de Iadov. Una aplicaci&oacute;n para el  estudio de la satisfacci&oacute;n de los alumnos por las clases de educaci&oacute;n f&iacute;sica. <em>Revista Digital-Buenos Aires</em>, A&ntilde;o 8, No.  47. 
Available  at: http://www.efdeportes.com/efd47/iadov.htm</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">LUND, A., 2001. Measuring  Usability with the USE Questionnaire. <em>Usability Interface</em>, 8(2), pp.3&ndash;6.</font></p>     <!-- ref --><p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">MANDL, T., CARVALHO, P.,  NUNZIO, G. M. D., GEY, F. C., LARSON, R. R., SANTOS, D. &amp; WOMSER-HACKER, C., 2008. GeoCLEF 2008: The CLEF  2008 Cross-Language Geographic Information Retrieval Track Overview., in Carol  Peters; Thomas Deselaers; Nicola Ferro; Julio Gonzalo; Gareth J. F. Jones;  Mikko Kurimo; Thomas Mandl; Anselmo Pe&ntilde;as &amp; Vivien Petras, ed., <em>' CLEF'</em>, Springer,  pp. 808-821.    </font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">MCCAMBRIDGE, J., BUTOR-BHAVSAR, K.,  WITTON, J., &amp; ELBOURNE, D., 2011. Can  Research Assessments Themselves Cause Bias in Behaviour Change Trials? A  Systematic Review of Evidence from Solomon 4-Group Studies. <em>PLoS ONE</em>, 6(10), e25223.  http://doi.org/10.1371/journal.pone.0025223</font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">NAEINI, H.S. &amp; MOSTOWFI, S., 2015. Using QUIS as a  measurement tool for user satisfaction evaluation (case study: vending  machine). <em>International Journal of  Information Science,</em> 5(1), 14&ndash;23. doi:10.5923/j.ijis.20150501.03</font></p>     <!-- ref --><p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">PALACIO, D.,  DERUNGS, C. &amp; PURVES, R.S., 2015. Development and evaluation of a geographic  information retrieval system using fine-grained  toponyms. <em>Journal of Spatial Information Science</em>, 11(2015).    </font></p>     <!-- ref --><p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">PEREA&#8208;ORTEGA, J.M., 2010. 
<em>Recuperaci&oacute;n  de Informaci&oacute;n Geogr&aacute;fica basada en m&uacute;ltiples formulaciones y motores de b&uacute;squeda</em>.  PhD Thesis. University of Ja&eacute;n.    </font></p>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">PURVES, R., 2014. Geographic Information  Retrieval: Are We Making Progress?, pp.1&ndash;6. Available at: http://spatial.ucsb.edu/wp-content/uploads/smss2014&#8208;Position&#8208;Purves.pdf. </font></p>     ]]></body>
<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Received: 20/06/2017    <br> Accepted: 13/03/2018</font></p>      ]]></body><back>
<ref-list>
<ref id="B1">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[BORLUND]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Interactive Information Retrieval: An Introduction.]]></article-title>
<source><![CDATA[Journal of Information Science Theory and Practice]]></source>
<year>2013</year>
<volume>1</volume>
<numero>3</numero>
<issue>3</issue>
<page-range>12-32</page-range><publisher-name><![CDATA[Korea Institute of Science and Technology Information]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B2">
<nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[BUCHER]]></surname>
<given-names><![CDATA[B]]></given-names>
</name>
</person-group>
<source><![CDATA[Geographic IR systems: requirements and evaluation.]]></source>
<year>2005</year>
<page-range>11-16</page-range></nlm-citation>
</ref>
<ref id="B3">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[GEY]]></surname>
<given-names><![CDATA[F. C]]></given-names>
</name>
<name>
<surname><![CDATA[LARSON]]></surname>
<given-names><![CDATA[R. R]]></given-names>
</name>
<name>
<surname><![CDATA[MACHADO]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[YOSHIOKA]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<source><![CDATA[NTCIR9-GeoTime Overview - Evaluating Geographic and Temporal Search: Round 2]]></source>
<year>2011</year>
<publisher-name><![CDATA[National Institute of Informatics (NII)]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B4">
<nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[KELLY]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Methods for Evaluating Interactive Information Retrieval Systems with Users.]]></article-title>
<source><![CDATA[Foundations and Trends in Information Retrieval]]></source>
<year>2009</year>
<volume>3</volume>
<numero>1-2</numero>
<issue>1-2</issue>
<page-range>1-224</page-range></nlm-citation>
</ref>
<ref id="B5">
<nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[KELLY]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[SUGIMOTO]]></surname>
<given-names><![CDATA[C. R]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A systematic review of interactive information retrieval evaluation studies, 1967-2006.]]></article-title>
<source><![CDATA[JASIST]]></source>
<year>2013</year>
<volume>64</volume>
<numero>4</numero>
<issue>4</issue>
<page-range>745-770</page-range></nlm-citation>
</ref>
<ref id="B6">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[LÓPEZ-RODRÍGUEZ]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[GONZÁLEZ-MAURA]]></surname>
<given-names><![CDATA[V]]></given-names>
</name>
</person-group>
<source><![CDATA[La técnica de Iadov. Una aplicación para el estudio de la satisfacción de los alumnos por las clases de educación física.]]></source>
<year>2002</year>
<publisher-name><![CDATA[Revista Digital-Buenos Aires]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B7">
<nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[LUND]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Measuring Usability with the USE Questionnaire.]]></article-title>
<source><![CDATA[Usability Interface]]></source>
<year>2001</year>
<volume>8</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>3-6</page-range></nlm-citation>
</ref>
<ref id="B8">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[MANDL]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
<name>
<surname><![CDATA[CARVALHO]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
<name>
<surname><![CDATA[NUNZIO]]></surname>
<given-names><![CDATA[G. M. D]]></given-names>
</name>
<name>
<surname><![CDATA[GEY]]></surname>
<given-names><![CDATA[F. C]]></given-names>
</name>
<name>
<surname><![CDATA[LARSON]]></surname>
<given-names><![CDATA[R. R]]></given-names>
</name>
<name>
<surname><![CDATA[SANTOS]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[WOMSER-HACKER]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
</person-group>
<source><![CDATA[GeoCLEF 2008: The CLEF 2008 Cross-Language Geographic Information Retrieval Track Overview.]]></source>
<year>2008</year>
<page-range>808-821</page-range><publisher-name><![CDATA[Springer]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B9">
<nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[MCCAMBRIDGE]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[BUTOR-BHAVSAR]]></surname>
<given-names><![CDATA[K]]></given-names>
</name>
<name>
<surname><![CDATA[WITTON]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[ELBOURNE]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Can Research Assessments Themselves Cause Bias in Behaviour Change Trials?]]></article-title>
<source><![CDATA[PLoS ONE]]></source>
<year>2011</year>
<volume>6</volume>
<numero>10</numero>
<issue>10</issue>
</nlm-citation>
</ref>
<ref id="B10">
<nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[NAEINI]]></surname>
<given-names><![CDATA[H.S.]]></given-names>
</name>
<name>
<surname><![CDATA[MOSTOWFI]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Using QUIS as a measurement tool for user satisfaction evaluation (case study: vending machine).]]></article-title>
<source><![CDATA[International Journal of Information Science]]></source>
<year>2015</year>
<volume>5</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>14-23</page-range></nlm-citation>
</ref>
<ref id="B11">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[PALACIO]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[DERUNGS]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
<name>
<surname><![CDATA[PURVES]]></surname>
<given-names><![CDATA[R.S]]></given-names>
</name>
</person-group>
<source><![CDATA[Development and evaluation of a geographic information retrieval system using fine-grained toponyms.]]></source>
<year>2015</year>
<volume>11</volume>
<publisher-name><![CDATA[Journal of Spatial Information Science]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B12">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[PEREA&#8208;ORTEGA]]></surname>
<given-names><![CDATA[J.M.]]></given-names>
</name>
</person-group>
<source><![CDATA[Recuperación de Información Geográfica basada en múltiples formulaciones y motores de búsqueda.]]></source>
<year>2010</year>
<publisher-name><![CDATA[University of Jaén]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B13">
<nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[PURVES]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
</person-group>
<source><![CDATA[Geographic Information Retrieval: Are We Making Progress?]]></source>
<year>2014</year>
<page-range>1-6</page-range></nlm-citation>
</ref>
</ref-list>
</back>
</article>
