<?xml version="1.0" encoding="ISO-8859-1"?><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id>1815-5928</journal-id>
<journal-title><![CDATA[Ingeniería Electrónica, Automática y Comunicaciones]]></journal-title>
<abbrev-journal-title><![CDATA[EAC]]></abbrev-journal-title>
<issn>1815-5928</issn>
<publisher>
<publisher-name><![CDATA[Universidad Tecnológica de La Habana José Antonio Echeverría, Cujae]]></publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id>S1815-59282019000300051</article-id>
<title-group>
<article-title xml:lang="es"><![CDATA[Evaluación de Rasgos Acústicos para el Reconocimiento Automático del Habla en Escenarios Ruidosos usando Kaldi]]></article-title>
<article-title xml:lang="en"><![CDATA[Evaluation of Acoustic Features for the Automatic Speech Recognition in Noise Scenarios using Kaldi]]></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Ramírez Sánchez]]></surname>
<given-names><![CDATA[José Manuel]]></given-names>
</name>
<xref ref-type="aff" rid="Aff"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Montalvo Bereau]]></surname>
<given-names><![CDATA[Ana Rosa]]></given-names>
</name>
<xref ref-type="aff" rid="Aff"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Calvo de Lara]]></surname>
<given-names><![CDATA[José Ramón]]></given-names>
</name>
<xref ref-type="aff" rid="Aff"/>
</contrib>
</contrib-group>
<aff id="Aff">
<institution><![CDATA[CENATAV-DATYS]]></institution>
<addr-line><![CDATA[La Habana]]></addr-line>
<country>Cuba</country>
</aff>
<pub-date pub-type="pub">
<day>00</day>
<month>12</month>
<year>2019</year>
</pub-date>
<pub-date pub-type="epub">
<day>00</day>
<month>12</month>
<year>2019</year>
</pub-date>
<volume>40</volume>
<numero>3</numero>
<fpage>51</fpage>
<lpage>71</lpage>
<copyright-statement/>
<copyright-year/>
<self-uri xlink:href="http://scielo.sld.cu/scielo.php?script=sci_arttext&amp;pid=S1815-59282019000300051&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://scielo.sld.cu/scielo.php?script=sci_abstract&amp;pid=S1815-59282019000300051&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://scielo.sld.cu/scielo.php?script=sci_pdf&amp;pid=S1815-59282019000300051&amp;lng=en&amp;nrm=iso"></self-uri><abstract abstract-type="short" xml:lang="es"><p><![CDATA[RESUMEN La presente investigación evaluará el impacto de los Coeficientes Cepstrales en la Frecuencia Mel (MFCC) y los coeficientes Predictores Perceptuales Lineales (PLP) en la tasa de errores de reconocimiento de palabras (WER) de sistemas dedicados al Reconocimiento Automático del Habla (RAH). La experimentación se realizará con señales de voz en idioma español, en escenarios con niveles de ruido desconocidos y utilizando la herramienta del estado del arte Kaldi. El artículo concluye aportando evidencias a favor de los MFCC como rasgo acústico más robusto que los PLP ante la tarea del RAH en escenarios ruidosos, haciendo notar que ambos rasgos se comportan de manera similar en escenarios poco ruidosos, así como el impacto de los PLP en la reducción de los tiempos empleados por los sistemas dedicados al RAH.]]></p></abstract>
<abstract abstract-type="short" xml:lang="en"><p><![CDATA[ABSTRACT This investigation evaluates the impact of Mel Frequency Cepstral Coefficients (MFCC) and Perceptual Linear Prediction (PLP) coefficients on the word error rate (WER) of systems dedicated to Automatic Speech Recognition (ASR). The experiments are carried out with speech signals in the Spanish language, in scenarios with unknown noise levels, using the state-of-the-art Kaldi toolkit. The article concludes by providing evidence in favor of MFCC as a more robust acoustic feature than PLP for ASR in noisy scenarios, noting that both features behave similarly in low-noise scenarios, and highlighting the impact of PLP in reducing the time spent by systems dedicated to ASR.]]></p></abstract>
<kwd-group>
<kwd lng="es"><![CDATA[Reconocimiento Automático del Habla]]></kwd>
<kwd lng="es"><![CDATA[Rasgos Acústicos]]></kwd>
<kwd lng="es"><![CDATA[Kaldi]]></kwd>
<kwd lng="en"><![CDATA[Automatic Speech Recognition]]></kwd>
<kwd lng="en"><![CDATA[Acoustic Features]]></kwd>
<kwd lng="en"><![CDATA[Kaldi]]></kwd>
</kwd-group>
</article-meta>
</front><back>
<ref-list>
<ref id="B1">
<label>1</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Davis]]></surname>
<given-names><![CDATA[SB]]></given-names>
</name>
<name>
<surname><![CDATA[Mermelstein]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences]]></article-title>
<source><![CDATA[IEEE Trans on ASSP]]></source>
<year>1980</year>
<volume>28</volume>
<numero>4</numero>
<issue>4</issue>
<page-range>357-66</page-range></nlm-citation>
</ref>
<ref id="B2">
<label>2</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Hermansky]]></surname>
<given-names><![CDATA[H]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Perceptual linear predictive analysis of speech]]></article-title>
<source><![CDATA[J Acoust Soc Am]]></source>
<year>1990</year>
<volume>87</volume>
<numero>4</numero>
<issue>4</issue>
<page-range>1738-52</page-range></nlm-citation>
</ref>
<ref id="B3">
<label>3</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Peinado]]></surname>
<given-names><![CDATA[AM]]></given-names>
</name>
<name>
<surname><![CDATA[Segura]]></surname>
<given-names><![CDATA[JC]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Speech Recognition with HMMs]]></article-title>
<source><![CDATA[Speech Recognition Over Digital Channels: Robustness and Standards]]></source>
<year>2006</year>
<publisher-name><![CDATA[John Wiley & Sons Ltd]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B4">
<label>4</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Droppo]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Acero]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Environmental Robustness]]></article-title>
<person-group person-group-type="editor">
<name>
<surname><![CDATA[Benesty]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Sondhi]]></surname>
<given-names><![CDATA[MM]]></given-names>
</name>
<name>
<surname><![CDATA[Huang]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
</person-group>
<source><![CDATA[Springer Handbook of Speech Processing]]></source>
<year>2008</year>
<page-range>653-77</page-range><publisher-loc><![CDATA[Berlin ]]></publisher-loc>
<publisher-name><![CDATA[Springer]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B5">
<label>5</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Pylkkönen]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
</person-group>
<source><![CDATA[LDA Based Feature Estimation Methods for LVCSR]]></source>
<year>2006</year>
<conf-name><![CDATA[ International Conference on Spoken Language Processing Interspeech]]></conf-name>
<conf-loc> </conf-loc>
<page-range>389-92</page-range><publisher-loc><![CDATA[Pittsburgh, Pennsylvania, USA ]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B6">
<label>6</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Gales]]></surname>
<given-names><![CDATA[MJF]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Maximum Likelihood Linear Transformations for HMM-Based Speech Recognition]]></article-title>
<source><![CDATA[Computer Speech & Language]]></source>
<year>1998</year>
<volume>12</volume>
<page-range>75-98</page-range></nlm-citation>
</ref>
<ref id="B7">
<label>7</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Hermansky]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
<name>
<surname><![CDATA[Morgan]]></surname>
<given-names><![CDATA[N.]]></given-names>
</name>
<name>
<surname><![CDATA[Bayya]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
<name>
<surname><![CDATA[Kohn]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
</person-group>
<source><![CDATA[RASTA-PLP speech analysis technique]]></source>
<year>1992</year>
<conf-name><![CDATA[ Proc IEEE Int Conf Acoust ICASSP-92]]></conf-name>
<conf-loc> </conf-loc>
<publisher-loc><![CDATA[San Francisco, CA, USA ]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B8">
<label>8</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Povey]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Yao]]></surname>
<given-names><![CDATA[K]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[A basis representation of constrained MLLR transforms for robust adaptation]]></article-title>
<source><![CDATA[Comput Speech Lang]]></source>
<year>2012</year>
<volume>26</volume>
<page-range>35-51</page-range></nlm-citation>
</ref>
<ref id="B9">
<label>9</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Miyajima]]></surname>
<given-names><![CDATA[C.]]></given-names>
</name>
<name>
<surname><![CDATA[Watanabe]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
<name>
<surname><![CDATA[Kitamura]]></surname>
<given-names><![CDATA[T.]]></given-names>
</name>
<name>
<surname><![CDATA[Katagiri]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<source><![CDATA[Speaker Recognition Based on Discriminative Feature Extraction - Optimization of Mel-Cepstral Features Using Second-Order All-Pass Warping Function]]></source>
<year>1999</year>
<conf-name><![CDATA[ Proc Eurospeech, 6th]]></conf-name>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B10">
<label>10</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Schafer]]></surname>
<given-names><![CDATA[RW]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Homomorphic Systems and Cepstrum Analysis of Speech]]></article-title>
<person-group person-group-type="editor">
<name>
<surname><![CDATA[Benesty]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Sondhi]]></surname>
<given-names><![CDATA[MM]]></given-names>
</name>
<name>
<surname><![CDATA[Huang]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
</person-group>
<source><![CDATA[Springer Handbook of Speech Processing]]></source>
<year>2008</year>
<page-range>161-80</page-range><publisher-loc><![CDATA[Berlin ]]></publisher-loc>
<publisher-name><![CDATA[Springer]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B11">
<label>11</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Fletcher]]></surname>
<given-names><![CDATA[H]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Auditory patterns]]></article-title>
<source><![CDATA[Rev Mod Phys]]></source>
<year>1940</year>
<volume>12</volume>
<page-range>47-65</page-range></nlm-citation>
</ref>
<ref id="B12">
<label>12</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Furui]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Cepstral analysis technique for automatic speaker verification]]></article-title>
<source><![CDATA[IEEE Trans on ASSP]]></source>
<year>1981</year>
<volume>29</volume>
<page-range>254-72</page-range></nlm-citation>
</ref>
<ref id="B13">
<label>13</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Furui]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Speaker-independent isolated word recognition using dynamic features of speech spectrum]]></article-title>
<source><![CDATA[IEEE Trans on ASSP]]></source>
<year>1986</year>
<volume>34</volume>
<page-range>52-9</page-range></nlm-citation>
</ref>
<ref id="B14">
<label>14</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Hanson]]></surname>
<given-names><![CDATA[B.A.]]></given-names>
</name>
<name>
<surname><![CDATA[Applebaum]]></surname>
<given-names><![CDATA[T.H]]></given-names>
</name>
</person-group>
<source><![CDATA[Robust speaker-independent word recognition using static, dynamic and acceleration features: experiments with lombard and noisy speech]]></source>
<year>1990</year>
<conf-name><![CDATA[ Proc of ICASSP]]></conf-name>
<conf-loc> </conf-loc>
<publisher-loc><![CDATA[Albuquerque, NM, USA]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B15">
<label>15</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Mariani]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Cole]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
</person-group>
<source><![CDATA[Survey of the State of the Art in Human Language Technology]]></source>
<year>1997</year>
<publisher-loc><![CDATA[Cambridge ]]></publisher-loc>
<publisher-name><![CDATA[Cambridge University Press and Giardini]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B16">
<label>16</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Olsen]]></surname>
<given-names><![CDATA[PA]]></given-names>
</name>
<name>
<surname><![CDATA[Gopinath]]></surname>
<given-names><![CDATA[RA]]></given-names>
</name>
</person-group>
<source><![CDATA[Extended MLLT for Gaussian Mixture Models]]></source>
<year>2001</year>
<conf-name><![CDATA[ Transactions in Speech and Audio Processing]]></conf-name>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B17">
<label>17</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Psutka]]></surname>
<given-names><![CDATA[JV]]></given-names>
</name>
<name>
<surname><![CDATA[Matoúsek]]></surname>
<given-names><![CDATA[V]]></given-names>
</name>
<name>
<surname><![CDATA[Mautner]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
</person-group>
<source><![CDATA[Benefit of Maximum Likelihood Linear Transform (MLLT) Used at Different Levels of Covariance Matrices Clustering in ASR Systems]]></source>
<year>2007</year>
<conf-name><![CDATA[ TSD 2007]]></conf-name>
<conf-loc> </conf-loc>
<page-range>431-8</page-range><publisher-loc><![CDATA[Berlin ]]></publisher-loc>
<publisher-name><![CDATA[Springer-Verlag]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B18">
<label>18</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Young]]></surname>
<given-names><![CDATA[SJ]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[HMMs and Related Speech Recognition Technologies]]></article-title>
<person-group person-group-type="editor">
<name>
<surname><![CDATA[Benesty]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Sondhi]]></surname>
<given-names><![CDATA[MM]]></given-names>
</name>
<name>
<surname><![CDATA[Huang]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
</person-group>
<source><![CDATA[Springer Handbook of Speech Processing]]></source>
<year>2008</year>
<page-range>539-55</page-range><publisher-loc><![CDATA[Berlin ]]></publisher-loc>
<publisher-name><![CDATA[Springer]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B19">
<label>19</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Povey]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
</person-group>
<source><![CDATA[The Kaldi Speech Recognition Toolkit]]></source>
<year>2011</year>
<conf-name><![CDATA[ IEEE Workshop on Automatic Speech Recognition and Understanding]]></conf-name>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B20">
<label>20</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Kim]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
<name>
<surname><![CDATA[Stern]]></surname>
<given-names><![CDATA[RM]]></given-names>
</name>
</person-group>
<source><![CDATA[Power-normalized cepstral coefficients (PNCC) for robust speech recognition]]></source>
<year>2016</year>
<conf-name><![CDATA[ IEEE/ACM Transactions on audio, speech, and language processing]]></conf-name>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B21">
<label>21</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Tachioka]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
<name>
<surname><![CDATA[Watanabe]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Hershey]]></surname>
<given-names><![CDATA[JR]]></given-names>
</name>
</person-group>
<source><![CDATA[Effectiveness of discriminative training and feature transformation for reverberated and noisy speech]]></source>
<year>2013</year>
<conf-name><![CDATA[ Proc ICASSP]]></conf-name>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B22">
<label>22</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Alam]]></surname>
<given-names><![CDATA[M.J.]]></given-names>
</name>
<name>
<surname><![CDATA[O’Shaughnessy]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
<name>
<surname><![CDATA[Kenny]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
</person-group>
<source><![CDATA[A novel feature extractor employing regularized MVDR spectrum estimator and subband spectrum enhancement technique]]></source>
<year>2013</year>
<conf-name><![CDATA[8th International Workshop on Systems, Signal Processing and their Applications (WoSSPA)]]></conf-name>
<conf-loc> </conf-loc>
<publisher-loc><![CDATA[Algiers, Algeria ]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B23">
<label>23</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Yapanel]]></surname>
<given-names><![CDATA[UH]]></given-names>
</name>
<name>
<surname><![CDATA[Dharanipragada]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<source><![CDATA[Perceptual MVDR-based cepstral coefficients (PMCCs) for robust speech recognition]]></source>
<year>2003</year>
<conf-name><![CDATA[ Proc ICASSP]]></conf-name>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B24">
<label>24</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Tachioka]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
</person-group>
<article-title xml:lang=""><![CDATA[Prior-based binary masking and discriminative methods for reverberant and noisy speech recognition using distant stereo microphones]]></article-title>
<source><![CDATA[Journal of Information processing]]></source>
<year>2017</year>
<volume>25</volume>
<page-range>407-16</page-range></nlm-citation>
</ref>
<ref id="B25">
<label>25</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Young]]></surname>
<given-names><![CDATA[SJ]]></given-names>
</name>
</person-group>
<source><![CDATA[The HTK Book]]></source>
<year>2006</year>
<publisher-loc><![CDATA[Cambridge ]]></publisher-loc>
<publisher-name><![CDATA[Cambridge University Engineering Department]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B26">
<label>26</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Lamere]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
</person-group>
<source><![CDATA[The CMU SPHINX-4 Speech Recognition System]]></source>
<year>2003</year>
<conf-name><![CDATA[ Proc ICASSP]]></conf-name>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B27">
<label>27</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Stuttgart]]></surname>
<given-names><![CDATA[B-WCSU]]></given-names>
</name>
</person-group>
<source><![CDATA[OASIS-Open-Source Automatic Speech Recognition In Smart Devices]]></source>
<year>2014</year>
<publisher-loc><![CDATA[Germany ]]></publisher-loc>
<publisher-name><![CDATA[Baden Wuerttemberg Cooperative State University Stuttgart]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B28">
<label>28</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Gaida]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
</person-group>
<source><![CDATA[Comparing Open-Source Speech Recognition Toolkits]]></source>
<year>2014</year>
<conf-name><![CDATA[ Proc. ICSLP]]></conf-name>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B29">
<label>29</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Allauzen]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
</person-group>
<source><![CDATA[OpenFst: a general and efficient weighted finite-state transducer library]]></source>
<year>2007</year>
<conf-name><![CDATA[ Proc CIAA]]></conf-name>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B30">
<label>30</label><nlm-citation citation-type="book">
<collab>ELRA-ELDA</collab>
<source><![CDATA[TC-STAR, ELDA]]></source>
<year>2000</year>
<publisher-loc><![CDATA[Spain ]]></publisher-loc>
<publisher-name><![CDATA[ELRA-ELDA]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B31">
<label>31</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Thiemann]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Ito]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
<name>
<surname><![CDATA[Vincent]]></surname>
<given-names><![CDATA[E]]></given-names>
</name>
</person-group>
<source><![CDATA[The Diverse Environments Multi-Channel Acoustic Noise Database (DEMAND): A database of multichannel environmental noise recordings]]></source>
<year>2013</year>
<conf-name><![CDATA[ Proc of Meetings on Acoustics ICA2013]]></conf-name>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B32">
<label>32</label><nlm-citation citation-type="">
<collab>Sony</collab>
<source><![CDATA[Sony]]></source>
<year>2000</year>
</nlm-citation>
</ref>
<ref id="B33">
<label>33</label><nlm-citation citation-type="">
<collab>Hark</collab>
<source><![CDATA[Hark]]></source>
<year>2000</year>
</nlm-citation>
</ref>
<ref id="B34">
<label>34</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Hirsch]]></surname>
<given-names><![CDATA[HG]]></given-names>
</name>
</person-group>
<source><![CDATA[FaNT: filtering and noise adding tool. Niederrhein University of Applied Sciences]]></source>
<year>2005</year>
<publisher-loc><![CDATA[Germany ]]></publisher-loc>
<publisher-name><![CDATA[Niederrhein University of Applied Sciences]]></publisher-name>
</nlm-citation>
</ref>
</ref-list>
</back>
</article>
