Daniel Sáez Trigueros (December 19, 1990 – August 12, 2024) was a Spanish researcher known for his contributions to the fields of artificial intelligence, machine learning, computer vision, text-to-speech synthesis, and generative models.
In 2019, Sáez Trigueros received his PhD in artificial intelligence from the University of Hertfordshire under the supervision of Dr Margaret Hartnett. Between 2016 and 2024 he co-authored 18 papers,[1][2] focusing mainly on face recognition while working for GB Group (2016–2019) and on text-to-speech synthesis while working for Amazon (2019–2024).[3]
One of his most notable contributions was a survey of the evolution of face recognition techniques, in which he contrasted older geometry-based and feature-based approaches with the now-dominant deep neural networks, highlighting the superior accuracy made possible by large training datasets.[4]
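The deep-learning approach described in that survey typically maps each face image to a fixed-length embedding vector and compares faces by their distance in embedding space; the batch triplet loss studied in his 2018 Image and Vision Computing paper is one way to train such embeddings. The following is a minimal, hypothetical NumPy sketch of the general technique (the network is stubbed out and all names are illustrative; it is not taken from his published code):

    import numpy as np

    def embed(image: np.ndarray) -> np.ndarray:
        # Stand-in for a deep CNN that maps a face image to a
        # unit-length 128-dimensional embedding (illustrative only).
        rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
        v = rng.standard_normal(128)
        return v / np.linalg.norm(v)

    def triplet_loss(anchor, positive, negative, margin=0.2):
        # Triplet loss: pull embeddings of the same identity together
        # and push different identities apart by at least `margin`.
        d_pos = np.sum((anchor - positive) ** 2)
        d_neg = np.sum((anchor - negative) ** 2)
        return max(0.0, d_pos - d_neg + margin)

    def same_person(img_a, img_b, threshold=1.1):
        # Verification: two faces match if their embeddings are close.
        return np.sum((embed(img_a) - embed(img_b)) ** 2) < threshold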
He also co-authored CopyCat, a system for transferring prosody in text-to-speech synthesis while preserving the identity of the target speaker.[5]
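At a high level, fine-grained prosody transfer of this kind conditions a neural text-to-speech decoder on a prosody representation extracted from a source utterance (via a reference encoder) and on an embedding of the target speaker. The sketch below outlines only that idea; the encoder and decoder are stubbed out with random projections, and none of the names are taken from the CopyCat implementation:

    import numpy as np

    rng = np.random.default_rng(0)

    def extract_prosody(source_mel: np.ndarray) -> np.ndarray:
        # Stand-in for a reference encoder that summarizes the source
        # utterance's prosody (timing, intonation) frame by frame.
        proj = rng.standard_normal((source_mel.shape[1], 8))
        return source_mel @ proj

    def synthesize(text: str, prosody: np.ndarray,
                   speaker_embedding: np.ndarray) -> np.ndarray:
        # Stand-in decoder: produces a mel-spectrogram conditioned on
        # the text, the transferred prosody, and the target speaker.
        n_frames = prosody.shape[0]
        return rng.standard_normal((n_frames, 80)) + speaker_embedding.mean()

    # Say the same sentence in speaker B's voice with speaker A's prosody.
    mel_a = rng.standard_normal((200, 80))   # utterance spoken by speaker A
    speaker_b = rng.standard_normal(16)      # identity embedding of speaker B
    output = synthesize("hello world", extract_prosody(mel_a), speaker_b)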
Notable publications
- Daniel Sáez Trigueros, Li Meng, Margaret Hartnett: Face recognition: From traditional to deep learning methods. arXiv preprint arXiv:1811.00116, 2018.
- Sri Karlapati, Alexis Moinet, Arnaud Joly, Viacheslav Klimkov, Daniel Sáez-Trigueros, Thomas Drugman: CopyCat: Many-to-Many Fine-Grained Prosody Transfer for Neural Text-to-Speech. arXiv preprint arXiv:2004.14617, 2020.
- Daniel Sáez Trigueros, Li Meng, Margaret Hartnett: Enhancing convolutional neural networks for face recognition with occlusion maps and batch triplet loss. Image and Vision Computing (vol. 79, pp. 99-108), Elsevier, 2018.[6]
- Daniel Sáez Trigueros, Li Meng, Margaret Hartnett: Generating photo-realistic training data to improve face recognition accuracy. arXiv preprint arXiv:1811.00112, 2018.[7]
References
edit- ^ "Daniel Sáez-Trigueros". scholar.google.com. Retrieved 2024-09-10.
- ^ "dblp: Daniel Saez-Trigueros". dblp.org. Retrieved 2024-09-10.
- ^ "Daniel Sáez-Trigueros". Amazon Science. Retrieved 2024-09-10.
- ^ Trigueros, Daniel Sáez; Meng, Li; Hartnett, Margaret (2018-10-31). "Face Recognition: From Traditional to Deep Learning Methods". arXiv.org. Retrieved 2024-09-10.
- ^ Karlapati, Sri; Moinet, Alexis; Joly, Arnaud; Klimkov, Viacheslav; Sáez-Trigueros, Daniel; Drugman, Thomas (2020-10-25). "CopyCat: Many-to-Many Fine-Grained Prosody Transfer for Neural Text-to-Speech". Interspeech 2020: 4387–4391. doi:10.21437/Interspeech.2020-1251.
- ^ Sáez Trigueros, Daniel; Meng, Li; Hartnett, Margaret (2018-11-01). "Enhancing convolutional neural networks for face recognition with occlusion maps and batch triplet loss". Image and Vision Computing. 79: 99–108. doi:10.1016/j.imavis.2018.09.011. ISSN 0262-8856.
- ^ Sáez Trigueros, Daniel; Meng, Li; Hartnett, Margaret (2021-02-01). "Generating photo-realistic training data to improve face recognition accuracy". Neural Networks. 134: 86–94. doi:10.1016/j.neunet.2020.11.008. ISSN 0893-6080.