Beena Ahmed joined the University of New South Wales (UNSW)
in 2017, where she is an Associate Professor in Signal Processing at the School
of Electrical Engineering and Telecommunications. Prior to that, she served as an
Assistant Professor at Texas A&M University in Qatar. She received her
B.Sc. Engineering in Electrical Engineering from the University of Engineering
and Technology, Lahore, Pakistan, in 1993 and her Ph.D. from UNSW in 2004. She
was named a Superstar of STEM by Science and Technology Australia in 2019. Her
current research interests are in applying machine learning and remote
monitoring to healthcare and therapeutic applications.
Carlos Busso is a Professor at the Language Technologies
Institute, Carnegie Mellon University, where he is also the director of the
Multimodal Speech Processing (MSP) Laboratory. He received his BS and MS
degrees in electrical engineering from the University of Chile, in 2000 and
2003, respectively, and earned his Ph.D. in electrical engineering from the
University of Southern California (USC), Los Angeles, in 2008. He was selected
by the School of Engineering of Chile as the best electrical engineer graduating
in 2003 from Chilean universities. He is a recipient of an NSF CAREER
Award. In 2014, he received the ICMI Ten-Year Technical Impact Award. He also
received the Hewlett Packard Best Paper Award at IEEE ICME 2011 and the Best
Paper Award at the AAAC ACII 2017. He received the Best of IEEE Transactions on
Affective Computing Paper Collection in 2021 and the Best Paper Award from IEEE
Transactions on Affective Computing in 2022. In 2023, he received the
Distinguished Alumni Award in the Mid-Career/Academia category from the Signal
and Image Processing Institute (SIPI) at the University of Southern California.
He received the 2023 ACM ICMI Community Service Award. He is a member of AAAC,
a senior member of ACM, and a Fellow of both IEEE and ISCA. His
research interests are in human-centered multimodal machine intelligence and
applications, focusing on the broad areas of speech processing, affective
computing, and machine learning methods for multimodal processing.