Shrikant Venkataramani

Hello World

Welcome to my page, I am Shrikant Venkataramani. I am a fourth-year Ph.D. candidate in the Department of Electrical and Computer Engineering (ECE) at the University of Illinois at Urbana-Champaign. I work on problems in Computer Audition, where we teach a machine to "listen" like a human.

Currently, I work on Source Separation problems: given a mixture of sounds, the goal is to separate it into its constituent sources. Most of my work lies at the confluence of Machine Learning and Signal Processing. If you are interested in my work, please check out my Projects page here.
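As a toy illustration of the source separation idea above (not code from any of the papers listed here), the sketch below separates a two-tone mixture with an "ideal" time-frequency mask, a common baseline in the field. All names and parameters are made up for this example.

```python
import numpy as np

sr = 8000                          # sample rate (Hz)
t = np.arange(sr) / sr
s1 = np.sin(2 * np.pi * 440 * t)   # source 1: 440 Hz tone
s2 = np.sin(2 * np.pi * 1000 * t)  # source 2: 1 kHz tone
mix = s1 + s2                      # the observed mixture

# Frame the signals and take the FFT of each frame (a bare-bones,
# non-overlapping STFT with a rectangular window).
frame = 256
n = len(mix) // frame * frame
M = np.fft.rfft(mix[:n].reshape(-1, frame))
S1 = np.fft.rfft(s1[:n].reshape(-1, frame))
S2 = np.fft.rfft(s2[:n].reshape(-1, frame))

# Ideal binary mask: assign each time-frequency bin of the mixture
# to whichever source dominates it (uses oracle knowledge of s1, s2).
mask = np.abs(S1) > np.abs(S2)
est1 = np.fft.irfft(M * mask).ravel()    # estimate of source 1
est2 = np.fft.irfft(M * ~mask).ravel()   # estimate of source 2
```

Because the two tones occupy mostly disjoint frequency bins, the masked estimates closely match the original sources; real separation systems must instead predict such masks (or the sources directly) from the mixture alone.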

I work with Professor Paris Smaragdis at the Computational Audio Lab. To learn more about our lab and the work we do, check out the lab webpage here.

Updates

  • Oct 10, 2018: New arxiv submission
    We have a new submission up on arXiv titled "End-to-end Networks for Supervised Single-channel Speech Separation". Check out the paper here.

  • Sep 10, 2018: Performance Based Cost-functions for End-to-end Speech Separation APSIPA 2018
    I will be presenting our work on "Performance Based Cost-functions for End-to-end Speech Separation" at the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference on Nov 14, 2018. My first trip to Hawaii; I can't wait.
  • July 5, 2018: End-to-end Source Separation with Adaptive Front Ends Asilomar 2018
    Our paper on "End-to-end source separation using Adaptive Front Ends" has been nominated for the best student paper award at the Asilomar Conference on Signals, Systems and Computers, and I will be presenting the work there on Oct 28, 2018.
  • Nov 3, 2017: Presenting at ML4Audio workshop at NIPS 2017
    I will be presenting our work on "End-to-end source separation using Adaptive Front Ends" at the NIPS workshop on Machine Learning for Audio Processing (ML4Audio) on Dec 8. I am really looking forward to this one.
  • Oct 28, 2017: New arxiv submission
    A revamped version of our paper "End-to-end source separation using Adaptive Front Ends" is now available online on arXiv! Check out the paper here.

    To learn more and hear some separation examples, visit the related project page here.
  • Sep 28, 2017: Best Student Paper Award, MLSP
    Our paper "Neural Network Alternatives to Convolutive Audio Models for Source Separation" has been awarded the "Best Student Paper Award" at MLSP 2017! Check out the paper here.