aritra0342/Speech-Emotion-Detector
🎤 Speech Emotion Detector

A Python-based real-time Speech Emotion Recognition project that listens to your voice through your microphone and predicts your current emotion (e.g., happy, sad, angry, etc.) using MFCC features and an MLP classifier.


GitHub Repo

The full project is hosted on GitHub: aritra0342/Speech-Emotion-Detector


Dataset

This project uses the RAVDESS (Ryerson Audio-Visual Database of Emotional Speech and Song) dataset, a validated multimodal collection of emotional speech and song performed by 24 professional actors across eight emotions (neutral, calm, happy, sad, angry, fearful, disgust, surprised). Each expression is recorded at multiple intensities, and the dataset can be downloaded under a Creative Commons license from Zenodo.
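RAVDESS encodes each recording's metadata in its filename as seven dash-separated numeric fields, with the third field holding the emotion code. As a sketch of how labels can be recovered for training (the helper name `emotion_from_filename` is illustrative, not from the repository):

```python
# RAVDESS filename example: "03-01-05-01-02-01-12.wav"
# Fields: modality-channel-emotion-intensity-statement-repetition-actor.
# The third field ("05" here) is the emotion code.
EMOTIONS = {
    "01": "neutral", "02": "calm", "03": "happy", "04": "sad",
    "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised",
}

def emotion_from_filename(filename: str) -> str:
    """Return the emotion label encoded in a RAVDESS filename."""
    code = filename.split("-")[2]
    return EMOTIONS[code]
```

For example, `emotion_from_filename("03-01-05-01-02-01-12.wav")` yields `"angry"`.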


Features

  • Trains on the RAVDESS dataset (speech-only, Emotion_1.zip)
  • Extracts MFCC audio features (standardized to fixed-length)
  • Achieves ~78% accuracy with an MLP classifier
  • Real-time emotion prediction from your microphone input
  • Built with Python, Librosa, scikit-learn, sounddevice, etc.
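The fixed-length MFCC step above can be sketched as follows. This is a minimal illustration, not the repository's exact code: the function names, the 40-coefficient setting, and the 174-frame cap are assumptions.

```python
import numpy as np

def pad_or_truncate(mfcc: np.ndarray, max_frames: int = 174) -> np.ndarray:
    """Standardize an (n_mfcc, n_frames) matrix to a fixed frame count."""
    n_frames = mfcc.shape[1]
    if n_frames < max_frames:
        # Zero-pad short clips on the time axis.
        return np.pad(mfcc, ((0, 0), (0, max_frames - n_frames)), mode="constant")
    # Truncate long clips.
    return mfcc[:, :max_frames]

def extract_mfcc(path: str, n_mfcc: int = 40, max_frames: int = 174) -> np.ndarray:
    """Load a WAV file and return a flattened, fixed-length MFCC vector."""
    import librosa  # imported lazily so pad_or_truncate works without it
    signal, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return pad_or_truncate(mfcc, max_frames).flatten()
```

Flattening the padded matrix gives every clip the same feature-vector length, which is what a scikit-learn classifier expects.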

Getting Started

1. Clone the repository:

git clone https://github.com/aritra0342/Speech-Emotion-Detector.git
cd Speech-Emotion-Detector
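After extracting features, the README's MLP classifier stage can be sketched with scikit-learn. The hyperparameters below are illustrative assumptions, and the random matrix stands in for real MFCC feature vectors, so the printed accuracy will be near chance rather than the ~78% reported for the real dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in data: replace X with real MFCC vectors and y with RAVDESS labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.choice(["happy", "sad", "angry"], size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Hidden-layer size and iteration budget are assumptions, not the repo's values.
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

The fitted model can then be reused for real-time prediction by running the same MFCC extraction on each microphone clip and calling `clf.predict`.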

About

🎤 A real-time Speech Emotion Recognition system using Python, Librosa, and scikit-learn. Detects emotions such as angry, happy, and sad from your voice. Microphone input works only when running locally (e.g., in VS Code); otherwise, upload WAV files to test it.
