HearMe: Classifying Emotions from Audio
- Model: CNN trained on 300+ audio samples to classify 5 emotional categories from MFE (mel-filterbank energy) features (feature extraction sketched below).
- Toolchain: Used Edge Impulse for dataset collection, model training, and deployment.
- App: Integrated the model into a Streamlit web app for real-time audio prediction with < 2-second latency.
- Tech Integration: Parsed serial audio data, labeled emotions, and updated the interface on the fly (see the Streamlit sketch after this list).
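
The MFE front end is computed inside Edge Impulse during training, but a rough equivalent can be sketched with librosa. The 40 mel bands and the 25 ms window / 10 ms stride below are illustrative assumptions, not the project's exact block settings.

```python
# Rough librosa approximation of Edge Impulse's MFE (mel-filterbank energy) block.
# n_mels, n_fft, and hop_length are assumed values, not the project's settings.
import numpy as np
import librosa

def extract_mfe(signal: np.ndarray, sr: int = 16000, n_mels: int = 40) -> np.ndarray:
    """Log mel-filterbank energies for one audio window, shape (n_mels, n_frames)."""
    mel = librosa.feature.melspectrogram(
        y=signal,
        sr=sr,
        n_fft=int(0.025 * sr),       # 25 ms analysis window (assumed)
        hop_length=int(0.010 * sr),  # 10 ms stride (assumed)
        n_mels=n_mels,
    )
    return librosa.power_to_db(mel)  # log-energy spectrogram fed to the CNN
```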
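
A minimal sketch of the Streamlit side, assuming the microphone board streams 16-bit PCM over USB serial and the trained CNN was exported from Edge Impulse as a Keras model. The port, baud rate, window size, label names, and model filename are assumptions, and `extract_mfe()` is the helper from the sketch above.

```python
# Hedged sketch: read one audio window over serial, classify it, and rewrite a
# single Streamlit placeholder so the prediction updates in place.
import numpy as np
import serial            # pyserial
import streamlit as st
import tensorflow as tf

LABELS = ["angry", "happy", "neutral", "sad", "surprised"]  # assumed 5 categories
SR = 16000
WINDOW = SR                                                  # 1-second window (assumed)

model = tf.keras.models.load_model("hearme_cnn.h5")          # assumed export filename
port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=2)  # assumed port settings

st.title("HearMe: live emotion prediction")
slot = st.empty()        # one placeholder, rewritten on every window

while True:
    raw = port.read(WINDOW * 2)                              # 2 bytes per 16-bit sample
    if len(raw) < WINDOW * 2:
        continue                                             # incomplete window, keep reading
    audio = np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 32768.0
    feats = extract_mfe(audio, sr=SR)                        # MFE features (sketch above)
    scores = model.predict(feats[np.newaxis, ..., np.newaxis], verbose=0)[0]
    slot.metric("Predicted emotion", LABELS[int(np.argmax(scores))])
```

Keeping the read-classify-display loop around a single `st.empty()` slot is what lets the page refresh each prediction without rerunning the whole script, which is how the sub-2-second update target stays within reach.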