I built an end-to-end working prototype that uses Computer Vision to detect sign language in images and live video and generate a realistic voice speaking the signed words, all in real time. It was fun. And it was stressful too.
In this talk, I'll walk through the highs and lows of working on this project, covering the groundwork, the process, and the architecture used, as well as how the project expanded into a research effort addressing a communication barrier in sub-Saharan Africa (along the way, I created a pioneering dataset for the Nigerian Sign Language).
Steven Kolawole's technical skill set cuts across Data Science and Software Engineering, with a bias toward ML Research these days. His research interests focus on resource-efficient machine learning, both in terms of computational resources and low-resource/limited labeled data.
He is, and has been, heavily involved in a variety of ML-adjacent fields, including ML Engineering, Software Engineering, Data Engineering, Data Science/Analytics, and Cloud Computing.
Steven is also big on knowledge sharing via community mentorship and collective growth, open-source development, meetup facilitation, public speaking, technical writing, and research, and he gets a kick out of helping tech muggles find their feet.