Improving Tone Deafness and Mispronunciations with Signal Processing
Not everyone is born with a singing voice that will win them a spot on Glee or American Idol, but what if you could improve the sound of a truly terrible vocalist?
That’s part of the idea behind the work of Professor Mark J. T. Smith, Dean of the Graduate School at Purdue University, and his colleagues.
He and his research team are working toward using signal processing not only to improve music but also to improve foreign language education. They have recently been developing automated software that can correct pronunciation errors in Spanish.
“Our program identifies the error in pronunciation, makes the correction, and then plays it back for the speaker,” said Smith. Currently, the software can correct errors in cadence, intonation, pitch, and accent.
Smith spoke at Boston University March 2 as part of the ECE Department’s Distinguished Lecture Series, which brings prominent engineers to the university. He discussed the topic, “Improved Models for Accent Detection and Voice Synthesis.”
When modifying and synthesizing speech, many components go into reaching the desired outcome, including control over pitch and over the time scale. Smith said that he and his team have been using the analysis-by-synthesis/overlap-add (ABS-OLA) synthesis model in their work because of the greater flexibility this algorithm allows.
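To give a flavor of the overlap-add idea underlying ABS-OLA, the sketch below implements plain overlap-add (OLA) time-scale modification in Python with NumPy: the signal is cut into windowed frames at one hop size and re-laid at another, stretching or compressing duration without shifting pitch. This is a simplified illustration, not Smith's ABS-OLA system; the function name and parameters are the author's own choices.

```python
import numpy as np

def ola_time_stretch(signal, rate, frame_len=1024, hop_out=256):
    """Change a signal's duration without changing its pitch using
    simple overlap-add (OLA). rate > 1 shortens, rate < 1 lengthens."""
    hop_in = int(round(hop_out * rate))      # analysis hop scales with rate
    window = np.hanning(frame_len)
    n_frames = max(1, (len(signal) - frame_len) // hop_in + 1)
    out_len = (n_frames - 1) * hop_out + frame_len
    out = np.zeros(out_len)
    norm = np.zeros(out_len)                 # sum of windows, for rescaling
    for i in range(n_frames):
        frame = signal[i * hop_in : i * hop_in + frame_len]
        if len(frame) < frame_len:
            break
        out[i * hop_out : i * hop_out + frame_len] += frame * window
        norm[i * hop_out : i * hop_out + frame_len] += window
    norm[norm < 1e-8] = 1.0                  # avoid dividing by ~0 at edges
    return out / norm
```

Doubling `rate` roughly halves the output length; halving it roughly doubles the length. A production system such as ABS-OLA adds a sinusoidal analysis-by-synthesis stage so frames stay phase-coherent, which this bare OLA sketch omits.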
Development of the software is not yet complete, but when tested on isolated words like “hierro” (iron) – complicated for some learners because of the rolled “rr” – it achieved impressive results.
In addition to working at Purdue, Smith is a fellow of the IEEE and has authored many papers in the areas of speech and image processing, filter banks, and wavelets. He is also an accomplished fencer, having been a member of the U.S. Olympic Team in 1980 and 1984.
-Rachel Harrington (firstname.lastname@example.org)