Meta’s latest AI suite translates even whispers!

Meta's AI suite delivers a near-flawless speech translator, one that preserves the speaker's expression instead of sounding like a robot.

Meta, the company behind Instagram, Facebook, Threads and WhatsApp, continues to draw attention with its updates in the world of artificial intelligence. Its new AI suite makes speech translation smoother and more impressive; it can now even translate whispers. Here are the details…

Meta’s new artificial intelligence package makes speech translation more seamless!

In August, Meta introduced SeamlessM4T, an AI translation model that supports nearly 100 languages for text and 36 for speech. Meta has not stopped there: it has updated the model's architecture so that speech translations come out more spontaneous and natural, and it has added two new features to the tool.
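For readers who want to experiment with the underlying model, Meta has released SeamlessM4T checkpoints publicly. A minimal sketch of text-to-speech translation through the Hugging Face transformers integration might look like the following; the checkpoint name, language codes, sample sentence and output path are illustrative assumptions, not details from the article.

```python
# Minimal sketch: translate an English sentence into French speech with
# SeamlessM4T v2 via Hugging Face transformers. Names and paths are
# illustrative assumptions, not taken from the article.
import scipy.io.wavfile
from transformers import AutoProcessor, SeamlessM4Tv2Model

processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
model = SeamlessM4Tv2Model.from_pretrained("facebook/seamless-m4t-v2-large")

# Tokenize the English source text.
inputs = processor(text="The weather is lovely today.", src_lang="eng", return_tensors="pt")

# Generate the French waveform (the first element of the output is the audio).
audio = model.generate(**inputs, tgt_lang="fra")[0].cpu().numpy().squeeze()

# SeamlessM4T generates 16 kHz audio; write it out as a WAV file.
scipy.io.wavfile.write("translated_fr.wav", rate=16000, data=audio)
```

The same model class also accepts audio input, which is what the speech-to-speech scenarios described in the article rely on.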

The first of the two new features, SeamlessExpressive, does what its name suggests: it carries your expression over into the translated speech. The algorithm preserves your vocal tone, emotional tone, speaking speed, and even whispers.

Machine-translated speech has so far tended to sound robotic, so this feature is a potentially game-changing development both in daily life and in content production. It currently supports English, Spanish, German, French, Italian and Chinese.

The second feature, SeamlessStreaming, begins translating while the speaker is still talking, so listeners hear the translation sooner. It does have one limitation: there is a slight delay, though still just under two seconds. In return, you no longer have to wait for someone to finish a sentence before hearing the translation.

According to Meta, the difficulty here comes from the fact that different languages have different sentence structures. The system therefore has to decide whether it already has enough context to start producing translated output or whether it should keep listening. To handle this, the tech giant developed a dedicated algorithm for analyzing partial audio input.
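To make that read-or-listen trade-off concrete, here is a toy sketch of a simultaneous translation loop in the spirit of a wait-k policy. This is not Meta's actual algorithm; the function names and the fixed threshold are illustrative assumptions.

```python
# Toy read/write loop for simultaneous ("streaming") translation.
# NOT Meta's SeamlessStreaming policy: a simplified wait-k-style sketch that
# waits for k source chunks, then emits one target token per new chunk.
from typing import Callable, Iterator, List


def wait_k_stream(
    source_chunks: Iterator[str],
    translate_prefix: Callable[[List[str]], List[str]],  # hypothetical incremental translator
    k: int = 3,
) -> Iterator[str]:
    """Yield partial translations while the source is still arriving."""
    context: List[str] = []
    emitted = 0
    for chunk in source_chunks:
        context.append(chunk)               # READ: consume one more source chunk
        if len(context) < k:
            continue                        # not enough context yet, keep listening
        target = translate_prefix(context)  # translate the prefix seen so far
        # WRITE: emit tokens we are now confident about (one per extra chunk)
        while emitted < len(target) and emitted <= len(context) - k:
            yield target[emitted]
            emitted += 1
    # Source finished: flush whatever translation remains.
    for token in translate_prefix(context)[emitted:]:
        yield token
```

In a production system the role of translate_prefix would be played by an incremental speech-translation model, and the decision of when to emit would come from a learned policy over partial audio, as the article describes, rather than a fixed k.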

With these latest additions, Meta's Seamless Communication suite pulls ahead of its rivals and looks more capable than the mobile translation tools offered by companies like Google and Samsung. There is no word yet on when the public will be able to use these new features, but experts expect Meta to build them into its augmented reality glasses.
