(Pocket-lint) - Yubo is a live social discovery platform that lets you meet people by communicating with them in real time through live streams. While the platform has been incredibly successful among Gen Z as a way to make new friends, like all online social platforms, it can still see instances of harmful or offensive behaviour.
Yubo is tackling this issue head-on with its latest feature: audio moderation.
How does Yubo’s audio moderation work?
Although social media platforms have made significant strides in real-time video and image moderation technologies, audio moderation has proven more challenging. Victims of online harassment report that they often encounter hateful speech through audio, such as live streams.
Yubo has addressed this by introducing robust audio moderation in partnership with Hive, a leading provider of cloud-based AI solutions. The company ran a trial phase in the U.S. in May 2022 and has recently expanded it to its largest English-speaking regions, allowing the technology to gather more insights and recognize harmful speech better.
The app’s audio moderation is a cutting-edge feature (Yubo is the first social platform in the world to introduce it), but the principle is simple. It records and automatically transcribes 10-second audio snippets from live streams with at least 10 participants. The tool instantly scans the text using Hive’s AI and, for higher accuracy, flags only transcripts containing words and phrases that violate Yubo’s Community Guidelines.
The audio moderation relies on keywords to detect potential harassment. The administrators have configured the technology to stay on the lookout for particular words that indicate hate speech. It reviews over 600 live streams daily, scanning for these phrases to increase safety on the social media app.
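The keyword scan described above can be sketched in a few lines. This is a hypothetical illustration only: the keyword list, function names, and matching logic are assumptions, not Yubo's or Hive's actual implementation.

```python
import re

# Placeholder blocklist; Yubo's real Community Guidelines terms are not public.
FLAGGED_KEYWORDS = {"hateword1", "hateword2", "threatword"}

MIN_PARTICIPANTS = 10  # snippets are only taken from streams of at least 10 people

def should_flag(transcript: str, participants: int) -> bool:
    """Flag a 10-second transcript if the stream is large enough and the
    text contains any blocklisted word."""
    if participants < MIN_PARTICIPANTS:
        return False
    words = set(re.findall(r"[\w']+", transcript.lower()))
    return not words.isdisjoint(FLAGGED_KEYWORDS)
```

A production system would also match multi-word phrases and spelling variants; this sketch only shows the basic shape of the filter.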
Best of all, the app doesn’t do all the work on its own. Flagged content is sent to Yubo Safety Specialists, who investigate potential incidents and determine the proper course of action. In severe cases, these specialists may contact law enforcement to de-escalate a situation.
If a transcript doesn’t contain any suspected violations, Yubo doesn’t review or keep it. Non-flagged content is automatically deleted after 24 hours in order to maintain high user privacy. However, flagged transcripts that require law enforcement or internal investigation may be stored for up to a year.
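The retention rules amount to a simple time-to-live policy. A minimal sketch, assuming hypothetical names (Yubo's actual storage logic is not public):

```python
from datetime import datetime, timedelta

NON_FLAGGED_TTL = timedelta(hours=24)  # non-flagged transcripts: deleted after a day
FLAGGED_TTL = timedelta(days=365)      # flagged transcripts: kept up to a year

def delete_after(created_at: datetime, flagged: bool) -> datetime:
    """Return the moment a stored transcript becomes eligible for deletion."""
    return created_at + (FLAGGED_TTL if flagged else NON_FLAGGED_TTL)
```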
What are the benefits of Yubo’s audio moderation?
The greatest benefit of Yubo’s audio moderation is added safety. As the technology continues to improve and become more accurate, it will be able to provide users with another layer of security. The moderators utilize the technology to filter the content and keep the platform as safe as possible.
What makes Yubo’s audio moderation so effective?
Many features make Yubo’s audio moderation incredibly effective, chief among them the algorithms and AI powering the technology, which rely on machine learning.
Machine learning allows Yubo to process large quantities of data more effectively and efficiently. If the administrators analyzed the information manually, it would take them too long to detect unwanted behaviour.
In addition, machine learning will continue to improve the technology’s accuracy automatically as it processes new data and information.
Finally, machine learning helps Yubo discover correlations and patterns that would otherwise remain hidden. This also streamlines decision-making and makes the transcripts more understandable.
What are the challenges of Yubo’s audio moderation?
One of the biggest challenges of Yubo’s audio moderation is false positives.
As previously discussed, the technology uses keywords to detect hate speech and decide whether a transcript needs further review. False positives can occur when the algorithms pick up words from a song playing in the background, for example. That is a red flag for audio moderation but rarely constitutes hate speech.
The same goes for playful language. For example, people may use offensive words out of context for effect. However, the AI classifies this as offensive behaviour, producing another kind of false positive.
This problem highlights the complexity of online audio moderation, but it also shows the importance of combining tools with human input for context and nuance. That’s exactly what Yubo does.
Yubo doesn’t rely solely on audio moderation technology to determine hate speech or other types of offensive behaviour. Instead, the tool only sends content for review; the moderators have the final say on what action to take. They also continuously help refine the algorithms, enhancing precision and reducing the number of false positives, and can intervene in real time if an issue is detected in a live stream.
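That human-in-the-loop flow can be pictured as a two-stage pipeline. Everything here is hypothetical (the function names, keyword list, and action labels are assumptions); the point is simply that the automated scan filters while a person decides.

```python
FLAGGED_KEYWORDS = {"hateword1", "hateword2"}  # placeholder terms

def contains_flagged_keyword(transcript: str) -> bool:
    """Simplified stand-in for the automated keyword scan."""
    return any(word in FLAGGED_KEYWORDS for word in transcript.lower().split())

def moderate(transcript: str, human_review) -> str:
    """Two-stage moderation: the keyword scan filters, a human decides.

    `human_review` is a callable standing in for a Yubo Safety Specialist;
    it receives the flagged transcript and returns an action label.
    """
    if not contains_flagged_keyword(transcript):
        return "no_action"           # non-flagged content is never reviewed
    return human_review(transcript)  # the moderator has the final say
```

For instance, a clean transcript short-circuits to "no_action" without ever reaching a reviewer, while a flagged one returns whatever the human callback decides.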
All of which translates into a safer environment.
Cyberbullying takes another L
Effective audio moderation is an exciting and industry-leading undertaking at Yubo. As with any new technology, some challenges remain, but it shows a world of promise for limiting cyberbullying in the future.