There are various ways to implement this, so I'll focus on the main concept and capabilities. It's important to keep this separate from the 'Filler Words Tool'; I'll explain why later. Before getting into the tool itself, I'd like to give some context for my proposal. It's all in one paragraph, so feel free to skip it.
I'm part of a team dedicated to safeguarding kids from potential threats posed by adults on the internet. Our approach involves setting up decoys, engaging with these adults, and conducting thorough interviews as part of our investigations, which can last from one to three hours. We start by building rapport to create a situation where the individuals believe law enforcement won't be involved, which makes them willing to confess in depth. We record this on video as evidence for law enforcement, ensuring that any inappropriate online activities are disclosed. The challenge is that law enforcement requires comprehensive information before confiscating devices or making arrests that would prevent an individual from tampering with evidence. Censoring a one-hour video can take four to eight hours to guarantee nothing is overlooked, since YouTube's guidelines are applied selectively.
The proposed tool is essentially a feature to create and manage a list of words or phrases to be muted. If automating this proves too complex, an alternative is to streamline the manual process: users could simply right-click and select 'mute word' or 'mute words' (if multiple are highlighted). The distinction between 'word' and 'words' is important, for reasons I'll come back to.
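To make the automated version concrete, here's a rough sketch of what the core logic might look like. This is Python with an invented transcript format and invented names, not the app's real API: given a word-level transcript with timestamps, return the time ranges to silence.

```python
def find_mute_ranges(transcript, mute_list, pad=50):
    """Return (start_ms, end_ms) ranges to silence, padded slightly on each side.

    transcript: list of (word, start_ms, end_ms) tuples
    mute_list:  set of lowercase words to silence
    pad:        extra milliseconds muted before/after each match
    """
    ranges = []
    for word, start, end in transcript:
        # Normalize the word before comparing against the list.
        if word.lower().strip(".,!?") in mute_list:
            ranges.append((max(0, start - pad), end + pad))
    return ranges


# Example: only "bad" is on the list, so only its range is muted.
transcript = [("this", 0, 200), ("is", 200, 300), ("bad", 300, 600)]
print(find_mute_ranges(transcript, {"bad"}))  # [(250, 650)]
```

The padding is there because speech-to-text timestamps are rarely frame-accurate; a small buffer helps guarantee the word is fully silenced.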
To optimize loading time, this feature should be an option during media import. Enabling it would expand a choice between a default list and a custom one. Users could also decide whether matched words are highlighted, muted, or both.
Default lists would include words generally considered inappropriate (which I'll avoid listing here). Adding words to the default list would automatically create a separate list containing the default words plus any additions. This covers the words most commonly censored on TV and addresses monetization limitations on platforms like YouTube.
As a courtesy to users, a content warning could be shown when viewing the list, so users are aware it may contain offensive words.
Keeping this tool separate from the 'Filler Words Tool' allows for a toggle option on the timeline. When activated, users can see all muted sections, or highlight a specific area to mute, correcting anything the detection missed or misinterpreted. These corrections could also be used to train the AI, improving its accuracy over time.
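For that timeline toggle, the auto-detected mute ranges and the user's manual highlights would need to be combined into one clean list so each muted section is drawn once. A standard interval-merge pass would do it; sketched below in Python with my own invented names, purely to illustrate the idea:

```python
def merge_ranges(ranges):
    """Merge overlapping or touching (start_ms, end_ms) intervals."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous interval: extend it instead of appending.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged


auto = [(1000, 1400), (5000, 5300)]   # from automatic detection
manual = [(1300, 1600)]               # user highlighted a missed section
print(merge_ranges(auto + manual))    # [(1000, 1600), (5000, 5300)]
```

Because a manual highlight that overlaps an automatic one simply extends it, the user never has to line up their selection exactly with what the tool already caught.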
Lastly, when users highlight and right-click any text, the menu should include an option to add it to the list. If a single word is chosen, it's treated as one entry. However, if multiple words are selected, they should also be treated as a single entry to account for context. For instance, platforms like YouTube consider context when assessing descriptive sentences: explicit details on their own might be overlooked, but any indication that a minor is involved can cause a video to reach fewer people.
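As a rough illustration of why a multi-word selection should be one entry (again Python, invented names): a phrase entry would only match when its words appear consecutively in the transcript, which is exactly what preserves context instead of muting each word wherever it appears alone.

```python
def phrase_matches(transcript_words, entry):
    """Yield start indices where the multi-word entry appears consecutively."""
    entry_words = entry.lower().split()
    n = len(entry_words)
    words = [w.lower() for w in transcript_words]
    for i in range(len(words) - n + 1):
        if words[i:i + n] == entry_words:
            yield i


words = ["he", "said", "something", "bad", "here"]
print(list(phrase_matches(words, "something bad")))  # [2]
```

A single-word entry is just the n = 1 case of the same check, so both behaviors fall out of one mechanism.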
Our goal isn't entertainment. It's essential for spreading awareness to parents, who can then find other resources to keep a better eye on their child's online activities and even take preventative measures. Beyond that, the more of these adults are caught, the easier it is to take down the larger groups that actively produce the material plaguing the internet, leading to the rescue of REAL victims all over the world.
So it's more than just a feature request in my eyes. Thanks to anyone who actually reads this, and sorry for the length. I just think many haven't given enough thought to the actual implications of such a feature; devs work hard, and it's not as simple as many think. I've made far more in-depth mockups for InShot and PicsArt, who both turned the ideas into real tools, and I wouldn't mind doing that here as well.