Deep Dive into Artificial Intelligence (AI) Models for Signal

Learn how you can use Aware's various AI Models for Signal to strengthen rules and discover relevant content

When building rules within Signal, you can choose from a list of AI Models:

  • Contains Language
  • Code Detection
  • Sentiment
  • Toxic Speech
  • NSFW (Not Safe For Work) Image
  • Screenshot Detection
  • Password Detection 


 

Contains Language: Aware listens in 12 languages: Bengali, Chinese, Dutch, English, French, German, Hindi, Japanese, Portuguese, Russian, Spanish, and Turkish.

Note: An additional language option ("Other") detects when a message is written in a language outside the set an organization expects or permits its employees to use.

Companies have found that messages written outside the primary collaboration language are often anomalies that warrant further examination. With language detection, Aware enables "contextual review" of these anomalies so they can be investigated further.

Sample Content:

    • [Trigger Message] Private Chat - Person 1: "第二季度的財務預測是什麼" or "What are the financial projections for Q2?"
    • [Trigger Message] Private Chat - Person 2: "我們預期的一半。這將是一個糟糕的季度" or "Half of what we expected. It will be a bad quarter."
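Aware's language model is proprietary, but the idea behind surfacing out-of-policy languages can be illustrated with a crude script-based check. This sketch is an assumption for illustration only — it simply flags CJK characters — and is not how Aware's detection actually works:

```python
import unicodedata

# Illustration only — Aware's Contains Language condition uses a trained
# model across 12 languages; this crude sketch merely flags CJK script as
# "outside the expected language set" for a contextual-review queue.
def contains_cjk(text: str) -> bool:
    """Return True if any character in the message is a CJK ideograph."""
    return any("CJK" in unicodedata.name(ch, "") for ch in text)

def flag_for_review(message: str, cjk_expected: bool = False) -> bool:
    """Surface the message for contextual review when CJK text is unexpected."""
    return contains_cjk(message) and not cjk_expected
```

Applied to the sample above, the Chinese trigger messages would be queued for review while English replies would pass through untouched.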

 

Code Detection: Choose the Source Code Detection condition to monitor groups where sharing source code is not permitted. This gives you visibility into where code is being shared within your collaboration platforms. Most companies have alternative applications where code sharing is preferred. Use Code Detection to notify admins when and where code is shared, review the content, and take action if needed.

Sample Content:

    • [Trigger Message] Private Group/Team - "Sorry to do this...I have to feed my kids lunch (my wife has a meeting). I won't be available for the code review, so I figured I would share my code here, so everyone knows what I did. Let me know if it looks good."
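Aware's Code Detection is a trained model, but the kind of signal it keys on can be sketched with a simple heuristic. The patterns and threshold below are assumptions for illustration, not Aware's actual logic:

```python
import re

# Illustrative heuristic only — Aware's Code Detection is an ML model,
# not a pattern list. This sketch scores a message on common code-like
# signals and flags it once enough of them co-occur.
CODE_SIGNALS = [
    r"\bdef \w+\(",   # Python-style function definition
    r"\breturn\b",    # return statements
    r"[{};]\s*$",     # braces/semicolons ending a line
    r"=\s*\S+\(",     # assignment from a function call
]

def looks_like_code(message: str, threshold: int = 2) -> bool:
    """Flag a message as code when at least `threshold` signals match."""
    score = sum(bool(re.search(p, message, re.MULTILINE)) for p in CODE_SIGNALS)
    return score >= threshold
```

A pasted function body trips multiple signals at once, while ordinary chat (like the apology in the sample above) trips none.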

 

Sentiment: Use Sentiment to gauge the emotion of the conversation. Sentiment is separated into the following categories:

  • Very Positive: Content contains emotion that is extremely happy or excited
  • Positive: Content contains emotion that is generally happy or satisfied, but not extreme
  • Neutral: Content does not contain much positive or negative emotion
  • Negative: Content contains emotion perceived as angry or upsetting, but not extreme
  • Very Negative: Content contains negative emotion that can be perceived as extreme
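Sentiment models typically produce a continuous score that is then bucketed into categories like the five above. The thresholds in this sketch are hypothetical, chosen purely to illustrate the bucketing — they are not Aware's cut-offs:

```python
def sentiment_bucket(score: float) -> str:
    """Map a polarity score in [-1.0, 1.0] to the five sentiment categories.
    Thresholds are illustrative assumptions, not Aware's actual values."""
    if score >= 0.6:
        return "Very Positive"
    if score >= 0.2:
        return "Positive"
    if score > -0.2:
        return "Neutral"
    if score > -0.6:
        return "Negative"
    return "Very Negative"
```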

Toxic Speech: Use Toxic Speech to gauge the toxicity and health of the conversation. Toxic Speech is separated into the following categories:

  • Inappropriate: Contains harsh language or swear words that would be inappropriate for a work environment, such as sexual innuendos or off-color jokes
  • Offensive: Contains inappropriate language directed at a target, such as a person or group
  • Hate: Considered offensive, but also includes language motivated by or suggesting bias against a race, religion, disability, sexual orientation, ethnicity, or gender/gender identity
  • Healthy: Does not contain language in any of the above categories

Sample Content #1: Use the Very Negative/Negative Sentiment and Toxic Speech models to gain immediate visibility into potentially harmful or toxic content that could damage the company's reputation or culture if not addressed. Create a Rule within Signal that looks at Messages and select the Very Negative/Negative Sentiment condition type from the list.

    • [Context message] Public Group/Team - Person 1: "Good Morning Team! We have put together a plan, following State and CDC Guidelines on how we will return to the office. These guidelines were carefully developed to protect every employee and will be mandated…"
    • [Trigger message] Reply in Public Group/Team - Person 2: "Does leadership really think this plan is going to work…what ever happened to involving the people that matter in big decisions? I refuse to come back if everyone on my team is as well. Sorry, but not sorry – Mike has no awareness of personal space and Susan is sick every other day. I will not share common spaces with these people."

Sample Content #2: Create a Rule within Signal that looks at Messages and select Toxic Speech (Hate and Offensive) condition type from the list.

    • [Trigger message - offensive/hate speech] "Aweeee poor little sally didn't get her code right...don't be such a baby. You should have known better; you suck at your job."

Sample Content #3: Create a Rule within Signal that looks at Messages, select a Custom Keyword List (Black Lives Matter) and Sentiment (Very Positive, Positive, Neutral).

    • [Trigger message - positive sentiment/Black Lives Matter custom list] "So excited to join the protests for #BlackLivesMatter. Who is with me!?"

 

NSFW Image Detection: The NSFW (Not Safe for Work) image condition detects images that may contain nudity or sexual content. This can include images such as adults or kids in swimsuits, people wearing suggestive clothing, and pornography. With NSFW you can stay on top of where and how these images are being posted so they can be removed from your platform if deemed not safe for work.

Sample Content:

[Context Message] Public Group/Team - "Hi Team! Let's all welcome Jim back from vacation. He will now be our lead Project Manager, please reach out to him for any questions."

[Trigger Message] Public Group/Team - "Thanks everyone, glad to be back. We had a great time on our beach vacation, look how happy everyone is."

[Attached image: beach vacation photo]

 

Screenshot Detection: Aware's Software Screenshot image condition detects images taken from digital screens, such as a computer or phone. Screenshots are flattened images (.jpg, .png, etc.) that can contain sensitive data that might otherwise go undetected. Sharing a screenshot is not always a problem, but depending on where it is shared, it may be worth surfacing for review by the Community/Team admin.

Examples of software screenshots the condition will detect: browser/OS, office software and applications (Excel, etc.), collaboration platforms, email, social media platforms, and common software components (buttons, icons, etc.)

Sample Content:

[Context Message] Direct Message - "Hey, how did your meeting go about Project X? Any exciting insight you can share?"

[Trigger Message] Direct Message - "Hi, it was awesome! I got to see some pretty top-secret stuff. Checkout this screenshot I took from the meeting. Don't tell anyone I shared it with you."

 

Password Detection: The Password Detection rule trigger in Signal is powered by a machine learning model that identifies messages where a password is being shared within a collaboration platform. Passwords are notoriously difficult to detect in natural language because any word can be a password, depending on a system's requirements.

Aware’s proprietary Password Detection model runs behind the scenes to scan messages for tokens that “look” like passwords typically used in the workplace. Depending on the credentials referenced, a password could be anything from a plain word like grouper to a complex string of characters like p@55'w0rd$$1,f. End users can experiment with the various dials included with Password Detection to determine the optimal “recipe” for their use case.
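Aware's model is ML-based, not a regular expression, but the intuition of a token that "looks" like a password can be sketched with a character-class heuristic. Everything below is an illustrative assumption — note that it would miss plain-word passwords like grouper, which is exactly why an ML model is needed:

```python
import re

# Illustration only — Aware's Password Detection is a proprietary ML model.
# This sketch flags tokens of 8+ characters that mix lowercase letters,
# digits/uppercase, and symbols, i.e. "complex" workplace-style passwords.
# Plain-word passwords (e.g. "grouper") deliberately slip past it.
TOKEN = re.compile(r"\S{8,}")

def looks_like_password(token: str) -> bool:
    has_lower = re.search(r"[a-z]", token)
    has_upper_or_digit = re.search(r"[A-Z0-9]", token)
    has_symbol = re.search(r"[^A-Za-z0-9]", token)
    return bool(has_lower and has_upper_or_digit and has_symbol)

def candidate_passwords(message: str) -> list:
    """Return tokens in the message that look like complex passwords."""
    return [t for t in TOKEN.findall(message) if looks_like_password(t)]
```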


For example, a customer hoping to automate deletion of potential passwords may opt for a more rigid approach that relies on the highest confidence level and the regular expression builder to find the terms most likely to be sensitive passwords the customer must protect.


 

AI and other advanced "AND" combination ideas:

Rule: Heated Political Conversations

      • [Negative Sentiment] AND [Global Political Topics]

Rule: Inappropriate/Offensive Content

      • [Toxic Speech (all)] AND [Negative Sentiment]

Rule: Sexual Orientation Bullying

      • ([Sexual Orientation Keywords] OR [Sexual Orientation Slurs]) AND [Toxic Speech or Negative Sentiment]

Rule: Potential Disgruntled Employee

      • ([Salary Keywords] OR [Layoff/Termination Keywords]) AND [Toxic Speech or Negative Sentiment]

Rule: Physical Appearance Bullying

      • [Physical Appearance] AND [Toxic - Offensive/Hate]

Rule: Intellectual Property Leak - Code

      • [Sensitive Project Keywords] AND [Code Detection]

Rule: Intellectual Property Leak - Screenshot

      • [Sensitive Project Keywords] AND [Software Screenshot]

Rule: Leadership Shaming or Gossip

      • [Executive Name Keywords] AND [Toxic Speech or Negative Sentiment]

Rule: Toxic Conversations Related to Black Lives Matter

      • ([Black Lives Matter Terms] OR [Looting/Rioting/Protest Terms]) AND ([Sentiment - Negative/Very Negative] OR [Toxic - Offensive/Hate])

Rule: Positive Conversation on Black Lives Matter

      • [Black Lives Matter Terms] AND [Sentiment - Very Positive/Positive/Neutral]
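When mixing OR and AND conditions, the intended grouping matters: in a rule like Potential Disgruntled Employee, the keyword lists are OR'd together and the result is AND'd with the tone conditions. The sketch below shows that grouping; the condition names and the `hits` mapping are hypothetical labels for illustration, not Signal identifiers:

```python
# Sketch of how a mixed OR/AND rule such as "Potential Disgruntled Employee"
# groups its conditions: (keyword conditions OR'd) AND (tone conditions OR'd).
# Condition names and the `hits` dict are hypothetical, not Signal's API.
def potential_disgruntled_employee(hits: dict) -> bool:
    keyword_match = hits["salary_keywords"] or hits["layoff_keywords"]
    tone_match = hits["toxic_speech"] or hits["negative_sentiment"]
    return keyword_match and tone_match
```

So a message matching salary keywords only fires the rule when it also carries toxic or negative tone.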