AI, especially GenAI, presents new concerns as older ones linger.

There’s no rest for the weary, especially those weary communication pros responsible for securing their workplace collaboration environments against internal and external threats. Today’s security landscape is constantly evolving, at an ever-increasing pace, as new applications enter the workspace and new features quickly become available for existing apps.

These two trends, now largely driven by the introduction of AI features, especially generative AI, require a rethinking of security and compliance approaches.

Over the years, Metrigy has tracked how companies secure their communications, collaboration, and customer engagement applications and platforms. Unfortunately, as I’ve noted in the past, the results aren’t good. For example, in our last study of 440 companies, published in Q2 2023, just 37% of participants said their company had a structured plan for securing their workplace collaboration (WC) apps. Preliminary data from our 2024 study, scheduled for release in March 2024, show roughly the same result. Despite ever-increasing attacks on collaboration apps, the needle simply isn’t moving!

Attack risks are continuing to grow as well. Consider:

  • Growing deployments of generative AI tools enable rapid creation of content, including meeting transcripts, summaries, and action items, that may need classification, retention, and protection in accordance with compliance and data loss prevention (DLP) requirements (see the sketch after this list).
  • Generative AI language models may use customer data for analysis and contextual responses, underscoring the need to ensure that vendors protect customer information.
  • Generative AI bots, virtual assistants, and copilots query data stored within workplace collaboration applications; organizations must verify that responses are accurate and aren’t the result of poisoned large language models.
  • Generative AI deployed for employee support (e.g., for customer service agents, sales, and customer support) must also deliver accurate responses to limit risk and liability.
  • AI’s potential to improve attack effectiveness via voice impersonation, optimized phishing and social engineering attacks, and new, as-yet-unforeseen attack vectors.
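
To make the first item concrete, here is a minimal sketch of a classification pass over an AI-generated meeting summary, assuming a simple regex-based DLP check. The classify_transcript function, the detection patterns, and the retention values are all hypothetical; a production deployment would rely on the organization’s actual DLP engine, classification taxonomy, and retention schedule.

```python
import re
from dataclasses import dataclass, field

# Hypothetical detection patterns for a minimal DLP pass; a real deployment
# would use the organization's DLP engine and classification taxonomy.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

@dataclass
class ClassifiedArtifact:
    text: str
    matches: dict = field(default_factory=dict)
    label: str = "general"        # classification label driving retention policy
    retention_days: int = 365     # hypothetical default retention period

def classify_transcript(text: str) -> ClassifiedArtifact:
    """Scan an AI-generated transcript or summary and assign a retention label."""
    artifact = ClassifiedArtifact(text=text)
    for name, pattern in PII_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            artifact.matches[name] = found
    if artifact.matches:
        # Sensitive content detected: tighten the label and shorten
        # retention, per a hypothetical compliance policy.
        artifact.label = "confidential"
        artifact.retention_days = 90
    return artifact

summary = "Action item: email jane.doe@example.com the Q3 figures; SSN 123-45-6789 flagged."
result = classify_transcript(summary)
print(result.label, result.retention_days, list(result.matches))
```

Run against a summary containing an email address or Social Security number, this would tag the artifact as confidential and shorten its retention window, illustrating how AI-generated content can be routed into existing compliance controls rather than left unclassified.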

Continue reading at nojitter.com.