With every technological innovation comes a downside; in the world of AI and its generative iteration, KM professionals must strike the balance between enhancing workflows with AI and protecting their enterprise against the risk of open generative AI (GenAI) solutions.

Experts from Pryon joined KMWorld’s webinar, Navigating the Risks of Generative AI for Knowledge Management, to explore how Pryon’s AI knowledge management platform can aid in achieving that risk/reward balance when implementing GenAI in the workplace.

Chris Mahl, president and COO at Pryon, introduced Igor Jablokov, founder and CEO at Pryon, who explained that “while generative AI is transformative in certain public, imaginative, and creative use cases, there [are] certain issues associated with them showing up in more enterprise contexts.”

Upon inspecting these technologies, Jablokov continued, there is an extensive list of dangers, which include:

  • Frequent hallucinations
  • Static training data
  • Problems memorialized in code
  • High risk of data leakage
  • Vendor lock-in
  • Reverse engineering
  • Lack of user access control
  • Prompt injection attacks
  • Supply chain attacks
  • Universal and transferable adversarial attacks
  • Copyright legal gray area
  • Legal constraints
  • Toxic and biased responses
  • Limitations of monolithic LLM architecture
  • Extensive energy consumption
  • Model collapse
  • Model drift
  • Poisoned training data

This laundry list of potential risks in implementing GenAI for proprietary use cases illustrates that while the technology is exciting, it’s not without its drawbacks. Static training data, for instance, limits an LLM to the information available at the time it was trained, so it cannot reliably answer questions about anything more recent. Furthermore, models will not reliably “forget” wrong or outdated information, ultimately reducing overall model performance, according to Jablokov.

Beyond accuracy, GenAI poses unique privacy concerns in enterprise contexts. Leaked sensitive proprietary data, vendors using that data to train an LLM, and masked queries that reveal the content an LLM was trained on can all land an enterprise vying for innovation in hot water.

Enter Pryon, a platform purpose-built to eliminate the risks posed by purely generative approaches. Pryon transforms static content into interactive knowledge, ingesting information of any classification—including public, personal, published, and private content—and eliminating the distance between critical information and those who need it most.

Pryon reduces the risk of hallucinations and bias while employing document-level access controls and a retrieval-augmented generation (RAG) methodology that does not require training models on user queries. Enterprises have full control over the knowledge set available to the models, as well as access to various tools that enable additional control over responses.
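Pryon’s implementation is proprietary, but the general pattern of document-level access control in a RAG pipeline can be illustrated with a minimal sketch. Everything here is hypothetical (the `Document` and `SecureRetriever` names, the role-based permission model, and the keyword-overlap scoring stand in for a real retriever and embedding search); the point is that permission filtering happens before retrieval, so content a user cannot see never reaches the model’s context window:

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    """A piece of ingested content with its access policy."""
    doc_id: str
    text: str
    allowed_roles: set  # roles permitted to see this document


@dataclass
class SecureRetriever:
    """Toy retriever enforcing document-level access control."""
    docs: list = field(default_factory=list)

    def add(self, doc: Document) -> None:
        self.docs.append(doc)

    def retrieve(self, query: str, user_roles: set, top_k: int = 2) -> list:
        # 1. Access control first: drop every document the user's
        #    roles do not grant access to, before any relevance scoring.
        visible = [d for d in self.docs if d.allowed_roles & user_roles]

        # 2. Score the remaining documents (keyword overlap here,
        #    standing in for vector similarity in a real system).
        q_terms = set(query.lower().split())
        scored = [
            (len(q_terms & set(d.text.lower().split())), d) for d in visible
        ]

        # 3. Return only documents with nonzero relevance, best first.
        scored = [(s, d) for s, d in scored if s > 0]
        scored.sort(key=lambda sd: sd[0], reverse=True)
        return [d for _, d in scored[:top_k]]


retriever = SecureRetriever()
retriever.add(Document("d1", "quarterly revenue report for finance", {"finance"}))
retriever.add(Document("d2", "engineering onboarding guide", {"engineering", "hr"}))

# An engineer asking about finance content gets nothing back:
# the finance document is filtered out before scoring even runs.
hits = retriever.retrieve("quarterly revenue report", {"engineering"})
```

Because the filter runs at retrieval time rather than at training time, revoking a role or reclassifying a document takes effect immediately, and user queries never need to be folded into model weights.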
