Enterprise Conversation Solutions
About JimmyLiao
  • Full-stack engineer
  • Productizing Mobile / App / GenAI solutions

Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
Agenda
  1. Conversational Solution Challenges
  2. RAG with GCP PaLM 2 / Gemini Pro
  3. Why Content Moderation matters (NeMo-GuardRails)
  4. Evaluation
Conversational Solution Challenges
The day before ChatGPT
How to build a chatbot with Dialogflow
  • Agent / Intent / Fulfillment
Building this required ML/DL/NLP skills in the past
What we have now
Challenges
  • Chat with Your Data
  • Public / Private dataset
  • Hallucination
  • Content Moderation / Safety
RAG with GCP Vertex AI
First Impression
  • A few lines of code (or cURL), as sketched below
  • Let's see the response payload
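For illustration, a minimal sketch of that first call using the Vertex AI Python SDK; the project ID, region, and prompt are placeholder assumptions, not values from the slides:

```python
# Minimal sketch: call Gemini Pro on Vertex AI and inspect the response payload.
# PROJECT_ID and LOCATION below are placeholders -- substitute your own values.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")  # assumed values

model = GenerativeModel("gemini-pro")
response = model.generate_content("Say hello in one sentence.")

# The payload carries the generated text plus per-candidate safety ratings.
print(response.text)
print(response.candidates[0].safety_ratings)
```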
RAG with PaLM 2 / Gemini Pro
Simple chat to validate Gemini Pro API (LlamaIndex syntax)
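A hedged sketch of that validation call, assuming the llama-index-llms-gemini integration and a GOOGLE_API_KEY in the environment:

```python
# Simple chat to validate the Gemini Pro API through LlamaIndex.
# Assumes `pip install llama-index-llms-gemini` and GOOGLE_API_KEY is set.
from llama_index.llms.gemini import Gemini

llm = Gemini()  # defaults to a Gemini Pro model; pass a model name to override
response = llm.complete("Reply with a one-sentence greeting.")
print(response.text)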
RAG with PaLM 2 / Gemini Pro
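To extend the simple chat into RAG, a hedged sketch assuming local documents under ./data and the Gemini embedding integration (package and model names are assumptions):

```python
# Minimal RAG sketch with LlamaIndex + Gemini: index local docs, then query them.
# Assumes ./data holds the documents and GOOGLE_API_KEY is in the environment.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.gemini import Gemini
from llama_index.embeddings.gemini import GeminiEmbedding

Settings.llm = Gemini()                                                  # assumed default model
Settings.embed_model = GeminiEmbedding(model_name="models/embedding-001")  # assumed model name

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("Summarize the key points of these documents."))
```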
Content Moderation (Safety)
Safety Settings
The harm categories are the keys:
  • HARM_CATEGORY_DANGEROUS_CONTENT
  • HARM_CATEGORY_HATE_SPEECH
  • HARM_CATEGORY_HARASSMENT
  • HARM_CATEGORY_SEXUALLY_EXPLICIT
Each category maps to one of the HarmBlockThreshold values:
  • HARM_BLOCK_THRESHOLD_UNSPECIFIED
  • BLOCK_LOW_AND_ABOVE
  • BLOCK_MEDIUM_AND_ABOVE
  • BLOCK_ONLY_HIGH
  • BLOCK_NONE
  • Code snippet (see the sketch below)
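A hedged sketch of passing safety settings with the Vertex AI SDK; the specific category/threshold pairing is illustrative only:

```python
# Example: tighten hate-speech filtering while relaxing dangerous-content blocking.
# Assumes vertexai.init(project=..., location=...) was already called (see first snippet).
from vertexai.generative_models import (
    GenerativeModel,
    HarmCategory,
    HarmBlockThreshold,
)

model = GenerativeModel("gemini-pro")
response = model.generate_content(
    "Tell me about content moderation.",
    safety_settings={
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)
print(response.candidates[0].safety_ratings)
```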
Content Safety config/behavior
NeMo-GuardRails
LLMRails.init(LLM, Flow)
  • Which LLM to use
  • Colang script defining the rails and the dialogue flows
Then initialize LLMRails with these two configs, as sketched below
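A minimal sketch, assuming a ./config folder that holds the model config (config.yml) and the Colang (*.co) flow files:

```python
# Initialize NeMo Guardrails from a config folder (config.yml + *.co Colang files).
# The ./config path is an assumption about the project layout.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# Ask a question through the rails; guarded flows run around the LLM call.
response = rails.generate(messages=[{"role": "user", "content": "Hello!"}])
print(response["content"])
```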
user or bot msg → semantic vector space
Use Case:
Blocklist
Let's see the code
moderation.co
actions.py
block_list.txt
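A hedged sketch of the actions.py side, assuming block_list.txt holds one blocked term per line and that a flow in moderation.co executes this action on the bot message:

```python
# actions.py -- custom NeMo Guardrails action that checks the bot message
# against a blocklist file. File name and flow wiring are assumptions.
from typing import Optional

from nemoguardrails.actions import action


@action()
async def check_blocked_terms(context: Optional[dict] = None) -> bool:
    bot_message = (context or {}).get("bot_message", "")
    with open("block_list.txt", encoding="utf-8") as f:
        blocked_terms = [line.strip().lower() for line in f if line.strip()]
    # Return True when any blocked term appears, so the Colang flow can stop the reply.
    return any(term in bot_message.lower() for term in blocked_terms)
```

On the Colang side, the moderation.co flow would execute check_blocked_terms on the bot message and stop the response whenever it returns True.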
Ground (chit-chat) vs moderation
Evaluation
LLM Vulnerability Scanning
Scan Results
  1. bare_llm: no protection
  2. with_gi: using the general instructions in the prompt
  3. with_gi_dr: using the dialogue rails in addition to the general instructions
  4. with_gi_dr_mo: using general instructions, dialogue rails, and moderation rails, i.e., input/output LLM self-checking
Scan Results
  • Garak: an open-source tool for scanning against the most common LLM vulnerabilities.
Questions?
Next topic pool
Validation / Robustness / Self-Service around LLM