Audience
This documentation describes the features of the Live Chat Sentiment Analysis Accelerator for B2C Service and provides the configuration steps required to implement it.
Software Requirements
This accelerator uses various software products, cloud applications, and native applications to implement the end-to-end solution. To obtain the licenses or environments for Oracle software, work with the applicable Oracle Applications team supporting your organization.
Third-party Dependencies
The following table lists the third-party software libraries and their versions.
User Flow
When an end user initiates a chat with an agent, the Live Chat Sentiment Analysis Accelerator flow is initiated. This accelerator does not currently support real-time live updates: because it uses asynchronous Custom Process Model (CPM) updates for the prediction flow, the supervisor can expect a delay of a few seconds to a minute before the prediction evaluation results appear in the Monitoring Report.
Extension Flow
This flow captures "prediction of emotion" and "supervisor ask" for every new chat message.
CPM Flow
This flow describes the steps that occur when a row is inserted into the ChatAIPredictionInfo custom object, which invokes the Custom Process Model (CPM) to evaluate the prediction results saved for each chat message.
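The shipped CPM is a PHP script, but its evaluation step can be sketched in Python. The field names, the confidence threshold, and the decision rule below are illustrative assumptions, not the accelerator's actual schema or logic.

```python
# Hypothetical sketch of the CPM evaluation step: given the prediction
# rows saved for a chat's messages, decide whether the chat should be
# flagged for supervisor attention. Field names ("emotion",
# "confidence", "supervisor_ask") and the 0.6 cutoff are assumptions.

NEGATIVE_THRESHOLD = 0.6  # assumed confidence cutoff

def evaluate_chat(prediction_rows):
    """Flag the chat if any message is confidently negative or the
    model predicted that the end user asked for a supervisor."""
    for row in prediction_rows:
        if row["emotion"] == "negative" and row["confidence"] >= NEGATIVE_THRESHOLD:
            return True
        if row.get("supervisor_ask"):
            return True
    return False

rows = [
    {"emotion": "neutral", "confidence": 0.90, "supervisor_ask": False},
    {"emotion": "negative", "confidence": 0.72, "supervisor_ask": False},
]
print(evaluate_chat(rows))  # True: the second message is confidently negative
```

A real CPM would read these rows from ChatAIPredictionInfo and write the flag back via the Connect PHP API; only the decision logic is shown here.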
Feedback Flow
The feedback flow allows supervisors to correct any wrong predictions made by the model. The feedback job running in OCI Data Science pulls this corrected data, combines the delta with the current training data, and trains a new model. This ensures continuous learning and improvement of the model.
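The data-preparation step of the feedback job can be sketched as a merge in which the supervisor's corrected label overrides the original label for the same message. The record shape and key name below are assumptions for illustration, not the job's actual format.

```python
# Assumed sketch of merging supervisor corrections (the "delta") into
# the current training set before retraining. A correction with the
# same message_id replaces the original row; the "message_id"/"text"/
# "label" fields are hypothetical.

def merge_feedback(training_data, corrections):
    """Return training_data with corrections applied; on a conflicting
    message_id, the supervisor's corrected row wins."""
    merged = {row["message_id"]: row for row in training_data}
    for row in corrections:
        merged[row["message_id"]] = row  # supervisor's label overrides
    return list(merged.values())

base = [{"message_id": 1, "text": "this is taking forever", "label": "neutral"}]
delta = [{"message_id": 1, "text": "this is taking forever", "label": "negative"}]
print(merge_feedback(base, delta))
```

In the real flow this merged set would be written back to object storage as the new training data for the OCI Data Science job.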
OCI Deployment Architecture
The following diagram shows the network architecture of the OCI Language Service model deployed in the customer tenancy. The data used to train the model is stored in object storage, and access to the object storage by the OCI Data Science job is regulated via resource principal. The custom model is deployed in a private subnet of the Language Service, and access to the model is exposed via OCI Language model endpoints.
Configure SSO for Site
You need an RSA certificate to configure the SSO user for the ingestion job. This job pulls data from the B2C Service Cloud and stores it in object storage for training the ML model.
Create a Stack from Zip File
A stack is a Terraform configuration that you can use to provision and manage your OCI resources.
Configure External Objects
You must use external objects to configure connections to the OCI model endpoints. Separate connections are required for the sentiment model and the supervisor ask model.
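Once a connection is configured, the CPM consumes the model endpoint's JSON response. The payload shape below is purely an assumption for illustration; check the actual response contract of your deployed Language model endpoint.

```python
# Hedged sketch of parsing a sentiment model endpoint response.
# The {"prediction": {"label": ..., "score": ...}} shape is an assumed
# example, not the documented OCI Language response format.
import json

def extract_sentiment(response_body):
    """Return (label, score) from an assumed JSON response payload."""
    doc = json.loads(response_body)
    pred = doc["prediction"]  # hypothetical field name
    return pred["label"], pred["score"]

sample = json.dumps({"prediction": {"label": "negative", "score": 0.81}})
label, score = extract_sentiment(sample)
print(label, score)  # negative 0.81
```

The supervisor ask model would be consumed the same way through its own external-object connection, with its own response fields.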
Add Custom Configuration
Create a new configuration setting to define site-specific custom settings for the CPM configurations that invoke machine learning predictions.
Add Reports
Supervisors and administrators can use reports to monitor live chat sentiment and get the sentiment details of wrapped-up chats. Two report definitions are packaged as part of the sample code. You can create new reports using the .NET console.
Install CPM File
The CPM script allows you to consume machine learning predictions. You must update the test contact ID in the CPM file.
Add Extensions
You can import Agent Browser UI Extensibility artifacts into Oracle B2C Service and create extensions using the Extension Manager.
Assign Workspace to Staff Profile
You must assign the workspace for the ChatAIPredictionInfo custom object to the admin user who will review the predicted values and provide feedback.
Debug CPM Logs
You can use Process Designer Custom Process Models (CPMs) to associate PHP scripts with object events. This accelerator uses a CPM to call the CX REST APIs, retrieve the chat messages, and evaluate whether a chat carries negative sentiment.
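The REST call the CPM makes can be sketched as a query URL against the site's CX REST API. The interface version, the object name in the ROQL query, and the host are illustrative assumptions; only URL construction is shown so the sketch runs offline, and the actual CPM issues this request in PHP with site credentials.

```python
# Hedged sketch of building a CX REST API queryResults URL for a
# chat's messages. The "v1.4" version and the "ChatMessages" ROQL
# object are assumptions; substitute your site's actual interface
# version and schema.
from urllib.parse import urlencode

def chat_messages_url(site_host, chat_id, version="v1.4"):
    """Build a queryResults URL with a ROQL query for one chat."""
    query = f"SELECT Body FROM ChatMessages WHERE ChatId = {int(chat_id)}"
    return (f"https://{site_host}/services/rest/connect/{version}"
            f"/queryResults?{urlencode({'query': query})}")

print(chat_messages_url("mysite.example.com", 42))
```

Comparing the URL logged by the CPM against the one you expect is a quick way to localize failures when debugging the CPM logs.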
Debug Extension Logs
Extension logs record each extension invocation; any handler failure, if present, appears in the Message column.
Debug Terraform Deployment Logs
The Terraform module for OCI Logging is used to create logs and log groups for OCI services and custom logs. These logs help you identify and debug failures or errors that occur during a Terraform run.