Accelerator Integration Architecture

The Live Chat Sentiment Analysis accelerator uses AI/ML capabilities to predict the “text emotion” and “supervisor ask” for every chat text posted by the end user. All predictions are saved into custom objects for reporting purposes.

The accelerator evaluates the chat text predictions and flags chats so that supervisors can monitor negative emotions. An async CPM processes the evaluation of the prediction results; because the CPM runs asynchronously, there may be a delay between sending a chat text and the chat being evaluated and flagged. The async CPM evaluates the prediction result of each text and flags the chat when a “supervisor ask” or “negative emotion” pattern is identified.
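The flagging rule itself is simple, as the sketch below illustrates in TypeScript. The names PredictionRecord, NEGATIVE_EMOTIONS, and flagChatForSupervisor are hypothetical, and in the accelerator the equivalent decision is made server-side by the async CPM rather than in the browser.

```typescript
// Hypothetical shape of a stored prediction record; field names are illustrative only.
interface PredictionRecord {
  chatId: number;
  text: string;
  emotion: string;        // e.g. "Positive" | "Neutral" | "Negative"
  supervisorAsk: boolean; // true when the model predicts an ask for a supervisor
}

// Emotions that should flag the chat; the real label set depends on the trained model.
const NEGATIVE_EMOTIONS = new Set(["Negative", "Angry", "Frustrated"]);

// Decide whether a single prediction result means the chat needs supervisor attention.
function needsSupervisorAttention(prediction: PredictionRecord): boolean {
  return prediction.supervisorAsk || NEGATIVE_EMOTIONS.has(prediction.emotion);
}

// Hypothetical flagging step: the CPM would update the chat or custom object so that
// supervisor views and reports can pick the chat up.
function flagChatForSupervisor(chatId: number): void {
  console.log(`Chat ${chatId} flagged for supervisor review`);
}

// Evaluate one prediction result, as the async CPM does for each chat text.
function evaluatePrediction(prediction: PredictionRecord): void {
  if (needsSupervisorAttention(prediction)) {
    flagChatForSupervisor(prediction.chatId);
  }
}
```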

The following architecture diagram depicts the extension and CPM flow:

Here’s a description of the components shown in the architecture:

  • Agent Browser UI extension: An extension is TypeScript code used to customize behavior in the Agent Browser UI. The extension code calls the custom models to get predictions for emotion and supervisor ask, and stores the prediction result in a custom object (see the first sketch after this list).
  • External Objects: External objects can connect to OCI endpoints. The accelerator uses this capability to communicate with the custom models deployed in the OCI Language Service.
  • CPM: The async CPM processes and evaluates the prediction results and identifies whether a chat has a negative emotion or an ask for a supervisor (see the evaluation sketch above).
  • OCI Data Science Job: The OCI Data Science job pulls the data used to train the custom models that predict emotion and supervisor ask, and stores it in an OCI Object Storage bucket. The job then trains the custom models using the ingested data (see the ingestion sketch after this list).
  • OCI Language Service: The OCI Language Service trains custom models for text classification. The data ingested by the OCI Data Science job is used to train the custom models. The OCI Language Service exposes these trained custom models as endpoints, which are invoked through external objects.
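To make the extension, external object, and endpoint bullets concrete, the sketch below shows in simplified TypeScript how an extension might send a chat text to a prediction endpoint and persist the result. The URLs, the custom object name (SentimentPrediction), and the field names are assumptions for illustration only; in the accelerator the model is reached through an external object connected to the OCI Language custom model endpoint, and the result is written to the accelerator's own custom object.

```typescript
// Hypothetical response shape returned by the custom text-classification models.
interface ModelPrediction {
  emotion: string;
  supervisorAsk: boolean;
}

// Placeholder endpoint; the accelerator routes this call through an external object
// that connects to the OCI Language custom model endpoint.
const PREDICT_URL = "https://example.com/predict";

// Hypothetical REST path for the custom object that stores prediction results.
const PREDICTION_OBJECT_URL = "https://example.com/api/SentimentPrediction";

// Call the prediction endpoint for a single chat text.
async function predictChatText(chatText: string): Promise<ModelPrediction> {
  const response = await fetch(PREDICT_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: chatText }),
  });
  if (!response.ok) {
    throw new Error(`Prediction request failed: ${response.status}`);
  }
  return (await response.json()) as ModelPrediction;
}

// Persist the prediction into a custom object so that reports and the async CPM can use it.
async function savePrediction(
  chatId: number,
  text: string,
  prediction: ModelPrediction
): Promise<void> {
  await fetch(PREDICTION_OBJECT_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      ChatId: chatId,
      ChatText: text,
      Emotion: prediction.emotion,
      SupervisorAsk: prediction.supervisorAsk,
    }),
  });
}

// Flow performed for each chat text posted by the end user.
async function handleChatText(chatId: number, chatText: string): Promise<void> {
  const prediction = await predictChatText(chatText);
  await savePrediction(chatId, chatText, prediction);
}
```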
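For the training side (the OCI Data Science Job and OCI Language Service bullets), the sketch below only illustrates the shape of the ingestion work: collecting labeled chat texts and serializing them for upload to the bucket that the custom models are trained from. The record shape, labels, and uploadToBucket helper are hypothetical; the actual job would use the OCI SDK and whatever training-data format the OCI Language custom text classification expects.

```typescript
// Hypothetical labeled training record assembled by the data ingestion job.
interface TrainingRecord {
  text: string;
  emotionLabel: string;       // e.g. "Negative"
  supervisorAskLabel: string; // e.g. "Yes" | "No"
}

// Serialize labeled records into a simple CSV; the real job must produce the format
// required by OCI Language custom text classification training.
function toCsv(records: TrainingRecord[]): string {
  const header = "text,emotion,supervisor_ask";
  const escape = (value: string) => `"${value.replace(/"/g, '""')}"`;
  const rows = records.map((r) =>
    [escape(r.text), escape(r.emotionLabel), escape(r.supervisorAskLabel)].join(",")
  );
  return [header, ...rows].join("\n");
}

// Placeholder for the upload step; the actual job would write the file to an
// OCI Object Storage bucket using the OCI SDK, and the OCI Language Service would
// train the custom models from that bucket.
async function uploadToBucket(objectName: string, content: string): Promise<void> {
  console.log(`Would upload ${content.length} bytes as ${objectName}`);
}

// End-to-end ingestion step run by the hypothetical job.
async function runIngestionJob(records: TrainingRecord[]): Promise<void> {
  await uploadToBucket("chat-sentiment-training.csv", toCsv(records));
}
```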