Quick Summary

Problem:
Customers want to find specific data points we don't support out of the box

Approach:
Leverage previous user research and work closely with our in-house Legal Knowledge Engineering team, who are a high-volume user group

Solution:
A document-viewer-driven product that lets users highlight the text they want their AI field to find and quickly see AI-generated suggestions, so they can adjust their strategy as they train.

Approach

We had a large body of user research that Sasha Vtyurina had been building with users who trained fields with Kira’s Quick Study feature, as well as research I had done myself with Kira users about sharing fields between clients and companies.

The user base for this training product would be different from Kira’s Quick Study audience, since Zuva's products are not targeted at law firms. Our users were more likely to be in-house counsel at corporations, or companies building legal software of their own with in-house or contracted subject matter experts who may not be lawyers at all.

Problems identified in the research:

  1. Users need to see feedback earlier in the process so they can assess whether their training strategy is effective
  2. When training multiple fields at once, users want to easily add annotations for a different field when they chance upon relevant text while training another
  3. Sharing documents between training projects would reduce duplicated set-up effort, such as re-uploading documents, and shorten the wait time to start a project
  4. Using trained fields with the API needs to be really straightforward (see the sketch below)
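
To make the last point concrete, the sketch below shows roughly the level of simplicity we were aiming for when a trained field is used over the API. It is a minimal illustration only: the base URL, endpoint paths, token, and response shape are hypothetical placeholders, not Zuva's documented API.

# Minimal sketch of "straightforward" API usage of a trained field.
# All endpoints, tokens, and response keys below are hypothetical placeholders.
import requests

API_BASE = "https://example-zuva-region.example/api"  # hypothetical base URL
HEADERS = {"Authorization": "Bearer org-token"}       # placeholder organization token

def extract_field(document_path: str, field_id: str) -> list[str]:
    """Upload a document and return the text snippets the trained AI field found."""
    # 1. Upload the document (hypothetical endpoint).
    with open(document_path, "rb") as f:
        upload = requests.post(f"{API_BASE}/files", headers=HEADERS, data=f).json()

    # 2. Request extraction for the trained field (hypothetical endpoint).
    job = requests.post(
        f"{API_BASE}/extractions",
        headers=HEADERS,
        json={"file_id": upload["file_id"], "field_ids": [field_id]},
    ).json()

    # 3. Read back the extracted snippets (polling omitted for brevity).
    results = requests.get(f"{API_BASE}/extractions/{job['job_id']}", headers=HEADERS).json()
    return [snippet["text"] for snippet in results.get("snippets", [])]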

Isolated view of the highlighting and suggestions steps from the rough user flow

Solution

Users can train their own AI Fields by creating the field, uploading documents, and manually highlighting annotations. When the user is happy with their highlights, they can queue the field for training on that individual document. Each time 10 documents accumulate in the queue, training runs automatically and generates AI-driven suggestions on the user's next document, letting them test the efficacy of their AI field as they complete the work. The suggestions become more helpful as more documents are trained, and the goal is for the user to eventually just start accepting the AI suggestions; at that point they can be reasonably confident that their AI field is finding what they expect it to find and is ready to be integrated into their own system.
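
The training cadence described above (manual highlights first, automatic training after every tenth queued document, then suggestions on later documents) can be summarised in a short conceptual sketch. The class, method names, and batching logic below are illustrative only and are not the product's actual implementation.

# Conceptual sketch of the training cadence: annotations are queued per
# document, training runs automatically after every 10 queued documents,
# and later documents receive AI suggestions to accept or reject.
BATCH_SIZE = 10

class FieldTrainer:
    def __init__(self, field_name: str):
        self.field_name = field_name
        self.queued_docs: list[dict] = []
        self.trained_batches = 0

    def queue_annotation(self, document_id: str, highlighted_text: str) -> None:
        """User highlights text and queues the document for training."""
        self.queued_docs.append({"doc": document_id, "text": highlighted_text})
        if len(self.queued_docs) >= BATCH_SIZE:
            self._train_batch()

    def _train_batch(self) -> None:
        """Automatic training kicks in once a full batch is queued."""
        self.trained_batches += 1
        self.queued_docs.clear()

    def suggest(self, document_text: str) -> str | None:
        """Return a suggestion for the next document, once training has run."""
        if self.trained_batches == 0:
            return None  # no model yet: the user annotates manually
        # Placeholder for the real model call; the user accepts or rejects the result.
        return f"suggested snippet for '{self.field_name}'"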

A user has highlighted text and an "Add to Existing Field" pop-up has opened

Suggestion generated for user to review and accept or reject

When the user deems the field ready, they simply select it and publish it to the region of their choice. It is automatically published to their organization and becomes available for use with the organization’s tokens. They can continue to train the AI field until they are ready to update it; when an update is ready, they simply publish the field again.
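
A rough sketch of what the publish and re-publish step could look like from the integrator's side, assuming a simple REST-style call. The endpoint, token, and versioning behaviour shown here are hypothetical placeholders rather than Zuva's documented API.

# Hypothetical sketch of the publish/re-publish flow: publishing a trained
# field to a chosen region makes it available to the whole organization via
# its tokens, and re-publishing rolls out further training as an update.
import requests

API_BASE = "https://example-zuva-region.example/api"  # hypothetical, region-specific
HEADERS = {"Authorization": "Bearer org-token"}       # placeholder organization token

def publish_field(field_id: str, region: str) -> str:
    """Publish (or re-publish) a trained field to a region for the whole organization."""
    response = requests.post(
        f"{API_BASE}/fields/{field_id}/publish",  # hypothetical endpoint
        headers=HEADERS,
        json={"region": region},
    )
    response.raise_for_status()
    # Assumed for illustration: each publish returns a new version identifier.
    return response.json()["version"]

if __name__ == "__main__":
    # Initial release, then an update after further training on more documents.
    first_version = publish_field("my-custom-field", region="us")
    updated_version = publish_field("my-custom-field", region="us")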


Screen sequence showing field being published

Results

Our internal Legal Knowledge Engineering team continues to be the primary user of this application. They use the product to train AI fields on hundreds of documents and find the training quick and easy to manage at the volume of fields they typically work on. Although customers frequently mention training their own fields as a value-add, they typically only explore it when absolutely necessary and with a very clear project in mind. Since we offer more than 1,300 AI Fields out of the box, there isn’t much need to train custom AI fields often. The feedback we have received from customers is largely positive, with users citing the difficulty of gathering enough documents to train on as one of the biggest hurdles.


The marketing video released for AI Trainer