Understanding Intella Assist and LLM Integration

Intella Assist (IA) is an advanced feature of Intella designed to enhance your workflow by integrating several third-party Large Language Models (LLMs) and using them to supply results.


Intella Assist (IA) requires an active internet connection to work correctly. When IA is used, data is exchanged with the chosen LLM to produce the results that the IA feature supplies.

This document explains:
  • When Intella Assist (IA) is active, depending on whether an LLM service is configured
  • What data is transmitted when an LLM is active in IA and a query is performed
  • How you can manage these settings and control whether data is sent to the LLM

What Happens If an LLM Service Is Configured?

For Connect and Investigator, when an LLM service is correctly configured and the user is granted the "Can use Intella Assist" permission, IA becomes active. Similarly, in the desktop version, IA is activated when an LLM is correctly configured in the preferences. Once IA is active and a user interacts with it, the following occurs:

  • Data is sent to the configured LLM service provider.

  • A confirmation window will be shown to notify users.

What Happens If No LLM Service Is Configured?

If no LLM service is configured, Intella Assist is effectively disabled. This means:

  • No data is sent anywhere.

  • The features of Intella Assist are not available for use inside Intella.

This configuration is recommended when maximum data privacy is required, or when sending data outside the organization is not allowed.

Data Transmission Details

When Intella Assist is enabled, the following data is transmitted:

Intella Assist Facet

  •  User prompt: The query or input provided by the user
  •  Additional instructions: Predefined instructions generated by Intella describing how queries should be constructed, plus general guidelines

Intella Assist Previewer

For example, when the reviewer submits the query “Please summarize this item.”, the following is transmitted:

  • User prompt: The initial query or input from the user. 

  • Item data & metadata: The previewed item’s content (including raw data) and metadata (properties, headers) 

  • Additional instructions: Predefined instructions and guidelines generated by Intella
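To make the composition of a Previewer request more concrete, here is a minimal sketch of how these pieces could be assembled into a single prompt. The function name and the exact ordering of sections are illustrative assumptions; only the `===USER_PROMPT===`-style delimiters and the three data categories are taken from the prompt-log excerpts shown below.

```python
def build_previewer_prompt(user_prompt, item_content, item_properties, instructions):
    """Illustrative sketch of a Previewer prompt payload.

    The delimiter style mirrors the ===USER_PROMPT=== markers visible in
    Intella's prompt logs; the section names and order are assumptions,
    not Intella's actual wire format.
    """
    parts = [
        instructions,  # predefined instructions and guidelines generated by Intella
        "===USER_PROMPT===\n" + user_prompt + "\n===USER_PROMPT===",
        "===ITEM_PROPERTIES===\n" + item_properties + "\n===ITEM_PROPERTIES===",
        "===ITEM_CONTENT===\n" + item_content + "\n===ITEM_CONTENT===",
    ]
    return "\n\n".join(parts)

prompt = build_previewer_prompt(
    user_prompt="Please summarize this item.",
    item_content="Hi team, please find the quarterly figures attached...",
    item_properties="From: alice@example.com\nSubject: Q3 figures",
    instructions="You are assisting an eDiscovery reviewer.",
)
```

The key point of the sketch is that everything the reviewer can see in the Previewer (content, raw data, and metadata) travels with the query, which is why the prompt logs discussed next are the authoritative record of what left the system.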

Ensuring Transparency 

To maintain transparency and help you form your own views of the data sent, we recommend setting up a demo case with non-sensitive data. By doing so, you can review the data being sent out by examining the prompt log files located in [CASE]/logs/prompts. This allows you to see exactly what information is being transmitted.
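If you want to script this inspection rather than open each file by hand, a small helper like the following can fetch the newest prompt log from a case. The case path is a placeholder you must substitute; only the `[CASE]/logs/prompts` location comes from the documentation above.

```python
from pathlib import Path

def latest_prompt_log(case_dir):
    """Return the text of the newest file in [CASE]/logs/prompts, or None.

    case_dir is your case directory; the logs/prompts subfolder is where
    Intella writes one file per Intella Assist prompt.
    """
    prompt_dir = Path(case_dir) / "logs" / "prompts"
    if not prompt_dir.is_dir():
        return None  # no prompts logged yet, or wrong case path
    logs = sorted(prompt_dir.iterdir(), key=lambda p: p.stat().st_mtime)
    return logs[-1].read_text(encoding="utf-8") if logs else None

# Hypothetical demo-case path; replace with your own case location.
text = latest_prompt_log("/cases/demo-case")
if text is None:
    print("No prompt logs found - run an Intella Assist query first.")
else:
    print(text)
```

Running an Intella Assist query in the demo case and then printing the newest log lets you verify, line by line, exactly what was transmitted.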

Here is an example of a basic query from the Facet Search and Previewer panes and what information is being transmitted outside of Intella.  

“Find me all hits with the keyword look.” 

Predefined guidelines appended by Intella:

Note: This is a brief excerpt from the complete file.


1. User Request:

  |||

  find me all hits with the keyword "look"

  |||


  2. JSON schema definition:

  '''json

  {"type":"object","definitions":{"query":{"type":"object","description":"Represents a facet query. There should be at least one search query. For limiting results a \u0027required\u0027 property should be used. For excluding results \u0027excluded\u0027 property should be used.



“What is the IP address of where this email was sent from?”

Predefined guidelines appended by Intella:

Note: This is a brief excerpt from the complete file.


===USER_PROMPT===

What is the IP address of where this email was sent from?

===USER_PROMPT===


IMPORTANT:

- Please ensure to maintain the original formatting when answering as closely as possible, reflecting the structure and layout of the provided content in the sense of preserving new lines, email reply indicators (e.g., '>'), indentation...

- When user requests a summary, limit your response below the original content length (which is 2119 words) but never more then 128 words. You should focus primarily on item's content (enclosed in ===ITEM_CONTENT=== - ignoring ITEM_PROPERTIES and ITEM_RAW_DATA) unless otherwise specified. Keep summaries concise and clear.


Managing LLM Integration

How to Remove an LLM Service or Disable Intella Assist

If you need to remove an LLM service or turn off Intella Assist, adjust the configuration settings and permissions in your system as follows:

  1. Access System Settings/Preferences: Navigate to the Intella Assist tab where LLM services are managed. 

  2. Remove the LLM: Under “Service”, select “None (Intella Assist is disabled)” and click “Apply”.

  3. Adjust Permissions (Connect & Investigator): Ensure the "Can use Intella Assist" permission is not assigned if you want to disable the feature.