
OpenText eDiscovery Aviator

What if eDiscovery took days, not months? Build strategy faster.


Learn how OpenText™ eDiscovery AI improves case strategy and service excellence

5 ways to transform eDiscovery and investigations with GenAI

GenAI doesn't just speed up document review. It fundamentally changes how legal teams uncover evidence, assess risk, and build winning case strategies. Here's how OpenText eDiscovery Aviator empowers you to work smarter, faster, and more strategically.

eDiscovery Aviator key document summaries workspace

1. Accelerate case strategy

Gain a strategic advantage with AI to identify and summarize key evidence for faster early case assessment (ECA) and strategy formulation.

eDiscovery Aviator workspace rapid exploration

2. Uncover critical evidence

Zero in on key documents and critical issues faster with AI to surface hidden information across massive datasets for proactive, data-driven decisions.

eDiscovery Aviator review workspace showing document summarization

3. Cut through complexity

Transform dense documents into clear, plain-language summaries to accelerate case knowledge and effortlessly share insights with clients and colleagues.

eDiscovery Aviator review dashboard

4. Classify documents in minutes

Upload your review memo and let the model rapidly sort responsive from non-responsive documents with built-in testing, sampling, and token estimates.

eDiscovery Aviator agentic AI workspace

5. Let AI agents do the work

Stop managing tasks across disconnected tools. A built-in AI command center lets you deploy intelligent agents that reveal data insights and create high-value work product in record time.

Winning case strategies built on smarter AI insights

Legal teams using eDiscovery GenAI see real results in speed, cost, and quality.

  • 75%
    cost savings compared to manual document review
    Document review can consume 75%+ of discovery costs. Aviator review delivers the same quality for significantly less.
  • 88%
    faster document review
    GenAI automation cuts review time from months or weeks to days or hours—giving you a strategic edge in litigation and investigations.
  • 90%+
    recall with optimized workflows
    Exceptional review quality through expert prompt engineering and proven workflows—built on a decade of OpenText™ leadership in TAR.
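For context, recall is the standard measure of review completeness: of all the truly responsive documents in a corpus, the fraction the workflow actually surfaces. A minimal illustration (the counts below are hypothetical, not OpenText benchmark data):

```python
def recall(true_positives: int, false_negatives: int) -> float:
    """Recall = responsive documents found / all responsive documents."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical review: 1,000 truly responsive documents in the corpus,
# of which the AI-assisted workflow surfaces 920.
print(f"{recall(920, 80):.0%}")  # -> 92%
```

A 90%+ recall target means the workflow is tuned to miss fewer than 1 in 10 responsive documents, which is why prompt engineering and validated workflows matter as much as the model itself.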

How to get started with OpenText eDiscovery Aviator

  • Flexible deployment options
    What to do: Choose eDiscovery Aviator on-demand, multi-matter subscription, or private cloud options.
    Why now: Cloud-native Aviator integrates seamlessly into your workflows.
    Resources: View deployment options
  • Explore Aviator capabilities
    What to do: From key document summary to Aviator review.
    Why now: Uncover efficiency when and where you need it most.
    Resources: See the demo
  • Gather case insight
    What to do: Use Aviator for document summarization or explore data for an investigation or ECA.
    Why now: Start small and build your GenAI confidence.
    Resources: Watch the video
  • Test Aviator review with your data
    What to do: Submit sample datasets to refine criteria with transparent token estimates.
    Why now: Build confidence in results and understand costs before committing to full-scale review.
    Resources: Find out more
  • Get expert guidance
    What to do: Engage the OpenText™ Professional Services team for support and best practices.
    Why now: De-risk GenAI adoption with experts in legal workflows and decades of TAR leadership.
    Resources: Jumpstart your GenAI journey
  • Enhance results with managed review services
    What to do: Combine Aviator with managed review for added savings.
    Why now: Pair GenAI technology with expert prompt engineering for 10–20% additional savings.
    Resources: Explore the offering

Learn more about OpenText eDiscovery Aviator

OpenText eDiscovery Aviator: What’s new

Read what’s new

OpenText eDiscovery

Read the product overview

Take a tour of Aviator Review

Watch the demo

See Aviator Review in action

Watch the demo

Unlocking the Kennedy Archives: Revealing the hidden truths through AI-powered workflows

Watch the webinar

Rapid Exploration with OpenText eDiscovery Aviator

Watch the webinar

August 22, 2025

Building GenAI foundations: Jumpstart your litigation practice with GenAI

Part 1: Learn how to build generative AI skills for smarter eDiscovery.

Read the blog
December 2, 2025

Navigating the Legal AI tidal wave: Expert insights from OpenText World 2025

Legal leaders at OpenText World 2025 shared five practical ways to use GenAI in eDiscovery.

Read the blog
November 3, 2025

From data overload to strategic clarity: How AI transforms early case assessment

See how OpenText Aviator Rapid Exploration uses GenAI to deliver instant insight and clarity.

Read the blog
November 3, 2025

What’s new in OpenText eDiscovery

Get the latest updates from OpenText eDiscovery to power faster, smarter analytics.

Read the blog
September 15, 2025

Embedding GenAI in MDR: Jumpstart your litigation practice with GenAI

Learn how you can embed GenAI directly into your outsourced managed document review (MDR).

Read the blog

OpenText eDiscovery Aviator FAQ

  • OpenText eDiscovery Aviator is the collective name for the large language model (LLM) features that have been, or will be, added to OpenText eDiscovery. These features are identified by the OpenText Aviator icons. At present, they include:

    • Aviator key document summary
    • Aviator summarization
    • Concept label summaries
    • Aviator review
    • Aviator rapid exploration
  • Aviator features are available to all OpenText eDiscovery (single or multi-tenant) cloud customers in all AWS regions, provided they have signed the appropriate legal paperwork.

  • The LLM that powers our Aviator features has multilingual capabilities; however, testing and validation to date have focused primarily on English-language documents and prompts.

  • OpenText eDiscovery connects to Amazon Bedrock via the Bedrock API. The interaction of OpenText eDiscovery with the LLM is transactional:

    • The OpenText eDiscovery job creates a session with the LLM via the Amazon Bedrock API, directly from the customer's OpenText eDiscovery AWS environment.
    • The request is sent to the LLM with the required information (usually the prompt plus document text, though this varies by feature).
    • The LLM returns the response back to the customer's OpenText eDiscovery AWS environment.
    • The session with the LLM is closed and all data is deleted from AWS Bedrock.
  • Interaction with the LLM is on a project-level and a separate session is created for each LLM feature interaction.

  • No. Once the session with the LLM is closed, no data from the session remains on the Amazon Bedrock/LLM side. Amazon Bedrock does not store or log customer data in its service logs.

  • No. Amazon Bedrock performs a deep copy of the LLM and hosts it. The LLM provider (Anthropic) has no read/write access and no internet access to the model within Amazon Bedrock.

  • OpenText does not train models. Models accessed by OpenText eDiscovery Aviator are pre-trained by the LLM provider. Customer data, input prompts, and outputs are never used to train the LLM model. LLM results are not shared across projects.
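The transactional flow described above (open a session via the Amazon Bedrock API, send the prompt plus document text, receive the response, then discard everything server-side) can be sketched with the AWS boto3 `bedrock-runtime` client. This is an illustrative sketch only: the model ID, prompt wording, and helper functions are assumptions for the example, not the actual OpenText eDiscovery implementation.

```python
import json

def build_request_body(doc_text: str) -> dict:
    """Request payload for an Anthropic model on Bedrock: prompt + document text.
    The prompt wording here is a hypothetical example."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{
            "role": "user",
            "content": f"Summarize this document in plain language:\n\n{doc_text}",
        }],
    }

def summarize_document(doc_text: str, region: str = "us-east-1") -> str:
    """One transactional call: request in, response out, nothing retained by Bedrock."""
    import boto3  # AWS SDK; requires configured credentials, so imported only here

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
        body=json.dumps(build_request_body(doc_text)),
    )
    result = json.loads(response["body"].read())
    return result["content"][0]["text"]
```

Because each call is self-contained, isolating work per project (as the FAQ notes) is simply a matter of never reusing request state across features or matters: each invocation carries only its own prompt and document text.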