ZyLAB Partners With SDL On Multilingual E-Discovery Technology; Offers Visual Classification For E-Discovery

Wednesday, June 5, 2013 - 14:16

ZyLAB has announced that it has integrated its e-discovery solution with SDL BeGlobal, SDL's automated machine translation technology, to support law firms and multinational corporations facing multilingual litigation.

ZyLAB’s customers can use the integrated SDL BeGlobal machine translation to translate all information up front, much as they currently pre-tag documents based on known search terms. The translations can then be reviewed so that legal teams can quickly uncover relevant data and route critical information to specialized human translators where needed.

The integrated language recognition, automated translation and text analytics solution ingests content into the e-discovery platform, where it is stored for search and analysis just as in any other case. For multilingual content, the reviewer selects the option to detect the language and translate automatically. The content is then processed through the integrated machine translation and can be reviewed in English or another language. A search in English or in the other language(s) returns the original document, regardless of its original language. ZyLAB’s software detects and recognizes 400 foreign languages, including all Western European and Eastern European languages, Russian, Chinese, Japanese, Arabic, Persian, and most regional indigenous languages. SDL BeGlobal automatically translates over 80 language pairs.
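To make the described flow concrete, here is a minimal sketch of a multilingual ingest pipeline in Python. The Document, detect_language, machine_translate and CaseIndex names are illustrative stand-ins, not ZyLAB's or SDL BeGlobal's actual APIs; the point is simply that both the original text and its English rendering are indexed, so a query in either language surfaces the same source document.

```python
# Illustrative sketch of the multilingual ingest flow described above.
# All names and stubs here are hypothetical, not vendor APIs.

from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str
    language: str = "unknown"
    translation: str = ""  # English rendering produced by machine translation


def detect_language(text: str) -> str:
    """Toy language detector; a real system would use a trained model."""
    return "de" if "und" in text.split() else "en"


def machine_translate(text: str, source: str, target: str = "en") -> str:
    """Placeholder for a machine translation call (demo output only)."""
    return f"[{source}->{target}] {text}"


class CaseIndex:
    """Indexes both the original text and its translation, so a hit on
    either one returns the same original document."""

    def __init__(self):
        self.docs: dict[str, Document] = {}

    def ingest(self, doc: Document) -> None:
        doc.language = detect_language(doc.text)
        if doc.language != "en":
            doc.translation = machine_translate(doc.text, doc.language)
        self.docs[doc.doc_id] = doc

    def search(self, term: str) -> list[Document]:
        term = term.lower()
        return [d for d in self.docs.values()
                if term in d.text.lower() or term in d.translation.lower()]


index = CaseIndex()
index.ingest(Document("doc-1", "Der Vertrag und die Zahlung"))
index.ingest(Document("doc-2", "The contract and the payment"))
for hit in index.search("vertrag"):
    print(hit.doc_id, hit.language)  # returns the original German document
```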

* * *


ZyLAB, a leading provider of e-discovery and information management solutions, has announced that it has incorporated true native visual search and categorization in its e-discovery and information management solution to enable the speedy and accurate identification of non-textual information. The addition of Visual Classification, together with ZyLAB’s Audio Search, completes ZyLAB’s multimedia search capabilities.

Enterprise Big Data contains large volumes of electronically stored information (ESI) that is non-textual, such as pictures, video and audio. Processing, searching and classifying these types of ESI, which lack textual information in the image or in the image-related documents, adds significant cost and risk for all parties involved in an e-discovery process.

ZyLAB’s Visual Classification automatically recognizes pictures and identifies, among other things, people, babies, elderly people, flowers, cars, planes, indoor and outdoor scenes, and many other concepts. The new functionality is well suited to identifying images containing personally identifiable information (PII), potential intellectual property, handwritten notes, checks, IDs, and other information that otherwise cannot be recognized automatically.

ZyLAB is the first vendor to bring this technology to market in the e-discovery space. With Visual Classification, images are routed through the image classification engine during legal processing of the data, making it possible to tag images and video with the available concepts. Based on this tagging, users can determine the processing workflow: one can decide, for example, not to OCR images that contain landscapes, babies or flowers, saving significant processing time, since OCR on images is time-consuming, especially four-way OCR.
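As an illustration of that tagging-driven workflow, the short Python sketch below assigns concept tags to each image and skips OCR when an image carries only non-text concepts. The classify_image and route_for_processing helpers and the concept names are hypothetical examples, not ZyLAB's actual engine or tag set.

```python
# Minimal sketch of concept-driven OCR routing, assuming a classifier that
# returns concept tags per image; names and tags are illustrative only.

SKIP_OCR_CONCEPTS = {"landscape", "baby", "flower"}  # concepts unlikely to hold text


def classify_image(path: str) -> set[str]:
    """Stand-in for an image classification engine."""
    demo_tags = {
        "scan_001.tif": {"check", "handwriting"},
        "photo_002.jpg": {"landscape", "flower"},
    }
    return demo_tags.get(path, set())


def route_for_processing(paths: list[str]) -> dict[str, str]:
    """Tag each image and decide whether it goes to OCR or straight to review."""
    decisions = {}
    for path in paths:
        tags = classify_image(path)
        if tags and tags <= SKIP_OCR_CONCEPTS:
            decisions[path] = "skip OCR"     # saves time-consuming four-way OCR
        else:
            decisions[path] = "send to OCR"  # may contain recognizable text
    return decisions


print(route_for_processing(["scan_001.tif", "photo_002.jpg"]))
# {'scan_001.tif': 'send to OCR', 'photo_002.jpg': 'skip OCR'}
```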

The native visual search and categorization technology that ZyLAB now offers as an add-on to the ZyLAB eDiscovery and Production Platform has traditionally been used in law enforcement and intelligence applications such as video surveillance and picture classification.

Together with ZyLAB’s Audio Search, the new Visual Classification completes ZyLAB’s multimedia search capabilities. Combined with ZyLAB’s award-winning global text-search and text-mining technology, and support for more than 450 languages, 1,000 electronic file formats, and 200 different repository types, including the cloud, ZyLAB will be able to find more relevant information for the e-discovery market, regardless of its form, language or location.