PART 2: Predictive coding faces the dreaded S curve, e-discovery comes back home … and the cloud becomes SO important

 


Part 2 of a 3-part series

In Part 1 (which you can access here) I provided an overview of the three subject areas described in my title, focusing on that dreaded S curve.   
 
Here in Part 2 I will address why the cloud is becoming a major factor in the e-discovery ecosystem.
 

In Part 3 I will address the growing movement by corporations to bring e-discovery back in house.

25 May 2017 (Paris, France) – The places where I really learn what is going on in the e-discovery ecosystem are via presentations and “chat sessions” I have attended at the many IQPC corporate counsel events, at Georgetown Law’s Advanced eDiscovery Institute, and at the Association of Corporate Counsel legal operations events, plus a smattering of top table events such as the Stroz Friedberg digital investigations conference and many of the Consero corporate counsel events. As I have noted in earlier posts, the Georgetown Law event has a segment called “Behind Closed Doors” and IQPC has a similar segment called “BrainWeave”. The technology segments at both events are similar: corporate counsel gnashing their teeth over what e-discovery tech works, what tech doesn’t work, what tech they have cobbled together in-house, etc.

These are “war stories” and lessons learned from a wide range of corporate counsel, about a wide range of software deployed.

I will have more about those sessions in Part 3 which discusses the movement by corporations to bring e-discovery back in house. But suffice it to say the cloud “as an e-discovery application option” has been heavily discussed.

I also think that to really stay up-to-date in e-discovery you need to spend some time reading Doug Austin, Craig Ball, Bill Dimm, and Ralph Losey to keep mindful of the e-discovery/forensic forest … and the trees.

To me, the cloud is really the “BIGLY” issue in e-discovery these days. Cloud has become so important to the e-discovery business world that almost every vendor seems to be clamoring to build something organically, or seeking to acquire or partner with cloud vendors.  As Alex Woodie, my buddy at Datanami, told me years ago: “Listen, when terminology from the IT department breaks through into the mainstream culture, you know you’re onto something really hot”.

The argument is compelling and the venture capital markets certainly see it.  A number of years ago I became a principal in a private equity group that has made investments in a number of information management technology companies in both the U.S. and Europe. Cloud technology has become our focus over the last three years and I have immersed myself in all manner of cloud characteristics (storage, broad network access, resource pooling, rapid elasticity, etc.) and the infrastructure behind the cloud (data centres, IT environments, virtualization, etc.).  This led us to establish a dedicated web site for our clients/members to address financial/investment aspects of the cloud, the technology involved, and even news briefs on the latest cloud “gadgets”.

Plus, through my e-discovery unit Infotech Europe, I am called upon to evaluate various vendor presentations, so I see lots of responses to RFPs (yes, firms still use them), vendor pricing models and analyst evaluations of vendor offerings.

I became very intrigued with all of this about 2 years ago when a long-time corporate counsel client told me his company had signed up with Microsoft Azure cloud services and e-discovery was one of the services he could get … sort of a “check-the-box” option. Microsoft’s acquisition of e-discovery software maker Equivio was just one addition (ok, a key addition) to Office 365’s growing portfolio of document management tools for major industry verticals. And the stream of patents they have been acquiring for “methods for organizing large numbers of documents” is just more indication of the move into the e-discovery space. Their cloud efforts have been monumental. They must be. They missed mobile entirely, and now seek success in creating cloud-based tools that allow developers to build their own applications on the many AI techniques out there (as does Facebook), so I expect Microsoft will have much in store for us.

CH-CH-CH-CH-CHANGES!!

As the e-discovery industry has matured – and acquisitions and mergers run apace – most legacy e-discovery providers still appear to be content to keep doing what they’ve always done: procure a mousetrap, service the mousetrap with a fleet of people to pull its complicated levers, and sell the whole package to law firms and corporations.

This is the managed services model, and it thrives on complex technology. Complexity makes the support services you’re selling more valuable and creates obscurity around what is actually being done. Complexity is the monolith. It feeds the notion that this stuff – finding things, basically, though how could it be that simple? – is too “dangerous” to do oneself internally, because complexity also creates risk.

To these legacy providers it makes sense: they have the advantages of consolidation, scale, relationships, and technology entrenchment. But there is a grating element. At an IQPC event session, one corporate counsel (tasked with parsing several vendor presentations) was annoyed by what he termed “this attitude by several vendors that the e-discovery software application required a level of technical acumen and training I did not have and I best stay practicing law. Excuse me!?”

Note: there was an interesting study put out by the American Bar Association – the results mirrored in a similar study by the Association of Corporate Counsel – that found almost two-thirds of attorneys in the private sector work at firms with five or fewer lawyers. These, by and large, are people who do not have in-house IT or support teams, are either unable or unwilling to engage high-priced vendors, and likely rely on either legacy eDiscovery software (e.g. Concordance, Summation) or non-legal review tools (e.g. Adobe Acrobat) – neither of which possesses the power to accurately or quickly perform the basic tasks of discovery. And, of course, the rise of law firm-focused cybercrime has added a scary new dimension to discovery, and laid bare the security inadequacies of most legal technology.

But for the irked corporate counsel I noted above – and, in fact, many corporate counsel I have met at the events noted in the first paragraph of this post – the issue is this: with technology getting ever more powerful, easier to use, more accessible and requiring less human capital, shouldn’t e-discovery join the conflict in legal between old-school conventions and modern tech, as it has with artificial intelligence-driven legal research?

“BADGES? BADGES? WE DON’T NEED NO STINKIN’ BADGES!” 

Yes. The rapid maturation of cloud-based eDiscovery software has done much to change the calculation.

It used to be simple.  You had three options:

(1) pitch all eDiscovery work to vendors,

(2) invest a gazillion bucks in on-premise hardware and software to perform the work internally, or

(3) the infamous hybrid model: depending on the nature of the work, some of it is fielded internally and some outsourced to a vendor.

And we all heard the drill from vendors: “managed services” – the vendor provides the majority or all of the technology infrastructure, but also personnel who, in the best case scenario, serve as an extension of the firm’s full-time litigation support team (project managers, consultants, etc.). This is a boon for the managed services company, which can charge both for its complex, clunky technology and the highly-skilled labor needed to use it.

But then … BANG!  The cloud.  Which acts as a force multiplier. Efficiency and scalability are among its hallmarks. So, ideally, law firms and corporate counsel can “do more with less” (the current mantra) with the right cloud platform. Work that would otherwise have to be outsourced to vendors or managed service providers can be completed with internal resources at a fraction of the time and cost due to these platforms’ ability to automate, or eliminate entirely, many of the steps involved.

And a point mentioned (incessantly) at these corporate counsel events: while it is true that the incremental costs associated with data processing and search have come down significantly, the amount of data has risen exponentially so that e-discovery costs are still prohibitive.

NOTE: an aspect not covered in my series, but just as important: for well-heeled law firms and their Fortune 500 clients, the costs of eDiscovery are the costs of doing business. eDiscovery is a nuisance to be sure, but it’s not a killer. It’s not a deterrent to justice. But for the other two-thirds of practitioners – to say nothing of pro se litigants – the high price of evidence can be a non-starter. As Judge Facciola, Judge Bennett, and Judge Peck (among many others) have noted numerous times: the system is driving an entire economic class out of court. Law firms and corporations without the firepower are in the untenable position of either pursuing the dispute through to its merits and risking costs that outpace the value of the case itself, or attempting to bypass eDiscovery altogether – either by coming to a mutual agreement that electronically stored information will not be exchanged, or by relying on tools that are not equipped to handle the demands of modern eDiscovery. There is a lot to be said on this issue, but no space right now.

 

And when I think of these cost challenges I cannot help but think of Craig Ball’s “EDna Challenge” last year, the profession-wide call to make “e-discovery for everyone” a reality.


BUT LET’S STEP BACK A BIT …

Writing blogs about emerging technology puts you at risk of becoming enamored with buzzwords. Cloud this, big data that. AI is taking over. So, where does this all lead? Is analytics where I should go? Is there a new bigger buzzword around the corner that overtakes all the others?

My belief is that these phenomena are all interwoven and working toward the same enabler: speed. One phrase repeated at conference after conference this year: “Speed is the new IP.” I will address that in Part 3 of this series.  But first …

The terms “SaaS” and “cloud” are thrown around a lot in this space. Cloud computing is an architecture. It’s not a technology. It’s not a delivery model the way most people infer it is. It’s not a browser-based solution. It’s a three-tiered architecture that divides between infrastructure, platform and software. It’s the three elements you use when you create an application:

  • the presentation layer
  • the logic layer, and
  • the underlying network and database

So if you have the three tiers and appropriately divide your load between them – so, for example, if you can use the database from anywhere, if you can use the platform for specific functionality to deliver services, if you use software for the presentation layer and for minimal enforcement of validation and things like that … you have a true cloud architecture.

If all you do is create a browser-based platform that’s on top of a hosted app that still runs as a single application on one server with everything intermingled, you don’t have a cloud computing architecture. The distribution to the three tiers and the distribution of work using things like infinite elasticity for scalability – that’s what being in the cloud means. Just having a browser? That’s not the cloud.
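To make the distinction concrete, here is a minimal, hypothetical sketch (in Python, purely for illustration – not any vendor’s actual architecture or API) of what that three-tier split looks like: a thin presentation layer, a logic layer, and a data layer, each of which could be scaled or relocated independently.

```python
# A minimal, hypothetical sketch of the three-tier split described above.
# Each tier is a separate, independently scalable service; the class and
# method names here are illustrative only.

class DataTier:
    """Underlying network/database tier: can live anywhere, replicated and elastic."""
    def __init__(self):
        self._store = {}                      # stand-in for a distributed database

    def save(self, doc_id, doc):
        self._store[doc_id] = doc

    def fetch(self, doc_id):
        return self._store.get(doc_id)


class LogicTier:
    """Platform/logic tier: business rules and services, scaled separately from storage."""
    def __init__(self, data_tier):
        self.data = data_tier

    def ingest(self, doc_id, doc):
        # indexing, workflow rules, etc. would happen here
        self.data.save(doc_id, doc)
        return {"id": doc_id, "status": "processed"}


class PresentationTier:
    """Software/presentation tier: thin UI layer, minimal validation only."""
    def __init__(self, logic_tier):
        self.logic = logic_tier

    def upload(self, doc_id, doc):
        if not doc:
            raise ValueError("empty document")   # minimal enforcement of validation
        return self.logic.ingest(doc_id, doc)


# Wiring the tiers together. In a true cloud deployment each tier runs and
# scales on its own; collapse all three into one process on one server and
# you have a hosted app, not a cloud architecture.
app = PresentationTier(LogicTier(DataTier()))
print(app.upload("DOC-001", "sample custodian email"))
```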

That is why not all “cloud” offerings are true cloud offerings. Many, such as RelativityOne’s cloud, are hosted (in other words, remotely available via servers you connect with via the Internet), but that’s not cloud. See the NIST report: cloud comes with elasticity, redundancy, etc. and gains much of its actual value not by virtue of being cheaper (although it can be), but by shifting IT management concerns – such as disaster recovery, provisioning, even security – from internal operations to external experts: the vendor’s staff. All of this has significant value, but it isn’t something that is part of the conversation yet.

But a company like Logikcull does fit the description. Even cloud e-discovery providers like DISCO fall short by this description, specifically in their scalability. From what I understand, in order to get data into their “cloud” system, you have to send it to them, either via FTP or physical media. The ingestion/processing piece is not automated — and not native to the rest of their system.

Vendors will drive the change in this conversation as they seek to improve offerings and find points of differentiation as the cloud SaaS eDiscovery market becomes more crowded. In fact, that brings up a point that will recur as we consider the state of the eDiscovery software market. Core eDiscovery functionality does not offer much by way of differentiation. As the market expands, evolves, and matures, vendors will look to new avenues for differentiation. As Jim Duffy of Blue Hill Research (he has done a deep dive into the e-discovery cloud) has noted:

For some, that will be by finding new areas of functionality, not just in IG, but new workspace options – perhaps a tab that extends reach through dispute resolution workflow. Others differentiate themselves by offering a convergence of services and technology. Others, in cloud.

At the same time, we’re also seeing different models from established vendors in response, such as Kroll Ontrack – not a classic SaaS strategy, but still playing with that new crop of creativity in delivery models.

So, ultimately, it won’t be a question of right or wrong. It will be about determining how a model aligns to your organization’s needs, supporting a greater variety of project types. Duffy says you’ll be in “three places, three options”. Does your company need:

  • complex, white glove operations with secret, secure on-premises hosting;
  • an on-demand subscription model for low complexity, high volume, high frequency (think FOIA requests); or
  • casual, self-service models where the organization selects options that fit, etc.

For me, one of the best cases I have seen … in both the U.S. and Europe … has been in data processing. I have been working with an in-house corporate law department in Europe that is looking at cloud options. In the “usual suspect” vendor world, raw data is collected by the law firm, then sent away to the vendor to be prepared for review. The processed data is then made accessible via a review platform to the firm, or sent back to the firm so that it can be loaded into its in-house review tool.

It’s a headache just to read that.

With a cloud platform, the firm may simply invite its client into its account and have it begin uploading data. Drag, drop, done. The data is automatically processed and made immediately accessible to the firm’s attorneys for analysis and review.  [ See my chart below under the “Security” section to see how this works ]
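As a rough illustration of what “drag, drop, done” implies under the hood – this is my own hypothetical sketch, not any platform’s actual pipeline – the upload event itself kicks off the processing, with no vendor round-trip:

```python
# Hypothetical sketch of a "drag, drop, done" workflow: an upload event
# triggers an automated pipeline with no manual hand-off to a vendor.
# Function and variable names are illustrative, not any platform's real API.

import hashlib

def on_upload(raw_file_bytes, filename, matter):
    """Runs automatically when a client drops a file into the shared workspace."""
    doc = {
        "matter": matter,
        "filename": filename,
        "md5": hashlib.md5(raw_file_bytes).hexdigest(),   # fingerprint for dedup
        "size": len(raw_file_bytes),
    }
    doc["text"] = extract_text(raw_file_bytes)            # OCR / text extraction step
    index_for_search(doc)                                  # immediately searchable
    return doc                                             # visible to reviewers at once

def extract_text(raw_file_bytes):
    # stand-in for real extraction/OCR
    return raw_file_bytes.decode("utf-8", errors="ignore")

def index_for_search(doc):
    SEARCH_INDEX.setdefault(doc["md5"], doc)               # skip exact duplicates

SEARCH_INDEX = {}
on_upload(b"From: custodian@example.com ...", "email_001.eml", "Matter-2017-042")
print(len(SEARCH_INDEX))   # 1 - processed and reviewable with no vendor round-trip
```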

My position is that it is the responsibility of vendors to articulate the differences and the responsibility of buyers to understand how capabilities align to their needs for each use or case. Don’t fall under the spell of the buzzwords.  Do your homework.

SECURITY

This past January Inside Counsel published an article speculating as to whether e-discovery repositories pose an attractive target to cybercriminals and, given the often chaotic nature of discovery, whether the process itself presents a vulnerable point of data breach. As we would later learn from news of Chinese hacks of document review rooms, the answer was a resounding “yes”. And last week at the Logikcull/ACEDS “Corporate eDiscovery and Cybersecurity User Group” it was a major topic of conversation.

And given the way data is transferred throughout the discovery process – from the client to its external providers (law firms, service vendors) and then, ultimately, to requesting parties (e.g. adversarial litigants) – all of these exchanges share a common source of risk: data is moving and, consequently, is at its most vulnerable. While some organizations have taken steps to address these risks, such as providing access to secure FTP portals or shipping data only on encrypted media, those measures only go so far. An equally problematic issue is the copying and sharing of data that has been removed from those secure channels. In the course of discovery, this happens all the time – think about how attorneys may email documents back and forth as a means to share notes and collaborate. And, finally, when a production is shared with requesting parties, that information becomes subject to the other side’s security protocols, whatever they happen to be (or not be).

It’s why cyber security experts who have examined the e-discovery process say the most secure e-discovery or legal intelligence platforms are the ones that eliminate the risk inherent to discovery by providing one central hub where all data is securely hosted and all channels in and out of the database are secure. When data is in the platform, it must be encrypted at rest. And when it is shared with opposing parties, it should be shared through encrypted channels – ideally a secure, permissions-based link whereby requesting parties can access that data remotely and instantly.
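A purely illustrative sketch of that idea follows – hypothetical names and a stand-in for real encryption, not any vendor’s implementation – showing the two properties the security experts keep coming back to: data held encrypted at rest in one central repository, and access granted via expiring, permissions-based links rather than copies shipped out.

```python
# Hedged, illustrative sketch only: documents stay in one repository,
# "encrypted" at rest, and requesting parties get an expiring,
# permissions-based link instead of a copy on portable media.
# A real platform would use a KMS and real encryption (e.g. AES), not the
# XOR stand-in below, which exists only to mark where encryption happens.

import secrets
import time

REPOSITORY = {}     # the single central hub; values held "encrypted" at rest
SHARE_LINKS = {}    # token -> (doc_id, permissions, expiry)

def store_document(doc_id, plaintext, key):
    REPOSITORY[doc_id] = bytes(b ^ key for b in plaintext)   # encrypt-at-rest stand-in

def grant_access(doc_id, permissions=("view",), ttl_seconds=86400):
    token = secrets.token_urlsafe(32)                 # unguessable link token
    SHARE_LINKS[token] = (doc_id, set(permissions), time.time() + ttl_seconds)
    return f"https://hub.example.com/share/{token}"   # hypothetical URL

def access(token, action, key):
    doc_id, permissions, expiry = SHARE_LINKS[token]
    if time.time() > expiry or action not in permissions:
        raise PermissionError("link expired or action not permitted")
    return bytes(b ^ key for b in REPOSITORY[doc_id])

store_document("PROD-0001", b"responsive document", key=0x5A)
link = grant_access("PROD-0001", permissions=("view",))
print(access(link.rsplit("/", 1)[-1], "view", key=0x5A))
```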

That is why the cloud offers that security. So long as it is in the architecture. Eric Palmer of FireEye told me you need to have in the architecture what he calls “a closed-loop hub.” This means that all data resides in one central repository, where all channels into and out of the platform are encrypted (i.e. when data is in motion) and all data inside the platform is encrypted at rest. Compare this to the traditional ediscovery model which I described above, where data is shipped or FTP’d from the client to the law firm to the vendor and back. The cloud architecture looks something like this:

When I met with Robert Groud of Microsoft to research this post he ran me through the key security elements that must be considered as an integral part of any SaaS application development and deployment process:

  • SaaS deployment model
  • Data security
  • Network security
  • Regulatory compliance
  • Data segregation
  • Availability
  • Backup
  • Identity management and sign-on process

The Cloud Security Alliance has copious quantities of material on these security issues and I refer you to their website for more detailed information (click here). It will arm you with the right questions to ask your SaaS provider.

But it is also important to consult the Legal Cloud Computing Association, which released its Security Standards providing guidelines for cloud service providers to ensure adequate protection of client data stored in the cloud in a manner consistent with lawyers’ ethical obligations.

PRICING

In my view the real e-discovery cloud battle will be between kCura (RelativityOne) and Logikcull.  Call it “The Battle of the Titans”. Or “Microsoft (kCura) vs Apple (Logikcull)”.  Clearly kCura had to up its game as it has faced the mounting competitive onslaught of Logikcull.

I had the opportunity to be a fly-on-the-wall when a U.S. client let me sit through two presentations, one by Logikcull and one by kCura/RelativityOne.  I also had the benefit of a kCura channel partner providing me some pricing information and a Q&A, with mirror disclosures by a Logikcull client.  The following information is based on those disclosures and the presentations I heard.

Data volume is always going to be a major pricing driver. Several points come out of the RelativityOne material:

  • RelativityOne requires a minimum commitment of 0.5TB per year, measured by peak usage during the month.
  • Assuming a data fee of $15/GB/month, that’s a minimum commitment of roughly $90k/year (not including the user fees) – a quick back-of-the-envelope check follows this list.
  • Data fees include all data that lands in Relativity – regardless of whether it was processed or imported.
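
The arithmetic behind that $90k figure, under the assumptions stated in the list (0.5TB peak commitment, an assumed $15/GB/month data fee, 1TB = 1,000GB):

```python
# Sanity check of the RelativityOne minimum-commitment figure, using only the
# assumptions stated above (0.5TB commitment at $15/GB/month, 1TB = 1,000GB).
minimum_commit_gb = 0.5 * 1000
data_fee_per_gb_month = 15
annual_minimum = minimum_commit_gb * data_fee_per_gb_month * 12
print(annual_minimum)   # 90000.0 -> roughly $90k/year before any user fees
```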

And an interesting statement:

Based on our research of the U.S. market – both through the consulting group we worked with and our own meetings with partners – we heard retail rates for hosting between $10 and $17 per GB per month. Of course, there are outliers. We have examples of rates as low as $3 and as high as $45. Also, the range is dependent on guaranteed volumes, whether or not the customer signs an agreement to host multiple matters over a year or longer versus only committing to host a single case, and the overall mix of services being provided in addition to hosting.

kCura seems to be adopting a “universal users” approach – all contracted on-premises users will have access to RelativityOne for no additional user fees. That’s a very good thing, a very good price point.

The material had an exact user price, but given it was dated (6 months ago) I must assume it has changed.  But I think one can assume the price per user could be comparable to today’s on-premise pricing. For comparison purposes, using a schedule from TechLaw Solutions, the on-premise pricing starts at $117/user/month for 25 users and goes down to $80/user/month for 1500 users (this is for an annual named user license with unlimited Relativity Analytics).

Also of note: there will be no processing fees in the RelativityOne program.

Some provisos:

  • there seems to be a 3-year commitment for data fees coterminous with users
  • Legal Hold and Collection will be separate add-ons at the same price as on-premises
  • Each data fee is tied to a single geography – there are fees to expand
  • This is still a partner play rather than a direct sales model
  • Partners will be the only providers of on-demand, case-by-case hosting

One interesting monetary note:  “Certified Partners” receive 25% of the enterprise customer’s total license cost (users + data fees); to be a certified partner you must commit to 3TB of data in RelativityOne in year 1, at a price of $275k (40% discount off rate card). So that indicates a rate card price of $460k/year for 3TB, or $12.78/GB/month.
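Working backwards from those certified-partner terms (and again assuming 1TB = 1,000GB) reproduces the figures quoted:

```python
# Back-of-the-envelope check of the certified-partner terms quoted above:
# $275k buys 3TB in year 1 at a 40% discount off rate card.
partner_price = 275_000
rate_card_annual = partner_price / (1 - 0.40)       # ~ $458k, rounded to $460k
per_gb_per_month = 460_000 / (3 * 1000 * 12)        # 3TB at 1,000 GB/TB
print(round(rate_card_annual), round(per_gb_per_month, 2))   # 458333 12.78
```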

I have done some cold outreach to a few Relativity partners to validate this and get more details, especially around the user pricing. I may update this post accordingly.

As regards Logikcull pricing … a bit different.

Logikcull charges $40/GB/month, with no minimum fees and no commitments. This includes unlimited 24/7 in-app and phone support (“median response time = 4 mins” according to 5 Logikcull clients I spoke with), unlimited users, unlimited projects, unlimited downloads, etc.

Logikcull also offers subscription options, but they are “account specific” because different organizations have different needs in terms of users, data, etc. But in general, based on the pricing sheets I saw, with subscriptions the per-GB cost comes down significantly and you have access to other “enterprise grade” product features such as sub accounts (e.g. multiple Logikcull accounts for different business units under one “master Logikcull account”) and data copy (which could be a pretty high-powered data reduction tool, by my estimation).

And I think it is very important to point out — given this horrific “race to the bottom” that per-GB pricing can certainly encourage — that not all GB are created equally. For example, comparing Logikcull’s $40/GB to CloudNine’s $35/GB is comparing apples to oranges: these tools don’t do the same thing. For a CloudNine analysis click here.

Because, because, because, because … given all this discussion of cost, that cost itself only has meaning in the context of value. As one of the vendor reps noted at a meeting: “One dollar is a huge cost if you’re getting 80 cents in return. It’s dirt cheap if you’re getting ten”. To this end, when the unit of cost is the gigabyte, it must be said that not all gigabytes are created equally.  As another vendor rep said at another meeting:

Look, gigabytes are simply not all the same. Just as the horses under the hood of a Corolla are not equal to those of a Ferrari, even on a 1:1 basis. A 400-HP Ferrari will dust a souped-up 400-HP Toyota in any number of areas: speed, acceleration, driver experience, comfort level, etc.

And that brings me back to my point above about automated processing steps. Because that’s horsepower. If you have a monster of a processing engine (one Logikcull stat sheet I stumbled upon says every piece of information uploaded to Logikcull is subject to more than 3,000 automated processing steps; I did not verify that) so that all your data is preserved, indexed by metadata, deduped, imaged, scanned for viruses and yada yada yada … well, that means you get more/higher-value work completed at a lower cost with fewer resources.

So … do your homework.  Ask your vendor all of those questions.

POSTSCRIPT

Depending on who you ask or what you read, the balance of power dictating how discovery is performed, by whom, and at what price is either shifting back toward the corporation, or back to law firms and other legal service providers. For every survey showing mega-companies are conducting more discovery on their own, with internal tools and internal teams, there is another indicating that law firms, especially the very big ones, are investing more heavily in building their own discovery business units.

For my part, I have seen a tremendous spike in listings on The Posse List (my e-discovery/legal industry job listing service that covers 5 continents) from corporations seeking to build their own in-house e-discovery units, which comports exactly with the study done by Ari Kaplan showing the move back in-house. We’ll have more on that in Part 3 of this series.

And this movement is clearly driven by a new class of intuitive, accessible and extremely powerful technology solutions and, yes, many of them cloud-based. Because they both broaden the base of users able to perform these once-complex activities and act as force multipliers.

Legal cloud is inevitable because the hallmarks of legal cloud – user-friendliness, accessibility, speed, transparency, security, scalability – are the answers that solve the problems associated with e-discovery. You could argue that existing cloud solutions are not yet mature enough to supplant on-premise counterparts, but it’s hard to make the case that, all things being equal, you wouldn’t want those qualities in a solution.

There is also an issue driving cloud not often discussed but that was certainly on display at LegalTech this year: as an e-discovery software program grows in size and complexity, the software can become a cruel maze. Jaron Lanier has often written about this in his books and on his blog, describing large-scale software as “feeling like a labyrinth … even the best software development groups periodically find themselves caught in a swarm of bugs and design conundrums”.

And let’s face it: much of the “high tech” e-discovery software we use day-to-day in review rooms is deficient despite the sales pitch at such venues as LegalTech. Last December in D.C., at a major law firm’s document review, we saw a “state-of-the-art” document review platform “crash and burn” due to a myriad of tech failures: it was not scalable and suffered from “lock-in” (poor integration of different pieces of enterprise software that do not function well together), coupled with poor supervision by the firm’s staff attorneys. The review was summarily suspended while the law firm “rethought” its options.

We get reports on these tech burnouts on a regular basis via our contract attorney e-discovery members, who provide ongoing feedback on almost every document review platform. And it will get even more complicated as we move more and more toward unified information access platforms … let’s turn THAT into a LegalTech buzz word … which are platforms that integrate large volumes of unstructured, semi-structured, and structured information into a unified environment for processing, analysis, e-discovery and information governance.

I have heard RelativityOne disparaged as “eh, big deal. It’s an exact clone of their premium software, just plugged into Azure’s IaaS”. But their product is far better, far more than that. I am not sure what this means for kCura channel partners, but there is much, much more going on with their technology. I strongly advise you go to sites like Capterra and G2 Crowd, where you can find more than 160 verified user reviews. In fact, you can find a side-by-side comparison of Logikcull and Relativity here: https://www.g2crowd.com/compare/logikcull-vs-relativity.

One other important thing to point out is that Relativity and Logikcull tend to specialize in two different areas. While Relativity might be appropriate for huge cases (terabytes), it is often overkill – too complex for time- and cost-effective deployment on smaller cases. Using Relativity on a 10 gig case is like shooting a fly with a bazooka. Logikcull, on the other hand, shines for organizations with high matter velocity and lower matter volume (i.e. gigabytes, not terabytes). And what’s interesting to me is that this is the much bigger market. According to numerous corporate counsel surveys over the last year or so, 80% of all eDiscovery projects fall into the smaller, routine bucket. So a solution like Logikcull is among the few that allow you to get through them quickly and affordably. That’s where the automation piece really comes into play.

Which is why, when you do your homework and examine these solutions, find out how many critical processing steps are automated — yes, that again. Because the upshot is that a platform which captures and ensures the integrity of the data fed to it, and breaks that information out in a meaningful way, gives users a fuller picture of the evidentiary issues in the case and deeper insight into individual pieces of data. This is increasingly important, as disputes are increasingly resolved through data. And more data. And even more data.

Clearly the rest of the e-discovery market is scrambling because many vendors will simply be unable to compete … which means they need to find new business. Cynic that I am, this move by e-discovery providers into compliance and information governance seems to me to be a way for the ediscovery hound dogs to jump into those slots in protection of their business. I am just cynical enough to believe this is mostly $$$ driven, and not from rational enlightenment: a desire to service clients, or grow technical knowledge.

Coming in Part 3:

 
the growing movement by corporations 
to bring e-discovery back in house
 
 

A number of years ago I had a chat with Nick Patience, one of the founders of 451 Research, about who the real long-term players, the “annuity players”, would be in e-discovery. He felt that although the (then) current crop of “usual suspects” plus some of the up-and-comers had longevity, the real future of ediscovery would be driven by companies like Google, IBM, Microsoft … and he even said Apple (he was the first chap to clue me in on their AI work). These companies had the expertise, they had the budgets, and they had the ability to combine big data know-how with ever-increasing analytics technology and expertise, and they knew how to make sense of massive amounts of historical and newly collected data for business operations, overall information management and e-discovery.

Yes, he said, the current crop of e-discovery companies will surely have better and better text analysis platforms and media analysis platforms (mobile phone data, photos, audio, etc.) that can ingest large volumes of unstructured text/media data.

But … in time … I think it will be the bigger players who will do the better job of identifying the entities and relationships within a data set, and who can store the resulting data in their knowledge bases. These are companies that cross multiple disciplines, with cross-functional teams of differing expertise. I have had the opportunity to spend time with members of the AI units and analytics units of Google, IBM and Microsoft. Next month I have been invited to meet with the Google digital investigations unit in Zurich. In all of these encounters you understand why they have a greater depth of information. Will they be able to execute? Yes, because they avoid the linear thinking of the “usual suspects” in the e-discovery space.

And listen, I’ll wait. With Google’s DeepMind now marching out its algorithm that can summarize documents and use machine learning to produce and connect coherent and accurate snippets of text from other documents in a database … well, sign me up!

In Part 3 we are going to discuss some of this analytics technology and expertise and why it is one reason e-discovery is moving back in-house.


 
