— 8 min read

AI, particularly Generative AI, is a buzzy sector for a reason. Advances in AI could dramatically change how industries function, and private market investing is no exception.

We asked ChatGPT how AI will impact the future of M&A deal sourcing…and the conversation was riveting. But what was lacking? Specific examples and an understanding of its limitations.

So we're bringing in an expert to cover what was missing from our interview with a machine.

Lianne Longpre is a Senior Machine Learning Engineer at Grata, the leading deal sourcing platform. Powered by AI, Grata provides private market investors the data they need to find their next big deal.

Here is the interview.

How do you explain artificial intelligence?

Longpre: That's a broad question. Basically the big thing Grata has in common with what ChatGPT is doing is this concept of natural language understanding. So the idea is that we have all of this text out there that’s generated by humans.

The challenge is this: we're trying to make a computer truly understand what the language (the words on a webpage) is talking about, the true intent behind the words. When somebody asks an AI a question, the computer first needs to figure out the intention of the question before it can give a good answer.

There's always ambiguity in language, so we have to understand what a person is actually asking. To do that, we have to resolve the ambiguities of language on the query side.

And then, on the flip side, when we're looking at what to actually show in the response (for Grata, the response is search results), we have to take our proprietary data, truly understand it, and categorize it in a way that's meaningful for our clients.

That’s the root of natural language understanding and how I think about training artificial intelligence.
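The query-side piece of this can be made concrete with a toy sketch. This is not Grata's actual pipeline; the category lists and the `parse_query` helper are invented for illustration. The idea it shows is that resolving intent often means turning a free-text query into structured search filters:

```python
# Toy illustration only (not Grata's actual system): one simple form of
# "understanding the intent" of a query is mapping its words onto
# structured filters that a search backend can act on.

INDUSTRIES = {"healthcare", "software", "logistics"}
LOCATIONS = {"texas", "ohio", "california"}

def parse_query(query: str) -> dict[str, list[str]]:
    """Map query words onto known industry and location filters."""
    words = query.lower().split()
    return {
        "industries": [w for w in words if w in INDUSTRIES],
        "locations": [w for w in words if w in LOCATIONS],
    }

print(parse_query("healthcare software companies in Texas"))
# {'industries': ['healthcare', 'software'], 'locations': ['texas']}
```

A real system would use a statistical model rather than keyword sets, precisely because of the ambiguity Longpre describes, but the input/output shape is the same.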

What does AI mean for private market investors? How will it change their day-to-day tasks?

Longpre: The blue-skies version: it takes away so many of the menial tasks that people have to do.

One specific use case is one that Grata already solves for. Our clients are looking for companies that ChatGPT doesn't know about. They may try using Google or LinkedIn, but the first companies that appear are the best-known companies in the world. We know for a fact those are not the kinds of companies our clients are looking to invest in.

If you don't have the right tools, you have to go and manually look at millions of companies to find the companies that are good candidates. That would take a lot of people and a lot of time to get through.

Before Grata, finding these companies took a lot of time. But now that it's all aggregated by our technology, our clients can focus more of their time on outreach, market research, networking, or building their next thesis.

What are some of the current limitations of AI?

Longpre: So, limitations. I love talking about this because right now we have these rose-colored glasses of “AI is amazing and we are just going to solve all of our problems!” But we still need a human in the loop to analyze how we are using AI and how we are building it.

Even ChatGPT has errors. No existing AI is error-free. And to be fair, humans also make errors that we need to fix, right? The difference is that with human error it's a lot easier to understand the logic behind the mistake. We understand each other's logical paths better than we understand a computer's logical path.

When deep models make mistakes, it’s not obvious how they got to a wrong answer. 

With AI, the challenge is getting to a place where a computer can output an "I don't know" response. We are not there yet.
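One common way teams approximate an "I don't know" response today is confidence thresholding: abstain whenever the model's top predicted probability is too low. The sketch below is illustrative only (the `answer_or_abstain` helper, labels, and threshold are invented for this example), and it only papers over the problem Longpre describes, since a model can be confidently wrong:

```python
# Illustrative sketch: abstain when the model's best guess is not
# confident enough, instead of always returning a label.

def answer_or_abstain(probabilities: dict[str, float], threshold: float = 0.7) -> str:
    """Return the highest-probability label, or abstain if confidence is low."""
    label, confidence = max(probabilities.items(), key=lambda item: item[1])
    if confidence < threshold:
        return "I don't know"
    return label

# A confident prediction passes the threshold...
print(answer_or_abstain({"software": 0.92, "hardware": 0.08}))  # software
# ...while an ambiguous one triggers abstention.
print(answer_or_abstain({"software": 0.55, "hardware": 0.45}))  # I don't know
```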

That's, I guess, the whole premise of machine learning. By nature, generating a response is always a kind of guessing. It's making very educated guesses, throwing some darts, making its best guess based on the information it knows. Which is what humans do too. You've got your world of knowledge, and based on everything you've seen in your entire life, you make the best guess at what the right answer is for the next thing.

What are some of your ethical concerns around AI?

Longpre: When we're using AI in a product, it's automatically making decisions. We have to consider whether the AI was trained in a biased way, because it's going to propagate that bias into its outputs. And in certain industries it's going to be more critical to examine this than in others; I'm thinking specifically of hiring, healthcare, and insurance. Those are areas where you can immediately see how AI bias can lead to discrimination, but they're not the only ones.

What does a “human in the loop” look like at private equity firms, investment banks, and corporate development teams? 

Longpre: I'll start with my first example, ChatGPT, since it's the most commonly known and used right now. ChatGPT is powerful and can do so much; however, if you don't ask it the right question, it's not going to be a useful tool. If you've ever played around with it, you may have asked a question and not gotten a great response. You have to experiment and be creative about the words you use when you ask questions.

So right now, the human-in-the-loop component is the person who goes back and changes the prompt, trying to understand: why did the AI respond this way? How can I modify the input to get a better output?

A team needs someone who knows how to answer this crucial question: Could the output be better if I asked a better question? 

Will the human in the loop need to change as AI evolves? If so, what will that look like?

Longpre: Well, the answer is yes, but I'm not exactly sure what that's going to look like. I think we're going to start seeing a lot more people who need to understand AI, but not have to know how to build it. You'll need people who know how to use the tools.

Right now, it's the people who built the tools who know best how to use them. Eventually, that will change. AI will become a thing you need to know how to use, like Excel.

Everyone will have to have some level of understanding of AI.

How has AI / ML training evolved in 2023 from your perspective?

Longpre: From when I first started working in the field to now, how I spend my time on specific problems has changed because the technology has evolved. At the very beginning, I had to build my own model to do a task like entity recognition: given a text, pull out all of the named entities in it (e.g., the name of a company). At the start of my career, that wasn't really a solved problem, so I had to build my own model.

Now, there are so many pre-packaged models that already solve that problem. But again, that doesn't remove the need for me as a human-in-the-loop. It just removes the need to focus my time on building that model itself. I can now solve bigger problems with the time I save.
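To make the task itself concrete: modern pre-packaged named-entity recognizers (spaCy and Hugging Face pipelines are common examples) are statistical models, but the toy rule-based sketch below shows the input/output shape of the problem. The `extract_company_names` helper and its suffix list are invented for illustration and would miss most real company names:

```python
import re

# Toy sketch of entity recognition: given text, pull out company-like
# names. Real NER models learn this statistically instead of relying
# on a hand-written pattern like the one below.

SUFFIXES = r"(?:Inc\.|LLC|Corp\.|Ltd\.)"

def extract_company_names(text: str) -> list[str]:
    """Find capitalized phrases ending in a common corporate suffix."""
    pattern = rf"(?:[A-Z][\w&-]*\s)+{SUFFIXES}"
    return [match.strip() for match in re.findall(pattern, text)]

text = "Grata Inc. tracks private companies, unlike Acme Corp. or its rivals."
print(extract_company_names(text))  # ['Grata Inc.', 'Acme Corp.']
```

The gap between this brittle pattern and a pre-packaged statistical model is exactly the work that used to have to be built in-house.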

Grata is Powered by AI

Grata was built on ML and NLP in 2016. We invented private company search and continue to develop groundbreaking features and data acquisition technologies. Learn more about how Grata is unlocking the middle market for private market investors with the power of artificial intelligence.

Try Grata Today!
