Training AI Models for M&A Use Cases

Nevin Raj
Co-Founder, COO at Grata

Artificial intelligence (AI) in M&A is not just a trend. Recent improvements in the technology have transformed M&A and the way deals are optimized. Integrating AI into M&A increases efficiency, improves data organization, and enhances decision-making. It opens up opportunities across the industry for analyzing documents, building comparative tables, and assessing buyer quality when exploring potential acquisitions.

This blog reviews the history of large language models and artificial intelligence and outlines specific prompting techniques for applying them in M&A.

What is a Large Language Model? 

A large language model (LLM) is a deep neural network trained to produce general-purpose language. LLMs represent an impressive stride in how computers understand and use language. They are trained on large amounts of text and data, learning to predict what comes next in a sentence. The best models can write clear, coherent paragraphs and respond in a way that feels natural.

When applied to M&A, LLMs and GPTs are not just about technology and coding; they provide strategic insights that can be integrated into a deal cycle. Adoption of these models is increasing because of their proven efficiency in producing market synergy analysis and in processing and analyzing documents.

History of Large Language Models

Understanding the history behind LLMs, where they come from, and what they represent today is vital to using them effectively.

AI was created to imitate human-like intelligence. Current technologies build on earlier models like bag-of-words (BoW), TF-IDF, and BM25. Like many technologies, these early innovations were not as adept as what we see today. They worked by counting words in articles, filtering out common terms like “the,” “of,” and “and” to surface keywords. Systems built on this approach could run keyword searches, but their capabilities were limited.

Then, around 2017 and 2018, natural language processing (NLP) evolved further, and the technologies around embeddings and transformers developed rapidly. Embedding models such as Word2Vec captured the meaning of words and their associations far more competently than the original bag-of-words models, and transformers built on those embeddings to make sense of full sentences.

Another massive transformation in NLP came about in 2018 with the LLM called BERT, which was followed by GPT, Claude, Llama, and more. These new LLMs are well-read models, trained on many books and online articles, that can write coherent prose.

How to Train a Large Language Model

Like a child who expands their knowledge over time, LLMs develop their ability to comprehend and generate text through time, practice, and training. Different LLMs can assimilate and be trained on different features and parameters. For example, GPT-4 has an expansive knowledge base and can handle many dimensions of a task.

You must train the model using historical data to provide context, and introduce personas and target audiences to tailor its responses to your prompting. In this article, we review four of many prompting tips and tricks; access the M&A Science AI M&A Prompting Guide here for additional prompting tips.

In the second session of M&A Science’s AI M&A Teardown with Kison Patel, I walk through how to train a model with examples. Continue reading for a step-by-step overview, or watch the full on-demand webinar.

Understanding the Persona

One of the most critical parts of prompting is understanding the persona you want the LLM to assume and the audience it is creating content for. Directing an LLM to adopt a specific persona and defining the audience will tailor the tone and style of its responses.

Including phrases like “imagine you are” or “act as a” signals to the model that it needs to respond in a way that is relevant to you. The responses you get with this persona set will differ from a general response.

You must provide the model with the persona to get responses in the tone, style, language, and prose you want to hear.
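
As a minimal sketch of what this looks like in practice, the snippet below sets a persona using the OpenAI Python SDK. The persona wording, the prompt, and the model choice are illustrative assumptions rather than a prescribed setup.

```python
# A minimal persona-prompting sketch using the OpenAI Python SDK.
# The persona wording, prompt, and model choice are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        # "Act as a" / "imagine you are" language sets the persona the model should assume.
        {
            "role": "system",
            "content": "Act as a corporate development lead at a mid-market industrial company.",
        },
        {
            "role": "user",
            "content": "Summarize the strategic rationale for acquiring a niche sensor manufacturer.",
        },
    ],
)
print(response.choices[0].message.content)
```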

Understanding Your Audience 

In addition to training a model to speak within a persona, you must train the model to understand who it is talking to. Is it talking to a beginner? A seasoned M&A practitioner? A board member? A CEO? 

An understanding of the audience will shape the complexity and depth of the response provided by the LLM. To tailor the response, clearly define the intended audience in the prompt.
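
For instance, here is a hedged sketch of the same request framed for two different audiences. Only the audience line changes, and either prompt can be sent with the same call pattern shown in the persona example above; the task details are hypothetical.

```python
# Same task, two audiences. Only the audience framing changes; either prompt
# can be sent exactly like the persona example above.
base_task = "Explain the key risks uncovered in financial due diligence for this target."

for_beginner = (
    f"{base_task} The audience is an analyst new to M&A, so define any jargon "
    "and keep the explanation to three short paragraphs."
)

for_board = (
    f"{base_task} The audience is the board of directors, so focus on material "
    "risks, quantify impact where possible, and keep it to five bullet points."
)
```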

Problem Structuring 

When training an LLM for M&A, you can’t just casually ask it to spin up a full diligence plan or create an investment memo. Just like you would with someone new to M&A, you must break down what you need into smaller, simpler parts. Structuring the prompt and the specific outputs requested will help the model create a more tailored response.

To best structure the prompt, complete sentences are recommended. Additionally, separate the sentences with “first,” “second,” “third,” “next,” or “last.” The LLM will pick up on the periods and these markers and will give you a response in separate paragraphs, knowing you want complete and individual answers. Some models provide a more blended answer if you use commas.

Breaking work into separate, smaller tasks yields higher-quality answers, as the model will not merge multiple concepts into one. Additionally, be explicit about how you would like your answer back. Sometimes it needs to be in longer form; other times it needs to be a concise bulleted list. Tell the LLM to provide the type of answer you need.
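
As a sketch of this kind of structure, the prompt below breaks a diligence request into ordered steps with complete sentences and an explicit output format. The target and diligence details are hypothetical.

```python
# A structured prompt: complete sentences, ordinal markers, and an explicit
# output format. The target and diligence details are hypothetical.
structured_prompt = (
    "First, list the top five commercial risks for a target that sells "
    "industrial sensors to automotive OEMs. "
    "Second, for each risk, explain in one sentence why it matters to a buyer. "
    "Next, suggest one diligence question to ask management about each risk. "
    "Last, return the output as a concise bulleted list grouped by risk."
)
```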

Referencing Text

When training an intern or new college grad, you would give them an example of the task or assignment. The same goes for an LLM. In GPT-4 and other language models, you can upload a PDF or document into the system and ask the model to generate a response “based on the text provided” or “based on the information in this PDF.” Note that not all GPTs have this capability; free models tend to offer fewer capabilities.

Never upload confidential agreements to a public model. Public agreements are a great way to test and build the model, but private, confidential agreements should never be uploaded, just as they wouldn’t be sent out to the public.
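
Below is a minimal sketch of grounding a prompt in referenced text. It assumes the text of a public, non-confidential agreement has already been extracted into a local file; the file name and the instructions are illustrative.

```python
# Grounding a prompt in provided text. Assumes the agreement has already been
# extracted to a local file; use public documents only, never confidential ones.
from pathlib import Path

agreement_text = Path("public_purchase_agreement.txt").read_text()  # hypothetical file

grounded_prompt = (
    "Based only on the text provided below, summarize the indemnification "
    "provisions and note any caps or baskets. If something is not in the text, "
    "say so rather than guessing.\n\n"
    f"--- BEGIN TEXT ---\n{agreement_text}\n--- END TEXT ---"
)
```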

What is an AI Hallucination vs. Inference? 

We can’t ignore one of the biggest issues with LLMs: hallucinations.

AI models are trained on large datasets and information imports, and they generate responses based on the patterns found in that data. When an untrained LLM doesn’t have access to real-time data applicable to its prompt, it will hallucinate, spitting out a prediction of what seems to be the most coherent response.

An inference is when you prompt an LLM to judge data points. Using words like “could,” “might,” or “will” engages the LLM in evaluating potential outcomes based on hypothetical scenarios. As a result, the LLM provides an inference or analysis even though it may not be directly grounded in the source data.

To avoid hallucinations or inferences, utilize Chain-of-Verification (CoVe) prompting styles. 

This prompting style grounds the LLM in truth and accuracy, not hypotheticals. It tells the LLM to verify the accuracy of each step and cross-reference critical metrics. This fact-based prompting style forces the LLM to provide the sources it is generating from.
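
One way to approximate CoVe in a single prompt is sketched below: the model is asked to draft an answer, list verification questions, answer them against the provided material, and only then give a final, sourced response. The wording is an illustrative approximation of the technique, not an official template.

```python
# A rough Chain-of-Verification (CoVe) style prompt. The wording is an
# illustrative approximation of the technique, not an official template.
financial_summary = "..."  # paste or load the non-confidential summary text here

cove_prompt = (
    "Using only the financial summary provided below, answer this question: "
    "did the target's gross margin improve year over year?\n"
    "First, draft an initial answer. "
    "Second, list the verification questions needed to confirm it, such as "
    "which figures and fiscal years were relied on. "
    "Third, answer each verification question by quoting the relevant line "
    "from the summary. "
    "Last, give a final answer that cites those quotes, and say 'not stated' "
    "for anything the summary does not support.\n\n"
    f"--- FINANCIAL SUMMARY ---\n{financial_summary}"
)
```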

Accuracy is key when conducting a deal. AI hallucinations will hinder accuracy, causing problems in your deal later on. AI should not be the be-all and end-all of your conclusions; you should always rely on human review throughout a deal process.

Additionally, purpose-built solutions that combine AI-based search with proprietary data are required to push AI forward within the world of M&A. Tools like Grata’s AI Analyst Ana or DealRoom AI were built specifically to increase efficiency in a deal cycle.

DealRoom announced its first embedded AI functionality: AI-Powered Document Analysis. This beta functionality drives M&A due diligence efficiency, providing M&A teams, financial analysts, legal teams, and other functional leads with a powerful solution to streamline their workflows and enhance decision-making.

Conclusion 

Although AI has made great strides in algorithms and prediction models, it is still in its infancy in certain areas of the deal cycle. AI has great potential to synthesize data and assumptions for models, analyze earnings announcements, and manage the sensitivity of information in data rooms.

With AI enhancing decision-making through deep data analysis and insights, the M&A industry is on the brink of a significant paradigm shift. Keep the prompting techniques above in mind to get the most accurate responses and streamline your M&A activity.
