In today’s age of information, there’s too much to read. There simply aren’t enough hours in the day to absorb the spectrum of news available on a given company or sector. It’s nearly impossible to stay up-to-date, and it’s difficult enough just to identify what is (or should be) the most impactful news. As a result, it’s all too common to make key investment or corporate decisions based on an incomplete viewpoint.
At Causality Link, our mission is to solve this problem, delivering an AI-powered research platform that leverages the collective intelligence of tens of thousands of news articles each day and tracks the crowd consensus to measure the perception of companies, industries and macroeconomic indicators. And while we’ve been at it since 2016, we’re always innovating to make our product even better.
Case in point: the latest version of our platform features new functionality powered by a large language model (LLM) that makes it easier than ever to grasp the impact of the news. With this update, our platform’s findings are distilled into straightforward, plainspoken summaries that let non-technical personnel quickly understand the state of play and decide whether a topic deserves additional attention or can be set aside with confidence.
LLMs are a hot topic right now – but there’s nothing fad-like about how we have incorporated them into our platform. Generative AI tools like ChatGPT can be useful and fun, but anyone who’s used them understands that they can miss the point or even hallucinate information entirely. We firmly believe that high-stakes strategic decisions must be informed by AI that is explainable, and that a black box cannot be trusted to arbitrarily rank and select what information should be shared with its audience. To maintain explainability, we’ve augmented our LLM with the right citations from our data lake, based on analytical queries we’ve been developing for years. This effectively creates the guardrails necessary to ensure quality control, coherence and impact. To learn how and why, read on.
Causality Link + Generative AI = Analytics Controlled Narrative
As stated above, our platform already reads tens of thousands of articles each day. In theory, an LLM could be applied to these pages upon pages of text, assess their contents and produce narrative summaries based on them. Almost instantly, users would receive relevant intelligence in an easily digestible format. Simple enough, right?
Unfortunately, no. Left to their own devices, LLMs are prone to draw conclusions that are irrelevant or even completely incorrect, especially in shorter formats. Summaries can be highly generic or random in the topics they select. You might be looking for a nuanced analysis of sales trends for a specific product line, but if the LLM wasn’t trained on the time period you’re interested in, and it wasn’t handed the answer directly, it’s likely to invent a response that’s “similar” but effectively made up. In other words, while LLMs may be able to read the story, they are often ill-equipped to truly understand the narrative, and they are completely unable to track and aggregate collective intelligence.
A better approach is to guide these LLMs by setting the context for them – the purpose of “Retrieval-Augmented Generation” (RAG) – with one key difference: we replace the embeddings-based statistical-distance retrieval model with our symbolic, analytics-driven retrieval model. For any given company, product, sector or industry, our platform understands the most relevant indicators or events at play, their direction in terms of “getting better” or “getting worse,” and the attention they are commanding as measured by the quantity of citations. We’ve been tracking these metrics every day for the past ten years. To complement these signals, we also identify cause-and-effect relationships that track what is really motivating consumers, moving markets and more. Once a user has identified their interests, our methodology sifts through our analytics to find faint signals, then pinpoints the specific examples in the text that best represent each trend, creating evidence-based context.
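To make the retrieval step concrete, here is a minimal sketch of analytics-controlled context building in Python. Everything here is a hypothetical illustration – the `IndicatorSignal` schema, the ranking by citation count and the `build_llm_context` function are stand-ins for Causality Link’s actual analytics, not its real API.

```python
from dataclasses import dataclass, field


@dataclass
class IndicatorSignal:
    """One tracked indicator for a company or sector (hypothetical schema)."""
    name: str               # e.g. "unit sales", "input costs"
    direction: float        # aggregate trend: positive = improving, negative = worsening
    citations: list[str] = field(default_factory=list)  # supporting sentences from articles


def build_llm_context(signals: list[IndicatorSignal],
                      top_k: int = 3,
                      quotes_per_signal: int = 2) -> str:
    """Symbolic retrieval step: rank indicators by attention (citation count),
    then hand the LLM only the top signals plus verbatim evidence sentences,
    instead of letting it roam over raw text."""
    ranked = sorted(signals, key=lambda s: len(s.citations), reverse=True)
    lines = []
    for sig in ranked[:top_k]:
        trend = "improving" if sig.direction > 0 else "worsening"
        lines.append(f"- {sig.name} ({trend}, {len(sig.citations)} citations)")
        for quote in sig.citations[:quotes_per_signal]:
            lines.append(f'    evidence: "{quote}"')
    return "\n".join(lines)
```

The prompt handed to the LLM would then contain only this ranked, evidence-backed context, which is what creates the guardrails: the model summarizes signals it was given, with citations attached, rather than recalling or inventing facts on its own.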
Then – and only then – is the LLM equipped to synthesize the most important information and produce summaries that reflect the most essential, impactful intelligence for strategic decision-making. Instead of pulling from the text at random, it aligns positive and negative arguments with overall sentiment and connects text with statistics. The resulting summaries are dramatically more truthful and balanced.
Think of our platform as an accelerated version of a large, high-performance team of analysts. First, it reads every relevant news story in the world and extracts the most important details. Next, it identifies the collective point of view – determining where there is disagreement, how many people are talking about it, how sentiment is changing over time and more – and provides a wide range of statistics to represent these findings. Now, our new update has added a third step to that process: distilling these numbers and charts into actionable written summaries.
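The second step above – identifying the collective point of view – can be sketched as a simple aggregation over per-article extractions. The shapes below are hypothetical stand-ins for the platform’s real analytics: each mention carries a sentiment score, and `aggregate_crowd_view` reduces the crowd to volume, consensus, disagreement and trend statistics.

```python
from statistics import mean


def aggregate_crowd_view(mentions: list[dict]) -> dict:
    """Turn per-article extractions into crowd-level statistics (illustrative only).
    Each mention is a dict like {"sentiment": float in [-1, 1], "day": int},
    with the list assumed sorted by day."""
    sentiments = [m["sentiment"] for m in mentions]
    positive = sum(1 for s in sentiments if s > 0)
    negative = sum(1 for s in sentiments if s < 0)
    half = len(mentions) // 2
    early = mean(m["sentiment"] for m in mentions[:half]) if half else 0.0
    late = mean(m["sentiment"] for m in mentions[half:]) if len(mentions) > half else 0.0
    return {
        "volume": len(mentions),                          # how many are talking about it
        "consensus": mean(sentiments) if sentiments else 0.0,
        "disagreement": min(positive, negative) / max(len(mentions), 1),
        "trend": late - early,                            # is sentiment shifting over time?
    }
```

Statistics of this kind – not the raw articles – are what the third step distills into written summaries, which is why the text and the charts stay coherent.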
This is a powerful form of neurosymbolic AI: a way for computers to combine human-readable symbolic concepts with statistical neural networks that aren’t necessarily explainable. By joining these two technologies, our system can manipulate abstract concepts while staying grounded in the traceable extraction performed by the symbolic system, greatly reducing AI hallucinations. The result: findings gleaned from readable documents are refined using our system’s analytics and cause-and-effect relationships to paint a more complete, more accurate picture.
An Array of Use Cases
This new feature has powerful implications for any firm seeking to understand how it is viewed in the marketplace, how it stacks up against the competition and which factors could put its reputation at risk. ESG stakeholders, management consultants and innovation groups within banks and investment firms are all logical candidates, as these roles involve a heavy component of assessing public sentiment and translating it into optimized decisions.
Once these users identify what they want to track, we align their priorities with the relevant KPIs already in our system and provide intelligence on the precise areas they want, creating a robust dashboard of their goals and risks. The LLM-written summaries are based on these findings and informed by the overall sentiment, creating strong coherence between text and chart. To top it off, the summaries are interactive, enabling users to drill down, view attributed citations, read the source articles and learn more. It’s a simple, powerful workflow that maximizes the effectiveness of an emerging technology by tapping into our tried-and-true methodology.
Future Efficiencies in AI-Powered Market Research
As we continue to innovate in the AI and data sectors, methods like our “analytics-controlled narrative” emphasize the importance of marrying data with storytelling. It’s not just about numbers or words alone, but how the numbers reflect the story. This blend of art and science is a powerful example of how humans can maximize their use of AI in a time of rapid adoption and outsized promises.
Looking ahead, we will continue to take advantage of the latest in AI and machine learning technology to best serve our clients. Helping firms better understand the future through global perspectives and causal links remains our north star, and we’ll never stop innovating in pursuit of that goal. Stay tuned in the coming weeks as we reveal more technical details about how analytics-controlled narrative works. In the meantime, we invite you to sign up for a free trial that demonstrates these techniques in action.