Market Ethos
February 24, 2025
It’s not what you say – it’s how you ask
Artificial intelligence (AI) has clearly been a dominant market, economic and workflow theme over the past few years following the proliferation of large language models (LLMs). Many have started to use AI tools in their day-to-day work; some have not. Some believe it will change our world; others believe it is simply another technology tool to help us be more productive, perhaps with a financial bubble along the way. This Ethos is not about how much compute we need or how we will power those data centers, it is about using AI.

We are not AI super users, but we do use it across various areas. Years before these LLMs hit the public’s consciousness, we started dabbling in machine learning related to our more quantitative research and management. Now with the continued advancement of LLMs, the use cases of AI have really increased.
Summarization is a big one. AI-generated summaries of earnings calls are very useful and time saving, with limited loss of content. Report summaries are also useful. Those great 100+ page analyst reports are the best, but sometimes a condensed version lets us get through more of them. For the record, we still read some from cover to cover (in case any of the analyst authors of such reports are reading this Ethos).
Upload a report or reports into ChatGPT and ask it to summarize. Or better yet, ask it more specific questions, such as what management’s biggest concern is, or what the risk to margins might be. By extracting information across various sources, it can uncover things that may not have been on your radar.
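For readers who prefer the API route over the chat window, a minimal sketch of this workflow looks like the following. The helper only assembles the question-plus-report prompt; the prompt wording, the placeholder report text and the model name are all illustrative assumptions, and actually sending the request (shown commented out) requires an API client and key.

```python
# Sketch: pair a report with a specific question rather than a generic
# "summarize this" -- targeted questions tend to surface more useful answers.

def build_report_prompt(report_text: str, question: str) -> list[dict]:
    """Build chat messages that ground the answer in the supplied report."""
    return [
        {"role": "system",
         "content": "You are an equity research assistant. Answer only from the provided report."},
        {"role": "user",
         "content": f"Report:\n{report_text}\n\nQuestion: {question}"},
    ]

messages = build_report_prompt(
    report_text="(full text of a 100+ page analyst report)",
    question="What is management's biggest concern, and what is the risk to margins?",
)

# Illustrative only -- sending the request needs an API key:
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
```

The same message-building step works with any chat-style LLM; only the final (commented) call would change.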
Enrichment is quickly becoming a valuable output from LLMs. Have you ever received a generic email trying to market or sell you something? Maybe even from an asset management firm, with a really great fund or ETF? The low open rate on such communications leaves much to be desired. Using AI to create much more customized communications, incorporating specifics about the client or recipient, can greatly improve readership. Even using these tools to create communication content can save time and improve quality.
Search is quickly becoming more user friendly with LLMs. Everyone is familiar with a traditional search on a given topic, such as Google: you receive a list of potentially applicable links to content on your inquiry. Sure, you have to scroll past the ‘sponsored’ links companies have paid to place at the top, then spend time sifting through the rest to find what you are looking for. LLMs such as ChatGPT provide a much more succinct output that is often much more on point. We use search engines much less now and encourage readers to try an LLM instead, if you have not already.
Many tasks such as summarizing market news, analyzing data, or drafting client communications can now be prepared using LLMs. Even the title of this report was generated using an LLM. Pretty good, right? While it is true that AI will save us time by making information more accessible, relying too much on it might weaken our ability to think critically, mainly because it is too easy and too compelling.
Traditional search engines are more time consuming, but you also enjoy better clarity on the source. For instance, search for energy demand in 2025 and you will see a number of links. Some are from reputable organizations such as the International Energy Agency (IEA) or British Petroleum. Other links are from sources that are, let’s say, more suspect. Like a blogger in their basement (exaggerating for effect). The reader can decide which sources to put more credence behind, or read them all and use that important critical thinking skill to come to their own conclusion.
Ask an LLM a similar question and the output is on point, well constructed and easy to read. Sources may be cited, but because the answer is an amalgamation of many sources, it is challenging to differentiate content from a more trusted source from content you may view as less objective.
Prompt paradox – The answer you receive from an LLM is based on a slice of available information and on the prompts you feed into it. How you ask or frame the question shapes the answer, which means it can just as easily reinforce existing biases as uncover new insights. If you word the prompt in a more bullish tone, you will likely get the output you were hoping for, not necessarily the right one or one that encourages critical thinking. Learning to use LLMs as a helper rather than a substitute for thinking will be vital. Otherwise, you’re just getting a high tech “yes-man”.
The following is an oversimplified and exaggerated example: asking a popular LLM why someone should buy or sell gold.
One output is rather positive on gold while the other is rather cautious. The contrast in these two prompts is deliberate, but the real challenge arises when even minor changes in the wording of a prompt have a significant impact on the response. The user may not be aware of, or even understand, why changing a few words produces a different output. The difference in verdict highlights how much results can depend on the written prompt; just ask a prompt engineer, the newest professional role in AI. The ability to precisely articulate what you’re looking for in a prompt is critical.
The wording of your prompts can fuel confirmation bias, and since it is not entirely clear how an LLM moves from prompt to response, that bias may be hard to detect. Too much confirmation bias hurts critical thinking.
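One simple guard against this, sketched below, is to build prompts that force the model to argue both sides before reaching any verdict. The wording is purely illustrative, an assumption on our part rather than a prescribed template:

```python
# Sketch: the same topic framed three ways. An LLM will often mirror the
# slant of the prompt, so only the balanced version resists the "yes-man" effect.

def balanced_prompt(topic: str) -> str:
    """Ask for both sides explicitly before any verdict (illustrative wording)."""
    return (f"List the three strongest arguments FOR and the three strongest "
            f"arguments AGAINST {topic}, then give a balanced verdict.")

bullish  = "Why is gold a great buy right now?"          # invites a positive answer
bearish  = "Why should investors avoid gold right now?"  # invites a cautious answer
balanced = balanced_prompt("buying gold")                # invites both sides
```

The point is not the exact phrasing but the habit: if the prompt demands the counterargument, the output is harder to mistake for confirmation.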
Remember in school when you were asked to write a book report or a report on a specific topic? Nobody liked it: write 2,000 words on topic X. Now people have LLMs to create it easily. What might not have been apparent at the time was that the assignment was actually designed to foster critical thinking. You were forced to research, determine what is important and what is not, and formulate your report.
One of the risks of AI and LLMs is a further deterioration of critical thinking. The counterweight is the increased ease of gathering information on a topic; you just have to be careful how you ask.
Final thoughts
The benefits of AI are very impressive, and we are still in the early days of application development. Sure, there are risks such as higher confirmation bias, but the benefits are so much greater. Regardless of your application choice, we encourage you to explore. It clearly adds to productivity.
Just for fun, we ran this report through a preferred AI detector (it uses AI to detect AI). About 12% of the text appears to be AI generated, roughly the same amount of content as in the side-by-side gold comparison; the rest is original content, in case you were getting worried.