Wall Street’s “Cautious Shift”: Why Quants Aren’t Using Generative AI to Invest (Bloomberg Survey)

By ADMIN

Generative AI is everywhere—from chatbots to office tools—but it still hasn’t fully convinced one of finance’s most data-driven groups: quantitative investors (quants). According to a new Bloomberg survey highlighted in a recent report, more than half of surveyed quants say they aren’t using generative AI for investing yet.

That might sound surprising at first. Quants are famous for using computers, statistics, and machine learning to search for patterns in markets. So why would they hesitate now—right when AI is getting so much attention? The short answer is: the stakes are high, the data needs to be “perfect,” and generative AI still raises tough questions about reliability, repeatability, and control.

What the Bloomberg Survey Found

Bloomberg interviewed 151 quant investors over several months—between April and November of the prior year—to understand how they are (or aren’t) using generative AI in their investment research process. The results showed that 54% do not incorporate generative AI into their workflows.

This doesn’t mean quants are anti-technology. In fact, many quant firms have used machine learning for years. What the survey suggests is that generative AI—the kind of AI that can write text, summarize documents, and generate ideas—has not yet become a standard tool for making trading decisions or building market-beating strategies.

Who Are “Quants,” and Why Their Standards Are So Strict

Quant investors manage strategies that rely on models—often complex systems that process huge volumes of market data. In many firms, these models can trigger trades automatically, in milliseconds, and at massive scale. That’s powerful, but it also means mistakes can be extremely costly.

Think of a quant strategy like an aircraft autopilot: it can work incredibly well, but only if every sensor is calibrated and the system behaves predictably. If something “hallucinates,” breaks silently, or can’t explain why it made a decision, that can be unacceptable in a high-risk environment where repeatability and auditability matter.

Why Generative AI Still Has Trouble Winning Quants Over

The recent reporting lines up with what many quant professionals have been saying publicly: generative AI is interesting, but not proven as a consistent source of “alpha”—the finance term for returns above the market.

1) “Alpha” Is Hard, and Generative AI Isn’t a Magic Button

Markets are competitive. If a simple, widely available tool could reliably beat the market, it wouldn’t stay a secret for long. Quants often believe that advantage comes from:

  • Unique data (signals other people don’t have)
  • Clean pipelines (data that is structured and tested)
  • Strong research discipline (repeatable experiments)
  • Execution quality (trading efficiently with low costs)

Generative AI may help with parts of research and productivity, but many quants remain unconvinced that it can consistently create robust signals that survive real-world trading costs and changing market regimes.

2) Data Formatting Is a Bigger Problem Than Most People Think

One of the clearest explanations in the report is also one of the least glamorous: the data has to be structured correctly. Bloomberg’s Angana Jacob, the company’s global head of research data, said quant environments require data to be cleaned and structured in very specific ways because of how complex their systems are—and because of how much capital could be affected if something goes wrong.

She also emphasized that quant research happens in a highly controlled setting, where models must be:

  • Explainable
  • Repeatable
  • Reliable across time and testing

In other words: it’s not enough for an AI tool to sound smart. It has to be testable like a scientific instrument—again and again—without drifting into inconsistent behavior.
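
To make "repeatable" concrete, here is a minimal, hypothetical sketch (the toy signal and function name are invented for illustration, not taken from the report): the same research step is run twice with the same inputs and a pinned random seed, and the outputs are checked for being identical.

```python
import numpy as np

def compute_signal(prices: np.ndarray, seed: int = 42) -> np.ndarray:
    """Toy research step: a smoothed price 'signal' with seeded noise (illustrative only)."""
    rng = np.random.default_rng(seed)              # pinned seed -> deterministic noise
    window = 5
    kernel = np.ones(window) / window
    smoothed = np.convolve(prices, kernel, mode="valid")
    noise = rng.normal(0.0, 1e-6, size=smoothed.shape)
    return smoothed + noise

prices = np.linspace(100.0, 110.0, 50)             # synthetic price series

run_1 = compute_signal(prices)
run_2 = compute_signal(prices)

# Repeatability check: same inputs and same seed must yield identical output.
assert np.array_equal(run_1, run_2), "signal is not reproducible"
print("repeatable: identical output across runs")
```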

3) The “Unglamorous” Work Comes First

Jacob described the preparation needed to make datasets AI-ready as “foundational” and “unglamorous.” That’s a key point. A firm can’t simply plug messy data into an AI model and expect trustworthy outputs. For quants, the hard work often includes the following (a small code sketch follows the list):

  • Normalizing data formats across vendors
  • Removing errors, duplicates, and “survivorship bias”
  • Creating consistent timestamps and identifiers
  • Documenting data lineage (where it came from and how it changed)
  • Building guardrails and tests that detect when something breaks
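
As a minimal, hypothetical sketch of that unglamorous work (the column names, vendor labels, and prices below are invented for illustration), two vendor feeds are mapped onto one schema, duplicates are dropped, and a simple guardrail fails loudly if the vendors disagree:

```python
import pandas as pd

# Two vendors delivering the "same" prices in different shapes (invented example).
vendor_a = pd.DataFrame({
    "Ticker": ["AAPL", "AAPL", "MSFT"],                # note the duplicate AAPL row
    "Date": ["2024-01-02", "2024-01-02", "2024-01-02"],
    "Close": [185.64, 185.64, 374.58],
})
vendor_b = pd.DataFrame({
    "symbol": ["aapl", "msft"],
    "trade_date": ["02/01/2024", "02/01/2024"],        # day-first date format
    "close_px": [185.64, 374.58],
})

def normalize(df, colmap, dayfirst=False):
    """Map vendor-specific columns onto one schema with consistent types and identifiers."""
    out = df.rename(columns=colmap)[["ticker", "date", "close"]].copy()
    out["ticker"] = out["ticker"].str.upper()           # consistent identifiers
    out["date"] = pd.to_datetime(out["date"], dayfirst=dayfirst)  # consistent timestamps
    return out

clean = pd.concat([
    normalize(vendor_a, {"Ticker": "ticker", "Date": "date", "Close": "close"}),
    normalize(vendor_b, {"symbol": "ticker", "trade_date": "date", "close_px": "close"},
              dayfirst=True),
]).drop_duplicates()                                     # remove exact duplicates

# Simple guardrail: fail loudly if the same ticker/date pair disagrees on price.
conflicts = clean.groupby(["ticker", "date"])["close"].nunique()
assert (conflicts == 1).all(), "conflicting prices detected across vendors"
print(clean)
```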

Until that foundation is solid, many quants would rather move slowly than risk a model failure that could cause major losses. Jacob even framed this caution as a positive sign of diligence.

Why This Matches What Quants Have Been Saying Publicly

The report also notes that this survey result matches the tone from quant executives at industry events. For example, prior reporting described skepticism at a London-based quant conference about whether AI could really beat the market or add meaningful investing value right now.

In that same spirit, a UBS executive was cited as saying AI isn’t going to help win the “alpha war.” That phrase captures a core fear: if everyone gets similar AI tools, then the competitive advantage may shrink rather than grow.

So… Are Quants “Not Using AI” at All? Not Exactly.

It’s important to be precise about language. Many quant firms already use machine learning (ML). The survey and reporting focus on generative AI—tools that generate text, code, summaries, or synthetic content.

Quants may be more comfortable with ML models where inputs and outputs are tightly defined, performance can be measured clearly, and the model is trained for a specific numeric task (like classification or forecasting). Generative systems can be useful, but they can also produce outputs that are harder to validate.
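
A rough illustration of that comfort zone, using synthetic data that is purely illustrative: a conventional ML model has tightly defined numeric inputs and outputs and a single out-of-sample score, whereas there is no equally simple one-line check for whether a generated paragraph is "correct."

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic features and a binary label (e.g. up/down next period) -- illustrative only.
X = rng.normal(size=(1_000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Tightly defined task, tightly defined score: one number, comparable across runs.
print("out-of-sample accuracy:", accuracy_score(y_test, model.predict(X_test)))
```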

Where Generative AI Could Still Help Quants Today

Even if generative AI isn’t making the final “buy/sell” call, it can still be valuable in supporting roles, such as:

  • Summarizing research notes, filings, and transcripts
  • Organizing large text datasets for human review
  • Helping write code prototypes or tests (with supervision)
  • Searching internal knowledge (like “chat with your data” tools)
  • Drafting documentation and data dictionaries

In other words, many firms may treat generative AI as a productivity tool first—before trusting it with portfolio decisions.
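
As a toy stand-in for the retrieval step behind a "chat with your data" tool (the notes and scoring below are invented for illustration, and the generative step is deliberately left out), the searchable part can be built and tested on its own, with humans reviewing whatever is returned:

```python
import re
from collections import Counter

# Toy internal "knowledge base" (invented snippets, illustrative only).
notes = {
    "memo-001": "Momentum signal decayed after transaction costs in the 2023 backtest.",
    "memo-002": "Vendor B timestamps drift by one day around month-end.",
    "memo-003": "Survivorship bias found in the delisted-equity sample.",
}

def tokens(text: str) -> Counter:
    """Crude keyword tokenizer; a real system would use an index or embeddings."""
    return Counter(re.findall(r"[a-z0-9-]+", text.lower()))

def search(query: str, top_k: int = 2):
    """Rank notes by keyword overlap with the query."""
    q = tokens(query)
    ranked = sorted(
        notes.items(),
        key=lambda kv: sum(min(q[w], tokens(kv[1])[w]) for w in q),
        reverse=True,
    )
    return ranked[:top_k]

# Retrieval is testable on its own; a generative model, if used at all, would only
# summarize snippets that a human can check against the originals.
for doc_id, text in search("timestamp drift around month-end"):
    print(doc_id, "->", text)
```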

Bloomberg and Others Are Building Tools to Bridge the Gap

Bloomberg believes AI adoption in quant investing will be closely tied to data formatting and availability going forward. Jacob said her team is creating data products for quants that could increase AI adoption once the underlying data “catches up” to the ambition.

And Bloomberg isn’t alone. The report points to a startup called Carbon Arc, founded by former Point72 data executive Kirk McKeown, which is also focused on structuring datasets so they’re easier to ingest into AI models.

Why Data Vendors Matter More in the Generative AI Era

If you zoom out, this story is also about the growing importance of data infrastructure. In modern finance, “AI readiness” can depend less on flashy models and more on:

  • High-quality datasets with clear definitions
  • Standardized formats across asset classes
  • Metadata that describes limitations and coverage
  • Transparent provenance and update schedules

When that infrastructure improves, more firms may feel safe experimenting with generative AI in research workflows—especially for tasks where mistakes are caught before trades happen.
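
One lightweight way to picture that infrastructure (the fields and values below are invented for illustration, not a vendor schema) is a machine-readable "data dictionary" entry that travels with each dataset and records its coverage, limitations, and lineage:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetCard:
    """Minimal, hypothetical metadata record describing a dataset's provenance."""
    name: str
    source: str
    coverage: str                                        # asset classes and regions covered
    known_limitations: list[str] = field(default_factory=list)
    update_schedule: str = "daily"
    last_updated: date = date(2024, 1, 2)
    lineage: list[str] = field(default_factory=list)     # transformations applied so far

card = DatasetCard(
    name="equity_close_prices",
    source="vendor feeds A and B (normalized)",
    coverage="US large-cap equities, 2010-present",
    known_limitations=["no intraday data", "delistings backfilled from 2015"],
    lineage=["rename columns", "deduplicate", "convert dates to UTC"],
)
print(card)
```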

What This Means for the Future of AI in Investing

This doesn’t read like a story of rejection. It reads like a story of timing and risk management. Quants are not rushing because:

  • They need strong evidence that generative AI improves results
  • They require rigorous testing and controls
  • They depend on clean, structured, explainable data

But the interest is still there. Jacob suggested enthusiasm is high for what AI can do once the data is ready—implying adoption could accelerate later, especially as vendors and internal teams standardize data pipelines.

Two Likely Paths Forward

Path A: “Assist first, decide later.” Generative AI becomes common for productivity (summaries, code help, internal search) before it is trusted for core portfolio decisions.

Path B: “Hybrid systems.” Firms combine strict numeric models with carefully controlled generative components—where the generative part is boxed into limited tasks, heavily tested, and monitored.
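
A minimal sketch of the "boxed in" idea in Path B (the contract, the stubbed generator, and the field names are invented for illustration): whatever a generative component produces is validated against a narrow contract before anything downstream is allowed to see it.

```python
import json

ALLOWED_FIELDS = {"ticker", "sentiment", "confidence"}
ALLOWED_SENTIMENT = {"positive", "neutral", "negative"}

def fake_generative_component(text: str) -> str:
    """Stand-in for a generative model call; returns a JSON string (illustrative)."""
    return json.dumps({"ticker": "AAPL", "sentiment": "neutral", "confidence": 0.62})

def validate(raw: str) -> dict:
    """Reject anything outside the narrow contract instead of passing it downstream."""
    parsed = json.loads(raw)
    if set(parsed) != ALLOWED_FIELDS:
        raise ValueError(f"unexpected fields: {set(parsed) ^ ALLOWED_FIELDS}")
    if parsed["sentiment"] not in ALLOWED_SENTIMENT:
        raise ValueError(f"unexpected sentiment: {parsed['sentiment']}")
    if not 0.0 <= parsed["confidence"] <= 1.0:
        raise ValueError("confidence out of range")
    return parsed

# The generative step is limited to one structured task; the numeric strategy only
# ever sees validated output, and failures are surfaced for monitoring instead.
result = validate(fake_generative_component("Apple shares little changed after earnings."))
print(result)
```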

Practical Takeaways for Readers (Even If You’re Not a Quant)

If you’re a student, investor, or someone curious about AI, this story offers a useful lesson: real-world adoption is often slower in high-stakes environments. The more money, safety, or reputation is on the line, the more organizations demand:

  • Proof, not hype
  • Testing, not vibes
  • Repeatability, not one-off wins

So when you hear “AI is changing everything,” remember: in industries like finance, change tends to arrive in phases—starting with low-risk assistance, then moving toward deeper integration once the foundations are reliable.

Frequently Asked Questions (FAQs)

1) What did the Bloomberg survey say about quants and generative AI?

The survey found that 54% of surveyed quants do not use generative AI for investing, based on interviews with 151 quant investors conducted between April and November of the prior year.

2) Are quants against AI in general?

No. Many quant firms have used machine learning for years. The hesitation is more about generative AI in investment workflows—especially where outputs must be highly testable and dependable.

3) What is the biggest reason quants are moving slowly?

A major reason is data structure and formatting. Quants need clean, controlled, and explainable data for complex systems—because errors can be extremely costly.

4) What does “explainable and repeatable” mean in quant research?

It means a model’s behavior should be understandable and consistent: if you run the same test again, you should get the same result, and you should be able to justify why the model made a decision. Bloomberg’s Angana Jacob emphasized these needs in quant environments.

5) Is generative AI useless in investing?

Not necessarily. It may be useful for support tasks like summarizing documents, organizing research, drafting code, or improving internal search—especially when humans remain in control of final decisions.

6) Who is trying to solve the “data problem” for AI in finance?

Bloomberg is building data products aimed at quant workflows, and the report also mentions Carbon Arc, founded by former Point72 executive Kirk McKeown, focusing on structuring datasets for easier AI ingestion.

7) Where can I read more background on this topic?

You can explore broader market coverage and AI-in-finance discussions via reputable financial news outlets and data providers (for example, Bloomberg’s markets coverage).

Conclusion: Caution Now, Acceleration Later?

The headline takeaway is simple: quants are not rushing to use generative AI for investing. The Bloomberg survey suggests most are still on the sidelines, and the reasons are practical—data quality, model control, explainability, and the high cost of mistakes.

But the deeper message is even more interesting: this looks less like a rejection and more like a careful build-up. If vendors and firms can get the “boring” foundations right—structured data, clear workflows, strong controls—then generative AI may eventually become a standard part of the quant toolkit. Until then, the smartest players may keep doing what they do best: test everything, trust evidence, and avoid hype-driven shortcuts.
