Evident’s Mousavizadeh on Davos signals for “AI realists”

Evident CEO and co-founder Alexandra Mousavizadeh was in Davos, where AI was front and center of every session. In emailed newsletter commentary, she shares her major takeaways:

As we know, the AI crowd tends to divide into two familiar (and, frankly, boring) camps: the AI Pessimists versus the AI Optimists, ping-ponging back and forth about the promise and peril of this technology.

I found the most interesting place for AI in the Alps was among those I’d call the AI Realists. These are the people who are doing the hard work of building, testing, and iterating on the technology itself and figuring out its business adaptations. There’s no time for the hype; that’s so 2023. Yes, we know it’ll revolutionize everything. The issue in 2024 is how it’ll be put to use. Implementation will be slow, arduous, and costly work (or “uncomfortable” as Sam Altman put it).

The AI Realists occupy the scarce middle ground between extreme outcomes. After all, a steeper climb to our collective destination will likely offset some of the negatives that compete for headline news.

Outcomes, outcomes, outcomes… How do we get to AI outcomes, fast?

If 2023 was the "Year of AI," then 2024 could be the year reality bites. At our November Symposium, we saw the harsh reality that we could be six, 12, or 18 months away from getting some GenAI use cases into production – and perhaps even longer from true ROI. I heard this repeatedly in Davos too.

Banks are under increasing pressure to demonstrate and report tangible returns from continuing AI implementation. If they can't, a new "AI Winter" might be on the horizon. But unlike the AI Winters of old – fallow periods of funding and interest in AI research (see 1974-1980, 1987-1993) – this one will be defined by the gap between the lofty expectations set by leadership, investors, and industry pundits . . . versus the practicalities of scaling promising PoCs into robust infrastructure and outcomes.

The threat of a "cooling down" comes from unexpected obstacles that might slow momentum: regulatory curveballs, defection of skilled talent, pushback on data/privacy concerns, and, inevitably, the issues of cost and reliability that follow in the wake of scaling any tech application. Conversations at Davos reinforced the need to critically assess an institution's over-dependence on any one LLM platform and to weigh the benefits and risks of open-source versus closed models. Both due diligence exercises could easily extend testing and refinement timelines. As a community, we need to come up with answers – and quickly.

Microsoft is leading, but isn’t the only game in town. What does a future-proofed AI partnership strategy look like?

One of my highlights this week was the question that The Economist's Editor-in-Chief Zanny Minton Beddoes posed while moderating a session with Sam Altman and Satya Nadella, in their first joint appearance since the recent drama at OpenAI. Given that Microsoft's commercial arrangement with OpenAI will end when OpenAI reaches AGI, who gets to define AGI and what happens next (with its customers)? Sam's response: "I don't think anybody knows anymore what AGI means."

Huh… Granted, everyone is working towards AGI, yet no one can define it – but is the "we'll know it when we see it" standard sufficient to build trust and assuage sceptics? Moreover, what does this mean for its customers? Is the OpenAI/Microsoft partnership stable enough to rely on in the long run? If anything, the deflection adds to the laundry list of unknowns cited above.

AI investment at scale requires buy-in and stalwart support from top leadership within a bank (up to and including the CEO). These leaders (a) are risk-averse and (b) have a fiduciary responsibility to shareholders. If your AI platform of choice, infrastructure provider, and implementation partner each take a laissez-faire approach to long-term planning, C-suite confidence in the bank's overall AI strategy suffers.

Banks that reap the benefits from AI investments will be those institutions that move rapidly from deploying AI in low-risk, back-office functions → to enabling generative co-pilots for client-facing personnel → to fast-tracking automated code generation tools → to eventually, implementing foundational models that extract the most value from proprietary data assets. Consequently, banks that address questions of risk, lock-in, portability, scalability, contingency planning, etc. upfront will be more effective as they graduate to each new rung of the ladder in terms of sophistication, risk, and most importantly, reward. It’s not about the fast start – it’s about the fastest progression once you start.

Now that multiple LLM providers are coming to market with comparable performance, banks have options for the first time. As a result, the additional steps needed to explore and validate which provider(s) offer the best solution may slow things down in 2024.

Macro(n) point: What's happening in France? Do regional AI ecosystems widen the gap between leaders and laggards?

The hot wars in Ukraine and the Middle East – and potential conflict over Taiwan – hung heavy over Davos. Add to that the warning from the IMF and the BCBS that AI could compound uncertainty and instability if we don’t have ways to mitigate shared risks. This convergence of kinetic conflicts and a regional arms race in AI technology is . . . not great.

The AI ecosystem needs diversity and redundancy to benefit business stakeholders. To that end, a recent bright spot has been the sudden and impressive debut of Mistral – putting Europe on the map with regard to LLM development (alongside the United States and China) in just six months. Note: the UK remains absent from the LLM development landscape, representing a missed opportunity to exert a leadership role in the region, especially given its wealth of talent and capital.

What the Mistral story tells us is that with the right ecosystem – namely, a supportive policy environment, accessible talent, and sufficient funding – frontier AI development can take place anywhere. What are the knock-on effects of regional clusters of AI excellence? How will this show up in the Evident AI Index rankings of AI maturity by country?

The greatest risk in 2024 is not Marc Benioff's "Hiroshima moment" for AI – but insufficient answers to the open-ended questions highlighted above. Without clear answers, we risk discouraging businesses from taking the risks required to implement AI at scale. Any bank that stalls early in the AI race may never get the chance to catch up.

In short, AI Realists have precious little time to waste…
