
How to Pick a Manager When No One Knows What It Does

It is a truth universally acknowledged that an investment process should be understandable. Yet an AI manager cannot provide a true explanation of its investment process. That's a problem, writes columnist Angelo Calvello.

When I asked Tim McCusker, CIO at consulting firm NEPC, about active managers' growing use of artificial intelligence, he channeled the Roman historian Suetonius: "AI investing is not going away."

The evidence is on McCusker's side. According to Business Insider, at a recent JPMorgan conference the bank asked 237 investors about big data and machine learning and found that "70% thought that the importance of these tools will gradually grow for all investors. A further 23% said they expected a revolution, with rapid changes to the investment landscape."

Such investor interest signals both the frustration with current active — and specifically quant — managers and the nascent promise shown by AI hedge funds.

Whatever the reason, AI investing presents consultants and asset owners with a stark challenge.

Our industry holds it as a universal truth that an investment process should be understandable. As part of the Kabuki theater we call investment due diligence, asset owners and consultants require managers to be able to explain their strategies and models.

Managers reveal just enough of their investment process to give allocators a cairn from which they can orient themselves and continue their evaluation.

We must recognize that the degree of a traditional manager's disclosure reflects a deliberate act: the manager could reveal more but does not because, it claims, doing so would put its process at risk. (It might be more honest to say that sharing too much would reveal the shortcomings of the process itself.)

But while an AI manager can provide a general overview of its approach ("we use recurrent neural networks"), it cannot provide a true narrative of its investment process. This is not willful deflection; it is because, unlike a traditional manager, it did not build its investment model by hand. The model builds itself, and the manager cannot fully explain the model's investment decisions.

Think of a traditional manager as Deep Blue, a human-designed program that used such preselected techniques as decision trees and if/then statements to defeat chess grandmaster Garry Kasparov in 1997. Think of an AI manager as DeepMind's AlphaGo, which used deep learning to beat some of the world's best Go players. (Go is an ancient Chinese board game that is much more complex than chess and has more possible moves than the total number of atoms in the visible universe.) Without explicit human programming, AlphaGo created its own model, which allowed it to make better decisions than its human opponents.

Given enough time and training, we could explain why Deep Blue made a certain chess move at a certain time. And while we can observe how AlphaGo plays, we cannot explain why it makes a specific move at a specific point in time. As Yoshua Bengio, a pioneer of deep-learning research, describes it: "As soon as you have a complicated enough machine, it becomes almost impossible to completely explain what it does."
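To make the contrast concrete, here is a minimal sketch, assuming toy data, hypothetical feature names, and scikit-learn (none of which come from the article): a hand-authored if/then rule whose every decision traces back to a line a person wrote, next to a small neural network whose "reasoning" ends up distributed across learned weight matrices rather than in any narrative a manager could recite.

# Illustrative sketch only: hand-built rules vs. a model that builds itself.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Deep Blue-style logic: explicit, human-authored if/then rules.
# Every output can be traced to a specific line of code.
def hand_built_signal(momentum: float, valuation: float) -> str:
    if momentum > 0 and valuation < 15:
        return "buy"      # cheap and rising
    if momentum < 0:
        return "sell"     # falling price
    return "hold"

# AlphaGo-style logic: a small neural network fits its own decision surface
# from data; no human writes the rules it ends up using.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                       # toy "momentum" and "valuation" features
y = (X[:, 0] * np.sin(3 * X[:, 1]) > 0).astype(int)  # a pattern nobody wrote down as a rule

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

print(hand_built_signal(0.3, 12.0))     # "buy", and you can cite the exact rule that fired
print(model.predict([[0.3, -1.2]]))     # a label, but the "why" lives in the learned weights
print([w.shape for w in model.coefs_])  # weight matrices, not a narrative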

This is why an AI manager cannot explain its investment process. A requirement of explainability brings the evaluation of, and allocation to, such investment strategies to a screeching halt.

With AI investing, allocators face a new choice. Currently, in an act of complicity, they choose access over knowledge — accepting a manager’s willfully limited disclosure of its narrative but naively believing that the narrative does exist and is known to the manager’s illuminati.

The new choice facing all AI consumers is more fundamental. The choice, according to Aaron M. Bornstein, a researcher at the Princeton Neuroscience Institute, is "Do we want to know what will happen with high accuracy, or why it will happen, at the expense of accuracy?"

Requiring interpretability of investment strategies is a vestige of old-world assumptions and is entirely unsatisfactory for reasons that transcend investing: We either foreswear certain types of knowledge (e.g., deep learning–generated medical diagnoses) or force such knowledge into conformity, thereby lessening its discovered truths (do we really want our smart cars to be less smart or our investment strategies to be less powerful?). Moreover, this requirement smacks of hypocrisy: Given what Erik Carleton of Textron calls “the often flimsy explanations” of traditional active managers, investors really don’t know how their money is invested. And conversely, who would not suspend this criterion given the opportunity to invest in the Medallion Fund?

We need to do a better job of investing beneficiaries' assets. AI investing can help, but only by forcing us to judge AI strategies not by their degree of explainability but by their results.

As the scientist Selmer Bringsjord puts it, "We are heading into a black future, full of black boxes." Accept it.