Commentary on CWT w/ Jack Clark - Bullish case for AI impact on economic growth
Link to MR post: MR
Jack and Tyler are both of the opinion that AI will cause lower economic growth than the 20%-30% estimates of those who are "feeling the AGI", because real-world bottlenecks in the world of atoms, which still makes up by far the majority of the economy, will put some brakes on that breakneck acceleration.
Now, I'm certain that both of them have already thought about the argument I'll make here, but I just want to spell it out. Many bottlenecks in the real world are consequences of political disputes (broadly speaking) in which the parties have to rely on claims to support their side. In a world where more people rely on AI - and here I'm imagining the voting public using AI to help them think through these disputes - it's possible that we'll see more rational, or at least better evidence-supported, policy gaining an edge.
For example, a supporter of a proposal to reduce the maximum speed on a road can now not only easily generate a report with evidence that such a measure on average reduces deaths where it's implemented, but can also challenge anyone who disputes the validity of that conclusion to try to extract the opposite one from their preferred LLM. Failure to do so should on average weaken the less substantiated position (I'm not saying reducing speed limits is good; it's just an example, and one could think of many better ones).
Now, there are countless reasons why that wouldn't be the case. I get that there are suspicions of bias in academia, actual bias in academia, motivated reasoning, people simply disregarding valid rationales for policies or outcomes they don't like, and so on. In short, I'm well aware that information being available is not a panacea for good policy. If it were, government would probably be a lot better than it is, since it's not like there's a dearth of information in the Internet age.
Going back to economic growth and AI, what I'm getting at is that AI availability should improve the ability of those arguing in good faith to convince genuinely undecided voters, which in turn should on average increase support for actually well-founded policies, which in turn should translate to higher utility, which is very likely correlated with economic growth. I hope I'm adequately conveying how indirect I believe this effect would be. I certainly don't expect it to account for the 25% gap in economic growth estimates, but I believe it should improve somewhat on the baseline expectation that policy will drag things down by that much.
The flip side, of course, is that AI might also make it easier for constituencies to vote *against* policies that would increase overall utility if those policies decrease *their* utility. It should become a lot easier to see through "trust me, this is going to be good for you" arguments that ultimately hurt them but might increase GDP or something like that. I suppose we'll have to see the impact of both effects, but I believe that in general people are aware of their own interests, care about others (albeit less than about themselves, which I don't think is bad), and are genuinely interested in balancing those interests in a way they consider fair, so I believe the net effect will be positive.