It’s different enough from prior technologies to outpace the professions’ typical defense mechanisms. Besides, most workers are more vulnerable than true professionals.
I don't necessarily disagree with you here, but I think the main argument against AI automating all jobs away is that many jobs nowadays are about coordination between institutions, product releases around consumer trends, etc. AI will be super productive, but I don't think people will trust it more than other people, because we can't hold it accountable for mistakes in the same way we can hold other people accountable. I actually have a post scheduled for tomorrow talking about just this (and another one Tuesday about why I am not worried about AI from a stereotypically economic perspective). Great post!
I look forward to your post! I’m sure trust will not come all at once, but also don’t see any theoretical reason why AI won’t be able to coordinate between institutions, or time product releases around consumer trends, etc. Liability frameworks for AI developers could emerge for certain mistakes; and if it makes significantly fewer mistakes than humans, employers may eventually see the inability to hold it accountable as an acceptable cost of business.
Everyone seems focused on the question of if and when AI will replace all human jobs, but I think the bigger concern is not when it will happen but how fast. That is, how long will the transition period between a totally human-dominated and a totally AI-dominated economy be? What will be the interval between the moment AI first replaces a large sector of the job market, in such a way that those workers can't easily get a different job or modify their job to incorporate AI, and the moment AI has replaced so many jobs that human labor is virtually obsolete?
This matters the most to me because that transition period is going to be really difficult. If AI replaces all human work, or so much of it that you don't need the incentive of "you must work to make a decent living" to get people to do the remaining jobs, then it seems like we don't actually have a big problem economically. As you suggested, we can just implement UBI to sever the link between work and financial resources. But it's not like we're going to immediately switch from the current economy to one like that. There will be some point when AI is powerful and diffused enough to replace some labor but not all, causing mass unemployment, while human labor is still necessary, so implementing UBI could cause a disastrous economic collapse by removing the incentive to work for those who still have to.
If that transition period ends up being really short, that's great news - things might suck for a little bit, but we'll come out the other end okay. But if it ends up being long, something pretty drastic is going to have to be done to deal with it, and I don't really have any idea what.
The trend you describe is a subcomponent of what I meant by diffusion. I expect that the pace at which AI replaces human labor will likely be halting--advancing in fits and spurts as it runs into certain barriers--rather than constant or exponential. So there is not only uncertainty about the maximum portion of human jobs AI will eventually be able to replace; and not only uncertainty about how long the whole process will take from start to finish; but also uncertainty about how long each barrier will hold, and thus how long society will linger at certain intermediary stopping points (some of which, as you said, could be more socially destabilizing than others).
The professions' ability to defend themselves using legal lobbying, unionization, etc. is just one of these barriers, but I do think it's an important and underestimated one.
Yeah, this all makes me worry that the professions' ability to defend themselves could end up being a really bad thing, since it seems like the best outcome is for the diffusion period to be as fast as possible to mitigate the social and economic problems that occur in the intermediate period.
I suspect that the tipping point at which people will demand political change will come sooner than that, though. Even if the professions were 60% of the workforce, displacing the other 40% is more than enough to require addressing. And I'm optimistic that you can make a UBI high enough to get people by without being so high that nobody wants to work during the transition period. One counterargument to that worry about UBI is that lots of people want to work anyway; they find meaning and purpose in it. And if labor force participation does fall because of a UBI, wages may rise enough for the jobs that need to stay human to attract enough people to do them.
"But if machines are genuinely, substantially better at almost all economically valuable work, and significant competitive advantages accrue to whoever uses them most"
This doesn't necessarily have to be true for AI to replace jobs. All that's required is that employers think it's true, and that those employers have the power to prevent competitors from arising.
True, but there's an argument that employers won't think it's true unless it actually is true: between sheer inertia, the bad press of layoffs, employers' preference for keeping professional relationships all else equal, and workers fighting to keep their jobs, I would expect a slight bias in favor of humans over AI.