Noah was responding “to the voice of God.” He listened, observed, and made sense of the whole. Your black box of complexity exists to inhibit exactly those steps. The challenge is not knowing what to do. The challenge is believing that “this too shall pass” and things will return to what we are comfortable with. I don’t believe that is realistic. The gutting of the white-collar workforce can be treated as tragic, and in the short term it is. But every one of those human beings RIFed is capable of creating new opportunities. I interviewed one on Friday.
Richard, you have displayed the true value of AI. If Noah represents the seer who sees the storm on the horizon, then what does the ark look like? The question of the future is not about AI; AI is designed to devour itself. It is how we structure the world to utilize human potential.
Whatever happened in Washington this week, it is nothing more than an attempt to hold on to a past that no longer exists. The systems of governance and finance are the black box. They are all outside the ark, believing Noah is the fool. In reality, they cannot see that the structure of the world is changing after many millennia of being basically the same. The great man or woman who leads is no more. They are not coming to save us. They can’t even save themselves.
Yet, the future still belongs to the industrious, the resilient, the collaborator, the social catalyst, the seer, the believer, and the adapter. As bad as the storm will be, we will survive to thrive in a new world that has not dawned yet. I am as confident of this as I have been of anything I’ve seen in my lifetime.
Yes to all. I suspect the biggest challenge for most of us is understanding that, as small as we may feel, we are enough, and we can move. The realization for me, at 75, was that not knowing how to code is no longer a disadvantage, and that clarity of thinking is the key advantage. Watching Claude build The Trivium Forge was, for me, akin to magic until I understood that the magic is no longer in the technology; it is in its ability to translate ideas into action without a priesthood of technologists. Perhaps we are in something of a Gutenberg moment. It will be interesting.
Absolutely. The reign of Genghis Khan as corporate/government leader is coming to an end. The complexity of the world makes hierarchies of power unsustainable.
We’ve known for a generation that networks, collaboration, and leadership as “the first among equals” is our future.
Some thoughts:
"...AI is now eating software. If you are in tech, it is beginning to feel much like the weavers felt during the industrial revolution, as they were replaced by machines and children."
I think the picture is more nuanced here. Many of "those in tech" are really close to this, in the sense that they feel the impact and can immediately see it. A bit like you just now did something that connected your space to AI, and the effect was immediately visible to you, and astounding.

Those in tech who feel like weavers probably are weavers. Whatever skill set enabled them to, for example, develop software will likely change massively. But many others in tech will not feel like weavers, because a significant part of their skill set remains. Especially so if they have worked in varied environments, with varied programming languages and tech ecosystems, and are generally of the curious kind. The value is now more in the questions, the architecture, the decisions about what software to build and what other software to make it work with. But what will it be two years from now?

And while some tech skills (long-standing experience in some niche ecosystem) become massively less valuable, other skills become super valuable, because they catch the inevitable errors the AI makes and address the limitations it has. Simple example: if you write code, you need to regularly review it for errors, mistakes, redundancies, or other things that just happen while coding. If your skill set includes the knowledge that code reviews, and acting on them, are necessary to pay off "technical debt"; if you know that you can actually ask an AI to review the code one of its colleagues has written; and if you know how to assess the outcome of that review (which of the to-dos it surfaces to let the AI work on, and which not), then you become a super-enabled coder. Like replacing your shovel with an excavator. Similar for many other tech-related jobs. Claude is wonderful at helping me build Raspberry Pi based devices of various kinds.
I know the basics and principles (or Claude educates me if I ask, and I need to be willing to be educated, and to keep a critical eye on the shallowness of my knowledge), but it would never have made sense for me to get into the details of any of the libraries required for only one or two such projects. (Caveats apply: it is terribly easy to overlook security risks and other issues.)
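The "ask an AI colleague to review the code" loop described above can be sketched in a few lines of Python. Everything here is illustrative assumption, not a prescribed method: the prompt wording, the [critical]/[minor] tagging convention, and the wiring in `request_review` (shown with the Anthropic SDK, which needs an `ANTHROPIC_API_KEY`; any chat API would do, and the model name is a placeholder for whatever is current).

```python
def build_review_prompt(code: str) -> str:
    """Frame the request so the reviewer returns actionable findings,
    not general praise. The [critical]/[minor] tagging is our own convention."""
    return (
        "Act as a strict code reviewer. List concrete problems only: "
        "bugs, redundancies, security risks, technical debt. Number each "
        "finding and tag it [critical] or [minor].\n\n" + code
    )

def triage_findings(review: str) -> list[str]:
    """Keep only the findings worth handing back to the AI to fix first.
    This is the human judgment step: deciding which to-dos to act on."""
    return [line.strip() for line in review.splitlines()
            if "[critical]" in line.lower()]

def request_review(code: str) -> str:
    """Ask a second model instance to review code 'a colleague' wrote.
    Hypothetical wiring via the Anthropic SDK (pip install anthropic)."""
    import anthropic  # imported here so the helpers above need no SDK
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumption: substitute any current model
        max_tokens=1024,
        messages=[{"role": "user", "content": build_review_prompt(code)}],
    )
    return message.content[0].text
```

The human stays in the loop twice: once when framing the prompt, and once when triaging, since not every [minor] nit is worth another round trip.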
---
"I am aware that there are those with deep technical experience who would look at this and find it unremarkable."
Not really, from what I have seen. *Everyone* using AI is on a steep learning trajectory, unless they sign off at some point. What anyone does as cutting edge with today's best AI systems becomes boring, if not obsolete, a year or two from now. And the experience you had is a learning experience that will resonate with everyone seriously working with AI, because these experiences are happening regularly, to everyone. Even to the likes of Azeem Azhar or Ethan Mollick.
And what you will (now?) see, probably, is the difference between two kinds of people: those who, on all levels, are learning, exploring, critiquing, and wrestling with this, having revelations, getting a bloody nose every once in a while, and worrying about the larger risks; and those who stare disbelievingly, who cannot get themselves to think differently, who are still surprised by whatever one shows them in that space, good or bad. You say you were "seriously alarmed". Very healthy. What is your creation actually doing? Can you rely on it? Will it lure you into weird biases? Will you get overconfident when you build the next one?
---
"And I think there is a craft to this. AI is a tool, and there is a skill to using it. It is a very different mindset to AI as “super Google”, or a provider of answers. It feels to me far more like a sculptor’s hammer and chisel that we use to chip away at the challenge until what we are looking for reveals itself."
Yes. A self-guiding chisel, which one should lock up whenever one leaves the house (e.g. OpenClaw and emerging colleagues). And the chisel may morph into a small monkey, or into a self-guiding, mobile power drill that decides where it wants to go and drill holes, anytime!