Alignment Issue

Often discussed as a key problem in Artificial Intelligence (AI), the Alignment Issue captures the concern that humans will experience undesirable outcomes if we build an AI product that does not align with the needs, values, interests, or objectives of humanity. However, some researchers frame this problem narrowly as an AI issue, and that framing will lead to an unsatisfactory solution. That is not to say the issue does not exist or is not critical to think about from an AI perspective; rather, the framing is too narrow, so the problem will be approached and solved in a siloed manner. Such a framing excludes the highly automated organizations that will leverage the AI and may themselves be operating out of alignment, and it excludes the humans who themselves require some level of alignment. Alignment must flow from humans to the organization and its board, to its sub-organizations or teams, and finally to the product or technology the organization is creating. If any part of that chain is out of alignment, the desired outcomes become less likely.

For the sake of learning about the alignment issue that exists today (even prior to introducing AI as an accelerant) and how that issue may evolve in the future, let us first imagine a future-state scenario as described in My Vision. A group of humans – we will refer to them as Citizens – decides to start a United Nation. Initially there are no states, simply a united group of citizens who want to start a nation and create a system that works for them, in alignment with their needs, values, and interests, to achieve the outcomes the community has a shared desire to actualize. From a board-of-directors perspective, the citizens might eventually create a representative Congress, and from a CEO perspective, they will start by electing a President. Since there is just one united community, each citizen has one vote to elect the leader of that community. The citizens will eventually want to create sub-communities, like states or cities, inside of which they live and operate, but for now they remain a single community united by the shared desire to one day experience a system that works for them in alignment with their needs and values.

Within that community, citizens want to create, invest in, and participate in organizations that exist to solve problems and help meet their needs. The community expects that organizations solving its problems do so in alignment with the values and interests of its citizens, so that the north-star outcome – the end destination – can be reached. Each organization creates products and services as offerings to meet those citizen needs, but in this future state, organizations will eventually build all services as products, which means every organization in this future-state nation operates as a product organization. Over time, as these organizations add more technological capabilities and become fully autonomous, they will become technology products in and of themselves, through which citizens engage in value-creating activities. The Venn diagram that shows IT and Business as separate organizations will fully overlap and merge into a single circle called the Product organization. We should expect every organization to become a fully automated tech organization in the future, which means it is critical for the organization (and its board), and not just the product or technology it makes, to align with and represent human needs and values.

Finally, there will undoubtedly be a need for a centralized (or decentralized) intelligence to power and inform the nation and the organizations operating within it, and the organization created for that purpose would undoubtedly build AI and AI Agents as part of its mission. We want the AI and AI agents working for us in alignment, and that becomes challenging if they are accessed and leveraged by highly autonomous organizations that are themselves operating out of alignment, accelerating the rate at which undesirable outcomes are experienced. The expectation for that intelligence organization, like every other organization, is that the organization, its board of directors, its agents or workers, and the products or technology it creates are all aligned with the values and needs of the citizens. If the organization itself is misaligned, then the human agents working in it, the AI technology they are developing, and the future AI agents will all be operating out of alignment.

If we compare our current-state nation to the desired future-state nation described above, we can break this Alignment Issue down into a set of sub-issues that will allow us to start solving it and building the solution into the product. Let us assume for now that we can identify core sets of needs based on Maslow’s Hierarchy, which will allow for alignment mapping within the system. This is not meant to be a deep dive or even a complete set of sub-issues, but rather a reasonable starting point that we can think about, discuss, and agree on before proposing a solution.

1. Citizen Alignment

2. Needs Alignment

3. Values Alignment

4. Outcomes Alignment

5. Objectives Alignment

6. Team Alignment

7. Board Alignment
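The chain of alignment described above – citizens to board to teams to product – can be sketched as a simple containment check: each layer is aligned only if it covers the needs and values of the layer above it. This is a minimal, hypothetical sketch; the layer names and Maslow-style need categories are illustrative assumptions, not part of any proposed system.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One link in the alignment chain (citizens, board, team, product)."""
    name: str
    needs: set = field(default_factory=set)   # hypothetical Maslow-style need categories
    values: set = field(default_factory=set)  # hypothetical shared values

def is_aligned(chain):
    """True only if every layer covers the needs and values of the layer above it."""
    for upper, lower in zip(chain, chain[1:]):
        if not (upper.needs <= lower.needs and upper.values <= lower.values):
            return False
    return True

# Illustrative data: the product drops "belonging", breaking the chain.
citizens = Layer("citizens", {"safety", "belonging"}, {"fairness"})
board    = Layer("board",    {"safety", "belonging"}, {"fairness"})
team     = Layer("team",     {"safety", "belonging"}, {"fairness"})
product  = Layer("product",  {"safety"},              {"fairness"})

print(is_aligned([citizens, board, team]))           # True: chain intact so far
print(is_aligned([citizens, board, team, product]))  # False: product is out of alignment
```

The point of the sketch is the essay's claim in miniature: checking only the final product (the last link) misses misalignment introduced anywhere earlier in the chain, so the check must run across every layer, not just the technology.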