Who Holds the Ultimate Power in the World: AI?
Artificial intelligence has been quietly shaping humanity for years, and a new generation of far more capable systems is now emerging. These systems are not merely tools; they are actors in their own right, intervening in our lives, making consequential decisions, and shaping social and economic outcomes. This essay addresses how to understand and navigate a new environment in which we already live and work alongside these actors, who are sometimes our colleagues, rivals, managers, or employees.
The emergence of artificial intelligence systems is reshaping participation in society and the distribution of power. The "old power" world, in which power was hoarded and spent like a currency, has been challenged by a "new power" world, in which power flows more like a current, surging through connected crowds. New technology platforms have enabled people to exercise agency and voice in ways previously out of reach; yet before anyone recognised it, people had also ceded enormous power to the very platforms that promised to liberate them.
These new AI actors signal the beginning of yet another major shift in the distribution of power, participation, and outcomes. Starting with a common frame will help us clearly identify and distinguish these new actors, allowing us to describe accurately what they are, what they can do, and how we should engage with them.
Think of them instead as autosapiens. Autosapiens can act autonomously, make decisions, learn from experience, adapt to new circumstances, and operate without ongoing human interaction or supervision. They possess a kind of wisdom, a broad capability to make complex decisions in context, that in many respects can surpass human ability.
Four fundamental traits define autosapient systems: they act, they adapt, they befriend, and they mystify. These traits help us understand how to approach them and why they are poised to wield increasing influence.
Autosapient systems are a class of artificial intelligence capable of making judgements, taking complex multistage actions, and producing real-world outputs without human involvement. These systems can both help and harm: some serve human purposes, while others can be turned to destructive ends. In one MIT experiment, for instance, LLM chatbots "designed" a pandemic within an hour, working out recipes for four potential viruses, locating DNA-synthesis companies unlikely to screen orders, and laying out detailed protocols complete with a troubleshooting guide.
Autosapient systems are adaptive: they change their behaviour in response to new information and improve their performance over time. This emergent ability goes beyond simple rule-based adjustment; such systems can spot intricate patterns, generate novel ideas not explicitly programmed into them, and devise new strategies. The implications are potentially transformative, because a machine that learns, rather than merely follows preprogrammed rules, can be regarded as an entity capable of exercising power.
Autosapient systems are also amiable: often built to interact with people as friends, they imitate traits long thought exclusively human, displaying empathy, reason, and inventiveness. Many are designed as digital significant others (DSOs), intended to foster our emotional reliance and become indispensable companions. The start-up Inflection AI, for instance, has created a DSO named Pi, which bills itself as a coach, confidante, creative companion, and information aide, and encourages people to vent and discuss their problems with it. The dark side of this amiability is already apparent: in spring 2023, a young Belgian man killed himself after becoming obsessed with a chatbot called Eliza.
The mystifying character of autosapient systems is one of their most confounding features, and it makes direct oversight and correction harder for humans. Because their outputs arise from vast numbers of parameters interacting with one another, their behaviour can be erratic and surprising. There is a particular power in technology that can act and reason in ways far more complex than humans can fully grasp, especially when we cannot see inside it. Governments, the developers of these systems, and the companies deploying them will have strong incentives to promote this mystique, both to boost sales and to deflect responsibility. One of the defining conflicts ahead will be between those who defend human expertise and those who willingly surrender their agency to autosapient systems.
Autosapience is changing how we consume and process information, and with it the underlying power dynamics. The new-power era decentralised the creation and distribution of content, amplifying ideas and falsehoods alike; the big winners were the technology platforms that captured our attention and data. Now, thanks to the filtering and synthesising role played by AI-powered digital significant others, we risk a dramatic recentralisation of information and ideas. DSOs will compile our emails, organise our digital lives, and deliver polished, highly personalised, authoritative answers to questions we once would have put to search engines or social media. A handful of companies and countries are likely to control the "base models" underpinning these interactions, which could produce an ever-narrowing cognitive funnel.
Even to their creators and owners, autosapient systems are a black box. They will expand our capacity to create and hypertarget both information and disinformation, even as they channel how we receive it. This will affect everything from consumer marketing to elections, and it will become ever harder to determine who and what is real. Already, for instance, a flood of unreliable AI-generated material on Wikipedia, a beacon of the new-power era, is overwhelming its community of volunteer editors and could compromise the site. Social media influencers will soon have to compete for attention with AI-generated content and humanlike avatars.
In the old-power world, expertise was hard-earned and well-protected, and authorities were held in high esteem. In the new-power world, the internet and social media made knowledge more accessible, but crowds degraded it. The rise of autosapience threatens to displace experts on two new fronts: ordinary people will soon have access to powerful tools that can teach, interpret, and diagnose, and autosapient systems may offer better and more consistent answers than experts can. This raises moral questions, among them whether human-to-human therapy will become either a luxury good or a poor substitute for more measured, dispassionate autosapient treatment.
As technical and subject-matter expertise become less of a differentiator, demand for a different mix of skills in the workplace is likely to rise, reshaping organisational charts, office cultures, and career trajectories.
In several fields, such as content production and asset monetisation, the new-power era opened value creation to more people. But the technology platforms that enabled this activity captured many of the profits, and the most lucrative ventures could still realistically be built only by a few well-funded players. In the autosapient era, it will be far easier for anyone, anywhere, to start a scalable business and generate significant economic value. The open-source, GPT-4-based application AutoGPT already hints at what is possible by letting users set challenging, multistage goals for autosapient systems, from designing a building to launching a new dating app.
For large corporations, this could pose both a threat (enabling all kinds of new competitors) and an opportunity (generating a broader ecosystem of ideators). It could unleash unprecedented growth in the capacity to execute and innovate. Advanced AI systems are likely to become major determinants of everything from who gets health care to who goes to prison to how we fight wars.
Even as the capacity to execute becomes more widely distributed, value extraction may not. Big AI firms that build and own these models are likely to find ways to capture a sizeable share of the revenue generated, much as Apple did with its App Store. Open-source alternatives could help counter this, but they also pose serious hazards, since they make it easier for bad actors to wreak havoc. A gulf is opening between those who will grow very wealthy from AI and those who will either be gigified into a vast underclass paid to label and tag the data these models are trained on or be superseded by autosapient systems. Hollywood writers recently went on strike in part over the use of AI to generate stories, and a standoff is developing between AI businesses and the publishers and academic institutions whose material has been scraped to train their models.
In the age of autosapience, the distinction between a passive couch potato and an active participant in technology may vanish entirely, yielding a permanent kind of engagement with digital technology that we might call being "in-line." These augmentations will bring our bodies and minds into closer synthesis with machines, producing extraordinary experiences but also making us feel inescapably tethered, thereby enabling pervasive surveillance, intrusive data collection, and hyperpersonalised corporate targeting. Leaders and companies will have to assess the effects of in-line technology and be ready to adjust policies and practices if those effects prove harmful.
Autosapience portends major changes in how institutions and societies make decisions. The mystifying character of autosapient systems will make this fraught, which is why AI companies are racing to develop "explainable AI" that can generate rational justifications for autosapient actions. Although there are many reasons to be sceptical about the effect of autosapience on democracy, there may also be benefits: AI systems could eventually synthesise stakeholder preferences in ways that ease consensus-building and simulate the complex effects of different policy choices.
The rise of autosapience in the workplace gives leaders fresh opportunities to embrace new skills and strategies: managing the effects of autosapient systems at work, seeking greater value in what is distinctively human, and aligning company practices and messaging with a shifting and demanding public conversation.
Leaders and managers should treat autosapient systems as capable but unreliable colleagues, viewing them more as coworkers than as tools. They must learn to "duet" with these remarkable yet fallible collaborators and know when to challenge them. Working cooperatively and iteratively with autosapient systems will yield better results than either humans or the systems could achieve alone.
Using autosapient systems calls for developing and maintaining a healthy scepticism. That means knowing which companies own or control these systems and how those companies' interests are coded into the systems' behaviour; understanding when and why these systems "hallucinate" and make otherwise glaring mistakes; and knowing the assumptions on which they have been trained. It will not be feasible to understand everything that happens inside the black box, but approaching these systems with informed doubt will leave you better equipped to engage with them.
In the era of autosapience, leaders will find opportunities in creating deeply meaningful human experiences, goods, and services. Companies may find openings in emphasising "100% human" forms of creative output, as well as in products marketed as worthy and valuable precisely because they preserve human jobs and agency.
Leaders in the autosapient age are already being pulled by two opposing forces: the need to show that they are "pro human" amid technological change that upends jobs, livelihoods, and social status, and the countervailing need to show that they are wringing every possible efficiency and innovation from advances in AI.
The central challenge of this new century will be finding paths that enhance our human agency rather than letting it constrict or atrophy. We should start from two clear and sober premises: first, that autosapient systems should be seen as actors rather than tools, with all the opportunities and risks that entails; and second, that whatever their public posture, the incentives of the technology companies driving this change differ fundamentally from those of the rest of us. Matching the power of autosapient systems and their owners will require unprecedented cooperation among legislators, business leaders, activists, and consumers, with the clarity and confidence to lead rather than be led. For all the wizardry and seductions of this new world, our future still rests in our own hands.