For centuries, the path to economic mobility depended on land, labour, capital and enterprise. Those without land or capital had little choice but to sell their labour. But then came enterprise. Some individuals managed to innovate by combining creativity, hard work, and borrowed capital to build businesses, generate wealth and climb the ladder out of their economic class.
But in the age of AI, that ladder is being pulled away. AI has emerged as the new factor of production, and so the fundamental question is: Who owns it?
This leads us to broader questions: Who decides how it is applied? Is productivity the only goal, or should we also strive for a more just society built on inclusion?
Who owns AI?
In theory, AI holds the promise of leading to a more affluent and inclusive society. With open-source models becoming widely available, it might seem like anyone can now build their own intelligent system.
In reality, however, deploying production-grade AI still demands three things: massive proprietary datasets, elite technical talent and powerful computing infrastructure. These are overwhelmingly controlled by big tech giants like Google, Microsoft and Amazon, along with Asian players such as Alibaba and Tencent.
Most financial institutions do not own this AI “stack”. They rent it. Even so-called “open-source” models that are supposedly free often run on commercial cloud services and depend on data curated and stored by the same big players.
This concentration of AI capability in the hands of a few big tech firms raises many problems.
It threatens innovation at its roots. Young entrepreneurs seeking to build language models face steep barriers, from the high cost of data and GPUs (Graphics Processing Units), a key component in LLM training, to the difficulty of hiring top AI talent, much of which has already been absorbed by banks and big tech companies. It’s no wonder that banks and corporations are investing heavily in AI, with global spending in the sector reaching $31.3 billion in 2024, up from $20.6 billion the year before. While exact AI allocations aren’t always disclosed, leading institutions like JPMorgan Chase and Citibank have committed $18 billion and $9 billion respectively to their technology budgets for 2025, with a significant portion earmarked for AI initiatives.
This monopolisation risks stifling innovation and choking off entrepreneurship.
It also creates dangerous opacity. As models grow more complex, their inner workings become black boxes understood only by a privileged few.
A potential trillion-dollar technology in the hands of a few introduces systemic concentration risk, of the kind that has produced power outages and financial crises when other critical systems became over-centralised. This quiet concentration of AI power creates industry-wide dependency with minimal regulatory oversight.
Yet for the corporations investing in this expensive new technology, the pressure to justify return on investment is immediate. With limited in-house capabilities and the high cost of adoption, the fastest and most obvious path to a return on investment often becomes reducing headcount.
Cuts focused on the base of the job pyramid
Indian bank HDFC cut call centre costs by over 30 per cent through chatbot automation. In Hong Kong, HSBC automated 90 per cent of its loan approvals. More recently, in July 2025, Microsoft announced it would cut 9,000 jobs, about 4 per cent of its global workforce, as part of a sweeping restructuring tied directly to its massive investment in artificial intelligence initiatives such as GitHub Copilot, which are now capable of handling tasks that previously required human workers.
This highlights a systemic issue that goes beyond the deployment of AI itself. It is about how strategic decisions are made at the top of the corporate pyramid, in typically closed fashion, often without consulting a broader audience for feedback.
Such decisions can be suboptimal. According to McKinsey consultants, poor strategic choices have cost Asian banks over US$10 billion (S$12.7 billion) annually. Yet within these same organisations, AI is overwhelmingly deployed to automate lower-tier functions tied to the base of the workforce pyramid while senior executives, who approve these AI deployments, remain largely insulated from automation risk.
AI for productivity: Who can it best serve?
Yet, there is also growing evidence that AI boosts productivity, particularly in white-collar roles where humans work alongside machines. A 2023 study by the National Bureau of Economic Research found that generative AI tools helped junior consultants increase productivity by up to 40 per cent, especially in report writing and idea generation.
Early pilots in finance and compliance across Asia have shown similar gains, ranging from 15 to 40 per cent. This is echoed in the OECD’s 2024 Employment Outlook, which highlights AI’s ability to enhance the output of skilled workers by handling tasks like data analysis, calculations and forecasting.
While these examples show promise, they also reveal a risk: AI disproportionately helps those who are already skilled, while automating and replacing lower-end roles.
At leading insurance firms, claims agents now use AI tools to reconcile customer claims with original policy documents, a task that once took hours and can now be completed in minutes. This significantly boosts their productivity, allowing them to process far more claims in less time. It reinforces how AI tends to amplify the efficiency of already skilled workers, rather than replace them.
AI beyond productivity: A tool for fairness and integrity
As discussed in the previous sections, one of the fundamental issues with AI deployment is the imbalance it creates: executives remain shielded from automation, while lower-tier workers face displacement. This reflects a deeper structural problem: strategic decisions on AI deployment are concentrated at the top, often made without accountability for their long-term societal consequences.
Geoffrey Hinton, often called the Godfather of AI, has argued that for AI to serve long-term societal value, it must be used not just to cut costs, but to enhance transparency and governance, especially at the top.
To address this, we must flip the script. AI should not be used solely to increase productivity or replace workers, but also as a system of checks and balances that strengthens governance, accountability and ethical culture. More ambitiously, it should be deployed across industries with the long-term goal of embedding fairness and integrity into corporate culture.
Here we explore two emerging AI applications aligned with the goal of using technology to promote fairness and integrity in financial services.
The first area is executive decision-making. If AI tools are given access to the same historical data used by senior leadership to make decisions, agentic AI models can simulate alternative “what-if” scenarios based on that data. This allows AI to function as a second-opinion engine, surfacing blind spots, flagging questionable decisions and offering solutions that align with long-term value creation.
This isn’t about replacing the C-suite. It’s about augmenting executive decision-making with an intelligent, unbiased voice that asks: “Did this decision serve the long-term interests of stakeholders — or was it driven by self-interest or short-term preservation?” Until AI is applied to leadership decisions as rigorously as it is to operational processes, its adoption will continue to reinforce inequality rather than resolve it.
Another use could be applying AI to foster ethical corporate culture. Traditional communication surveillance tools on trading floors rely on lexicons to flag phrases like “use personal phone” or “pump and dump”, which may catch misconduct in the moment. But they fail to detect broader cultural issues.
Newer AI-based tools analyse communication patterns and psycholinguistic cues over time to assess employees’ values, personality traits and susceptibility to misconduct. This is another example of why AI should not be limited to productivity gains, but also be deployed to identify bad actors and foster environments where integrity and transparency are actively cultivated.
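To make the contrast concrete, a lexicon-based monitor is essentially keyword matching. The sketch below is a hypothetical illustration only, not any vendor’s actual implementation; the phrase list and sample messages are invented for the example. Its simplicity shows why such tools catch explicit phrases but say nothing about culture over time:

```python
# Minimal sketch of lexicon-based communication surveillance.
# The lexicon and sample messages are hypothetical illustrations.
LEXICON = {"use personal phone", "pump and dump"}

def flag_message(text: str) -> bool:
    """Return True if the message contains any phrase from the lexicon."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in LEXICON)

messages = [
    "Let's take this off the record, use personal phone",
    "We should talk about the client's concerns tomorrow",
]
# Only the first message is flagged; subtler cultural signals pass through.
flags = [flag_message(m) for m in messages]
```

A phrase either appears or it does not; the newer pattern-analysis tools described above go further by scoring communication behaviour over time rather than matching individual keywords.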
The solution: a just and fair society
Like electricity or water, AI should be treated as a public utility, not necessarily free, but accessible, transparent and governed in the public interest. The choices we make today about ownership, value distribution and the role of labour will define not just our economic future, but our social contract.
And the role of human labour in this new economy must be reimagined. If corporations pass AI-driven productivity gains solely to shareholders while shedding workers, the result may be not just economic dislocation, but social breakdown.
Such a shift requires a coordinated response across business, government, and education. Corporations must look beyond short-term gains and recognise that neglecting workforce reinvestment will erode their own consumer and talent base.
Governments should implement policies that incentivise reskilling, subsidise vocational transition programmes and ensure AI is applied responsibly across organisations. Academic and vocational institutions must align training with national priorities and collaborate with the business sector to reskill workers in areas critical to the AI economy.
Consider the socio-economic state of the world today: 44 per cent of the global population still lives on less than US$6.85 a day, while the top one per cent control nearly 40 per cent of global assets. In 2025, the number of billionaires reached a record 3,442. Inequality is not just rising; it is accelerating.
As we introduce AI to a world already characterised by such deep inequalities, we risk allowing one of the most powerful discoveries in human history to serve the interests of the few who are driven by profit and returns, resulting in mass disenfranchisement and instability for the rest.
If AI is to serve true progress, government, business and academic bodies must collaborate with a clear objective to reorient AI as an inclusive tool for equality, transparency, integrity and long-term societal stability.
Sam Ahmed is the Founder & Managing Director of Deriv Asia X, a consulting & technology firm specialising in AI & Blockchain solutions. Sungjong Roh is an assistant professor at the Lee Kong Chian School of Business at Singapore Management University.