Ethics and performance: the case of ChatGPT's development
ChatGPT is not only a technological revolution; it is also a case study in corporate social responsibility. OpenAI, the company behind ChatGPT, was founded as a non-profit counter-model to GAFAM, but eventually branched out into a capitalist model. In addition, the Kenyan workers’ case revealed by Time also involves Samasource, a former NGO turned for-profit company. Time’s revelations raise the question of whether non-profit statutes are now being used as launch pads for tech companies motivated by profit rather than by the public interest. A look at this case and the questions it raises.
Although the non-profit sector has been structured around standards of democratic governance, transparency and ethics that exceed those of the average industry, some situations can challenge this solidarity ethic. Like any commercial enterprise, non-profit structures face constraints: economic difficulties, falling revenue, redundancies and cash-flow problems all threaten non-profit organisations. They are therefore obliged to make ethics and performance coexist in their objectives. Could these pressures supplant the public-interest mission of the non-profit sector?
The evolution of OpenAI: from non-profit to limited-profit status
This is the issue faced by OpenAI, best known for its tool ChatGPT. The organisation was founded by Elon Musk and Sam Altman in 2015 as a non-profit. Its aim was to develop artificial intelligence for the benefit of all, which quickly put it in direct competition with GAFAM.
The company’s status changed in 2019: one year after Elon Musk’s departure, it switched to a limited-profit model to attract investors. This American corporate structure allows a company to take in private funding while capping the returns paid to shareholders. Despite this change in status, the company continues to position itself as acting for the common good; its website even states that its mission is “to develop artificial intelligence for the benefit of all humanity.”
Until the publication of a Time magazine investigation in early January, the company maintained an image of transparency and ethics faithful to its origins. To resolve the toxicity issues present in earlier AI models, the company behind ChatGPT used a subcontractor located in Kenya. The investigation’s revelations have undermined OpenAI’s apparent ethics, whether in terms of working conditions, remuneration or the workers’ massive exposure to toxic content and language. In the press release issued in response, OpenAI’s managers explained that they had no knowledge of their service provider’s practices.
Strong pressures in a competitive sector
This case reveals the hidden side of the struggle between competitiveness and ethics. In an ultra-competitive sector in which GAFAM have invested massively, OpenAI found itself confronted with the limits of its moral commitments. The subcontractor commissioned by OpenAI in fact works for most of its competitors, because it offers extremely competitive prices. While the organisation can be criticised for breaking with its initial commitments, it has in practice behaved like any of its competitors. Using subcontractors to filter out hateful data is a method employed by OpenAI’s main rivals in the AI race: companies such as Google, Meta and Microsoft have also signed contracts with the same operator implicated here, for similar tasks. As OpenAI has grown, it has come to resemble its competitors more and more, both in the evolution of its status and in its practices of raising funds and using cheap service providers.
In reality, OpenAI is being criticised for the inconsistency between its media positioning and the reality of its operations. Displaying a distinctive legal status and moral commitments helped make the company a global phenomenon. When the reality behind this success was revealed, many users felt cheated.
However, the case of OpenAI should not be generalised to all non-profit organisations and companies, or even to the AI sector. The latter represents a real asset for Social Tech associations, which in the near future will probably work with AI tools such as ChatGPT. From article writing to research assistance and integration into search engines, the applications of AI on the web will be numerous and cross-sectoral. Social Tech will certainly be capable of seizing these opportunities to develop ethical solutions with real impact.
What is ChatGPT’s view?
ChatGPT was asked a few questions to get its analysis of the dilemma its creators faced, between productivity and ethics. And for the AI, the conclusion is clear: companies committed to non-profit principles and ethics have a responsibility to the public and to the rest of the non-profit sector. Transparency and accountability in non-profit organisations are an absolute necessity.
Having made no specific commitments in response to the scandal, OpenAI plans to reach $1 billion in revenue by 2024. This goal seems to be moving ever further from its initial intention, and also from the recommendations of its own product, as evidenced by the response provided by the AI itself: