
When we think about generative AI, it is tempting to focus on the technology alone. Yet a growing body of evidence shows that AI is as much a cultural phenomenon as it is a technical one. Research from MIT Sloan demonstrates that generative AI is not culturally neutral. When the same models were prompted in English and Chinese, their responses reflected the cultural values embedded in the language: English prompts led to answers with an independent social orientation and an analytic cognitive style, while Chinese prompts elicited an interdependent social orientation and a more holistic cognitive style. The researchers stress that these hidden cultural tendencies shape the advice AI gives and that recognising them is crucial as AI becomes part of decision-making. In practical terms, an AI-generated recommendation for a marketing slogan or policy could change depending on the cultural frame encoded in the prompt language. Users can mitigate this by explicitly instructing the model to adopt a specific cultural perspective.
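One lightweight mitigation is to name the cultural frame in the prompt itself rather than letting the prompt language imply it. The sketch below is a minimal illustration using the OpenAI Python SDK; the model name and framing text are illustrative choices, not prescriptions. It asks the same question under two explicitly stated frames:

```python
# A minimal sketch: pin the cultural frame in a system message so the
# answer no longer depends on which language the user happens to type in.
# The model name and framing text are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_cultural_frame(question: str, frame: str) -> str:
    """Ask the same question under an explicitly stated cultural frame."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat model works
        messages=[
            {"role": "system",
             "content": f"Answer from a {frame} cultural perspective "
                        f"and state the assumptions behind your advice."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "Suggest a slogan for a new team-collaboration app."
for frame in ("individualist, analytic", "interdependent, holistic"):
    print(f"--- {frame} ---")
    print(ask_with_cultural_frame(question, frame))
```

Comparing the two outputs makes the cultural framing visible and auditable instead of leaving it implicit in the choice of language.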
This research fits within a broader conversation about how AI adoption is transforming organisations. A Chief Learning Officer article notes that generative AI mirrors its authors and training data, and can therefore amplify existing biases. The rapid adoption of AI is revolutionising industries, workflows and job descriptions. It also raises difficult questions about intellectual property, data privacy, workforce training and job displacement. Examples of bias are already well documented: facial recognition systems have shown higher error rates for dark-skinned women than for white men, and hiring algorithms have been found to favour male-coded words on resumes. AI’s influence on creativity is also complex; experiments show that AI advice can make individual outputs more creative while reducing the diversity of ideas across a group. These findings underscore why organisations must approach AI adoption as a cultural challenge as well as a technical one.
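The hiring-algorithm finding is easy to probe in miniature. The following toy audit, whose word lists are illustrative fragments rather than a validated lexicon, flags gender-coded wording of the kind such systems have been shown to reward or penalise:

```python
# A toy audit for gender-coded wording in job-related text.
# The word lists below are illustrative fragments, not a validated lexicon.
import re

MASCULINE_CODED = {"competitive", "dominant", "decisive", "ambitious", "lead"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing", "loyal", "interpersonal"}

def audit_gender_coding(text: str) -> dict:
    """Count masculine- and feminine-coded words found in the text."""
    words = re.findall(r"[a-z]+", text.lower())
    return {
        "masculine": sorted(w for w in words if w in MASCULINE_CODED),
        "feminine": sorted(w for w in words if w in FEMININE_CODED),
    }

posting = ("We want a dominant, competitive self-starter who can lead "
           "a collaborative team.")
print(audit_gender_coding(posting))
# {'masculine': ['competitive', 'dominant', 'lead'], 'feminine': ['collaborative']}
```

Even a crude check like this makes the point: bias that is invisible in a single document becomes measurable once it is counted across many.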
Against this backdrop, OpenAI’s corporate structure and mission take on new relevance. OpenAI was founded in 2015 as a nonprofit to ensure that artificial general intelligence benefits all of humanity. In 2019 it created a for-profit subsidiary to scale research and deployment, but the nonprofit retained control. A recapitalisation in October 2025 converted the for-profit into a public benefit corporation and renamed the nonprofit the OpenAI Foundation, which continues to control the for-profit entity. Both organisations share the same mission, and the structure links financial success to mission impact. The Foundation holds a significant equity stake, ensuring that resources flow back into research and philanthropic work. This governance model is designed to align commercial incentives with the goal of developing safe and beneficial AI.
OpenAI’s own research emphasises the need for careful preparation as AI reshapes work. Its Workforce Blueprint reports that ChatGPT has reached 800 million weekly users, and analysis of 1.5 million de-identified conversations shows that most people use it for seeking information, getting practical guidance and writing. The blueprint argues that AI is currently more of an enabler than a replacer of human work. To understand economic impact, OpenAI researchers developed an evaluation called GDPval, which tests models on real-world tasks. The results show that GPT-5-level systems match or exceed human professionals on about half of these tasks, completing them in minutes instead of hours. However, a Stanford study cited in the blueprint finds evidence that early-career employment may be negatively affected as AI improves. The report therefore calls for accelerated education and training: programmes like OpenAI Academy and OpenAI Certifications aim to give millions of workers the skills to thrive in an AI-enabled economy. The blueprint also stresses that jobs involve collaboration, judgement and relationships – qualities that current models do not replicate – and that OpenAI’s mission is to keep humans responsible for important decisions. The blueprint frames access to AI as a right, and the company advocates policies that democratise AI and support reskilling.
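To make the ‘about half’ figure concrete, the sketch below shows one way a headline number like GDPval’s can be aggregated, assuming (consistent with the evaluation’s described setup) that expert graders compare a model deliverable against a human professional’s for each task; the task data and field names here are invented for illustration:

```python
# Illustrative aggregation of grader verdicts into a win-or-tie rate.
# The verdicts and task IDs below are invented; only the arithmetic is real.
from dataclasses import dataclass

@dataclass
class TaskResult:
    task_id: str
    grader_verdict: str  # "model", "human", or "tie"

def win_or_tie_rate(results: list[TaskResult]) -> float:
    """Fraction of tasks where the model matched or beat the human expert."""
    favourable = sum(r.grader_verdict in ("model", "tie") for r in results)
    return favourable / len(results)

results = [
    TaskResult("legal-brief-01", "model"),
    TaskResult("sales-deck-07", "human"),
    TaskResult("nursing-plan-03", "tie"),
    TaskResult("audit-memo-12", "human"),
]
print(f"win-or-tie rate: {win_or_tie_rate(results):.0%}")  # prints: 50%
```

A ‘match or exceed on about half of tasks’ claim is exactly this kind of ratio, which is why the choice of tasks and graders matters as much as the model being tested.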
Bringing these threads together, a coherent picture emerges: AI development, deployment and cultural impact are deeply interconnected. Research shows that AI systems reflect cultural patterns and biases; organisational studies show that AI adoption changes how teams work and can exacerbate inequalities; and OpenAI’s mission and governance model are intended to ensure that AI benefits society at large. Our approach to research reflects this interdependence. We draw on peer-reviewed studies, reports from reputable organisations and OpenAI’s own data to understand how AI influences culture and work. By integrating insights from MIT Sloan and the Chief Learning Officer article with OpenAI’s blueprint, we can advise leaders on how to harness AI responsibly. This means acknowledging and mitigating cultural bias, investing in education and wellbeing, and aligning AI strategies with mission-driven values.
Leaders should treat AI adoption as both a technical and cultural project. Recognise that AI models carry cultural assumptions and that these assumptions can shape business decisions. Build governance structures that link commercial success to public benefit, and invest in training programmes that prepare workers for new roles. Above all, keep humans at the centre: AI can accelerate our work, but it should never replace our capacity for judgement, creativity and empathy.


