Scholarly open access journal, peer-reviewed and refereed, impact factor 8.14 (calculated by Google Scholar and Semantic Scholar | AI-Powered Research Tool), multidisciplinary, monthly, indexed in all major databases and metadata, citation generator, Digital Object Identifier (DOI)
Generative Artificial Intelligence (GenAI) is a new frontier for computational creativity: large language models (LLMs) and diffusion models offer unprecedented capabilities for generating text, images, audio, and video. However, no system is developed in a vacuum; GenAI models are trained on vast volumes of human data that often entrench and amplify the biases, prejudices, and social injustices endemic to society. This paper asserts that, because GenAI must rely on such human data, it will exacerbate injustice on a never-before-seen scale if deployed without appropriate ethical safeguards. The proposed ethical framework is informed by John Rawls's justice as fairness and by Immanuel Kant's respect for autonomy. Rawls provides a distributive perspective from which to assess how the benefits and burdens of AI are apportioned, so that we attend to socio-political justice rather than technical fixes alone. Kant underscores the importance of transparency in communicating both human and AI activity, so that individuals are empowered through informed consent and their autonomy and agency are respected. Kant also offers developers and designers a guide to articulating how users may engage with models and to holding those models accountable by design. By analyzing bias in image and language generation and assessing the effectiveness of various corporate mitigation efforts, the paper concludes that only by integrating these justice principles across the AI lifecycle (from data curation through deployment and auditing) can we manage GenAI's risks, realize its innovative opportunities, and direct it toward a more just and equitable society.
The paper closes with an ethical prescription drawn from these conclusions, identifies key research gaps where philosophical theory must be translated into technical implementation, and calls for policy standards that codify these fairness principles.
Keywords:
Generative AI, Algorithmic Bias, Algorithmic Fairness, AI Ethics, John Rawls, Immanuel Kant, Distributive Justice, Autonomy, Transparency, Explainable AI (XAI), Research Gaps.
Cite Article:
"Justice in the Machine: A Rawlsian and Kantian Framework for Fairness in Generative AI", International Journal of Science & Engineering Development Research (www.ijrti.org), ISSN:2455-2631, Vol.10, Issue 8, page no.b366-b372, August-2025, Available: http://www.ijrti.org/papers/IJRTI2508148.pdf
Downloads:
000348
ISSN:
2456-3315 | IMPACT FACTOR: 8.14 Calculated by Google Scholar | ESTD YEAR: 2016