The games of imitation: AI and a philosophy towards future equilibrium


  • Todd J. Barry, Hudson County Community College



Keywords: Artificial Intelligence, Game Theory, Turing Test, Nash Equilibrium


This brief conceptual article begins by arguing for Artificial Intelligence (AI)'s ability to "think."  This capacity relates to humans' and AI's power over nature, and to AI's increasing power in its humanness, as measured by its performance against humans and other AI machines in the Turing Test and in economic "game theory."  Both challenges, and especially the latter, are quintessentially human in that they measure how one values the self against society under varying conditions.  Given that AI's advancements presumably enable it to "win" at the most human of games, perhaps even beyond reaching a universally beneficial "socially optimal" outcome, and thus possibly to hold more power than humankind, the article argues for an equilibrium of balanced powers in innovation between AI and humans.  Managers, broadly construed, can therefore function as key brokers between government policy makers and innovators as AI and humans continue to develop into the future.

