Abstract
Artificial intelligence is the new technological buzzword. Everything from camera apps on mobile phones to medical diagnosis algorithms to expert systems is now claimed to be ‘AI’, and many more facets of our lives are being colonized by the application of AI/ML systems (henceforth, ‘AI’). But what does this entail for designers, users, and society at large? Most of the philosophical discourse in this context has focused on analysing and clarifying the epistemological claims of intelligence within AI and on the moral implications of AI. Philosophical critiques of the plausibility of artificial intelligence have little to say about the real-world repercussions of introducing AI systems into every possible domain in the name of automation and efficiency; similarly, most moral misgivings about AI concern conceiving of AI systems as autonomous agents beyond the control of human actors. These discussions have clarified the debate surrounding AI to a great extent; however, certain crucial questions remain outside their ambit. Arguments in support of AI often exploit this void by emphasizing that AI systems are no different from previously existing ‘unintelligent’ technologies, thereby implying that the economic, existential, and ethical threats posed by these systems are either similar in kind to those posed by any other technology or grossly misplaced and exaggerated. In this chapter, I shall think through this assumption by investigating the ontology of AI systems vis-à-vis ordinary (non-AI) technical artefacts to see where the distinction between the two lies. I shall examine how contemporary ontologies of technical artefacts (e.g., intentionalist and non-intentionalist theories of function) apply to AI. Clarifying the ontology of AI is thus crucial to understanding its normative and moral significance and the implications that follow.