I was excited about AI. I’ve always been excited about AI. When I taught Introduction to Philosophy, I used to teach a section on AI and introduce the class to a bot I know. I own an Echo Dot. So, I’ve been very excited. That is, until I talked to a programmer I know about schizophrenia.
You see, I think that programmers, particularly those who do AI, may know more than philosophers about the mind. They try, after all, to replicate it at times.
But that, of course, means they need to know the many ways in which the mind works, including various mental disabilities, even if to just understand it, not to necessarily replicate it.
And talking to a programmer (granted, just one) about schizophrenia is tough. They tried to envision it. “Is it like daydreaming?” they asked.
“No,” I said. “It’s not at all like daydreaming.”
A philosopher who shall remain nameless told me that if you understand schizophrenia, you understand the whole of the human mind.
So, if my programmer friend can’t understand schizophrenia, what hope do I have for AI?
What does this mean for the future of AI? I don’t know. But it seems that programmers carry a normative view of what kind of human mind should be replicated. And it’s not a mind with schizophrenia—even if, as my philosopher friend says, understanding schizophrenia is understanding everything.