At the end of every spring semester, my high school has a “Special Projects” week. Essentially, a small group of students and a teacher take one week to talk about a (not necessarily educational) topic of the teacher’s choice. In my case, I learned about the philosophical implications of Artificial Intelligence and thought that this might be a good place to share/flesh out some of my thoughts.
Note: a lot of our discussion was based on the book “The Mind’s I” by Douglas Hofstadter and Daniel Dennett, so some of my ideas may seem very similar to those presented in the book.
One topic that interested me in particular was the question of whether AI entities are capable of, or will ever be capable of, thought: are they conscious? Wikipedia defines consciousness as:
the quality or state of being aware of an external object or something within oneself.
By that definition, modern AIs are certainly conscious. We have robots that can recognize emotions, play catch, and drive cars. All of these actions require knowledge and awareness of external things. Perhaps a better question is whether or not it is possible for an AI to be sentient, which Wikipedia defines as:
the ability to feel, perceive, or be conscious, or to experience subjectivity.
Ah hah! Now we’re getting to something more interesting. What does it mean to feel? How does feeling feel? Can computers feel? At first glance, the answers seem obvious. Of course computers can’t feel! Computers are metal boxes made of wires and gears and plastic. But this argument immediately fails: humans are nothing more than cells, chemicals, and flesh. So, in my opinion, physical makeup doesn’t have much influence on the sentience of an entity. If aliens are made of something other than flesh and bones, does that make them unintelligent?
One interesting point that came up is this: how do you know that I am conscious? Because I look like you? Because I act like you? You see, we give each other the benefit of the doubt when it comes to consciousness, but we don’t afford computers that same luxury. We expect computers to prove that they are conscious. I don’t know that I can prove that I’m conscious, and yet I’ve never had my sentience questioned. So the question remains: is it sufficient for a computer to mimic intelligence? How would we know the difference? According to Alan Turing (arguably the father of Artificial Intelligence and computer science in general), imitation is indeed sufficient. His famous “Turing Test” serves to measure just how well computers can mimic humans. The gist of the test is that an artificially intelligent entity is questioned by a human interrogator. The interrogator doesn’t know if (s)he is talking to a computer or another human. If the AI is truly intelligent, it will be able to make the interrogator think that it is human. The popular internet chat-bot Cleverbot passes the Turing test 59% of the time!
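The structure of the test itself can be sketched in a few lines of Python. Everything here is a made-up toy for illustration: the canned chatbot, the questions, and the judge’s crude heuristic are all stand-ins, not a real AI or a real interrogator.

```python
# A toy sketch of the Turing test's structure: an interrogator poses
# questions to a hidden subject, reads the transcript, and guesses
# whether the subject is human or a machine.

def canned_chatbot(question: str) -> str:
    """A trivial stand-in for the AI under test."""
    replies = {
        "How are you?": "I'm fine, thanks. And you?",
        "What is 2 + 2?": "Four, last time I checked.",
    }
    return replies.get(question, "Hmm, tell me more about that.")

def naive_judge(transcript) -> str:
    """A crude heuristic judge: suspicious of overly generic answers."""
    generic = sum("tell me more" in answer.lower() for _, answer in transcript)
    return "machine" if generic > len(transcript) / 2 else "human"

def run_turing_test(subject, questions, judge) -> str:
    """One session: ask every question, then render a verdict."""
    transcript = [(q, subject(q)) for q in questions]
    return judge(transcript)  # "human" or "machine"

verdict = run_turing_test(
    canned_chatbot,
    ["How are you?", "What is 2 + 2?", "Do you dream?"],
    naive_judge,
)
print(verdict)  # prints "human": only 1 of 3 answers was a generic dodge
```

The point of the sketch is that the judge never sees wires or flesh, only the transcript, which is exactly Turing’s move: intelligence is judged by behavior alone.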
So what do you think? How do you define artificial intelligence? Is it possible? Personally, I think it’s definitely achievable. I don’t think modern AI is anywhere close to truly reaching human-level intelligence, but with the way modern computing is advancing, it may not be as far off as I think. Maybe I really will get my own Jarvis.