As I’ve been working on VSGL recently, I’ve found myself tripping on the same stepping stone over and over. As developers, we’re taught that flexible, generic, adaptable code is the best code: code that can be reused across a wide variety of projects. That was my mindset when I started building VSGL, and I found myself simply writing wrapper functions for many of OpenGL’s native functions. In reality, I wasn’t making a Very Simple Graphics Library at all; I was just mimicking OpenGL.

At that point I took a step back and reevaluated my objectives. What was the point of the library? Conveniently, I had pretty much spelled it out for myself: I was building a Very Simple Graphics Library. Not a Very Adaptable Graphics Library, not a Super Robust Graphics Library. I was aiming for simplicity, and simplicity meant making some decisions for the user; letting them set every little option and variable themselves would totally defeat the purpose. The result might not be ideal for 1% of users, but it would be perfect for the other 99%. I have a tendency to go above and beyond to make my projects work in 100% of reasonable scenarios, but in this case, making the code work for that 1% could have made the library unusable for the other 99%. Sometimes the 1% just isn’t worth it.
The other day I was visiting a museum that had these cool pieces of physics-based art. One was a small plastic chair mounted on one end of a see-saw-style pivot. Independent of the mount was a small surface upon which there was a small plastic cat. The surface would move back and forth as the chair moved up and down. Occasionally, the chair would collide with the cat, causing the chair to bounce even higher (video).
My friend and I stared at this “kinetic sculpture” for at least 15 minutes debating whether or not it was a chaotic system (it is) and discussing how hard it would be to model the system in a simulation. I got home and immediately wanted to try it out. It was pretty obvious that a 2D simulation would suffice, but it wouldn’t be nearly as cool. I want 3D! So I began looking at graphics libraries. It was scary. Lots of vectors and rotation matrices and big words I didn’t even know. None of it was simple. Even creating a basic shape like a sphere required at least 20 lines of code, and then I would have to mess with shaders and vertices…no thank you. The rendering logic should take no more than 6 lines of code:
// Create a new scene
Scene scene(400, 400, 400);
// Create a sphere
Sphere sphere;
sphere.setPosition(200, 200, 200);
// Make it a red sphere
sphere.setTexture(SolidFill(1, 0, 0));
// Add it to the scene and render it
scene.addObject(sphere);
scene.render();
As of now, I have yet to find a 3D graphics library that can render a sphere that easily. I’m sure it exists, but it’s hiding very well. So, I’m writing one. Even if there is such a library, I do have other motives for the project…
3D graphics seem really cool, and I’d like to understand them
I’m forcing myself to learn vim, and if I can stand the inevitable speed decrease, this project should help with that
I need a project, and something to write about
I’ve started a very basic library that can, as of now, draw a circle on a window, but I want to do a little bit more work before I post it on GitHub (which I will). I’m hoping that writing about it will give me more motivation to actually finish the project, unlike the twenty unfinished projects haunting my Programming folder.
If you know of any 3D graphics libraries that resemble the type I’m describing, let me know on Twitter; I’d love to check them out.
At the end of every spring semester, my high school has a “Special Projects” week. Essentially, a small group of students and a teacher take one week to talk about a (not necessarily educational) topic of the teacher’s choice. In my case, I learned about the philosophical implications of Artificial Intelligence and thought that this might be a good place to share/flesh out some of my thoughts.
Note: a lot of our discussion was based on the book “The Mind’s I” by Douglas Hofstadter and Daniel Dennett, so some of my ideas may seem very similar to those presented in the book.
One topic that interested me in particular was the question of whether AI entities are capable of, or will ever be capable of, thought: are they conscious? Wikipedia defines consciousness as:
the quality or state of being aware of an external object or something within oneself.
By that definition, modern AIs are certainly conscious. We have robots that can recognize emotions, play catch, and drive cars. All of these actions require knowledge and awareness of external things. Perhaps a better question is whether or not it is possible for an AI to be sentient, which Wikipedia defines as:
the ability to feel, perceive, or be conscious, or to experience subjectivity.
Ah hah! Now we’re getting to something more interesting. What does it mean to feel? How does feeling feel? Can computers feel? At first glance, the answers seem obvious: of course computers can’t feel! Computers are metal boxes made of wires and gears and plastic. But this argument immediately fails: humans are nothing more than cells, chemicals, and flesh. So, in my opinion, physical makeup doesn’t have much influence on the sentience of an entity. If aliens were made of something other than flesh and bones, would that make them unintelligent?
One interesting point that came up is this: how do you know that I am conscious? Because I look like you? Because I act like you? You see, we give each other the benefit of the doubt when it comes to consciousness, but we don’t afford computers that same luxury. We expect computers to prove that they are conscious. I don’t know that I can prove that I’m conscious, and yet I’ve never had my sentience questioned. So the question remains: is it sufficient for a computer to mimic intelligence? How would we know the difference? According to Alan Turing (arguably the father of Artificial Intelligence, and of computer science in general), imitation is indeed sufficient. His famous “Turing Test” serves to measure just how well computers can mimic humans. The gist of the test is that an artificially intelligent entity is questioned by a human interrogator who doesn’t know whether (s)he is talking to a computer or another human. If the AI is truly intelligent, it will be able to convince the interrogator that it is human. The popular internet chatbot Cleverbot reportedly passes the Turing test 59% of the time!
So what do you think? How do you define artificial intelligence? Is it possible? Personally, I think it’s definitely achievable. I don’t think modern AI is anywhere close to truly reaching human-level intelligence, but with the way modern computing is advancing, it may not be as far off as I think. Maybe I really will get my own Jarvis.