A lot has been said recently about Artificial Intelligence (AI). The day machines and systems finally come alive isn't so far into the future. Personally, I think we need not be afraid until these AI systems take the final step of becoming sentient. The thing is, some of them have already taken this step.
This final step will be taken along two fronts or properties. I would like to define these properties of an artificial intelligence using a human analogy: its brain and its arms.
The arms are a system's ability to accomplish a task. As technology advances, the arms of AI grow longer and more powerful by the day. Machines are doing more and more. They have gone from dumb calculators to advanced systems, but usually with humans, or human-made laws, at the helm of affairs. What brings us nearer to the AI revolution is the growing capacity of the brain. The AI brain, just like a human's, is the decision-making part of a system. Initially a set of rigid logical rules defined by humans, it is rapidly growing into a dense network of self-modifying decision pathways.
None of this is news, so why the recent fear? Here again, I will look at it on two fronts: the arms and the brain.
Most systems hold a lot of knowledge about their users and their properties and can interpret their actions, but are they aware of themselves? Take MySQL, for example, a relational database management system. In simpler terms, it handles databases, saving data in tables of columns and rows, and handles users: authentication, access and so on. The beauty of MySQL is that it is itself a database. It does not use some convoluted scheme to store databases and tables. When you create a database, MySQL adds a new record to its own database table. A new table is handled in a similar manner. When you create a column of a particular datatype, MySQL notes that "this is a column under the foobar table with the foobar type", with the type selected from an existing table of types. You want to add a new user with a password? No problem. It sticks the user in alongside the others in its users table, hashing the password and saving the access properties in columns. It is so simply efficient that it is beautiful. It is a very good example of the Don't Repeat Yourself (DRY) principle.
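You can see this pattern for yourself without installing MySQL. SQLite follows the same self-describing design: every table you create becomes a row in its built-in sqlite_master catalogue table, which you query like any other table. (MySQL exposes the equivalent through its own system tables and INFORMATION_SCHEMA; SQLite is used here only because it ships with Python and the sketch stays self-contained.)

```python
import sqlite3

# An in-memory database; like MySQL, SQLite describes itself with
# its own machinery rather than some separate convoluted format.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")
conn.execute("CREATE TABLE posts (author TEXT, body TEXT)")

# Creating those tables added rows to the catalogue table
# sqlite_master, which we query with ordinary SQL.
rows = conn.execute(
    "SELECT name, type FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()
print(rows)  # [('posts', 'table'), ('users', 'table')]
```

The same SELECT machinery that serves user data also serves the system's own bookkeeping: one code path, reused.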
That might be a little too technical, so let's use a simpler example. Suppose Facebook were designed this way. Say a user named facebook existed on Facebook with all the usual properties: a profile picture, friends, groups, pages, brands and events. With such a model, every new user becomes a friend of facebook, so that the same code that handles a friend count also tells you the number of users on Facebook, and an unfriend from facebook doubles as an account closure. That is good code reuse right there.
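A minimal sketch of that idea, assuming a toy User class (the names facebook, befriend and so on are illustrative, not Facebook's actual data model):

```python
class User:
    """A toy user with a mutual friend list."""

    def __init__(self, name):
        self.name = name
        self.friends = set()

    def befriend(self, other):
        # Friendship is mutual, so update both sides.
        self.friends.add(other)
        other.friends.add(self)

    def unfriend(self, other):
        self.friends.discard(other)
        other.friends.discard(self)

    def friend_count(self):
        return len(self.friends)


# The system itself is just another user...
facebook = User("facebook")

# ...so signing up means befriending facebook, and the ordinary
# friend counter doubles as the total-user counter.
alice, bob = User("alice"), User("bob")
for u in (alice, bob):
    facebook.befriend(u)
print(facebook.friend_count())  # 2 users on the platform

# Closing an account is the same code path as unfriending.
facebook.unfriend(bob)
print(facebook.friend_count())  # 1 user left
```

No separate count_users or close_account logic exists: the user-facing code is reused for the system, which is the DRY point of the analogy.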
This brings a lot of advantages. The first is that an application programming interface (API) becomes easier to design, since the needs of a regular user are similar to the system's own: we write once for the system and deploy for all users. A second advantage is bug discovery. When a system is its own biggest user, you avoid scenarios where a client has to inform you of glitches in your own system, like YouTube's view counter.
What I've just described is the ability of an AI arm. On this front, the outcome is inevitable. We cannot stop our technological march in this direction. Machines will get better and better at handling society. At some point in the future, concepts like "sign-up", "log-out" and "upload" will be obsolete: physical birth will be the sign-up, physical death will simply be interpreted as "inactive", and events will be recorded as they occur.
But stronger arms for our AI systems are not the problem. The brain of future AI is where the war lies. Let's extend our facebook user scenario. We have a system where every Facebook user is a friend of the user facebook. Say the system's 'arms' have become perfect at telling whether a video shows a real decapitation rather than a staged one. The system can now add the names of all users who upload this video, or similar ones, to a list.
Now, if said AI runs this list through some algorithm, determines that the people on it share certain geographical, religious or gender properties, or all have similar names, and decides to delay new uploads from users who fit these criteria until their content can be screened, would that be stereotyping? How would such an AI be convinced this isn't the most efficient behaviour, when the same approach already works for weather prediction or cancer detection?
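To make the worry concrete, here is a toy sketch of that generalizing step. The data, the single "region" attribute and the majority rule are all invented for illustration; the point is only how easily a shared attribute on a flagged list becomes a rule applied to everyone.

```python
from collections import Counter

# Toy data: users already on the flagged list, each with one
# illustrative attribute. Entirely invented for demonstration.
flagged = [
    {"name": "u1", "region": "north"},
    {"name": "u2", "region": "north"},
    {"name": "u3", "region": "south"},
]

# The 'brain' looks for the attribute value shared by most of the list...
counts = Counter(u["region"] for u in flagged)
majority_region, _ = counts.most_common(1)[0]

# ...and then delays uploads from ANY user matching it, including
# users who have done nothing wrong: guilt by shared attribute.
def should_delay(user):
    return user["region"] == majority_region

innocent = {"name": "u4", "region": "north"}
print(should_delay(innocent))  # True
```

Statistically this is just the pattern-finding that works so well for weather or cancer; applied to people, the same five lines are a stereotype.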
A delay in uploading creative content to a social medium might not be too limiting. An AI that applies similar rules to banking, travel and security would be far more disruptive. This is what I fear: AI with Atlas-like arms to carry the world, but with a brain whose super-power is no advantage.