AI and Morality: Will They Ever Coexist?

GeekOWT - AI

We are at a turning point in tech. With the continued stream of announcements of advances in AI, it seems we are on the verge of a technological renaissance: the beginning of a period of technological advancement the world has never seen before. On one hand, this could represent a significant evolution of human understanding; on the other, it could put humanity at serious risk.

New breakthroughs are happening every day, and every industry is rushing to find new uses for AI. From AI lawyers to AI shopping assistants, it seems that everywhere you turn, it is there. Beyond the consumer applications, the military is increasingly dependent on automation and smart systems to keep soldiers out of harm's way, while at the same time distancing them from the actual atrocities of war. Letting autonomous machines handle more and more of the decision making will be inevitable, especially when time is of the essence. But most research is so caught up with what can be achieved that very few are stopping to consider whether it should be done, or whether it is being done in the right way.

Artificial Intelligence, should we be doing it?

The answer to this question, I would say, is an absolute “yes!”

Human intelligence is a finite resource. We are limited by physical and biological constraints that will not allow the human mind to solve many of the puzzles we currently face. By leveraging AI, we will be able to tackle far greater problems in much shorter time.

In this respect there is no question that AI will be of great value and an indispensable tool. The untiring mind of an AI will continue to run and forge on, unrestrained, until a solution is reached. The mounds of data generated daily by people can easily be sifted through by AI in a matter of hours, where a human would have taken years or possibly decades.

So there is no doubt that AI is a critical part of our future; how we invite it to be a part of our future is the question.

AI and Morality

If you have been following the recent developments, you will know that AI has branched off into more specialized areas such as “deep learning” and neural networks.

Without delving too deep into the tech, it is basically like teaching a child.

The AI slowly learns how the world works through interaction with the programmer or through observation. For example, you show the AI thousands of pictures of cats. The AI analyzes each picture and picks out distinctive features that it proceeds to associate with the concept of a cat: pointy ears, whiskers, fur, claws, etc. After that, you give it another batch of photos containing possible cats and other animals and ask it to find the cats. It analyzes the batch, determines what it thinks are cats, and presents them to the programmer. The programmer corrects any mistakes in recognition and presents another batch to the AI.

Through this process, the AI progressively learns what constitutes a cat. But, unlike a child, this iterative learning process does not require years of practice and trial and error. It can occur over a couple of days, if not hours.

This, of course, is an oversimplification, but the basic process is accurate. This is where the parallels with a human child are drawn. Just like a child, it will learn and grow, albeit at an exponentially faster rate. In theory, there is really nothing AI can’t learn if given a big enough pool of data.
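The “show, guess, correct” loop described above can be sketched with a toy perceptron, a far simpler learner than any real deep learning system. The binary features and example data below are invented purely for illustration; real systems learn their own features from raw pixels.

```python
# Toy illustration of the iterative "show, guess, correct" teaching loop.
# A single perceptron learns to flag "cat" from hand-made binary features.

# Each example: ([pointy_ears, whiskers, fur, claws], is_cat)
training_batches = [
    [([1, 1, 1, 1], 1), ([0, 0, 1, 1], 0), ([1, 1, 1, 0], 1)],
    [([0, 1, 0, 0], 0), ([1, 1, 1, 1], 1), ([0, 0, 0, 0], 0)],
]

weights = [0.0, 0.0, 0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate: how strongly each correction adjusts the model

def predict(features):
    """Guess 1 ("cat") if the weighted feature score is positive."""
    score = bias + sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0 else 0

# The "programmer" corrects every wrong guess; repeat over the batches
# until the guesses stabilize.
for _ in range(20):
    for batch in training_batches:
        for features, label in batch:
            error = label - predict(features)  # 0 when the guess was right
            if error:
                bias += lr * error
                for i, f in enumerate(features):
                    weights[i] += lr * error * f

print(predict([1, 1, 1, 1]))  # a very cat-like example -> 1
print(predict([0, 0, 0, 0]))  # clearly not a cat -> 0
```

After a handful of correction rounds the model reliably separates the two classes, which is the essence of the iterative process, minus the scale and the learned features.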

To take this a step further, scientists have recently experimented with developing robots that can feel pain. The aim is to make robots self-aware enough not to allow themselves to be damaged, or, in certain cases, not to cause damage or injury to others.

At the same time, researchers are attempting to create AI that has emotions. They believe that with emotions, AI will be able to better interact with humans and be more understanding and compassionate.

But all these areas of research raise serious moral questions. Using the child analogy, a child takes time to learn the rights and wrongs of the world. Through trial and error and numerous mistakes, he or she learns the morals of their societal and cultural surroundings. In most cases, this trial-and-error process is harmless and causes no lasting harm to the child or those around them. And since the process takes years, those around the child are usually prepared to guide and protect them.

With AI, though its mind is like a child’s, its actions are not. Right from the start, AI has the precision and the ability to execute actions and calculations without error or hesitation. Its speed and accuracy are not underdeveloped or hindered in any way. It has also become evident that the rate of learning of these deep learning systems is outpacing the understanding of their creators. In some cases, the complexity of the learning is so high that the creators may never fully understand how the systems were able to do what they did.

Google’s AlphaGo was able to beat a human champion at the game of Go, but that was still based on systematic analysis of game patterns. A similar system was recently tasked with sending a message that only another system could read, while a third was asked to eavesdrop. Initially, the eavesdropping system was able to figure out the message easily, but after about 15,000 iterations, the first two systems worked out an encryption method that only they could decrypt. Though the encryption method was very basic, the programmers have no idea how the systems were able to create this “new language,” and this should give people pause.

While we race to create a more advanced AI, we should at the same time be working equally hard to create a morally sound AI.

These moral issues have long been investigated in the realm of science fiction, from Asimov’s “Three Laws of Robotics” to more recent films like Ex Machina. We are on the verge of sci-fi becoming reality, and now is the time to start thinking about the consequences of unchecked development of AI.

Human Morality

On the other end of the spectrum is the question of human morality. With the ever increasing use of automation in warfare and the rise of cyber terrorism, how well are we monitoring our own morality in the use of technology itself?

Assuming that AI remains within our control for the foreseeable future, how we utilize AI will quickly become the next topic of consideration. With the current explosive growth of AI and its power, will AI be the modern version of the nuclear arms race? Will countries race to build more powerful AI super networks as a deterrent to foreign powers? As warfare becomes increasingly cyber, this seems a plausible outcome. If Iran builds the most powerful AI on the planet, would it not be in the interest of the US or neighboring nations to build an equally powerful, if not better, AI to counter it?

Elon Musk, Stephen Hawking, Bill Gates and Steve Wozniak have all spoken out about unchecked AI development as a very real threat to humanity’s existence. They are not speaking of an uprising of robots that suddenly decide humans are a threat, but rather of the idea that AI, if left unchecked, will leave room for mistakes that we as humanity cannot recover from: the increasing use of and dependence on AI and automation in the military, the use of AI for surveillance and profiling, and so on. The list continues to grow. As Hawking put it,

“humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”

It is these questions that we should be asking and seeking to answer just as feverishly as we build and develop AI. Will we as people even be capable of developing proper ethical AI guidelines when we are seemingly still struggling to deal with our own ethics in the real world? Without asking the questions, we will never know the answers.

Larger corporations have started to take a serious look into the ethics of AI, but still with very limited fanfare and public oversight.

The “Partnership on AI,” formed by Google, Facebook, Amazon, IBM and Microsoft, is one of the most notable efforts, yet its members are also some of the corporations doing the most research in AI.

“OpenAI,” backed by Elon Musk, is a non-profit artificial intelligence research company focused on carefully promoting and developing friendly AI in such a way as to benefit, rather than harm, humanity as a whole.

 

With all that said, I am very much in support of the development of AI; I just hope that we will be able to temper our enthusiasm for it with the proper ethical and moral considerations that will be needed. We don’t need another nuclear arms race. We as a human race should be able to work together to secure our future and utilize AI to help us grow beyond our limitations and differences, toward a much brighter existence. That is, if we act now.

– GeekOWT

 

 
