Augmented Intelligence: Cooperation with AI

Recent press surrounding artificial intelligence seems fixated on the fear that intelligent machines of the future will steal jobs from humans and essentially replace us. While it is certain that our society has pursued, and will continue to pursue, the quest to automate, the increasing role of AI is no more a threat to human value than the invention of the tractor, the self-checkout lane (soon to be realized more completely as Amazon Go), e-mail, or any of the thousands of other inventions of the past century that make daily life easier, cheaper, and less labor intensive.

At Causality Link, we envision a bright and thriving economy in which machines are pervasive participants. It is a mistake to consider AI as an all-or-nothing proposition with regard to human participation. The simple reminders and suggestions from Alexa, Siri, and Cortana represent just a tiny fraction of the potential for AI systems to cooperate with humans and, in fact, to increase human capabilities in the process.

On the surface, creating an AI-based system that has the potential to make us better at our jobs is difficult to fault: we all need help, and becoming more efficient at one’s job should be a desirable goal for every competent worker. The invention of tools has been the cornerstone of progress in efficiency, as we systematically identify ways to improve the repetitive tasks necessary to get work done.

However, implementing such a system is much more complex than building an autonomous, ‘black-box’ AI system. We will explore some of the reasons for this complexity.

If we first analyze the historical AI of expert systems, we find that the sequence of implementation steps was rather straightforward: knowledge engineers would interview an expert in a specific field, then use the available technology to transform this knowledge into executable code, typically a mix of ontologies, control structures, and rules. The resulting system would then be iteratively benchmarked against various test cases: the original expert would analyze the differences between their answers and the system’s results, modifications or additions would be made to the ontology, the control structure, or the rules, and the cycle would repeat until the system was considered ready for deployment.
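
To make this concrete, here is a minimal sketch of how such grains of knowledge might look in code: a toy forward-chaining rule engine in Python. The rules, facts, and names below are invented for illustration and are not drawn from any real knowledge base:

```python
# A minimal, hypothetical forward-chaining rule engine, illustrating how
# expert-system knowledge decomposes into small, readable rules.
# The rules and facts below are invented examples, not a real knowledge base.

from typing import Callable, NamedTuple

class Rule(NamedTuple):
    name: str                         # human-readable label experts can debate
    condition: Callable[[set], bool]  # fires when these facts are present
    conclusion: str                   # fact added when the rule fires

RULES = [
    Rule("r1_oil_up_costs_up",
         lambda facts: "oil_price_rising" in facts,
         "airline_costs_rising"),
    Rule("r2_costs_up_margins_down",
         lambda facts: "airline_costs_rising" in facts,
         "airline_margins_under_pressure"),
]

def forward_chain(facts: set) -> set:
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in RULES:
            if rule.condition(derived) and rule.conclusion not in derived:
                print(f"{rule.name} fired -> {rule.conclusion}")
                derived.add(rule.conclusion)
                changed = True
    return derived

print(forward_chain({"oil_price_rising"}))
```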

In short, the AI system was told what to do, and would then apply a fast, complex, but understandable behavior to various scenarios. The advantage of such a system was that it was composed of many small, readable grains of knowledge, and experts could productively debate the validity of each of them until a consensus was reached.

Fast forward to neural networks, where the process of building a system is rather different: an enormous number of training cases is fed to a neural net, which automatically rearranges its internal weights to optimize the quality of its responses; the result is then checked for accuracy on test cases that were not part of the training set. This process is tweaked and repeated in the hope that the system will reach a level of performance that justifies its deployment. In this case, the AI system is provided with an experience, and it infers a desirable behavior when presented with similar-enough cases.
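
The loop looks roughly like the following sketch: a tiny two-layer network trained on synthetic data, with held-out test cases deciding whether performance justifies “deployment.” The data, architecture, and hyperparameters here are all illustrative assumptions:

```python
# A minimal sketch of the train-then-test loop described above: a tiny
# two-layer network on synthetic data, NumPy only. Everything here
# (data, sizes, learning rate) is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "experience": 1,000 cases with 4 features and a binary label.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# Hold out cases the net never trains on: the final accuracy check.
X_train, y_train = X[:800], y[:800]
X_test, y_test = X[800:], y[800:]

# One hidden layer; the weights "rearrange themselves" via gradient descent.
W1 = rng.normal(scale=0.5, size=(4, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(200):
    # Forward pass.
    h = np.tanh(X_train @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the mean cross-entropy loss.
    d_out = (p - y_train) / len(X_train)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    dW1, db1 = X_train.T @ d_h, d_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# The deployment decision hinges on held-out accuracy, not training fit.
test_p = sigmoid(np.tanh(X_test @ W1 + b1) @ W2 + b2)
print("held-out accuracy:", ((test_p > 0.5) == y_test).mean())
```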

If one wants to create a collaboration between humans and an AI system, in which humans can both provide the system with experience and guide its learning process, as well as learn and improve from the system’s operation, the underlying process and architecture become much more complex. A balance must be achieved between teaching and inferring.

We are faced with questions such as:

  • How can we represent what the system has learned so that humans can approve it, correct it, or disregard it?
  • How can we protect an instruction provided by a human, so that it is not erased by a new experience, and how can we detect that this instruction must be re-evaluated because it has become obsolete? (One possible representation is sketched after this list.)
  • How do we efficiently trace the versions of a system with so many moving pieces (explicit knowledge, training data sets, code versions) that performing regression testing becomes quite a challenge?
  • How do we properly balance the moving frontier between what must be told and what can be learned automatically?
  • How do we handle contradictions between the AI and humans, or among the human experts themselves?
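
To make the second question above concrete, here is one possible, purely illustrative representation of a grain of knowledge that records where it came from, whether a human has pinned it against being overwritten, and when it must be re-vetted. The class and field names are assumptions, not a production schema:

```python
# A hypothetical schema for a "grain of knowledge" whose provenance is
# explicit, so human instructions survive retraining yet can expire.
# All names and fields here are illustrative assumptions.

from dataclasses import dataclass
from datetime import date
from enum import Enum, auto
from typing import Optional

class Source(Enum):
    HUMAN = auto()     # explicitly taught by an expert
    LEARNED = auto()   # inferred from training data

@dataclass
class KnowledgeItem:
    statement: str
    source: Source
    confidence: float
    pinned: bool = False              # pinned items cannot be erased by learning
    review_by: Optional[date] = None  # past this date, the item must be re-vetted

    def can_be_overwritten(self, today: date) -> bool:
        """New learning may replace this item only if it is not pinned,
        or if its scheduled review date has passed."""
        if not self.pinned:
            return True
        return self.review_by is not None and today > self.review_by

# A human instruction, protected until its scheduled re-evaluation.
item = KnowledgeItem(
    statement="rising oil prices pressure airline margins",
    source=Source.HUMAN,
    confidence=0.9,
    pinned=True,
    review_by=date(2020, 1, 1),
)
print(item.can_be_overwritten(date(2021, 6, 1)))  # True: the instruction is stale
```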

There is no one right answer to any of these questions, but each clearly has ramifications for system design that must be accommodated if we are to have a chance of answering them with increasing sophistication as the solution evolves.

Here are a few of the design guidelines that we have found essential in building an AI platform whose contribution to analyzing the stock market is understandable and predictable, but also flexible in adjusting itself to ever-changing conditions:

  • Build a flexible and distributed event-driven architecture capable of easily integrating new human or computer-based contributors to the “debate” (see the first sketch after this list).
  • Take the time necessary to build and maintain sophisticated, deterministic test cases that can be automatically run on a regular basis – even within production systems. Consider these health monitors just as important as the system-level checks good systems use to keep tabs on CPU, memory, network, IO, etc. (see the second sketch after this list). It won’t be acceptable, or even possible, to propagate conditions to a staging or test environment hours after something has changed that breaks core assumptions.
  • Carry as much explicit context from end to end as is practical. Reproducing a problem in a distributed system can be very difficult. Engineers are trained to optimize and will be constantly tempted to persist only the output of each task they ask of the system. When expert users collaborate in the system, they must be able to quickly understand the conditions that led to a specific outcome in one part of the system; otherwise we quickly fall into a rut of ‘guess-and-check’ improvement (see the third sketch after this list).
  • Become analytically driven. Instrumentation and proactive reporting, even if only delivered internally, are essential to understanding how the system is evolving, as is creating a very efficient learning loop between the AI and internal analysts. This requires keeping close track of changes over time, continually applying improvements, and then reporting back the net changes that occur.
  • Enable the system to degrade gracefully in the face of errors and contradictions. A collaborative system will by nature vary in its imperfections; the quality of human feedback will depend on the user base and its motivation to contribute. One of the gears in the machine may suddenly stop moving or, even worse, turn unpredictably. The architecture must prioritize setting a sound foundation of deterministic services, and then allow a subset of less predictable contributors, whether collaborating humans or numerous AI services.
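
To illustrate the first guideline, here is a minimal sketch (standard-library Python) of an event-driven “debate” bus: deterministic services and less predictable contributors, human or AI, subscribe side by side, and a failing contributor degrades gracefully instead of halting the pipeline. The topics and contributors are invented for illustration:

```python
# A minimal event-driven "debate" bus. Deterministic core services and
# unpredictable contributors subscribe to the same topics; a failure in
# one contributor is contained rather than fatal. All names are invented.

from collections import defaultdict
from typing import Callable

class DebateBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]):
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict):
        for handler in self.subscribers[topic]:
            try:
                handler(event)
            except Exception as exc:
                # Graceful degradation: log and continue; one broken
                # "gear" must not stop the rest of the machine.
                print(f"contributor {handler.__name__} failed: {exc}")

bus = DebateBus()

def deterministic_indexer(event: dict):
    print("indexed:", event["claim"])

def flaky_ai_annotator(event: dict):
    raise RuntimeError("model endpoint timed out")  # simulated failure

def human_reviewer(event: dict):
    print("queued for human review:", event["claim"])

# New contributors join the debate simply by subscribing.
bus.subscribe("claims", deterministic_indexer)
bus.subscribe("claims", flaky_ai_annotator)
bus.subscribe("claims", human_reviewer)

bus.publish("claims", {"claim": "oil prices will pressure airline margins"})
```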
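
For the second guideline, here is a sketch of deterministic test cases doubling as production health monitors; the analyze() function and the cases themselves are stand-ins, not actual services:

```python
# Deterministic test cases run as recurring health checks, in production
# as well as in staging. The analyze() function and the cases below are
# hypothetical stand-ins for real services and real curated cases.

def analyze(text: str) -> str:
    """Stand-in for a deterministic production service."""
    return "negative" if "falls" in text else "positive"

# Cases with known answers, maintained as carefully as the code itself.
TEST_CASES = [
    ("Revenue falls for the third quarter", "negative"),
    ("Demand is expected to grow", "positive"),
]

def run_health_checks() -> bool:
    healthy = True
    for text, expected in TEST_CASES:
        got = analyze(text)
        if got != expected:
            healthy = False
            print(f"HEALTH CHECK FAILED: {text!r}: expected {expected}, got {got}")
    return healthy

# Invoke on a schedule (cron, a scheduler thread, etc.) so that broken
# core assumptions surface within minutes, not hours.
print("healthy" if run_health_checks() else "degraded")
```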
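
And for the third guideline on carrying context, here is a sketch in which each processing stage appends to an explicit trail rather than persisting only its output, so that an expert can later inspect the conditions that produced an outcome. The stage names and payloads are illustrative:

```python
# Carrying explicit context end to end: each stage records what it saw
# and why, instead of keeping only its result. Stages and payloads here
# are invented examples of a text-analysis pipeline.

from dataclasses import dataclass, field

@dataclass
class Context:
    payload: object
    trail: list = field(default_factory=list)  # every step that touched the data

    def step(self, stage: str, new_payload: object, note: str = "") -> "Context":
        self.trail.append({"stage": stage, "input": self.payload, "note": note})
        return Context(new_payload, self.trail)

ctx = Context("Oil prices surged 8% overnight.")
ctx = ctx.step("sentence_splitter", ["Oil prices surged 8% overnight."])
ctx = ctx.step("event_extractor", {"entity": "oil price", "direction": "up"},
               note="matched pattern 'surged'")
ctx = ctx.step("impact_inference", {"airline_margins": "down"},
               note="rule r2 applied")

# An expert reviewing the outcome sees how it was reached, not just the result.
for record in ctx.trail:
    print(record["stage"], "->", record["note"] or record["input"])
```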

We all participate in some sort of workflow as we undertake the tasks that make up our daily professional lives. Computer-based intelligence systems can be very effective at providing the context and information relevant to the decisions that move this natural flow forward. Today’s distributed cloud computing systems have more than enough horsepower to evaluate conditions at each stop along our workflow, and even to predict the likelihood of our human decisions at each step of the way.

However, rather than generate new black boxes that simply arrive at the most likely conclusion, our approach adopts the above principles to quickly and efficiently guide users through the ‘thought process’, allowing input and adjustment along the way, so that machine and man work together to arrive where the human wanted to be in less time.  

Experts and scientists have collaborated for millennia through debates that either question previous assumptions or apply them to arrive at profound new conclusions. Crucially, each step of the thought-process journey was transparently documented before being accepted by peers.

As the voice of AI becomes increasingly sophisticated and able to contemplate quantities of data no human could match, we need architectures that seamlessly juxtapose this voice and its reasoning with the chorus of human experts. Our ability to inspect, refute, and build upon the logic of both machine and man will accelerate progress in our understanding of the world, and guarantee the perpetual value of, and need for, humans as active participants in that progress.