Fil Salustri's Design Site

Intelligent Simulation Architecture

Adding AI concepts to standard discrete event simulation.

by Filippo A. Salustri, Christophe Lucas, David Porte, and others.

Domain Description

A factory, plant or other industrial process system is assumed to exist.


  • Under normal conditions, the plant operates at some known, baseline level.
  • Data is available by which that baseline level, and deviations from it, can be measured.
  • The measure of baseline is the plant's productivity, P.

Whenever a resource fails in the plant, the productivity P changes.

The goal of this project is to build a system that will “learn” how to respond to different resource failures by altering the operation of the plant to minimize the change of productivity arising from the failure (dP). That is, we are not trying to learn how to fix the failure, but rather how to continue production while the failure is being fixed.

Assumption: the only resource failures of interest are machine failures.

There are a variety of solution strategies that can be used to re-task the plant while the failed machine is fixed. The goal of these strategies is to minimize the change in productivity (dP) during the fix period. A solution strategy is a general algorithmic technique that addresses a particular category of problems. In this case, a solution strategy is a function mapping instances of a category of plant failures to particular directions by which the change in productivity during the fix period can be minimized.

In the general case, any strategy may potentially be applied to any failure type. In specific cases, however, some strategies are better than others. So, for a given failure type, we expect there to exist a weight for each solution strategy.

The weight is a measure of how well that solution strategy does when applied to a particular failure type exemplified in a case.

Given a problem (i.e., a failure F), one chooses the solution strategy whose weight is greatest (i.e., the one with the highest likelihood of success).
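
This selection rule can be sketched in a few lines of Python. The names and the numeric weights are purely illustrative assumptions; the architecture itself leaves the representation of weights open.

```python
def select_strategy(strategies, weights, failure_type):
    """Pick the strategy whose weight for this failure type is greatest."""
    return max(strategies, key=lambda s: weights[(s, failure_type)])

# Hypothetical weights for one failure type:
weights = {
    ("reroute_jobs", "lathe_failure"): 0.7,
    ("schedule_overtime", "lathe_failure"): 0.4,
}
best = select_strategy(["reroute_jobs", "schedule_overtime"],
                       weights, "lathe_failure")
# best is "reroute_jobs", the strategy with the greater weight
```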

The project's goal can then be rephrased as follows: to build a system that can learn, by training of some sort, how to select strategies for arbitrary kinds of failures.

Solution Architecture

The software agent (SA) being designed incorporates aspects of both case-based reasoning (CBR) and reinforcement learning (RL). CBR is used to pair past failures with their solutions, and to categorize these cases so that reasoning tasks can be carried out to match new failures to those in the case library. RL is used to assign rewards and punishments for solutions suggested by the agent to new problems, and to keep a history of past “experiences” (rewards and punishments), so that the system's behaviour can be expected to improve with use.

All solution strategies, S, are grouped into a set {S}. A solution strategy is a case-independent description of a technique to solve a class of problems.

The problem to be solved is defined by a failure F. F has the same format as the problem component of a case.

A case is denoted by C. The case library is a set of cases, denoted {C}. A case consists of a failure F and a sequence of pairs of solution strategies S and weighting functions W.

Each solution strategy S can be weighted with respect to (1) its past performance in solving a failure of type C.F, (2) the similarity of a library case C.F to some new failure F to be solved, and (3) characteristics of the strategy that depend on certain (types of) characteristics of the case. Thus a weight is represented by a function W that combines these three components in some way. This means solution strategy weights are implemented as functions of both F and C.F, to take into account the similarities between cases, the differences between cases, and the environmental considerations in which the cases occur.

A case then consists of a representative failure F, a list of solution strategies, and a list of weighting functions:

  • C: [F, (S), (W)]

such that each solution strategy S has a corresponding weight W.

Let the following notation be used:

  • C.F is the failure associated with case C;
  • C.S is one of the (possibly many) solution strategies associated with case C; and
  • W.C.S is the weighting function associated with C.S.
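
The case structure C: [F, (S), (W)] can be sketched as a pair of small data classes. The field names and types are assumptions made for this sketch, not something the architecture prescribes; the failure is shown as a bag of attributes since F and C.F share the same format.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Failure:
    attributes: Dict[str, Any]  # problem description; same format for F and C.F

@dataclass
class Case:
    failure: Failure                                     # C.F
    strategies: List[str] = field(default_factory=list)  # the C.S entries
    weight_fns: List[Callable[..., float]] = field(default_factory=list)  # the W.C.S entries

# Each strategy pairs with the weighting function at the same position.
c = Case(Failure({"RID": "lathe-3"}),
         strategies=["reroute_jobs"],
         weight_fns=[lambda F, CF: 0.5])
```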

It is not necessary to assume the IA (Intelligent Agent) has already been trained; that is, the training (learning) process and the standard operating process are integrated. Further, we assume the IA can query the simulation or plant control system. In general terms, that process is as follows:

  1. The IA is notified of a plant failure F.
    • F is defined using the same structure as a case problem.
  2. All cases in the library are compared to F to determine the similarity index of each C.F with respect to F.
  3. The cases are then sorted in order of descending similarity index.
  4. The most similar (highest-ranked) cases are chosen.
  5. There is a threshold similarity; any case C that is at least that similar to F is considered a matching case.
  6. If there are no matching cases in a particular instance, then
    1. a new case is added to the library, whose C.F is F,
    2. all reasonable solutions (based on the input requirements of each solution) are associated with the new case, and
    3. all the solutions in the new case are given equal weights (so no solution is initially de facto preferred).
  7. For each matching case C, each solution's weighting function is called with F and C.F as its arguments. The function may also access general information about the environment in which the cases are defined, but that is not defined as part of the case per se.
  8. The solutions are then sorted in order of descending weight, and the solution with the best weight is put forward as the solution for F.
  9. The plant's simulation then implements the chosen solution, and a new dP is measured after implementation.
  10. If the new dP is less than the old dP, then the IA is rewarded; if the new dP is more than the old dP, the IA is punished.
    • The amount of the reward or punishment is: (dP.old - dP.new) / dP.old.
    • This amount is added (how? summed? weighted sum?) to the weight of the chosen solution for the chosen case by making it available to the weighting function for that solution strategy.
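
The loop above can be condensed into a sketch. Everything here is a simplifying assumption: similarity() and simulate() are hypothetical stand-ins, the threshold value is arbitrary, weights are shown as plain numbers rather than the weighting functions described above, and the reward is simply summed onto the weight (one answer to the open "how?" question).

```python
THRESHOLD = 0.8  # threshold similarity (open question: what value?)

def handle_failure(F, library, similarity, simulate, all_strategies):
    # Steps 2-5: rank cases by similarity and keep those above threshold.
    ranked = sorted(library, key=lambda C: similarity(C["F"], F), reverse=True)
    matches = [C for C in ranked if similarity(C["F"], F) >= THRESHOLD]

    # Step 6: no match -> new case with all strategies, equal weights.
    if not matches:
        new_case = {"F": F, "S": list(all_strategies),
                    "W": {s: 1.0 for s in all_strategies}}
        library.append(new_case)
        matches = [new_case]

    # Steps 7-8: score each (case, strategy) pair and take the best.
    best_case, best_s = max(
        ((C, s) for C in matches for s in C["S"]),
        key=lambda cs: cs[0]["W"][cs[1]])

    # Steps 9-10: implement in simulation, measure dP, reward or punish.
    dP_old, dP_new = simulate(best_s, F)
    reward = (dP_old - dP_new) / dP_old  # assumed form of the reward
    best_case["W"][best_s] += reward     # "added" here as a plain sum
    return best_s

lib = []
chosen = handle_failure({"RT": "lathe"}, lib,
                        similarity=lambda a, b: 1.0 if a == b else 0.0,
                        simulate=lambda s, F: (10.0, 4.0),
                        all_strategies=["reroute_jobs", "schedule_overtime"])
```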

Similarity Index

The similarity index is calculated by comparing corresponding attributes of C.F and F. The resulting value is normalized to the range [0, 1], where 0 means no similarity at all, and 1 means that C.F and F are identical (with respect to the attributes).
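
One simple way to realize such an index is to average per-attribute matches over the attributes the two failures share. This is only a sketch under assumed representations (attribute dictionaries, exact-equality comparison); a real implementation would likely need graded, per-attribute comparisons.

```python
def similarity(cf: dict, f: dict) -> float:
    """Fraction of shared attributes on which C.F and F agree, in [0, 1]."""
    keys = set(cf) & set(f)
    if not keys:
        return 0.0
    return sum(1 for k in keys if cf[k] == f[k]) / len(keys)

score = similarity({"RT": "lathe", "L": "bay-2"},
                   {"RT": "lathe", "L": "bay-7"})
# score is 0.5: the failures agree on RT but not on L
```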

Threshold Similarity

The threshold similarity is intended to distinguish good (or “strong”) correspondences between cases and new problems from spurious or very weak ones. A low similarity is typical of cases for which the system has not yet been well trained, so such cases should not be chosen. As the system trains, instances of cases being matched but not chosen should decrease.

Question: to what value should this threshold be set?


F and C.F are represented as objects with attributes as follows:

  • RID: resource identifier uniquely identifying the machine that failed.
  • RT: type of resource
    • It is assumed that the class hierarchy for resources has three levels.
    • The most abstract is machine, which allows distinguishing between any object that is a machine and any other object.
    • The second level consists of specific types of machines: lathe, mill, etc.
    • The third level consists of instances of machine types that are the actual machines in the plant.
    • This means we do not distinguish between different types of, say, lathes. This is a simplifying assumption only.
    • Attribute RT may be calculated, rather than stored, by using class hierarchy information; e.g. calling a class-of function.
  • D: duration of fixing process; the amount of time needed to fix the failure. It is a stochastic value.
  • L: location of the failed machine in the plant.
    • This could be implemented as a function or a set of functions that, say, return the list of other machines near the failed machine, or that could access a mechanism (say, a linked list) that allows traversal of the sequence of machines.
  • T: time at which failure occurred or is expected to occur.

Assumption: At this time, the location of a failed machine does not include physical location information (e.g. how far it is from other machines), but rather only topological information.
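
The attribute list above can be sketched as a class. The concrete values, and the choice to represent the topological location L as a list of neighbouring machine ids, are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MachineFailure:
    RID: str      # resource identifier of the failed machine
    RT: str       # resource type, e.g. "lathe" (could be computed from class info)
    D: float      # duration of the fixing process (stochastic in practice)
    L: List[str]  # topological location: ids of neighbouring machines
    T: float      # time at which the failure occurred or is expected to occur

f = MachineFailure(RID="lathe-3", RT="lathe", D=45.0,
                   L=["mill-1", "lathe-4"], T=120.0)
```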


Multiple Solution Checks

Instead of choosing only the best solution from a given case, we could check all solutions for a particular failure F, using the simulation to calculate a dP for each solution and then choosing the one with the best reward value. That solution is then returned as the best solution, and the solution's weight is altered to include the difference between the simulated dP versus the actual dP from implementing the solution in the real plant.

This same technique can be used for learning purposes if sufficient real cases are available to perform the comparison noted above.
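
A minimal sketch of this multiple-solution check, under assumptions: simulate_dP is a hypothetical stand-in for a simulation run, smaller dP is better, and the simulated-versus-actual difference is folded into the weight by simple addition with one possible sign convention.

```python
def best_by_simulation(solutions, F, simulate_dP):
    """Simulate every candidate solution for F; return the one with smallest dP."""
    sims = {s: simulate_dP(s, F) for s in solutions}
    return min(sims, key=sims.get), sims

def adjust_weight(weights, s, simulated_dP, actual_dP):
    """Fold the simulated-vs-actual gap into the solution's weight (one convention)."""
    weights[s] += simulated_dP - actual_dP
    return weights[s]

best, sims = best_by_simulation(
    ["reroute_jobs", "schedule_overtime"], None,
    lambda s, F: {"reroute_jobs": 3.0, "schedule_overtime": 1.0}[s])
# best is "schedule_overtime", whose simulated dP (1.0) is smallest
```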

Application to Other Domains

Real-time Control of Traffic Lights

The same architecture could be recast to deal with the problem of failing traffic lights. Assume a number of traffic lights are controlled by a computer, and that information from existing sensors in the pavement is available to that computer in real time. A failure in this case could be a power failure in some section of the light grid, leaving some of the lights no longer controllable. The control computer would use its knowledge base to re-time the remaining functional traffic lights to help minimize traffic congestion while the malfunctioning lights are being repaired.

In the long run, any change to the state of the traffic could be input to the system. The timing of the lights depends on current traffic. If the sensors report traffic information in real time, changes in traffic flow can be treated as “failures,” in that the system's timing of the lights is only optimal for a given traffic level. Changes to traffic would then trigger the SA to select and implement a different strategy for timing the lights.

Air Traffic Control

When a new aircraft enters a controlled airspace, an air traffic controller is responsible for directing it. As air traffic changes, different strategies are used to direct the traffic. The proposed architecture could be modified by loading different solution strategies and cases to implement the transmission of instructions to aircraft.

See Also



research/intelligent_simulation_architecture.txt · Last modified: 2020.03.12 13:30 (external edit)