Wednesday, July 28, 2010

Perception.

PERCEPTION

The definition of AI is based on the nature of the problems it tackles, namely those for which humans currently outperform computers, including cognitive tasks. Apart from these two aspects, many other tasks also fall within this realm, such as basic perceptual and motor skills, in which even lower animals possess phenomenal capabilities compared to computers.

Perception involves interpreting sights, sounds, smells, and touch. Action includes the ability to navigate through the world and manipulate objects. If we want to build robots that live in the world, we must understand these processes. Figure 4.3 shows a design for a complete autonomous robot. Most of AI is concerned only with cognition; to build complete robots, we might hope to simply add sensors and effectors to existing programs. But the problems in perception and action are substantial in their own right and are being tackled by researchers in the field of robotics.



In the past, robotics and AI have been largely independent endeavors, and they have developed different techniques to solve different problems. One key difference between AI programs and robots is that AI programs usually operate in computer-simulated worlds, while robots must operate in the physical world. For example, in the case of moves in chess, an AI program can search millions of nodes in a game tree without ever having to sense or touch anything in the real world. A complete chess-playing robot, on the other hand, must be capable of grasping pieces, visually interpreting board positions, and carrying out a host of other actions. The distinction between real and simulated worlds has several implications, as given below.



Figure 4.3: A design for an autonomous robot.
1. The input to an AI program is symbolic in form (for example, a typed English sentence), whereas the input to a robot is typically an analog signal, such as a two-dimensional video image or a speech waveform.



2. Robots require special hardware for perceiving and affecting the world, while AI programs require only general-purpose computers.



3. Robot sensors are inaccurate, and their effectors are limited in precision.



4. Many robots must react in real time. A robot fighter plane, for example, cannot afford to search optimally or to stop monitoring the world during a LISP garbage collection.



5. The real world is unpredictable, dynamic, and uncertain. A robot cannot hope to maintain a correct and complete description of the world. This means that a robot must consider the trade-off between devising and executing plans. This trade-off has several aspects. For one thing, a robot may not possess enough information about the world for it to do any useful planning. In that case, it must first engage in information-gathering activity. Furthermore, once it begins executing a plan, the robot must continually monitor the results of its actions. If the results are unexpected, then re-planning may be necessary.



6. Because robots must operate in the real world, searching and backtracking can be costly.

Recent years have seen efforts to integrate research in robotics and AI. The old idea of simply adding sensors and effectors to existing AI programs has given way to a serious rethinking of basic AI algorithms in light of the problems involved in dealing with the physical world. Research in robotics is likewise affected by AI techniques, since reasoning about goals and plans is essential for mapping perceptions onto appropriate actions.



At this point one might ask whether physical robots are necessary for research purposes. Since current AI programs already operate in simulated worlds, why not build more realistic simulations, which better model the real world? Such simulators do exist. There are several advantages to using a simulated world: experiments can be conducted very rapidly, conditions can easily be replicated, programs can return to previous states at no cost, and sensory input can be generated without fragile, expensive mechanical parts. The major drawback to simulators is figuring out exactly which factors to build in. Experience with real robots continues to expose tough problems that do not arise even in the most sophisticated simulators. The world turns out, not surprisingly, to be an excellent model of itself, and a readily available one.

We perceive our environment through many channels: sight, sound, touch, smell, taste. Many animals possess these same perceptual capabilities, and others are also able to monitor entirely different channels. Robots, too, can process visual and auditory information, and they can also be equipped with more exotic sensors, such as laser rangefinders, speedometers, and radar.

Two extremely important sensory channels for humans are vision and spoken language. It is through these two faculties that we gather almost all of the knowledge that drives our problem-solving behaviors.
Vision: Accurate machine vision opens up a new realm of computer applications. These applications include mobile robot navigation, complex manufacturing tasks, analysis of satellite images, and medical image processing. The question is how we can transform raw camera images into useful information about the world.
A video camera provides a computer with an image represented as a two-dimensional grid of intensity levels. Each grid element, or pixel, may store a single bit of information (that is, black/white) or many bits (perhaps a real-valued intensity measure and color information). A visual image is composed of thousands of pixels. What kinds of things might we want to do with such an image? Here are four operations, in order of increasing complexity:

1. Signal Processing:- Enhancing the image, either for human consumption or as input to another program.



2. Measurement Analysis:- For images containing a single object, determining the two-dimensional extent of the object depicted.



3. Pattern Recognition:- For single-object images, classifying the object into a category drawn from a finite set of possibilities.



4. Image Understanding:- For images containing many objects, locating the objects in the image, classifying them, and building a three-dimensional model of the scene.
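As a concrete illustration of the pixel representation and the simplest of the four operations, here is a minimal sketch (with invented intensity values) of an image as a 2-D grid, reduced to one bit per pixel by thresholding:

```python
# A hypothetical 3x3 grayscale image: each pixel is an intensity in [0, 255].
image = [
    [ 12,  40, 200],
    [  8, 180, 220],
    [  5,  30, 190],
]

def threshold(img, cutoff=128):
    """Reduce each pixel to a single bit: 1 (white) if intensity >= cutoff."""
    return [[1 if p >= cutoff else 0 for p in row] for row in img]

binary = threshold(image)
# binary is [[0, 0, 1], [0, 1, 1], [0, 0, 1]]
```

Real images have thousands of pixels rather than nine, but the representation is the same grid of numbers.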



There are algorithms that perform the first two operations. The third operation, pattern recognition, varies in its difficulty. It is possible to classify two-dimensional (2-D) objects, such as machine parts coming down a conveyor belt, but classifying 3-D objects is harder because of the large number of possible orientations for each object. Image understanding is the most difficult visual task, and it has been the subject of the most study in AI. While some aspects of image understanding reduce to measurement analysis and pattern recognition, the entire problem remains unsolved, because of difficulties that include the following:



1. An image is two-dimensional, while the world is three-dimensional. Some information is necessarily lost when an image is created.



2. One image may contain several objects, and some objects may partially occlude others.



3. The value of a single pixel is affected by many different phenomena, including the color of the object, the source of the light, the angle and distance of the camera, the pollution in the air, and so on. It is hard to disentangle these effects.



As a result, 2-D images are highly ambiguous. Given a single image, we could construct any number of 3-D worlds that would give rise to it; from the image alone, it is impossible to decide which 3-D solid it portrays. In order to determine the most likely interpretation of a scene, we have to apply several types of knowledge.



Speech Recognition: Natural language understanding systems usually accept typed input, but for a number of applications this is not acceptable. Spoken language is a more natural form of communication in many human-computer interfaces. Speech recognition systems have been available for some time, but their limitations have prevented widespread use. Below are five major design issues in speech systems. These issues also provide dimensions along which systems can be compared with one another.



1. Speaker Dependence versus Speaker Independence :



A speaker-independent system can listen to any speaker and translate the sounds into written text. Speaker independence is hard to achieve because of the wide variations in pitch and accent. It is easier to build a speaker-dependent system, which can be trained on the voice of a particular speaker.



General Learning Model.

General Learning Model:- As noted earlier, learning can be accomplished using a number of different methods, such as by memorizing facts, by being told, or by studying examples such as problem solutions. Learning requires that new knowledge structures be created from some form of input stimulus. This new knowledge must then be assimilated into a knowledge base and be tested in some way for its utility. Testing means that the knowledge should be used in the performance of some task from which meaningful feedback can be obtained, where the feedback provides some measure of the accuracy and usefulness of the newly acquired knowledge.
The general learning model is depicted in Figure 4.1, where the environment has been included as a part of the overall learner system. The environment may be regarded either as a form of nature which produces random stimuli or as a more organized training source, such as a teacher which provides carefully selected training examples for the learner component. The actual form of environment used will depend on the particular learning paradigm. In any case, some representation language must be assumed for communication between the environment and the learner. The language may be the same representation scheme as that used in the knowledge base (such as a form of predicate calculus). When they are chosen to be the same, we say the single representation trick is being used. This usually results in a simpler implementation, since it is not necessary to transform between two or more different representations.

For some systems the environment may be a user working at a keyboard . Other systems will use program modules to simulate a particular environment. In even more realistic cases the system will have real physical sensors which interface with some world environment.

Inputs to the learner component may be physical stimuli of some type or descriptive, symbolic training examples. The information conveyed to the learner component is used to create and modify knowledge structures in the knowledge base. This same knowledge is used by the performance component to carry out some tasks, such as solving a problem, playing a game, or classifying instances of some concept.

Given a task, the performance component produces a response describing its action in performing the task. The critic module then evaluates this response relative to an optimal response.

Feedback, indicating whether or not the performance was acceptable, is then sent by the critic module to the learner component for its subsequent use in modifying the structures in the knowledge base. If proper learning was accomplished, the system’s performance will have improved with the changes made to the knowledge base.

The cycle described above may be repeated a number of times until the performance of the system has reached some acceptable level, until a known learning goal has been reached, or until changes cease to occur in the knowledge base after some chosen number of training examples have been observed.
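The environment-learner-performance-critic cycle described above can be sketched as a toy program. Everything here is illustrative: the knowledge base is a single threshold parameter, and the concept being learned ("x is at least 2") is invented for the example.

```python
# Toy sketch of the general learning model's cycle.
knowledge_base = {"threshold": 0.0}   # a single learned parameter

def perform(x):
    """Performance component: classify x using the current knowledge base."""
    return x >= knowledge_base["threshold"]

def critic(x, correct):
    """Critic module: feedback is simply whether the response was right."""
    return perform(x) == correct

def learner(x, correct):
    """Learner component: nudge the threshold when feedback says we were wrong."""
    if not critic(x, correct):
        knowledge_base["threshold"] += 0.5 if perform(x) else -0.5

# Environment: carefully selected training instances of the concept "x >= 2".
training = [(0, False), (1, False), (2, True), (3, True)]
for _ in range(10):                   # repeat the cycle several times
    for x, label in training:
        learner(x, label)
```

After a few passes the threshold settles between 1 and 2 and the system classifies every training instance correctly, at which point the cycle could stop.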

There are several important factors which influence a system’s ability to learn in addition to the form of representation used. They include the types of training provided, the form and extent of any initial background knowledge , the type of feedback provided, and the learning algorithms used. 

The type of training used in a system can have a strong effect on performance, much the same as it does for humans. Training may consist of randomly selected instances or of examples that have been carefully selected and ordered for presentation. The instances may be positive examples of some concept or task being learned, they may be negative, or they may be a mixture of both positive and negative. The instances may be well focused, using only relevant information, or they may contain a variety of facts and details including irrelevant data.

Many forms of learning can be characterized as a search through a space of possible hypotheses or solutions. To make learning more efficient, it is necessary to constrain this search process or reduce the search space. One method of achieving this is through the use of background knowledge, which can be used to constrain the search space or exercise control operations which limit the search process.

Feedback is essential to the learner component, since otherwise it would never know whether the knowledge structures in the knowledge base were improving or whether they were adequate for the performance of the given tasks. The feedback may be a simple yes or no type of evaluation, or it may contain more useful information describing why a particular action was good or bad. Also, the feedback may be completely reliable, providing an accurate assessment of the performance, or it may contain noise; that is, the feedback may actually be incorrect some of the time. Intuitively, the feedback must be accurate more than 50% of the time; otherwise the system would never learn. If the feedback is reliable and carries useful information, the learner should be able to build up a useful corpus of knowledge quickly. On the other hand, if the feedback is noisy or unreliable, the learning process may be very slow and the resultant knowledge incorrect.


Knowledge Acquisition By Expert System.

KNOWLEDGE ACQUISITION

The success of knowledge-based systems lies in the quality and extent of the knowledge available to the system. Acquiring and validating a large corpus of consistent, correlated knowledge is not a trivial problem. This has given the acquisition process an especially important role in the design and implementation of these systems. Consequently, effective acquisition methods have become one of the principal challenges for AI researchers.

The goals of this branch of AI are the discovery and development of efficient, cost effective methods of acquisition. Some important progress has recently been made in this area with the development of sophisticated editors and some general concepts related to acquisition and learning.

Definition:- Knowledge acquisition is the process of adding new knowledge to a knowledge base and refining or otherwise improving knowledge that was previously acquired. Acquisition is usually associated with some purpose, such as expanding the capabilities of a system or improving its performance at some specified task. It is the goal-oriented creation and refinement of knowledge. The acquired knowledge may consist of facts, rules, concepts, procedures, heuristics, formulas, relationships, statistics, or other useful information. Sources of this knowledge may include one or more of the following:

Experts in the domain of interest

Text Books

Technical papers

Databases

Reports

The environment

To be effective, the newly acquired knowledge should be integrated with existing knowledge in some meaningful way so that nontrivial inferences can be drawn from the resultant body of knowledge. The knowledge should, of course, be accurate, nonredundant, consistent (noncontradictory), and fairly complete in the sense that it is possible to reliably reason about many of the important conclusions for which the system was intended.

Types of learning:- A classification or taxonomy of learning types serves as a guide in studying or comparing the differences among them. One can develop learning taxonomies based on the type of knowledge representation used (predicate calculus, rules, frames), the type of knowledge learned (concepts, game playing, problem solving), or the area of application (medical diagnosis, scheduling, prediction, and so on).

A classification based on the type of inference strategy employed, or the methods used in the learning process, is intuitively more appealing and is one which has become popular among machine learning researchers. It is independent of the knowledge domain and the representation scheme used. The five different learning methods under this taxonomy are:

Memorization (rote learning)

Direct instruction(by being told)

Analogy

Induction

Deduction

Learning by memorization is the simplest form of learning. It requires the least amount of inference and is accomplished by simply copying the knowledge, in the same form that it will be used, directly into the knowledge base. We use this type of learning when we memorize multiplication tables, for example.

A slightly more complex form of learning is by direct instruction. This type of learning requires more understanding and inference than rote learning, since the knowledge must be transformed into an operational form before being integrated into the knowledge base. We use this type of learning when a teacher presents a number of facts directly to us in a well-organized manner.

The third type listed, analogical learning, is the process of learning a new concept or solution through the use of similar known concepts or solutions. We use this type of learning when solving problems on an examination, where previously learned examples serve as a guide, or when we learn to drive a truck using our knowledge of car driving. We make frequent use of analogical learning. This form of learning requires still more inferring than either of the previous forms, since difficult transformations must be made between the known and unknown situations. This is a kind of application of knowledge in a new situation.

The fourth type of learning is also one that is used frequently by humans. It is a powerful form of learning which, like analogical learning, also requires more inferring than the first two methods. This form of learning requires the use of inductive inference, a form of invalid but useful inference. We use inductive learning when we formulate a general concept after seeing a number of instances or examples of the concept. For example, we learn the concepts of color or sweet taste after experiencing the sensations associated with several examples of colored objects or sweet foods.

The final type of acquisition is deductive learning. It is accomplished through a sequence of deductive inference steps using known facts. From the known facts, new facts or relationships are logically derived. Deductive learning usually requires more inference than the other methods. The inference method used is, of course, a deductive type, which is a valid form of inference.

In addition to the above classification, we will sometimes refer to learning methods as either weak methods or knowledge-rich methods. Weak methods are general-purpose methods in which little or no initial knowledge is available. These methods are more mechanical than the classical AI knowledge-rich methods. They often rely on a form of heuristic search in the learning process.

Expert Systems - Dendral, Mycin.

DENDRAL

DENDRAL is a program that analyses organic compounds to determine their structure. It is one of the early examples of a successful AI program. It uses a strategy called plan-generate-test, in which a planning process that uses constraint-satisfaction techniques creates lists of recommended and contraindicated substructures.

 MYCIN

Mycin is a program that diagnoses infectious diseases. It reasons backward from its goal of determining the cause of a patient’s illness. It attempts to solve its goal of recommending a therapy for a particular patient by first finding the cause of the patient’s illness. It uses its production rules to reason backward from goals to clinical observations. To solve the top-level diagnostic goal, it looks for rules whose right sides suggest diseases. It then uses the left sides of those rules (the preconditions) to set up subgoals whose success would enable the rules to be invoked. These subgoals are again matched against rules, and their preconditions are used to set up additional subgoals.
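The backward-chaining strategy described above can be sketched in a few lines. The rules and observations here are invented for illustration; they are not Mycin's actual rule base.

```python
# A minimal backward-chaining sketch. Each rule pairs a set of
# preconditions (left side) with a conclusion (right side).
RULES = [
    ({"gram_negative", "rod"}, "e_coli_suspected"),
    ({"e_coli_suspected", "blood_culture_positive"}, "bacteremia"),
]

OBSERVATIONS = {"gram_negative", "rod", "blood_culture_positive"}

def prove(goal, observations):
    """Reason backward: a goal holds if it is directly observed, or if
    some rule concludes it and all of that rule's preconditions can in
    turn be proved as subgoals."""
    if goal in observations:
        return True
    for preconditions, conclusion in RULES:
        if conclusion == goal and all(prove(p, observations) for p in preconditions):
            return True
    return False

# prove("bacteremia", OBSERVATIONS) works back from the diagnostic goal
# through subgoals down to the clinical observations.
```

The recursive calls on the preconditions are exactly the subgoals the text describes: each is matched against the rules until the chain bottoms out in observations.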

Mycin is a well-known rule-based deduction system. Its expertise lies in the domain of bacterial infections. Physicians usually must begin antibiotic treatment for patients who have bacterial infections without knowing exactly which organism is the culprit. There is no time to wait for definitive laboratory culture evidence, which accumulates too slowly. For the desperately sick, therapy must begin at once, not two days from now. The physician can either prescribe a broad-spectrum drug that covers all possibilities, or she can prescribe a better, disease-specific drug.

Mycin helps the physician to prescribe disease-specific drugs. Mycin informs itself about particular cases by requesting information from the physician about a patient’s symptoms, general condition, history, and laboratory-test results that can be obtained easily and quickly. At each point, the question Mycin asks is determined by its current hypothesis and the answers to all previous questions. Thus, the questions start as though taken from a checklist, but the questions then vary as evidence builds. Here is an excerpt from a sample session:

- What is the patient’s name?

John Doe.

- Male or female?

Male.

- Age?

He is 55.

- Have you obtained positive cultures indicating general type?

Yes.

- What type of infection is it?

Primary bacteremia.

- When did symptoms first appear?

May 5.

- Let’s call the most positive culture C1. From what site was C1 taken?

From the blood.

- When?

May 9.

- Let’s call the first significant organism from this culture U1. Do you know the identity of U1?

No.

- Is U1 a rod or a coccus or something else?

Rod.

- What is the gram stain of U1?

Gram negative.

- Have there been positive cultures from sites that are not associated with the infections about which you are seeking advice?

No.

As we proceed through the processing stages of computer vision, we will no doubt be impressed by the similarities and parallels one can draw between vision processing and natural language processing. The sensor stage in vision corresponds to speech recognition in language understanding; the low and intermediate processing levels of vision correspond to syntactic and semantic language processing, respectively; and high-level processing, in both cases, corresponds to the process of building and interpreting high-level knowledge structures.

Knowledge Representation.

KNOWLEDGE REPRESENTATION:-

For the purpose of solving complex problems encountered in AI, we need both a large amount of knowledge and some mechanism for manipulating that knowledge to create solutions to new problems. A variety of ways of representing knowledge (facts) have been exploited in AI programs. In all varieties of knowledge representation, we deal with two kinds of entities:

A. Facts: Truths in some relevant world. These are the things we want to represent.

B. Representations of facts in some chosen formalism. These are the things we will actually be able to manipulate.

One way to think of structuring these entities is at two levels : (a) the knowledge level, at which facts are described, and (b) the symbol level, at which representations of objects at the knowledge level are defined in terms of symbols that can be manipulated by programs.

The facts and representations are linked with two-way mappings. This link is called representation mappings. The forward representation mapping maps from facts to representations. The backward representation mapping goes the other way, from representations to facts.

One common representation is natural language (particularly English) sentences. Regardless of the representation for facts we use in a program , we may also need to be concerned with an English representation of those facts in order to facilitate getting information into and out of the system. We need mapping functions from English sentences to the representation we actually use and from it back to sentences.
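The forward and backward mappings between English sentences and an internal representation can be sketched very simply. The sentence pattern and the tuple representation below are invented for illustration; a real system would use a much richer parser and formalism.

```python
# Two-way representation mappings for facts of the form "X is a Y".

def forward(sentence):
    """Forward mapping: 'Spot is a dog' -> ('dog', 'Spot'),
    a predicate-calculus-style representation the program can manipulate."""
    subject, _, _, category = sentence.split()
    return (category, subject)

def backward(representation):
    """Backward mapping: ('dog', 'Spot') -> 'Spot is a dog'."""
    category, subject = representation
    return f"{subject} is a {category}"

fact = "Spot is a dog"
rep = forward(fact)      # the internal form: ('dog', 'Spot')
# backward(rep) recovers the English sentence for output to the user
```

Reasoning happens on the internal tuples; the two mappings only move information into and out of the system, as the paragraph above describes.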

Knowledge Acquisition.

KNOWLEDGE ACQUISITION:-

How can we build expert systems? Typically, a knowledge engineer interviews a domain expert to elucidate expert knowledge, which is then translated into rules. After the initial system is built, it must be refined until it approximates expert-level performance. This process is expensive and time-consuming, so it is worth looking for more automatic ways of constructing expert knowledge bases. While no fully automatic knowledge acquisition systems yet exist, there are many programs that interact with domain experts to extract knowledge efficiently. These programs provide support for the following activities:

A. Entering Knowledge

B. Maintaining knowledge base consistency

C. Ensuring knowledge base completeness

Further, statistical techniques, such as multivariate analysis, provide an alternative approach to building expert-level systems.

Expert Systems.

EXPERT SYSTEMS

OBJECTIVES

On completion of this lesson, you should be able to

- Explain the meaning of expert system.

- Explain the capabilities of expert system.

- Explain the role of knowledge acquisition.

- Explain the importance of knowledge representation.

- Explain the approaches to knowledge representation.

- Explain the issues in knowledge representation.

- Explain what the frame problem is.

- Explain the importance of predicate logic.

- Explain the use of rules in representing knowledge.

- Explain forward versus backward reasoning.

- Explain matching

- Explain statistical reasoning, fuzzy logic, semantic nets, frames, conceptual dependency and scripts.

- Explain case-based reasoning

- Give a short note on DENDRAL and MYCIN.

Expert systems solve problems that are normally solved by human “experts”. To solve expert-level problems:

 
(A) Expert systems need access to a substantial domain knowledge base, which must be built as efficiently as possible.

(B) Expert systems also need to exploit one or more reasoning mechanisms to apply their knowledge to the given problems.

(C) Expert systems need a mechanism for explaining what they have done to the users who rely on them.

(D) Expert systems represent applied AI in a very broad sense.

The problems that expert systems deal with are highly diverse. There are some general issues that arise across varying domains. Also, there are powerful techniques that can be defined for specific classes of problems. Some key problem characteristics play an important role in guiding the design of problem-solving systems. For example, tools that are developed to support one classification or diagnosis task are often useful for another, while different tools are useful for solving various kinds of design tasks.

Expert systems are complex AI programs. Almost all the techniques of AI (including heuristic techniques) are used in expert systems. The most widely used way of representing domain knowledge in expert systems is a set of production rules, which are often coupled with a frame system that defines the objects that occur in the rules. MYCIN is one such system.

If an expert system is to be an effective tool, people must be able to interact with it easily. To facilitate this interaction, the expert system must have the following two capabilities in addition to the ability to perform its underlying task.

(A) Explain its reasoning:-

In many of the domains in which expert systems operate, people will not accept results unless they have been convinced of the accuracy of the reasoning process that produced those results. This is particularly true, for example, in medicine, where a doctor must accept ultimate responsibility for a diagnosis, even if that diagnosis was arrived at with considerable help from a program. Thus it is important that the reasoning process used in such programs proceed in understandable steps and that enough meta-knowledge (knowledge about the reasoning process) be available so that the explanations of those steps can be generated.

(B) Acquire new knowledge and modifications of old knowledge:-

Since expert systems derive their power from the richness of the knowledge bases they exploit, it is extremely important that those knowledge bases be as complete and as accurate as possible. But often there exists no standard codification of that knowledge; rather, it exists only inside the heads of human experts. One way to extract it is through interaction with a human expert. Another way is to have the program learn expert behavior from new data.



Statistical Reasoning and Fuzzy Logic.

STATISTICAL REASONING

There are several techniques that can be used to augment knowledge representation techniques with statistical measures that describe levels of evidence and belief. An important goal for many problem-solving systems is to collect evidence as the system goes along and to modify its behavior on the basis of that evidence. To model this behavior, we need a statistical theory of evidence. Bayesian statistics is such a theory, which stresses conditional probability as a fundamental notion.
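The conditional-probability notion at the heart of Bayesian statistics can be shown with a short worked example. The prior and the two likelihoods below are invented numbers, chosen only to make the arithmetic visible.

```python
# Bayes' rule: P(H | E) = P(E | H) P(H) / P(E),
# where P(E) = P(E | H) P(H) + P(E | not H) P(not H).

def bayes(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Suppose a disease has prior probability 0.01, a test detects it with
# probability 0.9, and false positives occur with probability 0.05.
posterior = bayes(0.01, 0.9, 0.05)
# the evidence raises belief in the disease from 1% to roughly 15%
```

This is exactly the behavior-modification-by-evidence the paragraph describes: each new piece of evidence updates the system's degree of belief.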

FUZZY LOGIC

In fuzzy logic, we consider what happens if we make fundamental changes to our idea of set membership and corresponding changes to our definitions of logical operations. While traditional set theory defines set membership as a Boolean predicate, fuzzy set theory allows us to represent set membership as a possibility distribution, such as tall or very tall for the set of tall people and the set of very tall people. This contrasts with the standard Boolean definition for tall people, where one is either tall or not and there must be a specific height that defines the boundary. The same is true for very tall. In fuzzy logic, one’s tallness increases with one’s height until the value 1 is reached, so membership is a distribution. Once set membership has been redefined in this way, it is possible to define a reasoning system based on techniques for combining distributions. Such reasoners have been applied in control systems for devices as diverse as trains and washing machines.
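A graded membership function for "tall" can be sketched as follows. The breakpoints of 150 cm and 190 cm are illustrative choices, not standard values.

```python
# Fuzzy membership in the set "tall": 0 below 150 cm, 1 above 190 cm,
# and a linear ramp in between -- a distribution, not a Boolean cutoff.

def tall(height_cm):
    """Degree of membership in the fuzzy set of tall people."""
    if height_cm <= 150:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 150) / 40

def very_tall(height_cm):
    """A common fuzzy hedge: 'very' squares the membership degree,
    so it shrinks intermediate degrees but leaves 0 and 1 fixed."""
    return tall(height_cm) ** 2

# tall(170) is 0.5: neither clearly tall nor clearly not tall,
# whereas a Boolean definition would force a yes/no answer.
```

Note how `very_tall` is derived from `tall` by combining distributions, the same kind of operation a fuzzy reasoner applies throughout.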

Matching.

MATCHING

So far, we have seen the process of using search to solve problems as the application of appropriate rules to individual problem states to generate new states to which the rules can then be applied, and so forth, until a solution is found. Clever search involves choosing from among the rules that can be applied at a particular point, the ones that are most likely to lead to a solution. We need to extract from the entire collection of rules, those that can be applied at a given point. To do so requires some kind of matching between the current state and the preconditions of the rules. How should this be done?

One way to select applicable rules is to do a simple search through all the rules, comparing each rule’s preconditions to the current state and extracting all the ones that match. But there are two problems with this simple solution:

A. It requires the use of a large number of rules. Scanning through all of them would be hopelessly inefficient.

B. It is not always immediately obvious whether a rule’s preconditions are satisfied by a particular state.

Sometimes, instead of searching through the rules, we can use the current state as an index into the rules and select the matching ones immediately. In spite of limitations, indexing in some form is very important for the efficient operation of rule-based systems.
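Indexing by state features can be sketched as follows. The rules, features, and actions are invented for illustration; the point is only that the state selects candidate rules directly instead of forcing a scan over the whole rule set.

```python
from collections import defaultdict

# Hypothetical rules: name -> (preconditions, action).
RULES = {
    "r1": ({"hungry", "has_food"}, "eat"),
    "r2": ({"tired"}, "sleep"),
    "r3": ({"hungry"}, "find_food"),
}

# Build the index once: feature -> names of rules mentioning it.
index = defaultdict(set)
for name, (preconditions, _action) in RULES.items():
    for feature in preconditions:
        index[feature].add(name)

def applicable(state):
    """Use the current state as an index into the rules, then verify
    that each candidate's full preconditions are satisfied."""
    candidates = set().union(*(index[f] for f in state if f in index))
    return sorted(name for name in candidates
                  if RULES[name][0] <= state)   # subset test: preconditions hold

# applicable({"hungry"}) looks only at r1 and r3, never touching r2,
# and keeps r3, whose preconditions are fully satisfied.
```

With thousands of rules, the index lookup replaces the hopelessly inefficient full scan noted in problem A above.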

A more complex matching process is required when the preconditions of a rule specify required properties that are not stated explicitly in the description of the current state. In this case, a separate set of rules must be used to describe how some properties can be inferred from others. An even more complex matching process is required if rules are to be applied when their preconditions only approximately match the current situation. This is often the case in situations involving physical descriptions of the world.

Case Based Reasoning.

Case-Based Reasoning:

Many AI programs solve problems by reasoning from first principles, and they can explain their reasoning by reporting the string of deductions that led from the input data to the conclusion. Human experts behave differently.

An expert encountering a new problem is usually reminded of similar cases seen in the past, remembering the results of those cases and perhaps the reasoning behind those results.

Medical expertise follows this pattern. Computer systems that solve new problems by analogy with old ones are often called case-based reasoning (CBR) systems.

A successful CBR system must answer the following questions:

1. How are cases organized in memory ?

2. How are relevant cases retrieved from the memory ?

3. How can previous cases be adapted to new problems ?

4. How are cases originally acquired ?

The memory structures we discussed in the previous section are clearly relevant to CBR. To use memory effectively, we must have a rich indexing mechanism: when we are presented with a problem, we should be reminded of relevant past experience. The important features are not always the most obvious ones.
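A minimal sketch of retrieval by shared indexing features (the case base here is invented for illustration):

```python
# Retrieve the stored case that shares the most indexing features with
# the new problem -- a crude nearest-neighbour match.
cases = [
    {"features": {"fever", "spots", "child"}, "solution": "measles"},
    {"features": {"fever", "cough"}, "solution": "flu"},
    {"features": {"rash", "itching"}, "solution": "allergy"},
]

def retrieve(problem_features, case_base):
    # Score each case by the size of its feature overlap with the problem.
    return max(case_base, key=lambda c: len(c["features"] & problem_features))

best = retrieve({"fever", "spots"}, cases)
print(best["solution"])  # -> measles
```

As the steak/barbershop example shows, real reminding needs far more abstract indices than literal feature overlap.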

X described how his wife would never cook his steak as rare as he liked it. When X told this to Y, Y was reminded of a time, 30 years earlier, when he tried to get his hair cut in England and the barber just would not cut it as short as he wanted.

Clearly the indices steak, wife, and rare are insufficient to remind Y of the barbershop episode.

Some features are only important in certain contexts.

Memory Organization.

Memory Organization:

Memory is central to commonsense behavior. Human memory contains an immense amount of knowledge about the world, and memory is also the basis for learning. A system that cannot learn cannot, in practice, possess common sense. A complete theory of human memory has not yet been discovered, but we do have a number of facts at our disposal. Some of these facts come from neurobiology, while others are psychological in nature. Computer models of neural memory are interesting, but they do not serve as theories of how memory is used in everyday commonsense reasoning. AI approaches these issues from a more psychological angle.

Psychological studies suggest several distinctions in Human memory. One distinction is between Short Term Memory (STM) and Long Term Memory (LTM).

LTM is often divided into episodic memory and semantic memory. Episodic memory contains information about past personal experiences. Semantic memory, on the other hand, contains facts like “birds fly”. These facts are no longer connected with personal experiences.

Models for episodic memory grew out of research on scripts. Recall that a script is a stereotyped sequence of events, such as those involved in going to the dentist. In general it is difficult to know which script to retrieve; one reason for this is that scripts are too monolithic. It is hard to do any kind of partial matching, and it is also hard to modify a script. More recent work breaks scripts down into individual scenes.

Usually three distinct memory organization packets (MOPs) encode knowledge about an event sequence:

One MOP represents the physical sequence of events.

Another MOP represents the set of social events that take place.

A third MOP revolves around the goals of the person in the particular episode.

MOPs organize scenes, and they themselves are further organized into higher-level MOPs. For example, the MOP for visiting the office of a professional may contain a sequence of abstract, general scenes, such as talking to an assistant, waiting, and meeting. High-level MOPs contain no actual memories. New MOPs are created upon the failure of expectations. With MOPs, memory is both a constructive and a reconstructive process. It is constructive because new experiences create new memory structures. It is reconstructive because even if the details of a particular episode are lost, the MOP provides information about what was likely to have happened. The ability to do this kind of reconstruction is an important feature of human memory.

There are several MOP-based computer programs. The CYRUS program contains episodes taken from the life of a particular individual and can answer questions that require significant amounts of memory reconstruction. The IPP program accepts stories about terrorist attacks and stores them in an episodic memory; its memory structures improve its ability to understand later stories.

Natural Language Processing.

NATURAL LANGUAGE PROCESSING


Processing written text requires lexical, syntactic, and semantic knowledge of the language, as well as the required real-world information.

Processing spoken language requires all of the information needed above, plus additional knowledge about phonology and enough added information to handle the further ambiguities that arise in speech.

Understanding a sentence requires a mapping between two levels:

1) The source language.

2) The target representation.

Steps in Natural Language Processing:

1) Morphological Analysis: Individual words are analyzed into their components, and non-word tokens, such as punctuation, are separated from the words.

2) Syntactic Analysis: Linear sequences of words are transformed into structures that show how the words relate to each other. Some word sequences may be rejected if they violate the language’s rules for how words may be combined.

3) Semantic Analysis: The structures created by the syntactic analyzer are assigned meanings.

4) Discourse Integration: The meaning of an individual sentence may depend on the sentences that precede it, and may influence the meanings of the sentences that follow it.

5) Pragmatic Analysis: The structure representing what was said is reinterpreted to determine what was actually meant. For example, the sentence “Do you know what time it is?” should be interpreted as a request to be told the time.
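The five steps can be sketched as a toy pipeline. Every stage below is a deliberately simplified stand-in, not a real analyzer:

```python
# Each stage passes an enriched representation to the next.
def morphological(text):
    # Separate punctuation tokens from words.
    return text.replace("?", " ?").split()

def syntactic(tokens):
    # A real parser would build a tree; here we only tag the sentence type.
    return {"tokens": tokens,
            "type": "question" if "?" in tokens else "statement"}

def semantic(parse):
    # Assign a crude meaning to the structure.
    parse["meaning"] = "query(time)" if "time" in parse["tokens"] else "unknown"
    return parse

def discourse(meaning, context=None):
    # Would resolve references against preceding sentences; none here.
    return meaning

def pragmatic(meaning):
    # Reinterpret a question about knowledge of the time as a request.
    if meaning["type"] == "question" and meaning["meaning"] == "query(time)":
        return "request: tell the time"
    return meaning["meaning"]

result = pragmatic(discourse(semantic(syntactic(
    morphological("Do you know what time it is?")))))
print(result)  # -> request: tell the time
```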

To make the overall language understanding problem tractable, it helps to distinguish between the following two ways of decomposing a program:


The processes and the knowledge required to perform the task.

The global control structure that is imposed on those processes.

Tuesday, July 27, 2010

Common Sense Ontologies.

A computer program that interacts with the real world must be able to reason about things like time, space, and materials. As fundamental and commonsensical as these concepts may be, modeling them turns out to present some problems.

Time: While physicists and philosophers still debate the true nature of time, we all manage to get by on a few basic commonsense notions. These notions help us decide when to initiate actions, how to reason about other agents’ actions, and how to determine relationships between events.

For instance: if A is before B and C is after B, then we can easily infer that C is after A. A commonsense theory of time must account for reasoning of this kind. The most basic notion of time is that it is occupied by events, and these events occur during intervals, continuous spaces of time. What kinds of things might we want to say about an interval?

An interval has a starting point, an ending point, and a duration defined by those points. Intervals can also be related to other intervals. As the following diagram shows, there are exactly thirteen ways in which two non-empty time intervals can relate to one another. There are actually only seven distinct relationships: equality, plus six other relationships each of which has an inverse.

Thirteen possible relationships between two time intervals.
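As a rough illustration, the non-inverse relations can be computed from interval endpoints (a minimal sketch; only one member of each inverse pair is named):

```python
# Classify how interval a relates to interval b; intervals are (start, end).
def relation(a, b):
    (s1, e1), (s2, e2) = a, b
    if (s1, e1) == (s2, e2):
        return "equals"
    if e1 < s2:
        return "before"
    if e1 == s2:
        return "meets"
    if s1 == s2 and e1 < e2:
        return "starts"
    if s1 > s2 and e1 == e2:
        return "finishes"
    if s1 > s2 and e1 < e2:
        return "during"
    if s1 < s2 < e1 < e2:
        return "overlaps"
    return "inverse of one of the above"

print(relation((1, 3), (4, 6)))  # -> before
print(relation((1, 3), (3, 6)))  # -> meets
print(relation((2, 4), (1, 6)))  # -> during
```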

Space:

E.g., the blocks world problem: primitives in this domain include block names, actions like PICKUP and STACK, and predicates like ON(x,y). If we want a real robot to achieve ON(x,y), then that robot had better know what “on” really means, where x and y are located, how big they are, how they are shaped, how to align x on top of y so that x won’t fall off, and so forth. These requirements become even more apparent if we want to issue commands like “Place block x near block y.” Commonsense notions of space are critical for living in the real world.

Objects have spatial extent, while events have temporal extent. We might therefore try to expand our commonsense theory of time into a commonsense theory of space. But because space is three-dimensional, there are far more than thirteen possible spatial relationships between two objects.

For instance, consider one block on top of another. The objects may be equal in the length and width dimensions while they meet in the height dimension, or we may need the spatial equivalent of IS-DURING to describe the length and width relationships. The main problem with this approach is that it generates a vast number of relations (namely 13³ = 2197), many of which are not commonsensical.

In our discussion of qualitative physics we saw how to build abstract models by transforming real-valued variables into discrete quantity spaces. We can also view objects and spaces at various levels of abstraction. Choosing a set of relevant properties amounts to viewing the world at a particular level of granularity, and different granularities are systematically related to each other.

Materials: Why can’t you walk on water? What happens if you turn a glass of water upside down? What happens when you pour water into the soil of a potted plant?

Liquids present a particularly interesting and challenging domain to formalize; Hayes (1985) presented one attempt to describe them. Before we write down any properties of liquids, we must decide what kinds of objects those properties will describe. We define spatial relations in terms of the spaces occupied by the objects, not in terms of the objects themselves. It is particularly useful to take this point of view with liquids, since liquid “objects” can be split and merged easily.

Ex: If we consider a river to be a piece of liquid, what happens to the river when the liquid flows out into the ocean? Instead of continually changing our characterization of the river, it is more convenient to view the river as a fixed space occupied by the water.

Containers play an important role in the world of liquids. Since we do not want to refer to liquid objects, we must have another way of stating how much liquid is in a container. We can define a CAPACITY function to bound the amount of liquid that a space S can hold. The space is FULL when the AMOUNT equals the CAPACITY.

CAPACITY(S) ≥ AMOUNT(L,S) > none

FULL(S) ≡ AMOUNT(L,S) = CAPACITY(S)

We can also define an amount function.

AMOUNT(water,glass) > none.

This statement means “There is water in the glass.” Here water refers to the generic concept of water, and glass refers to the space enclosed by a particular glass.
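These relations can be sketched in code; the numeric capacity is a stand-in for a qualitative amount, and “none” is modeled as zero:

```python
# CAPACITY(S), AMOUNT(L, S), and FULL(S) for hypothetical container spaces.
capacity = {"glass": 250}            # CAPACITY(S)
amount = {("water", "glass"): 250}   # AMOUNT(L, S)

def contains(liquid, space):
    # "There is water in the glass": AMOUNT(water, glass) > none.
    return amount.get((liquid, space), 0) > 0

def full(space, liquid):
    # FULL(S) holds when the amount equals the space's capacity.
    return amount.get((liquid, space), 0) == capacity[space]

print(contains("water", "glass"), full("glass", "water"))  # -> True True
```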

Suppose our robot encounters a room with 6 inches of water on the floor. What will happen if the robot touches the floor? By the definition of TOUCHING we have

∃d1 : OUTER(d1, Robot) ∧ OUTER(d1, Floor)

Since the floor has only one face, d, we can conclude

OUTER(d, Robot) ∧ OUTER(d, Floor)

Combining this with the fact WET-BY(d, water) gives us IS-WET(Robot). Recall that earlier our robot was about to try crossing a river without using a bridge. It might find this fact useful:

INSIDE(S1,S2) ∧ FREE(S1) ∧ FULL(S2,L) → FULL(S1,L)

It is straightforward to show that the robot gets wet if it is submerged; showing that it is submerged in the first place requires some envisionment.

We also need general rules describing how liquids themselves behave over time.

The following diagram shows five envisionments for lazy, bulk liquids. A containment event can become a falling event if the container tips. The falling event becomes a wetting event and then a spreading one. Depending on where the spreading takes place, further falling or flowing events may ensue. When all the liquid has left the container, the spreading will stop, and some time afterward a drying event will begin.

Other materials behave differently. Solids can be rigid or flexible: a string can be used to pull an object but not to push it. We can see that commonsense knowledge representation has a strongly taxonomic flavor.


Qualitative Physics.

Qualitative Physics:





People know a great deal about how the physical world works. Consider the following three situations.

a) The ball will probably bounce on the ground several times and come to rest.

b) The ball will travel up and to the right, then fall back down.

c) The ball will swing repeatedly from left to right, finally coming to rest in the middle. How can we build a computer program to do this kind of reasoning?

The obvious answer is to program in the equations governing the physical motion of objects, equations that date back to classical physics.

Ex: The initial velocity of the ball in fig. (b) is V₀, and the angle of its departure from the ground is θ. The ball’s position t seconds after launch is given by:

Height = V₀ · t · sin(θ) − ½ · g · t²

Distance = V₀ · t · cos(θ)

The goal of qualitative physics is to understand how to build and reason with abstract, numberless representations. One might object to qualitative physics on the grounds that computers are actually well suited to modeling physical processes numerically. But the goal of qualitative physics is not to replace traditional physics; rather, it is to provide a foundation for programs that can reason about the physical world. One such program might be a physics expert system.

Representing Qualitative Information:

Qualitative physics seeks to understand physical processes by building models of them. A model is an abstract representation that eliminates irrelevant details.

Traditional physics models are built up from real-valued variables, rates of change, expressions, equations, and states.

Qualitative physics provides similar building blocks, ones which are more abstract and non-numeric.

Variables: In traditional physics, real-valued variables are used to represent features of objects, such as position, velocity, angle, and temperature. Qualitative physics retains this notion but restricts each variable to a small, finite set of possible values.

For example, the amount of water in a pot can be represented as one of [empty, between, full], and its temperature as one of [frozen, between, boiling].

Quantity Space: A small set of discrete values for a variable is called a quantity space.

Rates of Change: Variables take on different values at different times. A real-valued rate of change (dx/dt) can be modeled qualitatively with the quantity space [decreasing, steady, increasing].

Expressions: Variables can be combined to form expressions. Consider representing the volume of water in a glass as [empty, between, full]. If we pour the contents of one glass into another, how much water will the second glass contain?

Empty + Empty = Empty.

Empty + Between = Between.

Empty + Full = Full.

Between + Between = [Between, Full].

Between + Full = Full + Overflow.

Full + Full = Full + Overflow.
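The table can be written down directly as a lookup; ambiguous sums return a set of possible values (“Full + Overflow” is collapsed into a single label here):

```python
# Qualitative addition of glass volumes, transcribed from the table above.
ADD = {
    ("empty", "empty"): {"empty"},
    ("empty", "between"): {"between"},
    ("empty", "full"): {"full"},
    ("between", "between"): {"between", "full"},   # ambiguous result
    ("between", "full"): {"full+overflow"},
    ("full", "full"): {"full+overflow"},
}

def qadd(a, b):
    # Addition is commutative, so look the pair up in either order.
    return ADD.get((a, b)) or ADD[(b, a)]

print(sorted(qadd("between", "between")))  # -> ['between', 'full']
```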

Equations: Expressions and variables can be linked to one another via equations. The simplest equation states that variable x increases as variable y increases. This gives us an abstract representation of the actual function relating x and y.

States: A state is a single snapshot in which each variable possesses one value. Within qualitative physics there are several different ways of formulating state information.

Reasoning with Qualitative Information:
A common reasoning method in qualitative physics is called qualitative simulation. The idea is to construct a sequence of discrete “episodes” that occur as qualitative variables change values. States are linked to other states by qualitative rules; some rules are very general.

For example, one simulation rule states that variables reach closer values before reaching farther ones, and another states that changing from one value to another consumes some finite amount of time. Other rules, such as those governing the motion of objects through the air, are more specific.

A network of all possible states and transitions for a qualitative system is called an envisionment. The above diagram shows an envisionment of the bouncing-ball system. There are often many paths through an envisionment, and each path is called a history. Envisionments are useful in a number of applications. Most importantly, they provide explanations for physical systems, and those explanations can be used to predict future behavior. In addition, if the system is an artificial one, such as a mechanical device, envisionments can be used to diagnose problems that occur when components fail to behave correctly. Envisionments can also be used to represent and repair inaccurate mental models that people may have.
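An envisionment can be sketched as a directed graph of qualitative states; the bouncing-ball states and transitions below are illustrative, not taken from any particular program:

```python
# States and transitions of a (simplified) bouncing-ball envisionment.
envisionment = {
    "falling": ["impact"],
    "impact": ["rising", "at-rest"],  # bounce again, or finally stop
    "rising": ["apex"],
    "apex": ["falling"],
    "at-rest": [],
}

def histories(state, depth):
    # Enumerate paths (histories) through the envisionment up to a depth.
    if depth == 0 or not envisionment[state]:
        return [[state]]
    return [[state] + rest
            for nxt in envisionment[state]
            for rest in histories(nxt, depth - 1)]

for h in histories("falling", 3):
    print(" -> ".join(h))
# falling -> impact -> rising -> apex
# falling -> impact -> at-rest
```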

In order to write programs that automatically construct envisionments, we must represent qualitative knowledge about the behavior of particular kinds of processes, substances, spaces, devices, and so on.

Common Sense.

Computers have an entirely deserved reputation for lacking “common sense”. An AI program may possess more knowledge than an accounting program, but it still computes using primitives that it knows nothing about. Consider the following interaction between a medical diagnosis system and a human.

System: How old is the patient?

Human: 33

System: Are there any spots on the patient’s body?

Human: Yes (noticing rust spots).

System: What color are the spots?

Human: Reddish Brown.

System: The patient has measles (Probability 0.9).

Obviously, the system does not really know what measles are, or what the difference between cars and people is. Even within its specialty, the system is unaware of fundamental facts, for example that humans have two arms. Clearly, what the system lacks is knowledge. A number of AI techniques can be used to enable an AI program to represent and reason with commonsense knowledge.

For example in predicate logic one can state facts such as “If you die, you are dead at all later times.”

Frames can describe everything objects and scripts can describe the typical sequence of events. Non-monotonic logics can support default reasoning, an important aspect of common sense.

Memory is another key aspect of common sense. We look at how a memory can organize experiences, generalize them and use them to solve new problems.

Expert System Experts.

Expert System Experts



For 25 years Exsys Inc has focused exclusively on knowledge automation expert systems and on creating development tools that are powerful enough to handle real-world problems, but still easy for non-programmers to quickly master. Applications built with Exsys tools are in use worldwide by business, government and military users. Exsys was one of the first companies to create expert system development tools, the first to create an effective way to deliver interactive expert systems over the Web, and now the first to put a powerful knowledge automation inference engine under Adobe Flash.



When Exsys Inc started in 1983, expert systems were viewed as an exotic solution to decision-making, more at home in Hollywood than in reality. In the 25 years since then, the rule based expert system approach to automating complex decision-making tasks has proven to be by far the most effective, reliable and proven way to capture and deliver the logic and process of human experts to solve complex problems. If you ask an expert how they made a particular decision, in most cases, they will explain it in “If/Then” style rules. This is the natural way that people think and describe how they made a decision. It is the same way that rules are written in Exsys Corvid. The development environment makes it easy to write rules, using just English (or whatever language you prefer) and algebra. The rules can be read and understood without learning a complex programming syntax. Corvid makes it easy to structure the rules so that they are complete and cover all scenarios. In fact, building a system is a very effective way to present the expert with all the possible cases and capture a much wider scope of their knowledge.



In addition to rules, there must be an Inference Engine to process them. The inference Engine is what drives the end user interaction. It determines what information is needed to reach a conclusion, whether the needed information can be derived from lower level rules and what to ask of the user. It processes user input to analyze the data, including using probabilistic rules to consider the “likelihood” of various options, and in the end, generates recommendations and advice based on the same logic and process as the human expert that built the system. The inference engine drives an interactive session with the user, asking questions as needed in a focused way that emulates a conversation with a human expert.



There is “If/Then” syntax in virtually every computer language, but it is the Inference Engine that gives rule based expert systems their special capabilities. The Exsys Inference Engine is the result of 25 years of enhancement, refinement and application to real-world problems. It is extensively used and proven in thousands of industry, government and military applications. The Exsys Corvid Inference Engine is Java based making it portable and ideal for Web applications.



The user interface of an application is the third important factor in system development. Exsys makes it easy to build and field systems with the Exsys Applet Runtime. The look and feel of the system can be easily set from within the Corvid development environment. For more complex interfaces, the Exsys Servlet Runtime can be used, allowing interface screens to be designed using HTML. For the most complex systems, the user interface can be designed using the multimedia capability of Adobe Flash integrated with the Exsys Servlet Runtime.



Exsys development tools have been used to build tens of thousands of expert systems and are the most proven environment you can use for your expert system project.






Script.

A script is a structured representation of background world knowledge. This structure contains knowledge about objects, actions, and situations that are described in the input text.
Consider, for example, our knowledge about shopping or entering a restaurant. This kind of stored knowledge about stereotypical events is called a script.

Frames.

FRAMES

Semantic networks and conceptual dependency can be used to represent specific events or experiences. A frame structure is used to analyze new situations from scratch and then build new knowledge structures to describe those situations. Typically, a frame describes a class of objects, such as CHAIR or ROOM. It consists of a collection of “slots” that describe aspects of the objects. Associated with each slot may be a set of conditions that must be met by any filler for it. Each slot may also be filled with a default value, so that, in the absence of specific information, things can be assumed to be as they usually are. Procedural information may also be associated with particular slots. AI systems exploit not one but many frames, and related frames can be grouped together to form a frame system.

Frames represent an object as a group of attributes, each stored in a separate slot. For example, when a furniture salesman says “I have a nice chair that I want you to see”, the word ‘chair’ would immediately trigger in our minds a series of expectations. We would probably expect to see an object with four legs, a seat, a back, and possibly (but not necessarily) two arms. We would expect it to have a particular size and to serve as a place to sit. In an AI system, a frame CHAIR might include knowledge organized as shown below:

Frame : CHAIR

Parts : seat, back, legs, arms

Number of legs : 4

Number of arms: 0 or 2

Default : 0
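The CHAIR frame above might be sketched in code as follows; the class and slot names are hypothetical:

```python
# A frame is a collection of slots; defaults apply when no filler is given.
class Frame:
    def __init__(self, name, slots, defaults=None):
        self.name = name
        self.slots = slots              # slot -> known fillers
        self.defaults = defaults or {}  # fallback values

    def get(self, slot):
        # Prefer a specific filler; otherwise assume things are as usual.
        return self.slots.get(slot, self.defaults.get(slot))

chair = Frame(
    "CHAIR",
    {"parts": ["seat", "back", "legs", "arms"], "number_of_legs": 4},
    defaults={"number_of_arms": 0},
)

print(chair.get("number_of_legs"))  # -> 4
print(chair.get("number_of_arms"))  # -> 0 (the default)
```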