Tuesday, 31 January 2017

AI Approach



AI APPROACHES

1.      CYBERNETICS AND BRAIN SIMULATION (CYBERNETICS AND COMPUTATIONAL NEUROSCIENCE)
✓  There is no consensus on how closely the brain should be simulated.

✓  In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics.

✓  Some of them built machines that used electronic networks to exhibit rudimentary intelligence.

✓  By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

2.      SYMBOLIC (GOOD OLD-FASHIONED ARTIFICIAL INTELLIGENCE)

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation.

Cognitive simulation

✓  Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them; their work laid the foundations of the field of artificial intelligence, as well as of cognitive science, operations research and management science.

✓  Their research team performed psychological experiments to demonstrate the similarities between human problem solving and the behavior of their programs.

Logic-based

✓  John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.

✓  This approach used formal logic to solve a wide variety of problems, including knowledge representation, planning and learning, as illustrated by the sketch below.
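To make the logic-based idea a little more concrete, here is a minimal sketch (not from the original text) that encodes a few toy facts and implications as propositional sentences and checks what the knowledge base entails by enumerating truth assignments. The symbols rains, wet_ground and slippery are invented for illustration.

```python
from itertools import product

# Toy knowledge base in propositional logic (symbols are hypothetical):
#   rains -> wet_ground,   wet_ground -> slippery,   rains
symbols = ["rains", "wet_ground", "slippery"]

def kb(m):
    """True iff the model m (a dict symbol -> bool) satisfies every KB sentence."""
    return ((not m["rains"] or m["wet_ground"]) and
            (not m["wet_ground"] or m["slippery"]) and
            m["rains"])

def entails(query):
    """KB |= query iff the query holds in every model that satisfies the KB."""
    for values in product([True, False], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if kb(m) and not query(m):
            return False
    return True

print(entails(lambda m: m["slippery"]))   # True: the KB entails 'slippery'
print(entails(lambda m: m["rains"]))      # True
```

Real logic-based systems use richer logics (typically first-order logic) and far more efficient inference procedures such as resolution, rather than brute-force enumeration of models.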

"Anti-logic" or "scruffy"

✓  Marvin Minsky and Seymour Papert found that solving difficult problems in vision and natural language processing required ad-hoc solutions – they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior.

✓  Roger Schank described their "anti-logic" approaches as "scruffy". Commonsense knowledge bases are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.

Knowledge-based

✓  When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications. This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.

3.      SUB-SYMBOLIC

Bottom-up, embodied, situated, behavior-based or nouvelle AI

✓  Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive. Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. These approaches are also conceptually related to the embodied mind thesis.

 

Computational Intelligence

✓  Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the middle 1980s. These and other sub-symbolic approaches, such as fuzzy systems and evolutionary computation, are now studied collectively by the emerging discipline of computational intelligence.
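As a minimal illustration of the connectionist style, the sketch below trains a single perceptron, one of the simplest neural network models, on the logical AND function. The training data, learning rate and epoch count are arbitrary choices made for this example.

```python
# Minimal connectionist sketch: one perceptron learning the AND function.
# The training data, learning rate, and epoch count are illustrative choices.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    # Step activation: fire (1) if the weighted sum exceeds the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):
    for x, target in data:
        error = target - predict(x)
        # Perceptron learning rule: nudge the weights to reduce the error.
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])   # expected: [0, 0, 0, 1]
```

Modern connectionist systems stack many such units into multi-layer networks trained with gradient descent rather than this simple update rule.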

4.      STATISTICAL

✓  In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific sub-problems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes.

✓  The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research).
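As one example of this statistical style of tool, the sketch below implements a tiny naive Bayes text classifier with add-one smoothing. The "spam"/"ham" corpus is invented for illustration; a real system would be trained on far more data.

```python
from collections import Counter
import math

# Toy "spam vs. ham" corpus; the messages and labels are invented for illustration.
train = [("buy cheap pills now", "spam"),
         ("cheap pills cheap deal", "spam"),
         ("meeting agenda for monday", "ham"),
         ("lunch on monday then meeting", "ham")]

# Count words per class and class frequencies.
word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def log_posterior(text, label):
    """log P(label) + sum of log P(word | label), with add-one smoothing."""
    total = sum(word_counts[label].values())
    score = math.log(class_counts[label] / sum(class_counts.values()))
    for w in text.split():
        score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return score

def classify(text):
    return max(("spam", "ham"), key=lambda label: log_posterior(text, label))

print(classify("cheap pills"))      # expected: spam
print(classify("monday meeting"))   # expected: ham
```

The point of such tools is that their behavior is fully determined by probability estimates that can be measured and checked against data.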

5.      RULE-BASED SYSTEMS

✓  The simplest form of artificial intelligence, which is generally used in industry, is the rule-based system, also known as the expert system.

✓  A rule-based system is a way of encoding a human expert's knowledge in a fairly narrow area into an automated system.

✓  The knowledge of the expert is captured in a set of rules, each of which encodes a small piece of the expert's knowledge. Each rule has a left-hand side and a right-hand side. The left-hand side contains information about certain facts and objects which must be true in order for the rule to potentially fire (that is, execute).

✓  Any rules whose left-hand sides match in this manner at a given time are placed on an agenda. One of the rules on the agenda is picked (exactly which one depends on the engine's conflict-resolution strategy), its right-hand side is executed, and it is then removed from the agenda. The agenda is then updated (generally using a special algorithm called the Rete algorithm), and a new rule is picked to execute. This continues until there are no more rules on the agenda; a minimal sketch of this loop follows below.
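The sketch below is a toy version of that loop, with each rule made of a left-hand side (a set of facts that must all be present) and a right-hand side (facts to assert). It simply re-matches every rule on each cycle instead of using the Rete algorithm, and its rules and facts are invented for illustration.

```python
# Minimal rule-based ("expert system") sketch. Each rule has a left-hand side
# (facts that must all be present) and a right-hand side (facts to assert).
# A real engine would use the Rete algorithm to update the agenda incrementally;
# this toy version re-matches all rules every cycle. The rules and facts below
# are invented for illustration.
rules = [
    {"name": "fever-and-cough", "lhs": {"fever", "cough"}, "rhs": {"suspect-flu"}},
    {"name": "flu-advice",      "lhs": {"suspect-flu"},    "rhs": {"recommend-rest"}},
]

facts = {"fever", "cough"}
fired = set()

while True:
    # Agenda: rules whose left-hand side matches the current facts and that
    # have not fired yet (so the same rule is not executed forever).
    agenda = [r for r in rules
              if r["lhs"] <= facts and r["name"] not in fired]
    if not agenda:
        break
    rule = agenda[0]          # conflict resolution: here, simply take the first
    facts |= rule["rhs"]      # execute the right-hand side
    fired.add(rule["name"])   # remove the rule from future agendas

print(facts)   # expected to include 'suspect-flu' and 'recommend-rest'
```

A production-quality engine would also allow variables in the left-hand side and use Rete so that it does not have to re-match every rule on every cycle.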

6.      KNOWLEDGE REPRESENTATION

✓  Knowledge representation is an area in artificial intelligence that is concerned with how to formally "think", that is, how to use a symbol system to represent "a domain of discourse" (that which can be talked about), along with functions (which may or may not themselves be within the domain of discourse) that allow inference, i.e. formalized reasoning, about the objects within the domain of discourse.

✓  Generally speaking, some kind of logic is used both to supply a formal semantics of how reasoning functions apply to symbols in the domain of discourse, and to supply (depending on the particulars of the logic) operators such as quantifiers, modal operators, etc. that, along with an interpretation theory, give meaning to the sentences in the logic.

✓  When we design a knowledge representation (and a knowledge representation system to interpret sentences in the logic in order to derive inferences from them) we have to make trade-offs across a number of design spaces, described in the following sections.
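As a minimal illustration of the idea, the sketch below represents a toy domain of discourse with ground facts as predicate tuples and applies one universally quantified rule, parent(x, y) ∧ parent(y, z) → grandparent(x, z), by checking every binding over the domain. The predicates and individuals are invented for illustration.

```python
from itertools import product

# Toy domain of discourse: individuals and ground facts as predicate tuples.
# The predicates and individuals are invented for illustration.
individuals = {"tom", "bob", "ann"}
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def infer_grandparents(facts):
    """Apply the quantified rule
       forall x, y, z: parent(x, y) and parent(y, z) -> grandparent(x, z)
       by checking every binding of x, y, z over the domain of discourse."""
    derived = set(facts)
    for x, y, z in product(individuals, repeat=3):
        if ("parent", x, y) in derived and ("parent", y, z) in derived:
            derived.add(("grandparent", x, z))
    return derived

print(("grandparent", "tom", "ann") in infer_grandparents(facts))   # True
```

A real knowledge representation system would provide a general syntax and inference mechanism for quantified sentences, rather than hand-coding each rule as a loop like this.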
