UMD CMSC 421 - Intelligent Agents

Intelligent Agents
Russell and Norvig: Chapter 2
CMSC 421, Fall 2003

Intelligent Agent

Definition: An intelligent agent perceives its environment via sensors and acts rationally upon that environment with its actuators.

[Diagram: agent and environment, connected by sensors (percepts in) and actuators (actions out)]

Example: humans
  Sensors: eyes (vision), ears (hearing), skin (touch), tongue (gustation), nose (olfaction), neuromuscular system (proprioception)
  Percepts: at the lowest level, electrical signals; after preprocessing, objects in the visual field (location, textures, colors, ...), auditory streams (pitch, loudness, direction), ...
  Actuators: limbs, digits, eyes, tongue, ...
  Actions: lift a finger, turn left, walk, run, carry an object, ...

Notion of an Artificial Agent

[Diagram: robot agent and environment; sensors include a laser range finder, sonar, and touch sensors]

Vacuum Cleaner World

Two squares, A and B.
  Percepts: location and contents, e.g. [A, Dirty]
  Actions: Left, Right, Suck, NoOp

Vacuum Agent Function (partial):

  Percept sequence          Action
  [A, Clean]                Right
  [A, Dirty]                Suck
  [B, Clean]                Left
  [B, Dirty]                Suck
  [A, Clean], [A, Clean]    Right
  [A, Clean], [A, Dirty]    Suck
  ...

Rational Agent

What is rational depends on:
  Performance measure: the criterion that defines success
  Environment: the agent's prior knowledge of the environment
  Actuators: the actions that the agent can perform
  Sensors: the agent's percept sequence to date
We'll call all this the Task Environment (PEAS).

Vacuum Agent PEAS

Performance measure: minimize energy consumption, maximize dirt pick-up. Making this precise: one point for each clean square over a lifetime of 1000 steps.
Environment: two squares, dirt distribution unknown; assume actions are deterministic and the environment is static (clean squares stay clean).
Actuators: Left, Right, Suck, NoOp
Sensors: the agent can perceive its location and whether that location is dirty.

Automated Taxi Driving System

Performance measure: maintain safety, reach the destination, maximize profits (fuel, tire wear), obey laws, provide passenger comfort, ...
Environment: U.S. urban streets, freeways, traffic, pedestrians, weather, customers, ...
Actuators: steer, accelerate, brake, horn, speak/display, ...
Sensors: video, sonar, speedometer, odometer, engine sensors, keyboard input, microphone, GPS, ...

Autonomy

A system is autonomous to the extent that its own behavior is determined by its own experience. Therefore, a system is not autonomous if it is guided by its designer according to a priori decisions.
To survive, agents must have:
  Enough built-in knowledge to survive.
  The ability to learn.

Properties of Environments

Fully observable / partially observable: If an agent's sensors give it access to the complete state of the environment needed to choose an action, the environment is fully observable. Such environments are convenient, since the agent is freed from the task of keeping track of changes in the environment.

Deterministic / stochastic: An environment is deterministic if the next state of the environment is completely determined by the current state and the action of the agent. In a fully observable and deterministic environment, the agent need not deal with uncertainty.

Static / dynamic: A static environment does not change while the agent is thinking. The passage of time as the agent deliberates is irrelevant, and the agent does not need to observe the world during deliberation.

Discrete / continuous: If the number of distinct percepts and actions is limited, the environment is discrete; otherwise it is continuous.
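The PEAS descriptions above are structured enough to capture as data. A minimal sketch, using the vacuum world from the notes as the instance; the `PEAS` class and its field names are my own illustration, not part of the slides:

```python
from dataclasses import dataclass

# Hypothetical PEAS record: the class and field names are illustrative,
# not taken from the original notes.
@dataclass
class PEAS:
    performance_measure: str
    environment: str
    actuators: list
    sensors: list

# The vacuum-world task environment, as specified in the notes.
vacuum_peas = PEAS(
    performance_measure="one point per clean square per step, over 1000 steps",
    environment="two squares A and B; dirt distribution unknown; "
                "deterministic actions; static (clean squares stay clean)",
    actuators=["Left", "Right", "Suck", "NoOp"],
    sensors=["location", "dirt status at current location"],
)
```

Writing the taxi-driving PEAS as a second instance of the same record is a useful exercise in making the performance measure precise.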
Environment Characteristics

  Task environment     Fully Observable   Deterministic   Static   Discrete
  Solitaire            No                 Yes             Yes      Yes
  Backgammon           Yes                No              Yes      Yes
  Taxi driving         No                 No              No       No
  Internet shopping    No                 No              No       No
  Medical diagnosis    No                 No              No       No

→ Lots of real-world domains fall into the hardest case!

Some Agent Types

(0) Table-driven agents use a percept-sequence/action table in memory to find the next action. They are implemented by a (large) lookup table.
(1) Simple reflex agents are based on condition-action rules, implemented with an appropriate production system. They are stateless devices with no memory of past world states.
(2) Model-based reflex agents have internal state, which is used to keep track of past states of the world.
(3) Goal-based agents have, in addition to state information, goal information that describes desirable situations. Agents of this kind take future events into consideration.
(4) Utility-based agents base their decisions on classic axiomatic utility theory in order to act rationally.

(0) Table-Driven Agents

Table lookup of percept-action pairs, mapping from every possible perceived state to the optimal action for that state.
Problems:
  Too big to generate and to store (chess has about 10^120 states, for example)
  No knowledge of non-perceptual parts of the current state
  Not adaptive to changes in the environment; the entire table must be updated if changes occur
  Looping: can't make actions conditional on previous actions/states

(1) Simple Reflex Agents

Rule-based reasoning maps from percepts to the optimal action; each rule handles a collection of perceived states.
Problems:
  Still usually too big to generate and to store
  Still no knowledge of non-perceptual parts of state
  Still not adaptive to changes in the environment; the collection of rules must be updated if changes occur
  Still can't make actions conditional on previous state

(0/1) Table-Driven / Reflex Agent Architecture

[Architecture diagram]

Simple Vacuum Reflex Agent

  function Vacuum-Agent([location, status]) returns Action
      if status = Dirty then return Suck
      else if location = A then return Right
      else if location = B then return Left

(2) Model-Based Reflex Agents

Encode an "internal state" of the world to remember the past as contained in earlier percepts. This is needed because sensors do not usually give the entire state of the world at each input, so perception of the environment is captured over time. "State" is used to encode different "world states" that generate the same immediate percept.

(2) Model-Based Agent Architecture

[Architecture diagram]

(3) Goal-Based Agents

Choose actions so as to achieve a (given or computed) goal; a goal is a description of a desirable situation. Keeping track of the current state is often not enough: the agent needs to add goals to decide which situations are good. Deliberative instead of reactive.
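The Vacuum-Agent pseudocode and the model-based idea above can be made concrete. A runnable Python sketch: the world conventions ([location, status] percepts; Left, Right, Suck, NoOp actions) come from the slides, while the function and class names are my own.

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent: condition-action rules on the current percept only
    (a direct transcription of the Vacuum-Agent pseudocode)."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

class ModelBasedVacuumAgent:
    """Model-based reflex agent: keeps internal state (the last known status
    of each square) and issues NoOp once both squares are believed clean.
    Valid here because the slides assume a static world where clean squares
    stay clean."""
    def __init__(self):
        self.model = {"A": None, "B": None}  # internal state: believed status

    def __call__(self, percept):
        location, status = percept
        self.model[location] = status  # update internal state from the percept
        if status == "Dirty":
            self.model[location] = "Clean"  # sucking leaves the square clean
            return "Suck"
        if self.model["A"] == "Clean" and self.model["B"] == "Clean":
            return "NoOp"  # both squares believed clean: nothing left to do
        return "Right" if location == "A" else "Left"
```

The reflex agent loops forever between A and B even when both squares are clean; the model-based variant avoids exactly the "can't make actions conditional on previous state" problem listed above.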

