Task about Navigating A Robot Out Of A Maze


Anything where memory is required to do well will thwart a reflex agent.

(c) There exists a task environment in which every agent is rational. True. Consider a task environment in which all actions (including no action) give the same, equal reward.

(d) The input to an agent program is the same as the input to the agent function. False. The input to an agent function is the percept history; the input to an agent program is only the current percept. It is up to the agent program to record any relevant history it needs to choose actions.

(e) Every agent function is implementable by some program/machine combination. False. Consider an agent whose only action is to return an integer, and who perceives a bit each turn. It gains a point of performance if the integer returned matches the value of the entire bit string perceived so far. Eventually, any agent program will fail because it will run out of memory.
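
To make 1(e) concrete, here is a minimal Python sketch (illustrative only; the names and structure are ours, not the exercise's) of an idealized agent program for this agent function:

    def make_ideal_agent():
        """Idealized agent program with unbounded memory."""
        value = 0  # integer value of the bit string perceived so far

        def agent_program(percept_bit):
            nonlocal value
            value = value * 2 + percept_bit  # append the new bit
            return value

        return agent_program

    agent = make_ideal_agent()
    for bit in [1, 0, 1, 1]:
        print(agent(bit))  # 1, 2, 5, 11

    # On a real machine, `value` needs one more bit per percept, so any
    # fixed amount of storage is eventually exhausted: no program/machine
    # combination implements this agent function for all percept histories.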

(f) Suppose an agent selects its action uniformly at random from the set of possible actions. There exists a deterministic task environment in which this agent is rational. True. Again consider the "all actions always give equal reward" case.

(g) It is possible for a given agent to be perfectly rational in two distinct task environments. True. Consider two environments based on betting on the outcome of a roll of two dice.

In one environment the dice are fair; in the other, the dice are biased to always give 3 and 4. The agent can bet on what the sum of the dice will be, with equal reward on all possible outcomes for guessing correctly. The agent that always bets on 7 will be rational in both cases.

(h) Every agent is rational in an unobservable environment. False. Built-in knowledge can make an agent rational in an unobservable environment: a vacuum-cleaning agent that alternately cleans and moves would be rational, but one that never moves would not be.
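
A quick simulation makes 1(g) concrete. The sketch below (illustrative code, not part of the original solution) estimates the expected reward of each possible bet in both environments; always betting 7 comes out best in each:

    import random

    def roll_fair():
        return random.randint(1, 6) + random.randint(1, 6)

    def roll_biased():
        return 3 + 4  # the biased dice always give 3 and 4

    def expected_reward(bet, roll, trials=100_000):
        """Fraction of rolls whose sum matches `bet` (reward 1 if correct)."""
        return sum(roll() == bet for _ in range(trials)) / trials

    for name, roll in [("fair", roll_fair), ("biased", roll_biased)]:
        best = max(range(2, 13), key=lambda b: expected_reward(b, roll))
        print(name, "best bet:", best)  # 7 in both (w.h.p. for the fair dice)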

(i) A perfectly playing poker-playing agent never loses. False. Pit two perfectly playing agents against each other: someone (the one with poorer luck) must lose.

2. (Exercise 2.4) For each of the following activities, give a PEAS description of the task environment and characterize it in terms of the properties listed in Section 2.3.2 (Properties of Task Environments, R&N 2nd Ed.). A minimal way to encode such a description in code is sketched after the list.

Playing soccer. P: Win/lose. E: Soccer field. A: Legs, head, upper body. S: Eyes, ears. Partially observable, multiagent, stochastic, sequential, dynamic, continuous, unknown.

Exploring the subsurface oceans of Titan. P: Surface area mapped, extraterrestrial life found. E: Subsurface oceans of Titan. A: Steering, accelerator, brake, probe arm. S: Camera, sonar, probe sensors. Partially observable, single agent, stochastic, sequential, dynamic, continuous, unknown.

Shopping for used AI books on the Internet. P: Cost of book; quality, relevance, correct edition. E: The Internet's used book shops. A: Key entry, cursor movement, website interfaces, browser. Partially observable, multiagent, stochastic, sequential, dynamic, continuous, unknown.

Playing a tennis match. P: Win/lose. E: Tennis court. A: Tennis racquet, legs. S: Eyes, ears. Partially observable, multiagent, stochastic, sequential, dynamic, continuous, unknown.

Practicing tennis against a wall. P: Improved performance in future tennis matches. E: Near a wall. A: Tennis racquet, legs. S: Eyes, ears. Observable, single agent.

Performing a high jump. P: Clearing the jump or not. E: Track. A: Legs, body. S: Eyes. Observable, single agent, stochastic, sequential, dynamic, continuous, unknown.

Knitting a sweater. P: Quality of the resulting sweater. E: Rocking chair. A: Hands, needles. S: Eyes. Observable, single agent, stochastic, sequential, dynamic, continuous, unknown.

Bidding on an item at an auction. P: Item acquired, final price paid for the item. E: Auction house (or online). A: Bidding. S: Eyes, ears. Partially observable, multiagent, stochastic (tie-breaking for two simultaneous bids), episodic, dynamic, continuous, known.
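
As referenced above, one compact way to record a PEAS description is a small data structure. This is purely illustrative; the class and field names are ours:

    from dataclasses import dataclass, field

    @dataclass
    class TaskEnvironment:
        performance: list[str]   # P
        environment: str         # E
        actuators: list[str]     # A
        sensors: list[str]       # S
        properties: list[str] = field(default_factory=list)

    soccer = TaskEnvironment(
        performance=["win/lose"],
        environment="soccer field",
        actuators=["legs", "head", "upper body"],
        sensors=["eyes", "ears"],
        properties=["partially observable", "multiagent", "stochastic",
                    "sequential", "dynamic", "continuous", "unknown"],
    )
    print(soccer.environment)  # soccer field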

3. (Exercise 2.5) Define in your own words the following terms: agent, agent function, agent program, rationality, autonomy, reflex agent, model-based agent, goal-based agent, utility-based agent, learning agent.

Agent: an algorithmic entity capable of displaying intelligent-like behavior.

Agent function: a mapping from percept sequences to actions, defining the behavior of an agent.

Agent program: a physical program implementing or approximating an agent function.

Rationality: the behavior of maximizing one's own reward or performance.

Reflex agent: an agent only capable of considering its current perception of the world.

Model-based agent: an agent that attempts to internalize aspects of the world through an approximating model.

Goal-based agent: an agent whose performance measure does not depend directly on local actions but on some (potentially) distant goal.

Utility-based agent: an agent whose performance measure is given by a utility function, which determines which states are preferable and which are not on a continuous or many-valued scale.

Learning agent: an agent whose performance can improve with experience.
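
To make the reflex versus model-based distinction concrete, here is a minimal illustrative sketch in a two-square vacuum world (borrowed from 1(h); all names are ours):

    def reflex_vacuum_agent(percept):
        """Acts on the current percept only; it can never know when to stop."""
        location, dirty = percept
        return "Suck" if dirty else "Move"

    class ModelBasedVacuumAgent:
        """Keeps an internal model: the set of squares known to be clean."""
        def __init__(self):
            self.cleaned = set()

        def __call__(self, percept):
            location, dirty = percept
            self.cleaned.add(location)  # after acting, this square is clean
            if dirty:
                return "Suck"
            if self.cleaned >= {"A", "B"}:
                return "NoOp"  # the model says everything is clean
            return "Move"

    agent = ModelBasedVacuumAgent()
    print(agent(("A", True)), agent(("A", False)), agent(("B", False)))
    # Suck Move NoOp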

4. (Exercise 2.6) This exercise explores the differences between agent functions and agent programs.

(a) Can there be more than one agent program that implements a given agent function? Give an example, or show why one is not possible. Yes. Assume we are given an agent function whose actions depend only on the previous p percepts. One program can remember exactly the previous p percepts to implement the agent function, while another could remember more than p percepts and still implement the same agent function (see the sketch below).

(b) Are there agent functions that cannot be implemented by any agent program? Yes; see 1(e).
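
Here is the sketch for 2.6(a): two different agent programs implementing the same agent function (here, "act on the sum of the last p percepts"). The code is illustrative and the names are ours:

    from collections import deque

    P = 3

    def act(last_p):
        """The agent function's decision rule on the last p percepts."""
        return "on" if sum(last_p) >= 2 else "off"

    def make_minimal_program():
        history = deque(maxlen=P)       # remembers exactly p percepts
        def program(percept):
            history.append(percept)
            return act(history)
        return program

    def make_wasteful_program():
        history = []                    # remembers the entire percept history
        def program(percept):
            history.append(percept)
            return act(history[-P:])    # ...but only the last p matter
        return program

    p1, p2 = make_minimal_program(), make_wasteful_program()
    for bit in [1, 1, 0, 1, 0, 0, 1]:
        assert p1(bit) == p2(bit)       # same agent function, different programs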

(c) Given a fixed machine architecture, does each agent program implement exactly one agent function? Yes. Given a percept sequence, an agent program will select an action. To implement multiple agent functions, the agent program would have to select different actions (or different distributions over actions) given the same percept sequence.

(d) Given an architecture with n bits of storage, how many different possible agent programs are there? If a is the total number of actions, then the number of possible programs is a^(2^n): the n bits give 2^n internal states, with a choices of action for each state.
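
The 2.6(d) count is easy to evaluate for small cases; the snippet below is just an illustrative check of the formula:

    def num_agent_programs(a, n):
        """a actions, n bits of storage => 2**n states, a choices per state."""
        return a ** (2 ** n)

    for a, n in [(2, 1), (2, 2), (3, 2), (2, 3)]:
        print(f"a={a}, n={n}: {num_agent_programs(a, n)} programs")
    # a=2, n=1: 4; a=2, n=2: 16; a=3, n=2: 81; a=2, n=3: 256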

(e) Suppose we keep the agent program fixed but speed up the machine by a factor of two. Does that change the agent function? No, not directly. However, this may allow the program to compress its memory further and retain a better model of the world.

5. (Exercise 3.2) Your goal is to navigate a robot out of a maze.

The robot starts in the center of the maze facing north. You can turn the robot to face north, east, south, or west. You can direct the robot to move forward a certain distance, although it will stop before hitting a wall.

(a) Formulate the problem. How large is the state space?

Initial state: At((0, 0)), Facing((0, 1)), i.e., at the center facing north.
Successor function, from state (At(x), Facing(y)):
Turn: Facing(y'), for any y' in {(0, 1), (1, 0), (0, -1), (-1, 0)};
Move(k blocks): At(x + y * min(k, dmax(x, y))), where dmax(x, y) is the maximum distance the robot can move in direction y from point x without hitting a wall.
Goal state: At(x), x in G, where G is the set of locations outside the maze.

If the maze comprises S blocks, then the total number of states is 4S (each of the S locations paired with each of the four orientations).
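
A Python sketch of this part (a) successor function (illustrative; the grid encoding and helper names such as dmax are ours, and we assume every direction is eventually blocked by a wall):

    DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # north, east, south, west

    def dmax(walls, x, y):
        """Blocks the robot can move from x in direction y before a wall;
        walls is a set of blocked (col, row) cells bounding the maze."""
        d, (cx, cy) = 0, x
        while (cx + y[0], cy + y[1]) not in walls:
            cx, cy, d = cx + y[0], cy + y[1], d + 1
        return d

    def successors(walls, x, y, k):
        """States reachable from (At(x), Facing(y)) by turning or Move(k)."""
        states = [(x, y2) for y2 in DIRS if y2 != y]      # turn actions
        d = min(k, dmax(walls, x, y))                     # move stops at walls
        return states + [((x[0] + y[0] * d, x[1] + y[1] * d), y)]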

(b) In navigating a maze, the only place we need to turn is at the intersection of two or more corridors. Reformulate this problem using this observation. How large is the state space now?

The successor function remains the same for intersections, and for locations x that lie in straight corridors it reduces to: Successor Function(At(x), Facing(y)): Move(k blocks): At(x + y * min(k, dmax(x, y))). Thus if the maze has I intersection blocks, the size of the state space is 4I + 2(S - I): all four orientations matter at an intersection, but only the two orientations along the corridor matter at each of the S - I corridor blocks.

(c) From each point in the maze, we can move in any of the four directions until we reach a turning point, and this is the only action we need to do. Reformulate the problem using these actions. Do we need to keep track of the robot's orientation now?

For intersections x: Successor Function(At(x)): {At(x + (0, 1) * dmin(x, (0, 1))), At(x + (1, 0) * dmin(x, (1, 0))), At(x + (0, -1) * dmin(x, (0, -1))), At(x + (-1, 0) * dmin(x, (-1, 0)))}, where dmin(x, y) is the minimum distance from x to an intersection in the y direction. We no longer need to keep track of the robot's orientation, since the new actions contain the turning motions within them. The total number of states is now I.
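
A matching sketch for part (c), in which each action moves straight to the next intersection so orientation drops out of the state (illustrative; dmin and the set-based encoding are ours):

    DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]

    def dmin(intersections, walls, x, y):
        """Distance from x to the nearest intersection in direction y,
        or None if a wall is reached first."""
        d, (cx, cy) = 0, x
        while (cx + y[0], cy + y[1]) not in walls:
            cx, cy, d = cx + y[0], cy + y[1], d + 1
            if (cx, cy) in intersections:
                return d
        return None

    def successors(intersections, walls, x):
        """Intersections reachable from x; the state is just the location."""
        result = []
        for y in DIRS:
            d = dmin(intersections, walls, x, y)
            if d is not None:
                result.append((x[0] + y[0] * d, x[1] + y[1] * d))
        return result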

(d) In our initial description of the problem we had already abstracted from the real world, restricting actions and removing details. List three such simplifications we made. (1) The robot can move in only one of four directions. (2) The robot can sense walls perfectly. (3) After attempting to move a certain distance, the robot knows with certainty how far it has moved.

6. (Exercise 3.5) Consider the n-queens problem using the "efficient" incremental formulation on page 72 (page 67, R&N 2nd Ed.).

Explain why the state space has at least ∛(n!) states, and estimate the largest n for which exhaustive exploration is feasible. (Hint: Derive a lower bound on the branching factor by considering the maximum number of squares that a queen can attack in any column.)

We want a lower bound on the size of the state space of this formulation of the n-queens problem. In this formulation, each column contains a queen, and queens are placed in successive columns in squares that are not attacked by the previously placed queens.
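
Since the solution breaks off here, the following snippet only illustrates the feasibility estimate: it finds the largest n for which the ∛(n!) lower bound stays within an assumed exploration budget (the 10^9-state budget is our assumption, not from the text):

    import math

    BUDGET = 1e9  # assumed number of states we can afford to enumerate

    n = 1
    while math.factorial(n + 1) ** (1.0 / 3.0) <= BUDGET:
        n += 1
    print("largest feasible n under this budget:", n)  # prints 26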
