Berkeley COMPSCI 188 - Bayesian Networks Solution


Assignment 5: Bayesian Networks Solution

Part 1 Solution

Question 2

Part 2 Solution

class ExactStaticInferenceModule(StaticInferenceModule):
    """
    You will implement an exact inference module for the static battleship
    game. See the abstract 'StaticInferenceModule' class for descriptions of
    the methods. The current implementation below is broken, returning all
    uniform distributions.
    """

    def getShipTupleDistributionGivenObservations(self, observations):
        # BEGIN SOLUTION
        p_ShipTuple_and_observations = Counter()
        p_ShipTuple = self.game.getShipTupleDistribution()
        for shipTuple in self.game.getShipTuples():
            # Start from the prior P(shipTuple) ...
            p = p_ShipTuple.getCount(shipTuple)
            # ... and multiply in the likelihood of each observed reading.
            for sensor, reading in observations.items():
                p_Reading_given_shipTuple = self.game.getReadingDistributionGivenShipTuple(shipTuple, sensor)
                p *= p_Reading_given_shipTuple.getCount(reading)
            p_ShipTuple_and_observations.setCount(shipTuple, p)
        # Normalizing the joint P(shipTuple, observations) over shipTuple
        # yields the posterior P(shipTuple | observations).
        p_ShipTuple_given_observations = normalize(p_ShipTuple_and_observations)
        return p_ShipTuple_given_observations
        # END SOLUTION

    def getReadingDistributionGivenObservations(self, observations, newLocation):
        # BEGIN SOLUTION
        oldReadingForNewLocation = self.fetch(newLocation, observations)
        p_NewReading_and_observations = Counter()
        p_ShipTuple_given_observations = self.getShipTupleDistributionGivenObservations(observations)
        for shipTuple, p_shipTuple_given_observations in p_ShipTuple_given_observations.items():
            p_Reading_given_shipTuple = self.game.getReadingDistributionGivenShipTuple(shipTuple, newLocation)
            for reading in Readings.getReadings():
                p_reading_given_shipTuple = p_Reading_given_shipTuple.getCount(reading)
                if oldReadingForNewLocation is not None:
                    # This location has already been sensed, so the new
                    # reading is deterministic: it must match the old one.
                    p_reading_given_shipTuple = 1.0 if reading == oldReadingForNewLocation else 0.0
                p_NewReading_and_observations.incrementCount(reading, p_shipTuple_given_observations * p_reading_given_shipTuple)
        p_NewReading_given_observations = normalize(p_NewReading_and_observations)
        return p_NewReading_given_observations
        # END SOLUTION

    # BEGIN SOLUTION
    def fetch(self, key, pairList):
        # Look up 'key' in a dict or in a list of (key, value) pairs;
        # return None if it is absent.
        pairs = pairList.items() if hasattr(pairList, 'items') else pairList
        for key2, value2 in pairs:
            if key == key2:
                return value2
        return None
    # END SOLUTION


class StaticVPIAgent(StaticBattleshipAgent):
    """
    Computer-controlled battleship agent. This agent plays using value of
    (perfect) information calculations. The initial implementation is broken,
    always taking a random bombing action without sensing. You will rewrite
    it to greedily sense if any sensing action has an expected gain in
    utility / score (taking into account the cost of sensing). If no sensing
    action has a greedy gain, then you will select a position tuple to bomb.
    In this case, your agent should bomb the tuple with the highest expected
    utility / score according to its current beliefs.
    """

    def getAction(self):
        # BEGIN SOLUTION
        self.game.display.pauseGUI()
        observations = self.observations
        # Current maximum expected utility (MEU) of bombing, before sensing.
        expectedUtilities = self.getExpectedUtilities(observations)
        currentBestEU, currentBestBombingOptions = maxes(expectedUtilities)
        utilityGain = Counter()
        for location in self.game.getLocations():
            if location in observations:
                continue
            # Expected MEU after sensing this location, averaged over the
            # predicted readings at that location.
            expectedNewMEU = 0
            p_Reading_given_observations = self.inferenceModule.getReadingDistributionGivenObservations(observations, location)
            for reading in Readings.getReadings():
                outcomeProbability = p_Reading_given_observations.getCount(reading)
                if outcomeProbability == 0.0:
                    continue
                newObservations = dict(observations)
                newObservations[location] = reading
                outcomeExpectedUtilities = self.getExpectedUtilities(newObservations)
                outcomeBestEU, outcomeBestActions = maxes(outcomeExpectedUtilities)
                expectedNewMEU += outcomeBestEU * outcomeProbability
            # Net value of sensing = expected gain in MEU minus sensing cost.
            utilityGain[location] = expectedNewMEU - currentBestEU - abs(SENSOR_SCORE)
        bestGain, bestSensorLocations = maxes(utilityGain)
        if bestGain > 0:
            return [Actions.makeSensingAction(l) for l in bestSensorLocations]
        return [Actions.makeBombingAction(o) for o in currentBestBombingOptions]
        # END SOLUTION

    # BEGIN SOLUTION
    def getExpectedUtilities(self, observations):
        p_ShipTuples = self.inferenceModule.getShipTupleDistributionGivenObservations(observations)
        expectedUtilities = Counter()
        for option in self.game.getBombingOptions():
            expectedUtility = 0
            for ships in p_ShipTuples.keys():
                p = p_ShipTuples[ships]
                utility = BATTLESHIP_SCORE * self.numMatches(option, ships)
                expectedUtility += p * utility
            expectedUtilities[option] = expectedUtility
        return expectedUtilities

    def numMatches(self, option, ships):
        # Number of ship positions in 'ships' that the bombing option hits.
        matches = 0
        for ship in ships:
            if ship in option:
                matches += 1
        return matches
    # END SOLUTION
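The first inference method is just Bayes' rule: the posterior over ship tuples is the prior times the likelihood of every observed reading, renormalized. A minimal stand-alone sketch of that computation, using plain dicts in place of the course's Counter (the function names and data layout here are illustrative assumptions, not the course API):

```python
def normalize(dist):
    """Rescale a dict of non-negative weights so its values sum to 1."""
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()} if total else dist

def posterior_over_hypotheses(prior, likelihoods, observations):
    """P(h | obs) is proportional to P(h) * prod_i P(obs_i | h).

    prior:        {hypothesis: P(h)}
    likelihoods:  {hypothesis: {sensor: {reading: P(reading | h)}}}
    observations: {sensor: reading}
    """
    joint = {}
    for h, p in prior.items():
        for sensor, reading in observations.items():
            p *= likelihoods[h][sensor].get(reading, 0.0)
        joint[h] = p
    return normalize(joint)

# Two hypotheses, one sensor: a 'hot' reading favors hypothesis A.
prior = {"A": 0.5, "B": 0.5}
likelihoods = {"A": {"s1": {"hot": 0.9, "cold": 0.1}},
               "B": {"s1": {"hot": 0.3, "cold": 0.7}}}
post = posterior_over_hypotheses(prior, likelihoods, {"s1": "hot"})
```

Here the unnormalized weights are 0.5 * 0.9 = 0.45 and 0.5 * 0.3 = 0.15, so the posterior is 0.75 / 0.25, mirroring how the solution multiplies prior by per-sensor likelihoods and then normalizes.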

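getExpectedUtilities scores each bombing option by the expected number of ship positions it hits under the current posterior. A toy version of that sum (the score constant and tuple encoding are assumptions for illustration):

```python
BATTLESHIP_SCORE = 5  # assumed per-hit score; the course's constant may differ

def expected_utilities(p_ship_tuples, bombing_options):
    # EU(option) = sum over ship configurations of
    #   P(ships | observations) * SCORE * (number of ships the option hits)
    eus = {}
    for option in bombing_options:
        eus[option] = sum(
            p * BATTLESHIP_SCORE * sum(1 for ship in ships if ship in option)
            for ships, p in p_ship_tuples.items()
        )
    return eus

# Two possible single-ship configurations, at position 0 or position 1.
eus = expected_utilities({(0,): 0.7, (1,): 0.3}, [(0,), (1,)])
```

Bombing position 0 is worth 0.7 * 5 = 3.5 in expectation versus 1.5 for position 1, so the agent's "bomb the best option" fallback would pick (0,).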

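The sensing rule in getAction is a greedy value-of-perfect-information test: sense a location only if the expected post-reading MEU exceeds the current MEU by more than the sensing cost. The core arithmetic isolated, with hypothetical numbers:

```python
def value_of_sensing(p_reading, meu_after_reading, current_meu, sensing_cost):
    # VPI(location) = sum_r P(r | obs) * MEU(obs + r)  -  MEU(obs)  -  cost
    expected_new_meu = sum(p * meu_after_reading[r] for r, p in p_reading.items())
    return expected_new_meu - current_meu - sensing_cost

# Expected post-sensing MEU is 0.6*10 + 0.4*4 = 7.6, current MEU is 6,
# sensing costs 1, so sensing here has positive net value.
gain = value_of_sensing({"hot": 0.6, "cold": 0.4}, {"hot": 10, "cold": 4}, 6, 1)
```

This matches the solution's `utilityGain[location] = expectedNewMEU - currentBestEU - abs(SENSOR_SCORE)` followed by the `bestGain > 0` check: information never has negative value in expectation, but it must beat its cost to be worth gathering.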