16.412J - Cognitive Robotics
Problem Set #1
Jeremie Pouly

Part A: The three topics in reasoning applied to embedded systems that I would like to learn more about are the following. The first two are related to the project I would like to pursue, and the third to something I have already partially studied and that is a real key issue for upcoming space exploration.

• Search algorithms: I don't know much about this topic, which is the most important one for computer chess players. Today's computers allow us to do amazing calculations and to explore an enormous number of possibilities in very little time, and it is easy to forget that even for fairly simple real-life problems the space of candidate answers is so huge that we cannot simply explore everything. The real issue when we want to solve a problem is not whether the computer can theoretically solve it, but rather how long it needs to solve it. If it takes six months to find an answer, that answer can be useless, since the problem itself could change during those six months. This is even truer for effectively infinite problems, such as deciding what a chess player should play in a given board configuration. Most of the time there is no move that guarantees victory (a 100% probability of winning) regardless of what the human opponent plays. In such cases we have to decide when to stop the search for the best move, knowing that a better move may exist that has not been found yet. Since the limit is usually given by the calculation time (in practice it is the depth of the search, but that depth is chosen as a function of the available calculation time), the real challenge is to find faster search algorithms. Thus the strength of a chess player is determined first of all by the search algorithm it uses.

• Machine learning: by that I mean the ability of a computer program to evolve from its original state in order to improve its performance. This is also a real issue in robotics and in any AI system. In theory, I believe that if we can develop a good enough learning process, we will be able to create human-like robots able to grow from a "childhood-like" state to an adult state, and that will be truly unique and autonomous as we are. As for my current interest, I would like to learn more about this topic because it is the real essence of AI and robotics. If we are to send robots to Mars, it would be a real improvement if they could act more on their own instead of constantly waiting through the eight-minute communication delay with Earth. However, since we do not really know the Martian environment, this can only succeed if the robots are able to learn on their own from what they see and touch. Closer to my project, it can be useful to develop a chess player that improves itself by playing against the same group of humans, or even against humans in general (I think humans in general, with the exception of world champions, tend to play according to a certain scheme). This could allow the AI chess player to define its own heuristic process that leads it to victory.

• Communication with humans: Future space exploration should involve human/robot cooperation, since one day we will tire of sending only robots and will want to bring space exploration to another level, while at the same time it is cheaper and safer to send robots than humans. So if we want to use robots and humans in cooperation, we must find a way for them to communicate while carrying out a mission on Mars, for example.
Robots are currently seen as "extra hands" for the astronauts, so they must be able to understand the orders and requests the humans give them. This is the first part of communication, and I have already worked on this problem of voice recognition: I wrote a Matlab program able to recognize a small dictionary of words based on voice characteristics. But at the same time, robots should be able to address concerns, warnings, or simple answers to the astronauts: cooperation would improve greatly if communication could go both ways. I would be glad to build a robot able to converse with humans, play trivia, or tell jokes and stories in fairly fluent language, even if limited to certain domains.

Part D: I chose three papers on computer chess players dealing with both learning and search algorithms, since these are the foundation of the AI that I will use. My last paper is not directly related to what I will do during my project, but is rather an opening that shows what we can do with an AI chess player besides just playing normal chess.

"An Evolutionary Approach for the Tuning of a Chess Evaluation Function using Population Dynamics", Graham Kendall and Glenn Whitwell. I chose this paper because it explains how to use the computer's real strength, namely its ability to do the same thing hundreds of times in a very short time with no errors, to find the best chess evaluation function. The approach has the advantage of producing a near-optimal evaluation function at a rather low cost. I also chose this article because it shows that even small modifications to the core of the chess player, such as the weighting of the pieces, can really change the way the AI plays.

The paper presents how to use machine learning to find the optimal evaluation function simply by comparing the possibilities against one another. Instead of imposing a fixed weighting on the evaluation function, the idea of the article is to use machine learning techniques to let the computer choose the best estimate of the optimal evaluation function from within a random population. The algorithm's first step is to generate a random population of evaluation function candidates. The members of the population then compete against each other to find out which one is the most effective. When a candidate loses a game, it is discarded from the pool and a clone of the winning candidate is added to the population instead, in order to accelerate convergence, until only one member is left in the population. The authors point out that with such a naïve method we might end up with the best estimate of the optimal evaluation function within the population, but this may still be far from the true optimum. To get closer to it, they present another way to replace the discarded candidates: a crossover evaluation function, a child of the two parents, with weightings that depend on the result of the game and the standard deviation of the parents' parameters. This allows us to find a final
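To make the scheme described above more concrete, the loop below is a minimal Python sketch of population-based tuning as I understand it from the summary. Everything in it is illustrative rather than the authors' exact formulation: the candidate representation (a simple piece-weight vector), the play_game placeholder, and the crossover weighting are simplifying assumptions of my own.

```python
import random
import statistics

# Illustrative sketch only: candidates are reduced to one weight per piece type,
# and games are decided at random so the example runs without a chess engine.

PIECES = ["pawn", "knight", "bishop", "rook", "queen"]

def random_candidate():
    """A candidate evaluation function, reduced here to a piece-weight vector."""
    return {p: random.uniform(0.5, 10.0) for p in PIECES}

def play_game(white, black):
    """Placeholder: play one game between two candidates and return the winner.
    A real implementation would run an actual engine match with each candidate's
    evaluation function; here the winner is chosen at random."""
    return random.choice([white, black])

def crossover(winner, loser):
    """Child biased toward the winner and perturbed by the spread of the parents'
    parameters (a stand-in for the paper's result/standard-deviation weighting)."""
    child = {}
    for p in PIECES:
        sigma = statistics.pstdev([winner[p], loser[p]])
        child[p] = 0.75 * winner[p] + 0.25 * loser[p] + random.gauss(0.0, sigma)
    return child

def tune(population_size=16, games=200):
    population = [random_candidate() for _ in range(population_size)]
    # Candidates compete; each loser is replaced by a crossover child of the two
    # players, so the pool gradually concentrates around strong weightings.
    for _ in range(games):
        a, b = random.sample(range(len(population)), 2)
        winner = play_game(population[a], population[b])
        loser_idx = b if winner is population[a] else a
        population[loser_idx] = crossover(winner, population[loser_idx])
    return population

if __name__ == "__main__":
    final_pool = tune()
    print(final_pool[0])
```

With a real game-playing function in place of the random one, the returned pool should converge toward piece weightings that win more often, which is the intuition behind the paper's population-dynamics approach.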

