UCI ICS 273A - Machine Learning


Contents: Machine Learning ICS 273A; What is Expected?; Syllabus; Machine Learning according to ...; Some Examples; Can Computers play Humans at Chess?; 2005 DARPA Challenge; Why is this cool/important?; Types of Learning; Ingredients; Supervised Learning I; 1 nearest neighbors (your first ML algorithm!); 1NN Decision Surface; Distance Metric; Remarks on NN methods; Non-parametric Methods; Logistic Regression / Perceptron; The logit / sigmoid; Objective; Algorithm in detail; Parametric Methods; Hypothesis Space; Inductive Bias; Generalization; Slides 25-29; Cross-validation

Machine Learning, ICS 273A
Instructor: Max Welling

What is Expected?
• Class
• Homework (10%)
• A Project (40%)
• Random Quizzes (10%)
• Final (40%)
Programming in MATLAB.

Syllabus
• week 1: introduction: overview, examples, goals, algorithm evaluation, statistics.
• week 2: classification I: decision trees, random forests, boosting, k-nearest neighbors.
• week 3: neural networks: perceptron, logistic regression, multi-layer networks, back-propagation.
• week 4: clustering & dimensionality reduction: k-means, expectation-maximization, PCA.
• week 5: reinforcement learning: MDPs, TD- and Q-learning, value iteration.
• week 6: classification II: kernel methods & support vector machines.
• week 7: Bayesian methods: conditional independence, generative models, naive Bayes classifier.
• week 8: optimization: stochastic gradient descent, Newton's method, IRLS, annealing, genetic algorithms.
• week 9: computational learning theory: PAC bounds.
• week 10: project presentations.
• week 11: final exam.

Machine Learning according to ...
• The ability of a machine to improve its performance based on previous results.
• The process by which computer systems can be directed to improve their performance over time. Examples are neural networks and genetic algorithms.
• Subspecialty of artificial intelligence concerned with developing methods for software to learn from experience or extract knowledge from examples in a database.
• The ability of a program to learn from experience, that is, to modify its execution on the basis of newly acquired information.
• Machine learning is an area of artificial intelligence concerned with the development of techniques which allow computers to "learn". More specifically, machine learning is a method for creating computer programs by the analysis of data sets. Machine learning overlaps heavily with statistics, since both fields study the analysis of data, but unlike statistics, machine learning is concerned with the algorithmic complexity of computational implementations. ...

Some Examples
• ZIP code recognition
• Loan application classification
• Signature recognition
• Voice recognition over the phone
• Credit card fraud detection
• Spam filters
• Suggesting other products at Amazon.com
• Marketing
• Stock market prediction
• Expert-level chess and checkers systems
• Biometric identification (fingerprints, DNA, iris scan, face)
• Machine translation
• Web search
• Document & information retrieval
• Camera surveillance
• Robosoccer
• and so on, and so on...

Can Computers play Humans at Chess?
• Chess playing is a classic AI problem:
  - a well-defined problem
  - very complex: difficult for humans to play well
• Conclusion: YES, today's computers can beat even the best human.
[Figure: chess-rating chart comparing Garry Kasparov (current World Champion) with the Deep Thought and Deep Blue programs.]

2005 DARPA Challenge
The Grand Challenge is an off-road robot competition devised by DARPA (Defense Advanced Research Projects Agency) to promote research in the area of autonomous vehicles. The challenge consists of building a robot capable of navigating 175 miles through desert terrain in less than 10 hours, with no human intervention.

Why is this cool/important?
• Modern technologies generate data at an unprecedented scale.
• The amount of data doubles every year. "One petabyte is equivalent to the text in one billion books, yet many scientific instruments, including the Large Synoptic Survey Telescope, will soon be generating several petabytes annually." (2020 Computing: Science in an exponential world. Nature, published online 22 March 2006)
• Computers dominate our daily lives: science, industry, the army, our social interactions, etc.
We can no longer "eyeball" the images captured by some satellite for interesting events, or check every webpage for some topic. We need to trust computers to do the work for us.

Types of Learning
• Supervised learning: labels are provided, so there is a strong learning signal; e.g. classification, regression.
• Semi-supervised learning: only part of the data have labels; e.g. a child growing up.
• Reinforcement learning: the learning signal is a (scalar) reward and may come with a delay; e.g. learning to play chess, or a mouse in a maze.
• Unsupervised learning: there is no direct learning signal; we are simply trying to find structure in the data; e.g. clustering, dimensionality reduction.

Ingredients
• Data: what kind of data do we have?
• Prior assumptions: what do we know a priori about the problem?
• Representation: how do we represent the data?
• Model / hypothesis space: what hypotheses are we willing to entertain to explain the data?
• Feedback / learning signal: what kind of learning signal do we have (delayed, labels)?
• Learning algorithm: how do we update the model (or set of hypotheses) from the feedback?
• Evaluation: how well did we do; should we change the model?

Supervised Learning I
Example: imagine you want to classify monkeys versus humans.
Data: 100 monkey images and 200 human images, with labels saying what is what:
  {x_i, y_i = 0}, i = 1, ..., 100   and   {x_j, y_j = 1}, j = 1, ..., 200,
where x represents the greyscale values of the image pixels, y = 0 means "monkey", and y = 1 means "human".
Task: here is a new image: monkey or human?

1 nearest neighbors (your first ML algorithm!)
Idea:
1. Find the picture in the database which is closest to your query image.
2. Check its label.
3. Declare the class of your query image to be the same as that of the closest picture.
[Figure: a query image shown next to the closest image in the database.]
Homework: write pseudo-code for the k-nearest neighbor algorithm; one possible sketch is given below.
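To make the three steps concrete, here is a minimal MATLAB sketch of a k-nearest-neighbor classifier (MATLAB because the course programs in it); setting k = 1 gives exactly the 1NN rule above. This is an illustration, not course code: the function name knn_classify, the variable names, and the choice of squared Euclidean distance are assumptions, and the training images are assumed to be flattened into the rows of Xtrain with labels in ytrain (0 = monkey, 1 = human).

    function yhat = knn_classify(Xtrain, ytrain, xquery, k)
    % KNN_CLASSIFY  Label a query point by majority vote of its k nearest
    % training points under squared Euclidean distance (illustrative sketch).
    %   Xtrain : N x D matrix, one flattened training image per row
    %   ytrain : N x 1 vector of labels (here 0 = monkey, 1 = human)
    %   xquery : 1 x D flattened query image
    %   k      : number of neighbors (k = 1 gives the 1NN rule)

        % squared Euclidean distance from the query to every training image
        % (implicit expansion subtracts xquery from each row of Xtrain)
        d = sum((Xtrain - xquery).^2, 2);

        % order the training images by distance and keep the k closest
        [~, order] = sort(d, 'ascend');
        neighbors = ytrain(order(1:k));

        % majority vote among the neighbors' labels; for k = 1 this is
        % simply the label of the single closest image
        yhat = mode(neighbors);
    end

For the monkey/human example, a call such as knn_classify(Xtrain, ytrain, xquery, 1) implements the three steps above in one line.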
1NN Decision Surface
[Figure: the 1NN decision surface; the decision curve separating the two classes.]

Distance Metric
• How do we measure what it means to be "close"?
• Depending on the problem, we should choose an appropriate distance metric, for example:

Hamming distance (x discrete):
  D(x_n, x_m) = sum_i | x_ni - x_mi |

Scaled Euclidean distance (x continuous, with a scaling matrix Sigma):
  D(x_n, x_m) = (x_n - x_m)^T Sigma (x_n - x_m)
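Both metrics fit in a few lines of MATLAB; a small sketch follows, in which the helper names hamming and scaled_euclid are hypothetical and Sigma is assumed to be a user-chosen positive-definite scaling matrix (for instance, the inverse covariance of the data).

    % Hamming distance for discrete feature vectors: the sum of absolute
    % feature-wise differences (for binary features, the number of mismatches).
    hamming = @(xn, xm) sum(abs(xn - xm));

    % Scaled (squared) Euclidean distance for continuous column vectors,
    % weighted by the scaling matrix Sigma.
    scaled_euclid = @(xn, xm, Sigma) (xn - xm)' * Sigma * (xn - xm);

    % Examples: two mismatched bits, and the ordinary squared Euclidean
    % distance recovered by setting Sigma to the identity.
    d1 = hamming([0 1 1 0], [0 0 1 1]);         % d1 = 2
    d2 = scaled_euclid([1; 2], [0; 0], eye(2)); % d2 = 5

In the nearest-neighbor sketch above, the metric is the main design choice: replacing the squared-Euclidean line with one of these functions changes which training image counts as "closest", and with it the decision surface.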

