As a broad subfield of artificial intelligence, machine learning is concerned with the design and development of algorithms and techniques that allow computers to "learn". At a general level, there are two types of learning: inductive and deductive. Inductive machine learning methods extract rules and patterns out of massive data sets; deductive methods, by contrast, start from existing knowledge and derive new conclusions that follow logically from it.
The major focus of machine learning research is to extract information from data automatically, by computational and statistical methods. Hence, machine learning is closely related not only to data mining and statistics, but also to theoretical computer science.
Machine learning has a wide spectrum of applications including natural language processing, syntactic pattern recognition, search engines, medical diagnosis, bioinformatics and cheminformatics, detecting credit card fraud, stock market analysis, classifying DNA sequences, speech and handwriting recognition, object recognition in computer vision, game playing and robot locomotion.
Human interaction
Some machine learning systems attempt to eliminate the need for human intuition in the analysis of the data, while others adopt a collaborative approach between human and machine. Human intuition cannot be entirely eliminated since the designer of the system must specify how the data is to be represented and what mechanisms will be used to search for a characterization of the data. Machine learning can be viewed as an attempt to automate parts of the scientific method.
Some statistical machine learning researchers create methods within the framework of Bayesian statistics.
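To make the Bayesian framing above concrete, here is a minimal Python sketch (not drawn from any particular method or resource on this page): it treats an unknown Bernoulli success rate as a random variable, starts from a Beta prior, and updates it as observations arrive. The prior values and data are invented purely for illustration.

```python
# Minimal sketch: Bayesian updating of a coin's bias.
# Prior: Beta(alpha, beta); likelihood: Bernoulli; posterior: Beta(alpha + heads, beta + tails).
# The prior and the observations below are hypothetical.

def beta_binomial_update(alpha, beta, observations):
    """Return the posterior Beta parameters after observing 0/1 outcomes."""
    heads = sum(observations)
    tails = len(observations) - heads
    return alpha + heads, beta + tails

if __name__ == "__main__":
    prior_alpha, prior_beta = 1.0, 1.0          # uniform prior over the bias
    data = [1, 0, 1, 1, 0, 1, 1, 1]             # hypothetical coin flips
    post_alpha, post_beta = beta_binomial_update(prior_alpha, prior_beta, data)
    posterior_mean = post_alpha / (post_alpha + post_beta)
    print(f"Posterior: Beta({post_alpha}, {post_beta}), mean bias = {posterior_mean:.3f}")
```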
Algorithm types
Machine learning algorithms are organized into a taxonomy, based on the desired outcome of the algorithm. Common algorithm types include:
- Supervised learning — in which the algorithm generates a function that maps inputs to desired outputs. One standard formulation of the supervised learning task is the classification problem: the learner is required to learn (to approximate) the behavior of a function that maps a vector into one of several classes by looking at several input-output examples of the function (a minimal sketch appears after this list).
- Unsupervised learning — in which the algorithm models a set of inputs; labeled examples are not available.
- Semi-supervised learning — which combines both labeled and unlabeled examples to generate an appropriate function or classifier.
- Reinforcement learning — in which the algorithm learns a policy of how to act given an observation of the world. Every action has some impact on the environment, and the environment provides feedback that guides the learning algorithm (see the Q-learning sketch after this list).
- Transduction — similar to supervised learning, but does not explicitly construct a function: instead, the algorithm tries to predict new outputs based on training inputs, training outputs, and test inputs that are available during training.
- Learning to learn — in which the algorithm learns its own inductive bias based on previous experience.
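As referenced in the supervised learning item above, here is a minimal Python sketch of the classification setting: a 1-nearest-neighbour classifier that assigns an input vector to one of several classes by consulting labeled input-output examples. The training data are invented, and this is just one simple instance of supervised learning, not a method prescribed by the resources listed below.

```python
# Minimal sketch of supervised classification: 1-nearest-neighbour.
# The (feature vector, class label) pairs stand in for the "input-output examples".

import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(train, x):
    """Return the label of the training example closest to x."""
    _, label = min(train, key=lambda pair: euclidean(pair[0], x))
    return label

if __name__ == "__main__":
    train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
             ((5.0, 5.2), "B"), ((4.8, 5.1), "B")]   # hypothetical labeled data
    print(predict(train, (1.1, 0.9)))   # expected: "A"
    print(predict(train, (5.1, 5.0)))   # expected: "B"
```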
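And as referenced in the reinforcement learning item, the following sketch shows tabular Q-learning, one common way to learn a policy from environment feedback. The five-state corridor environment, the reward of 1 at the goal, and the hyperparameter values are illustrative assumptions, not taken from any source on this page.

```python
# Minimal sketch of reinforcement learning: tabular Q-learning on a made-up
# five-state corridor where the agent is rewarded for reaching the rightmost state.

import random

N_STATES = 5                      # states 0..4; state 4 is the goal
ACTIONS = (-1, +1)                # move left or right along the corridor
GOAL = N_STATES - 1
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment feedback: next state and reward."""
    nxt = max(0, min(GOAL, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0)

def greedy(state):
    """Pick the action with the highest Q-value, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(300):                          # episodes
    s = 0
    for _ in range(100):                      # cap on steps per episode
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2, r = step(s, a)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS) - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

policy = {s: greedy(s) for s in range(GOAL)}
print(policy)   # after training, the policy typically moves right (+1) from every state
```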
The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
Machine learning refers to a system capable of the autonomous acquisition and integration of knowledge. This capacity to learn from experience, analytical observation, and other means results in a system that can continuously self-improve and thereby offer increased efficiency and effectiveness.
If an expert system--brilliantly designed, engineered and implemented--cannot learn not to repeat its mistakes, it is not as intelligent as a worm or a sea anemone or a kitten.
- Oliver G. Selfridge, from The Gardens of Learning (also quoted in AI's Greatest Trends and Controversies)
Good Places to Start
The Discipline and Future of Machine Learning. Video of Tom Mitchell's March 1, 2007 seminar talk at the Carnegie Mellon University School of Computer Science's Machine Learning Department: "Over the past 50 years the study of machine learning has grown from the efforts of a handful of computer engineers exploring whether computers could learn to play games, and a field of statistics that largely ignored computational considerations, to a broad discipline that has produced fundamental statistical-computational theories of learning processes, has designed learning algorithms that are routinely used in commercial systems from speech recognition to computer vision, and has spun off an industry in data mining to discover hidden regularities in the growing volume of online data. This talk will provide a personal view of the current state of machine learning, and where I think the field might (should) be headed over the coming decade. I’ll propose several specific research areas which seem to me to have great potential, and will leave plenty of time at the end for audience discussion."
An AI Bite by Simon Colton. Sponsored by, and available from, The Society for the Study of Artificial Intelligence and Simulation of Behaviour. "Given a task we want the computer to do, the idea is to repeatedly demonstrate how the task is performed, and let the computer learn by example, i.e., generalize some rules about the task and turn these into a program."
Introduction to Machine Learning - Draft of Incomplete Notes. By Nils J. Nilsson. "The notes survey many of the important topics in machine learning circa 1996. My intention was to pursue a middle ground between theory and practice. The notes concentrate on the important ideas in machine learning---it is neither a handbook of practice nor a compendium of theoretical proofs. My goal was to give the reader sufficient preparation to make the extensive literature on machine learning accessible."
- Chapter 1, Introduction: What is Machine Learning? "Machine learning usually refers to the changes in systems that perform tasks associated with artificial intelligence (AI). Such tasks involve recognition, diagnosis, planning, robot control, prediction, etc. ... To be slightly more specific, we show the architecture of a typical AI 'agent' in Fig. 1.1. ... One might ask 'Why should machines have to learn? Why not design machines to perform as desired in the first place?' There are several reasons why machine learning is important. ..."
Software That Learns by Doing. Machine-learning techniques have been used to create self-improving software for decades, but recent advances are bringing these tools into the mainstream. By Gary H. Anthes. Computerworld.
Machine learns games 'like a human.' By Will Knight. New Scientist News.
Machine Learning Lecture Notes. From Professor Charles R. Dyer,
"The importance of learning, however, is beyond question, particularly as this ability is one of the most important components of intelligent behavior. ... Although learning is a difficult area, there are several programs that suggest that it is not impossible. One striking program is AM, the Automated Mathematician, designed to discover mathematical laws (Lenat 1977, 1982). Initially given the concepts and axioms of set theory, AM was able to induce such important mathematical concepts as cardinality, integer arithmetic, and many of the results of number theory. AM conjectured new theorems by modifying its current knowledge base and used heuristics to pursue the 'best' of a number of possible alternative theorems. ... Early influential work includes Winston's research on the induction of structural concepts such as 'arch' from a set of examples in the blocks world (Winston 1975 a)."
Machine Learning. Preprint of Thomas G. Dietterich's article in Nature Encyclopedia of Cognitive Science,
- Also see Professor Dietterich's home page for links to ML resources and more information about his research at Oregon State University: "The focus of my research is machine learning: How can we make computer systems that adapt and learn from their experience? How can we combine human knowledge with massive data sets to expand scientific knowledge and build more useful computer applications? My laboratory combines research on machine learning fundamentals with applications to problems in science and engineering."
Two courses from MIT's OpenCourseWare, "a free and open educational resource for faculty, students, and self-learners around the world."
- Machine Learning; Fall 2002. Professor Tommi Jaakkola. "6.867 is offered under the department's 'Artificial Intelligence and Applications' concentration. The site offers a full set of lecture notes, homework assignments, in addition to other materials used by students in the course. 6.867 is an introductory course on machine learning which provides an overview of many techniques and algorithms in machine learning, beginning with topics such as simple perceptrons and ending up with more recent topics such as boosting, support vector machines, hidden Markov models, and Bayesian networks. The course gives the student the basic ideas and intuition behind modern machine learning methods as well as a bit more formal understanding of how and why they work."
Videos of lectures & interviews from the 2006
Glossary of Terms. Special Issue on Applications of Machine Learning and the Knowledge Discovery Process. Ron Kohavi and Foster Provost, eds. Machine Learning, 30: 271-274 (1998). "To help readers understand common terms in machine learning, statistics, and data mining, we provide a glossary of common terms."
AI in the news: Machine Learning
Applying Metrics to Machine-Learning Tools: A Knowledge Engineering Approach. Fernando Alonso, Luis Mate, Natalia Juristo, Pedro L. Munoz, and Juan Pazos. AI Magazine 15(3): Fall 1994, 63-75. "The field of knowledge engineering has been one of the most visible successes of AI to date. Knowledge acquisition is the main bottleneck in the knowledge engineer's work. Machine-learning tools have contributed positively to the process of trying to eliminate or open up this bottleneck, but how do we know whether the field is progressing? How can we determine the progress made in any of its branches? How can we be sure of an advance and take advantage of it? This article proposes a benchmark as a classificatory, comparative, and metric criterion for machine-learning tools. The benchmark centers on the knowledge engineering viewpoint, covering some of the characteristics the knowledge engineer wants to find in a machine-learning tool."
Machine Learning: A Historical and Methodological Analysis. By Jaime G. Carbonell, Ryszard S. Michalski, and Tom M. Mitchell. AI Magazine 4(3): Fall 1983, 69-79. Abstract: "Machine learning has always been an integral part of artificial intelligence, and its methodology has evolved in concert with the major concerns of the field. In response to the difficulties of encoding ever-increasing volumes of knowledge in modern AI systems, many researchers have recently turned their attention to machine learning as a means to overcome the knowledge acquisition bottleneck. This article presents a taxonomic analysis of machine learning organized primarily by learning strategies and secondarily by knowledge representation and application areas. A historical survey outlining the development of various approaches to machine learning is presented from early neural networks to present knowledge-intensive techniques."
Brain learns like a robot - Scan shows how we form opinions. By Tanguy Chouard. Nature Science Update.
Machine Learning Research: Four Current Directions. By Tom Dietterich. AI Magazine 18(4): Winter 1997, 97-136. Abstract: "Machine-learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (1) the improvement of classification accuracy by learning ensembles of classifiers, (2) methods for scaling up supervised learning algorithms, (3) reinforcement learning, and (4) the learning of complex stochastic models."
Learning. An overview by Patrick Doyle. Very informative, though there are some spots that are quite technical.
Journal of Machine Learning Research. "The Journal of Machine Learning Research (JMLR) provides an international forum for the electronic and paper publication of high-quality scholarly articles in all areas of machine learning."
Bookish Math - Statistical tests are unraveling knotty literary mysteries. By Erica Klarreich. Science News.
Machine Learning, Neural and Statistical Classification. Donald Michie, D. J. Spiegelhalter, and C. C. Taylor, editors. "[This] book (originally published in 1994 by Ellis Horwood) is now out of print. The copyright now resides with the editors who have decided to make the material freely available on the web."
AI and the Impending Revolution in Brain Sciences. PowerPoint slides of Tom Mitchell's AAAI Presidential Address, August 2002. [An associated video file is also available from his home page.] "Thesis of This Talk: The synergy between AI and Brain Sciences will yield profound advances in our understanding of intelligence over the coming decade, fundamentally changing the nature of our field."
Statistical Data Mining Tutorials - Tutorial Slides by Andrew Moore, professor of Robotics and Computer Science at the
Machine learning on physical robots [slide show with audio]. By Peter Stone, The
Automated Learning and Discovery State-Of-The-Art and Research Topics in a Rapidly Growing Field. By Sebastian Thrun, Christos Faloutsos, Tom Mitchell, and Larry Wasserman. AI Magazine 20(3): Fall 1999, 78-82. "This article summarizes the Conference on Automated Learning and Discovery (CONALD), which took place in June 1998 at
AI on the Web: Machine Learning. A resource companion to Stuart Russell and Peter Norvig's "Artificial Intelligence: A Modern Approach" with links to reference material, people, research groups, books, companies and much more.
"The Adaptive Systems Group at the
- Continuous and Embedded Learning (Anytime Learning): "Continuous and embedded learning is a general approach to continuous learning in a changing environment. ... The basic idea is to integrate two continuously running modules: an execution module and a learning module. This work is part of an ongoing investigation of machine learning techniques for solving sequential decision problems."
Applications of Machine Learning collection from the Alberta Ingenuity Centre for Machine Learning.
"Grammatical Inference, variously refered to as automata induction, grammar induction, and automatic language acquisition, refers to the process of learning of grammars and languages from data. Machine learning of grammars finds a variety of applications in syntactic pattern recognition, adaptive intelligent agents, diagnosis, computational biology, systems modelling, prediction, natural language acquisition, data mining and knowledge discovery. ... This homepage is designed to be a centralized resource information on Grammatical Inference and its applications. We hope that this information will be useful to both newcomers to the field as well as seasoned campaigners"
Index of Machine Learning Courses. Maintained by Vasant Honavar, Artificial Intelligence Research Group, Department of Computer Science,
MLnet OiS. "Welcome to the MLnet Online Information Service (the successor of the ML-Archive at GMD). This site is dedicated to the field of machine learning, knowledge discovery, case-based reasoning, knowledge acquisition, and data mining. Get information about research groups and persons within the community. Browse through the list of software and data sets, and check out our events page for the latest calls for papers. Alternatively have a look at our list of job offerings if you are looking for a new opportunity within the field." This web site is funded by the European Commission. Here are some links to just a few of their collections:
- Applications of Machine Learning Methods
- Courses for Machine Learning, Knowledge Discovery, Data Mining
- Learning Methods, including: First Order Regression, Incremental decision tree learning, Naive Bayes, Neural Nets, Regression Rules, Star, Support Vector Machine (SVM) ... and many more.
Machine Learning at IBM. "The Machine Learning Group [
Machine Learning and Applied Statistics at Microsoft. "The Machine Learning and Applied Statistics (MLAS) group is focused on learning from data and data mining. By building software that automatically learns from data, we enable applications that (1) do intelligent tasks such as handwriting recognition and natural-language processing, and (2) help human data analysts more easily explore and better understand their data."
Machine Learning and Data Mining Group at the Austrian Research Institute for Artificial Intelligence (ÖFAI). Projects, publications, and more.
The Machine Learning Department, an academic department within Carnegie Mellon University's School of Computer Science and successor to CALD, the Center for Automated Learning and Discovery. "We focus on research and education in all areas of statistical machine learning."
- "What is Machine Learning? Machine Learning is a scientific field addressing the question 'How can we program systems to automatically learn and to improve with experience?' We study learning from many kinds of experience, such as learning to predict which medical patients will respond to which treatments, by analyzing experience captured in databases of online medical records. We also study mobile robots that learn how to successfully navigate based on experience they gather from sensors as they roam their environment, and computer aids for scientific discovery that combine initial scientific hypotheses with new experimental data to automatically produce refined scientific hypotheses that better fit observed data. To tackle these problems we develop algorithms that discover general conjectures and knowledge from specific data and experience, based on sound statistical and computational principles. We also develop theories of learning processes that characterize the fundamental nature of the computations and experience sufficient for successful learning in machines and in humans."
- Be sure to check out their collections of current research projects and past research projects.
Machine Learning Dictionary. Compiled by Bill Wilson, Associate Professor in the Artificial Intelligence Group,
Machine Learning in Games. Maintained by Jay Scott. "How computers can learn to get better at playing games. This site is for artificial intelligence researchers and intrepid game programmers. I describe game programs and their workings; they rely on heuristic search algorithms, neural networks, genetic algorithms, temporal differences, and other methods. I keep a big list of online research papers. And there's more."
Machine Learning and Inference (MLI) Laboratory at George Mason University (GMU) "conducts fundamental and experimental research on the development of intelligent systems capable of advanced forms of learning, inference, and knowledge generation, and applies them to real-world problems."
Machine Learning Resources. Maintained by David Aha. Links to a wealth of information await you at this site.
The Machine Learning Systems (MLS) Group at the Jet Propulsion Laboratory, California Institute of Technology. Read about projects such as OASIS, the Onboard Autonomous Science Investigation System: "Rover traverse distances are increasing at a faster rate than downlink capacity is increasing. As this trend continues, the quantity of data that can be returned to Earth per meter traversed is reduced. The capacity of the rover to collect data, however, remains high. This circumstance leads to an opportunity to increase mission science return by carefully selecting the data with the highest science interest for downlink. We have developed an onboard science analysis technology for increasing science return from missions."
"Sodarace [a joint venture between: soda and queen mary,
- ML Programs. You'll find FOCL, Hydra, and others.
- Repository. "This is a repository of databases, domain theories and data generators that are used by the machine learning community for the empirical analysis of machine learning algorithms."
- Research. "Machine learning investigates the mechanisms by which knowledge is acquired through experience. ... Our research involves the development and analysis of algorithms that identify patterns in observed data in order to make predictions about unseen data. New learning algorithms often result from research into the effect of problem properties on the accuracy and run-time of existing algorithms."
Machine Learning is an international forum for research on computational approaches to learning. The journal publishes articles reporting substantive results on a wide range of learning methods applied to a variety of learning problems, including but not limited to:
Learning Problems: Classification, regression, recognition, and prediction; Problem solving and planning; Reasoning and inference; Data mining; Web mining; Scientific discovery; Information retrieval; Natural language processing; Design and diagnosis; Vision and speech perception; Robotics and control; Combinatorial optimization; Game playing; Industrial, financial, and scientific applications of all kinds.
Learning Methods: Supervised and unsupervised learning methods (including learning decision and regression trees, rules, connectionist networks, probabilistic networks and other statistical models, inductive logic programming, case-based methods, ensemble methods, clustering, etc.); Reinforcement learning; Evolution-based methods; Explanation-based learning; Analogical learning methods; Automated knowledge acquisition; Learning from instruction; Visualization of patterns in data; Learning in integrated architectures; Multistrategy learning; Multi-agent learning.
1 comment:
Write a little more about deductive machine learning.
"At a general level, there are two types of learning: inductive, and deductive". I want to learn more about the deductive learning.