Kevin B. Korb

On April 25, 2011, in Bio, Presenter, Summit, by Adam A. Ford
<a href="http://2011.singularitysummit.com.au/wp-content/uploads/2011/03/kevin_b_korb_rc100x120.png"><img title="kevin_b_korb_rc100x120" src="http://2011.singularitysummit.com.au/wp-content/uploads/2011/03/kevin_b_korb_rc100x120.png" alt="Kevin B Korb" width="100" height="120" /></a>

Kevin B Korb

Kevin’s research is in artificial intelligence, the philosophy of science, and the interrelation between the two, especially the automation of scientific induction, or causal discovery. He is also co-founder of <em>Psyche: An Interdisciplinary Journal of Research on Consciousness</em>.

Recent presentations: <a href="http://www.csse.monash.edu.au/%7Ekorb/lmps.pdf"><em>The Philosophy of Computer Simulation</em></a>, an invited talk at the <a href="http://www.clmps2007.org/">13th International Congress of Logic, Methodology and Philosophy of Science, Beijing, 9-15 August 2007</a>.
<h2>Two Technical Reports</h2>
Kevin B. Korb, Carlo Kopp and Lloyd Allison (1997) <a href="http://www.csse.monash.edu.au/%7Ekorb/policy.pdf"><em>A Statement on Higher Education Policy in Australia</em></a>. Dept of Computer Science, Monash University, Melbourne, 1997. This is our submission to the West Higher Education Review Committee.

Kevin B. Korb (1998) <a href="http://www.csse.monash.edu.au/%7Ekorb/reswrite.pdf"><em>Research Writing in Computer Science</em></a>. This is an updated (1998) version of Technical Report 97/308, Dept of Computer Science, Monash University, Melbourne, 1997. It explains some of what goes into good research writing, including argument analysis and an understanding of the cognitive errors people are prone to make. It also discusses research ethics.

<h2>“The Ethics of AI”</h2>

Kevin’s paper on the subject can be <a href="http://2011.singularitysummit.com.au/wp-content/uploads/2011/04/Kevin-Korb-The-Ethics-of-AI-IEEE.pdf" target="_blank">found here in PDF format: Kevin Korb – The Ethics of AI (IEEE)</a>.

Kevin gave a presentation at the Singularity Summit AU 2010.
<strong>Abstract:</strong> “There are two questions about the ethics of artificial intelligence (AI) which are central:
<ul>
<li>How can we build an ethical AI?</li>
<li>Can we build an AI ethically?</li>
</ul>
The first question concerns the kinds of AI we might achieve — moral, immoral or amoral. The second concerns the ethics of our achieving such an AI. They are more closely related than a first glance might reveal. For much of technology, the National Rifle Association’s neutrality argument might conceivably apply: “guns don’t kill people, people kill people.” But if we build a genuine, autonomous AI, we arguably will have to have built an artificial moral agent, an agent capable of both ethical and unethical behavior. The possibility of one of our artifacts behaving unethically raises moral problems for their development that no other technology can. Both questions presume a positive answer to a prior question: Can we build an AI at all? We shall begin our review there.”

<iframe src="http://player.vimeo.com/video/16675122" width="400" height="225" frameborder="0"></iframe><p><a href="http://vimeo.com/16675122">Kevin B. Korb – “The Ethics of AI” – SingSum AU 2010</a> from <a href="http://vimeo.com/singularity">Adam A. Ford</a> on <a href="http://vimeo.com">Vimeo</a>.</p>

<h2>Computational Intelligence Society</h2>
<a href="http://ieee-cis.org/" target="_blank"><img src="http://2011.singularitysummit.com.au/wp-content/uploads/2011/04/banner_cis_home-300x56.jpg" alt="IEEE Computational Intelligence Society" title="banner_cis_home" width="300" height="56" /></a>

<h2>Current Research Projects and Publications</h2>
For a complete list of my publications since 1993 see the <a href="http://www.csse.monash.edu.au/cgi-bin/pub_search?publication_type=0&amp;year=&amp;authors=korb&amp;title=" target="_blank">publication page</a> at Monash University.
<ul>
<li>
<h4>Causal Discovery</h4>
The aim of this project is to develop methods for automating the learning of causal structure from observational and experimental data. This became a very hot topic in the artificial intelligence and statistics communities during the 1990s, as the importance of graphical representations (Bayesian networks, causal models) of probabilistic reasoning for AI became clear and the need to automate their learning grew. The project has generated a number of programs which learn causal models from observational data, using greedy search, genetic algorithms and stochastic sampling to search the space of causal models, and a Minimum Message Length (MML) encoding to weigh them by posterior probability. The programs can discover networks with discrete or continuous variables.
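
To give a feel for the approach, here is a minimal, hypothetical sketch of greedy structure search over binary variables, scored by a two-part message length. It uses a BIC-style penalty as a crude stand-in for a full MML encoding, and omits the genetic-algorithm and stochastic-sampling searches, continuous variables, and everything else the real programs handle.
<pre>
import itertools, math

def log_likelihood(data, child, parents):
    # Maximum-likelihood log-probability of child given its parents.
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        cell = counts.setdefault(key, [0, 0])
        cell[row[child]] += 1
    ll = 0.0
    for n0, n1 in counts.values():
        n = n0 + n1
        for k in (n0, n1):
            if k:
                ll += k * math.log(k / n)
    return ll

def message_length(data, parent_sets):
    # Two-part score: parameter cost plus data cost (shorter is better).
    n = len(data)
    total = 0.0
    for child, parents in parent_sets.items():
        n_params = 2 ** len(parents)           # one Bernoulli per parent state
        total += 0.5 * n_params * math.log(n)  # BIC-style stand-in for MML cost
        total -= log_likelihood(data, child, parents)
    return total

def creates_cycle(parent_sets, child, parent):
    # Adding parent -> child cycles iff child is already an ancestor of parent.
    stack, seen = [parent], set()
    while stack:
        v = stack.pop()
        if v == child:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(parent_sets[v])
    return False

def greedy_search(data, variables):
    parent_sets = {v: () for v in variables}
    best = message_length(data, parent_sets)
    improved = True
    while improved:
        improved = False
        for child, parent in itertools.permutations(variables, 2):
            if parent in parent_sets[child] or creates_cycle(parent_sets, child, parent):
                continue
            trial = dict(parent_sets)
            trial[child] = parent_sets[child] + (parent,)
            score = message_length(data, trial)
            if best > score:                   # shorter message wins
                parent_sets, best, improved = trial, score, True
    return parent_sets

# Toy data: X mostly determines Y (every fifth case is flipped).
data = [{"X": x, "Y": x if i % 5 else 1 - x}
        for i, x in enumerate([0, 1] * 50)]
print(greedy_search(data, ["X", "Y"]))
</pre>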

<em>Publications:</em>
<ul>
<li><em>Bayesian Artificial Intelligence</em> (2004) (with Ann Nicholson) is our textbook on knowledge engineering and data mining with Bayesian networks, published by <a href="http://www.crcpress.com/">CRC Press</a>. See our <a href="http://www.csse.monash.edu.au/bai/" target="_blank">BAI book page</a> for material supplementing the book, including illustrative source code, networks for use with problems and an updated appendix reporting Bayesian net and causal discovery tools.</li>
<li><a href="http://www.csse.monash.edu.au/%7Ekorb/pubs/aug.pdf">The Power of Intervention</a> (with Erik Nyberg). Forthcoming in <em>Minds and Machines</em>. We present a mathematical theory of causal intervention in linear models, demonstrating that, while causal faithfulness or simplicity may be undesirable in special cases, their counterparts in the augmentation (intervention) space are desirable. We prove that (in somewhat idealized circumstances) interventions have the capability of entirely eliminating models alternative to the truth.</li>
<li><a href="http://www.csse.monash.edu.au/%7Ekorb/pubs/CPReview.pdf">An Information-theoretic Approach to Causal Power</a> (with Luke Hope). Technical Report 2005/176. We are developing a new metric for assessing causal power in Bayesian networks. Unlike the metrics of Glymour, Cheng and Hiddleston, ours allows for the full representational power of Bayesian networks, including cases of intransitive causality. A <a href="http://www.csse.monash.edu.au/%7Ekorb/pubs/CPReview-poster.pdf">shortened version</a> was published at the Australasian AI 2005 conference.</li>
<li><a href="http://www.csse.monash.edu.au/%7Ekorb/pubs/intervene.pdf">Varieties of Causal Intervention</a> (with Lucas R. Hope, Ann E. Nicholson and Karl Axnick). PRICAI 2004. The use of Bayesian networks for modeling causal systems has achieved widespread recognition with Judea Pearl’s <em>Causality</em> (2000). There, Pearl developed his do-calculus for reasoning about the effects of deterministic causal interventions on a system. Here we discuss some of the different kinds of intervention that arise when indeterministic interventions are allowed, generalizing Pearl’s account. We also point out the danger of the naive use of Bayesian networks for causal reasoning, which can lead to the mis-estimation of causal effects (see the sketch following this list). We illustrate these ideas with a graphical user interface we have developed for causal modeling.</li>
</ul>
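
The mis-estimation danger mentioned in the last item above can be shown in a few lines. The sketch below (illustrative only, with invented numbers) computes observational and interventional distributions in a small confounded network Z -> X, Z -> Y, X -> Y; under do(X) the factor P(X|Z) is dropped, following Pearl’s truncated factorization, and the two answers diverge.
<pre>
from itertools import product

# Invented CPTs for the network Z -> X, Z -> Y, X -> Y (all binary).
P_z1 = 0.5                                   # P(Z=1)
P_x1_given_z = {0: 0.9, 1: 0.2}              # P(X=1 | Z=z)
P_y1_given_zx = {(0, 0): 0.1, (0, 1): 0.8,
                 (1, 0): 0.3, (1, 1): 0.9}   # P(Y=1 | Z=z, X=x)

def joint(z, x, y, do_x=False):
    # Chain-rule joint; under do(X) the factor P(X|Z) is dropped (set to 1),
    # which is Pearl's truncated factorization.
    pz = P_z1 if z == 1 else 1 - P_z1
    px = 1.0 if do_x else (P_x1_given_z[z] if x == 1 else 1 - P_x1_given_z[z])
    py = P_y1_given_zx[(z, x)] if y == 1 else 1 - P_y1_given_zx[(z, x)]
    return pz * px * py

# Observing X=1: condition on the joint distribution.
num = sum(joint(z, 1, 1) for z in (0, 1))
den = sum(joint(z, 1, y) for z, y in product((0, 1), repeat=2))
print("P(Y=1 | X=1)     =", round(num / den, 3))    # 0.818: inflated by Z

# Intervening to set X=1: the Z -> X arc is cut.
p_do = sum(joint(z, 1, 1, do_x=True) for z in (0, 1))
print("P(Y=1 | do(X=1)) =", round(p_do, 3))         # 0.850
</pre>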
</li>
<li>
<h4>Probabilistic Causality</h4>
The most plausible understanding of the probabilities in causal Bayesian networks is in terms of the propensity interpretation (see, e.g., recent work by Donald Gillies). On that basis it is possible to start making sense of type and token causal relations in reference to Bayesian networks.
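
As a toy illustration of why such relations must be assessed relative to homogeneous background contexts, the sketch below (all numbers invented) shows a Simpson’s-paradox case: the candidate cause raises the probability of the effect within each context, yet pooling the contexts reverses the comparison.
<pre>
# Within each homogeneous background context B, C raises the probability
# of E; pooling the contexts together reverses the comparison (Simpson's
# paradox), so causal relevance must be judged within homogeneous contexts.
contexts = {
    # name: (P(E|C,B), P(E|not-C,B), P(C|B), P(B))
    "B=0": (0.2, 0.1, 0.9, 0.5),
    "B=1": (0.9, 0.8, 0.1, 0.5),
}

for name, (p_e_c, p_e_nc, _, _) in contexts.items():
    print(name, "C raises P(E):", p_e_c > p_e_nc)    # True in both

def pooled(given_c):
    # P(E|C) or P(E|not-C) across contexts, by the law of total probability.
    weights, values = [], []
    for p_e_c, p_e_nc, p_c, p_b in contexts.values():
        weights.append((p_c if given_c else 1 - p_c) * p_b)
        values.append(p_e_c if given_c else p_e_nc)
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

print("pooled P(E|C)     =", round(pooled(True), 2))   # 0.27
print("pooled P(E|not-C) =", round(pooled(False), 2))  # 0.73
</pre>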

<em>Publications:</em>
<ul>
<li><a href="http://www.csse.monash.edu.au/%7Ekorb/pubs/unan.pdf">A Criterion of Probabilistic Causality</a> (with Charles Twardy). <em>Philosophy of Science</em>, 2004. The investigation of probabilistic causality has been plagued by a variety of misconceptions and misunderstandings. One has been the thought that the aim of the probabilistic account of causality is the reduction of causal claims to probabilistic claims. Nancy Cartwright (1979) has clearly rebutted that idea. Another ill-conceived idea continues to haunt the debate, namely the idea that contextual unanimity can do the work of objective homogeneity. It cannot. We argue that only objective homogeneity in combination with a causal interpretation of Bayesian networks can provide the desired criterion of probabilistic causality.</li>
<li><a href="http://www.csse.monash.edu.au/%7Ekorb/pubs/techrept.pdf">Causal Reasoning with Causal Models</a> (with Charles Twardy, Toby Handfield and Graham Oppy). Technical Report 2005/183; under submission to <em>Synthese</em>. We introduce and discuss the use of Bayesian networks for causal modeling. Despite their growing popularity and utility in this application, numerous objections to it have been raised. We address the claims that Chickering’s arc reversal rule undermines a causal interpretation and that failures of Reichenbach’s Common Cause Principle, or again failures of faithfulness, invalidate causal modeling. We also argue against Pearl’s deterministic interpretation of causal models. Against these objections we propose new model-building principles which evade some of the difficulties, and we put forward a concept of causal faithfulness which holds when faithfulness simpliciter fails. Finally, we particularize our account of type causal relevance to token causal relevance, providing an alternative to the recent deterministic accounts of token causation due to Hitchcock and to Halpern and Pearl.</li>
</ul>
</li>
<li>
<h4>Evaluation Theory</h4>
Recently Luke Hope and I have been investigating means of evaluating machine learning algorithms when cost-sensitive classification is not an option, using the concept of information reward. I am also currently investigating improved methods for assessing causal discovery algorithms in particular.
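
The flavor of the idea can be conveyed with a minimal sketch. The code below (not the metric from the publications that follow) contrasts predictive accuracy with a Good-style logarithmic information reward, under which confident correct probability assessments earn more and confident errors are penalized; the data are invented.
<pre>
import math

# (predicted probability of class 1, true class) -- invented examples.
predictions = [(0.9, 1), (0.8, 1), (0.6, 0), (0.95, 1), (0.4, 0)]

# Predictive accuracy ignores how confident each prediction was.
accuracy = sum((p > 0.5) == bool(y) for p, y in predictions) / len(predictions)

def info_reward(preds):
    # Reward 1 + log2(p) for the probability assigned to the actual outcome:
    # zero for a coin-flip guess, positive when better informed, negative
    # when confidently wrong.
    total = 0.0
    for p, y in preds:
        p_true = p if y == 1 else 1 - p
        total += 1 + math.log2(p_true)
    return total / len(preds)

print("predictive accuracy:", accuracy)
print("mean information reward (bits):", round(info_reward(predictions), 3))
</pre>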

<em>Publications:</em>
<ul>
<li><a href="http://www.csse.monash.edu.au/%7Ekorb/pubs/inforeward.pdf">A Bayesian Metric for Evaluating Machine Learning Algorithms</a> (with Luke Hope). The Australasian AI Conference, 2004. How to assess the performance of machine learning algorithms is a problem of increasing interest and urgency as data mining applications of myriad algorithms grow. The standard approach of employing predictive accuracy has, we argue rightly, been losing favor in the AI community. The alternative of cost-sensitive metrics provides a far better approach, given the availability of useful cost functions. For situations where no useful cost function can be found we need other alternatives to predictive accuracy. We propose that information-theoretic reward functions be applied. The first such proposal for assessing specifically machine learning algorithms was made by Kononenko and Bratko (1991). Here we improve upon our earlier Bayesian metric (<a href="http://www.csse.monash.edu.au/%7Ekorb/pubs/ai02.pdf">Hope and Korb, 2002</a>), which provides a fair betting assessment of any machine learner. We include an empirical analysis of various Bayesian classification learners, ranging from Naive Bayes learners to causal discovery algorithms.</li>
</ul>
</li>
<li>
<h4>Informal Logic and Argumentation</h4>
Our NAG project produced the first computational model of argumentation to employ Bayesian networks to model inductive and uncertain reasoning in argument generation and analysis. Two distinct networks are employed, one for modeling the cognition of the human user and one for modeling normative reasoning in the domain. The human cognitive model is the first to combine a computational model of Bayesian reasoning with the statistical illusions widely studied by cognitive psychologists.

I am currently working on applying Bayesian principles to improve argument analysis.
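
A minimal, hypothetical example of the kind of analysis involved: measure an argument’s evidential strength by the likelihood ratio (Bayes factor) its evidence confers on its conclusion, and update the prior odds accordingly. The numbers are invented; it illustrates, in miniature, how even an “argument from ignorance” can carry modest evidential weight.
<pre>
def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' rule in odds form: posterior odds = prior odds * Bayes factor.
    bayes_factor = p_e_given_h / p_e_given_not_h
    odds = (prior / (1 - prior)) * bayes_factor
    return odds / (1 + odds)

# H: "the drug is safe"; E: "a thorough search found no adverse effects".
# E is more likely if H is true than if it is false, so even this
# argument from ignorance shifts belief, modestly. (Numbers invented.)
print(round(posterior(prior=0.5, p_e_given_h=0.95, p_e_given_not_h=0.30), 3))
</pre>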

<em>Publications:</em>
<ul>
<li><a href="http://www.csse.monash.edu.au/%7Ekorb/pubs/fallacy.pdf">Bayesian Informal Logic and Fallacy</a>. <em>Informal Logic</em>, 2005. Bayesian reasoning has been applied formally to statistical inference, machine learning and analysing scientific method. Here I apply it informally to more common forms of inference, namely natural language arguments. I analyse a variety of traditional fallacies, deductive, inductive and causal, and find more merit in them than is generally acknowledged. Bayesian principles provide a framework for understanding ordinary arguments which is well worth developing.</li>
<li><a href="http://www.csse.monash.edu.au/%7Ekorb/pubs/cogsci97.pdf">A Cognitive Model of Argumentation</a> (with Richard McConachy and Ingrid Zukerman). <em>Cognitive Science, 1997</em>, Stanford. In order to argue effectively one must have a grasp of both the normative strength of the inferences that come into play and the effect that the proposed inferences will have on the audience. In this paper we describe a program, <em>NAG</em> (Nice Argument Generator), that attempts to generate arguments that are both persuasive and correct. To do so NAG incorporates two models: a normative model, for judging the normative correctness of an argument, and a user model, for judging the persuasive effect of the same argument upon the user. The user model incorporates some of the common errors humans make when reasoning. In order to limit the scope of its reasoning during argument evaluation and generation, NAG explicitly simulates attentional processes in both the user and the normative models.</li>
</ul>
</li>
<li>
<h4>Artificial Evolution</h4>
This project applies artificial life simulation techniques to issues arising in evolution theory and evolutionary psychology.
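
For readers unfamiliar with the genre, the toy loop below sketches the bare bones of such a simulation (none of it is from the project’s actual code): agents carry a heritable trait, reproduce in proportion to fitness, and mutate, and the population mean drifts toward the fitness optimum.
<pre>
import random

random.seed(1)
OPTIMUM = 0.8

def fitness(trait):
    # Fitness falls off linearly with distance from the optimum trait value.
    return max(0.0, 1.0 - abs(trait - OPTIMUM))

population = [random.random() for _ in range(200)]
for generation in range(50):
    # Fitness-proportional selection with small Gaussian mutations.
    weights = [fitness(t) for t in population]
    parents = random.choices(population, weights=weights, k=len(population))
    population = [min(1.0, max(0.0, p + random.gauss(0, 0.02)))
                  for p in parents]

print("mean trait after 50 generations:",
      round(sum(population) / len(population), 3))
</pre>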

<em>Publications:</em>
<ul>
<li><a href="http://www.csse.monash.edu.au/%7Ekorb/pubs/alife05.pdf">The Evolution of Aging</a> (with Owen Woodberry and Ann Nicholson). <em>Australasian ALife 2005</em>. Based on the early group selection model of Gilpin (1975) for the evolution of predatory restraint, Mitteldorf (2004) designed an ALife simulation that models the evolution of aging and population regulation. Mitteldorf sees the evolution of aging as a case of extreme altruism “in the sense that the cost to the individual is high and direct, while the benefit to the population is far too diffuse to be accounted for by kin selection.” We demonstrate that Mitteldorf’s simulation is dependent on kin selection, by reproducing his ALife simulations and then introducing a mechanism to remove all and only the effects of kin selection within it. The result is the collapse of group selection in the simulation, suggesting a new understanding of the relation between group and kin selection is needed.</li>
<li><a href="http://www.csse.monash.edu.au/%7Ekorb/pubs/pi05.pdf">An ALife Investigation of the Origins of Dimorphic Parental Investments</a> (with Steven Mascaro and Ann Nicholson). <em>Australasian ALife 2005</em>. When Trivers (1972) introduced the concept of parental investment to evolutionary theory, he clarified many of the issues surrounding sexual selection. In particular, he demonstrated how sex differences in parental investment can explain how sexually dimorphic structure and behaviour develops in a species. However, the origins of dimorphic parental investments also need explanation. Trivers and others have suggested several hypotheses, including ones based on prior investment, desertion, paternal uncertainty, association with the offspring and chance dimorphism. In this paper, we explore these hypotheses within the setting of an ALife simulation. We find support for all these alternatives, barring the prior investment hypothesis.</li>
</ul>
</li>
<li>
<h4>Philosophy of AI</h4>
Are artificial intelligences possible? What kinds of design might they take? Can logical reasoning suffice for an intelligence? What to think of McCarthy and Hayes’s Frame Problem? What is thinking? What are minds? And other ponderables…

<em>Publications:</em>
<ul>
<li><a href="http://www.csse.monash.edu.au/%7Ekorb/pubs/mam.pdf">Machine Learning as Philosophy of Science</a>. <em>Minds and Machines</em>, 2004. I consider three aspects in which machine learning and philosophy of science can illuminate each other: methodology, inductive simplicity and theoretical terms. I examine the relations between the two subjects and conclude by claiming these relations to be very close.</li>
<li><a href="http://www.csse.monash.edu.au/%7Ekorb/pubs/induction.pdf">Inductive Learning and Defeasible Inference</a>. <em>Journal of Experimental and Theoretical AI</em>, 1995. The symbolic approach to artificial intelligence research has dominated AI until recent times. It continues to dominate work in the areas of inference and reasoning in artificial systems. I argue, however, that non-quantitative methods are inherently insufficient for supporting inductive inference. In particular there are reasons to believe that purely deductive techniques—as advocated by the naive physics community—and their nonmonotonic progeny are insufficient for supplying means for the development of the autonomous intelligence that AI has as its primary goal. The lottery paradox points to fundamental difficulties for any such non-quantitative approach to AI. I suggest that a hybrid system employing both quantitative and non-quantitative modes of reasoning is the most promising avenue for developing an intelligence that can avoid both the paralysis induced by computational complexity and the inductive paralysis to which purely symbolic approaches succumb.</li>
<li><a href="http://www.csse.monash.edu.au/%7Ekorb/pubs/dreyfus.pdf">Symbolicism and Connectionism: AI Back at a Join Point</a>. <em>Information, Statistics and Induction in Science (ISIS), 1996</em>. Artificial intelligence has always been a controversial field of research, assaulted from without by philosophers disputing its possibility and riven within by divisions of ideology and method. Recently the re-emergence of neural network models of cognition has combined with the criticisms of Hubert Dreyfus to challenge the pre-eminent position of symbolicist ideology and method within artificial intelligence. Although the conceits of symbolicism are well worth exposing, the marriage between connectionism and Dreyfus’s philosophical viewpoint is unnatural and much of the disputation between connectionism and “traditional” artificial intelligence misbegotten.</li>
<li>The Frame Problem: An AI Fairy Story. <em>Minds and Machines</em>, 1998. I analyze the frame problem and its relation to other epistemological problems for artificial intelligence, such as the problem of induction, the qualification problem and the “general” AI problem. I dispute the claim that extensions to logic (default logic and circumscriptive logic) will ever offer a viable way out of the problem. In the discussion it will become clear that the original frame problem is really a fairy tale: as originally presented, and as tools for its solution are circumscribed by Pat Hayes, the problem is entertaining, but incapable of resolution. The solution to the frame problem becomes available, and even apparent, when we remove artificial restrictions on its treatment and understand the interrelation between the frame problem and the many other problems for artificial epistemology. I present the solution to the frame problem: an adequate theory and method for the machine induction of causal structure.</li>
</ul>
</li>
</ul>
Kevin also heads the IEEE group ‘Computational Intelligence’.

<iframe src="http://player.vimeo.com/video/20590796" width="400" height="225" frameborder="0"></iframe><p><a href="http://vimeo.com/20590796">Panel AI Roadmaps – Singularity Summit AU 2010</a> from <a href="http://vimeo.com/singularity">Adam A. Ford</a> on <a href="http://vimeo.com">Vimeo</a>.</p>
