\documentclass{beamer}
\usepackage{hyperref}
\usepackage{graphicx}
%\usepackage[T1]{fontenc}
\usepackage{verbatim}
%\usepackage{lmodern}
\usepackage[english]{babel}
%\usepackage{natbib}
\mode<presentation>
\usetheme{CambridgeUS}
\usefonttheme{professionalfonts}
\setbeamercovered{transparent}
%\definecolor{blue}{rgb}{0.1,0.1,0.7}
\definecolor{blue}{rgb}{0.51,0.0,0.0} %bordeaux
\setbeamertemplate{navigation symbols}{}

\title[] % (optional, use only with long paper titles)
{A multiple process solution \\ 
to the logical problem of language acquisition\\
Brian MacWhinney}

\subtitle[]
{\emph{in Journal of Child Language, vol.~31 (2004), pp. 883--914}}

\author[] % (optional, use only with lots of authors)
{Andreas van Cranenburgh (0440949)}

\institute[] % (optional, but mostly needed)
{Cognitive Models of Language, University of Amsterdam}

\date[] % (optional)
{\today}

\subject{Talks}
% If you wish to uncover everything in a step-wise fashion, uncomment
% the following command:

%\beamerdefaultoverlayspecification{<+->}

\begin{document}
\section{Title}

%\begin{frame}
% Can't ... resist ...
%\includegraphics[scale=0.37]{tufte}
%\end{frame}

\begin{frame}
  \titlepage
\end{frame}


\renewcommand{\emph}[1]{\textcolor{blue}{#1}}
\section{Main talk}
\begin{frame}
\frametitle{Outline}

\begin{itemize}
\item Logical problem of language acquisition:
	\begin{itemize}
	\item input to the learner is inconsistent and incomplete (not enough positive or negative evidence)
	\item corrective feedback is ignored
	\item ergo, children rely on innate, universal constraints: \emph{UG}
	\end{itemize}
\item The article presents alternative solutions:
	\begin{itemize}
	\item conservatism
	\item item-based learning
	\item competition
	\item cue-construction
	\item monitoring
	\item indirect negative evidence
	\end{itemize}
\item Studies reveal that the input to children is not as impoverished as claimed
\item Many syntactic structures claimed to be unlearnable can in fact be
	learned from demonstrably available positive data.
\end{itemize}

\end{frame}

\begin{frame}
\frametitle{The Gold framework} 

Inducing languages from evidence alone (empiricism according to rationalists)

Assumptions:
\begin{itemize}
\item No negative evidence
\item Input is just a set of sentences (`text presentation')
\item Output is an exactly matching formal grammar
\end{itemize}

Results:
\begin{itemize}
\item A class of languages containing all finite languages and an infinite
	one is \emph{not learnable}, \\
	not even in infinite time! (Gold 1967)
\item Without negative evidence or innate constraints \\
 there can be no recovery from overgeneralization
\item \emph{But}: input is more than just words (context, meaning, intentions)
\item \emph{But}: the target is probably a probabilistic rather than an exact grammar
\item \emph{But}: language is stochastic (past experience is representative of
				language as a whole)
\end{itemize}

Unfortunately, only the second \emph{but} is discussed by MacWhinney.
\end{frame}
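\begin{frame}
\frametitle{Gold's result, stated formally}

A sketch of the formal result behind the first point (the precise statement
is not given in the article):

A learner \emph{identifies a class $\mathcal{L}$ in the limit} if, for every
$L \in \mathcal{L}$ and every text (enumeration) of $L$, its guesses
converge after finitely many inputs to a correct grammar for $L$.

\begin{quote}
Gold (1967): any class containing all finite languages and at least one
infinite language is \emph{not} identifiable in the limit from text alone
--- positive data can never contradict a premature guess of the infinite
language.
\end{quote}
\end{frame}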

\begin{frame}
\frametitle{Error-free learning} 

In the 1980s, poverty-of-the-stimulus (POS) arguments shifted from negative to positive evidence.

Claim: \emph{error-free learning} without positive evidence, e.g. for:

\begin{itemize}
\item Negative polarity items (*I ever walked)
\item Binding conditions (*He$_1$ said that Bill$_2$ hurt himself$_1$). \\
	\emph{But}: children do make mistakes.
\item \textsc{aux} fronting (*is the man who \_ talking is strange?). \\
	\emph{But}: lots of positive evidence from wh-questions.
\end{itemize}

Conclusion: no evidence for error-free learning without positive evidence.

\end{frame}

\begin{frame}
\frametitle{Solutions}

MacWhinney presents an \emph{emergentist} account. 

Acquisition is multiply-buffered by the following processes:

\begin{itemize}
\item conservatism: wait for positive evidence before extending grammar
\item item-based learning: constructions are first acquired separately, \\
	only later integrated into grammar
\end{itemize}

Recovery from overgeneralization:
\begin{itemize}
\item competition: different forms compete based on analogy and evidence
	(`goed' receives analogic pressure, `went' has evidence)
\item cue-construction: arbitrary features added to forms to restrict
		application of some construction (*I watered the flowers flat)
\item monitoring: listen to self, compare own productions against adult language
\item indirect negative evidence: keep track of frequencies, low counts are
				`negative' evidence 
\end{itemize}

\end{frame}

\begin{frame}
\frametitle{Competition}

\begin{itemize}
\item An item-based construction is a mapping of arguments to predicates.
\item Each has one correct mapping (CM) and many incorrect mappings (IMs)
\item Both have analogic support, but only the CM has positive evidence
\item Groups of CMs form feature-based constructions, which support analogy making.
\item A construction is learned when the CM dominates the IMs.
\end{itemize}

\begin{quote}
``In essence, the logical problem of language acquisition is then restated as
the process of understanding how analogical pressures lead to learning courses
that deviate from what is predicted by simple learning on positive exemplars
for individual item-based constructions.''
\end{quote}
\end{frame}


\begin{frame}
\frametitle{Consequences} 

\begin{itemize}
\item Recovery from overgeneralization is no longer a logical problem
	\begin{itemize}
	\item Little evidence for truly error-free learning without relevant positive data 
	\item Recovery possible with four processes: 
		competition, cue construction, monitoring, indirect negative evidence
	\item Alternative characterization of target grammar
	\item Input to children is not unparsable or degenerate
	\end{itemize}
\item Item-based pattern is pivotal, positive data crucial
\item There is an alternative to UG.
\end{itemize}

\end{frame}

\begin{frame}
\frametitle{Criticism}

\begin{itemize}
\item Meaning is only mentioned in passing (`predicates'), but meaning has
	a rich structure (synonymy, homonymy, hyperonymy, etc.)
\item Competition model: how to figure out which items are in competition with each other?
\item The Gold framework is irrelevant; e.g. the Probably Approximately Correct (PAC) framework is more appropriate. Two results:
	\begin{itemize}
	\item a learnable language must have a finite VC-dimension (complexity of hypothesis space)
	\item the learning algorithm must be efficient (`try all grammars' is not feasible)
	\end{itemize}

	What does this mean for linguistics? Either the space of grammars is
		strongly constrained (UG), or the learning algorithm takes
		shortcuts.

\item Naigles: children already generalize in comprehension before they do in production, which casts doubt on the role of conservatism; comprehension is not as item-based as production

\item MacWhinney does not address differences between comprehension and production
\end{itemize}
\end{frame}
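
\begin{frame}
\frametitle{Appendix: the PAC criterion (sketch)}

A rough sketch of the bound behind the VC-dimension remark (the details are
not in the article): a hypothesis class of finite VC-dimension $d$ is
PAC-learnable from a sample of size
\[
m = O\!\left(\frac{1}{\varepsilon}\left(d \log\frac{1}{\varepsilon}
	+ \log\frac{1}{\delta}\right)\right),
\]
where $\varepsilon$ is the error tolerance and $\delta$ the allowed failure
probability. An infinite VC-dimension makes the class unlearnable in this
sense --- hence the dilemma: constrain the grammar space, or let the
algorithm take shortcuts.
\end{frame}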

\end{document}

