2017.10.30

Date | Thu 02 Nov

Time | 10:30 — 11:30

Location | 5342-333, Ada

**Abstract**

A typical machine learning task involves several steps, e.g., data representation, modeling, inference, and validation. Moreover, the learning process might need to adapt to different situations, such as computational bottlenecks, the lack of annotated data, and external feedback. In this talk, I will describe a generic framework for learning in the absence of annotated data, while taking computational aspects into account. First, for data representation, I will introduce a general-purpose approach for applying graph-based distance measures, in particular minimax distances, to many machine learning problems, with attention to computational efficiency. Second, I will propose efficient methods for modeling and extracting clusters over networks and graphs, based on detecting transitions of replicator dynamics, developed in the context of evolutionary game theory. Third, I will describe an adaptive information-theoretic principle for validating different machine learning models; in particular, I will demonstrate its application to validating different representations and clustering algorithms. Finally, I will briefly introduce a generic framework for interactive learning and sequential decision making, where the learning process must adapt appropriately to external feedback and observations.
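To give a flavor of the first topic: the minimax distance between two nodes of a weighted graph is the smallest, over all connecting paths, of the largest edge weight on the path. The sketch below is not the speaker's method, just a minimal illustration of the definition, using a Floyd–Warshall-style recurrence where addition is replaced by `max`; the graph and function name are hypothetical.

```python
def minimax_distances(weights):
    """All-pairs minimax distances on a weighted graph.

    weights[i][j] is the direct edge weight, or float('inf') if the
    edge is absent. The minimax distance from i to j is the minimum,
    over all paths, of the maximum edge weight along the path. It
    satisfies d[i][j] = min(d[i][j], max(d[i][k], d[k][j])), so a
    Floyd-Warshall sweep with 'max' in place of '+' computes it.
    """
    n = len(weights)
    d = [row[:] for row in weights]
    for i in range(n):
        d[i][i] = 0.0
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # bottleneck of the best path through k
                via_k = max(d[i][k], d[k][j])
                if via_k < d[i][j]:
                    d[i][j] = via_k
    return d

# Toy chain 0-1-2-3; nodes 0 and 3 have no direct edge.
INF = float("inf")
W = [
    [0.0, 1.0, INF, INF],
    [1.0, 0.0, 2.0, INF],
    [INF, 2.0, 0.0, 1.5],
    [INF, INF, 1.5, 0.0],
]
D = minimax_distances(W)
# The only 0->3 path is 0-1-2-3, so its bottleneck is max(1.0, 2.0, 1.5) = 2.0
```

Unlike shortest-path distances, minimax distances ignore path length and only penalize the single weakest link, which is what makes them useful for detecting elongated cluster structures.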
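For the second topic, discrete replicator dynamics from evolutionary game theory iterate a distribution over nodes against a similarity matrix; the stationary distribution concentrates on a cohesive subgroup, which can be read off as a cluster. The following is a minimal sketch of the plain replicator update on a toy matrix, not the transition-detection method of the talk; the matrix and function name are illustrative assumptions.

```python
def replicator_dynamics(A, steps=200):
    """Discrete replicator dynamics on a nonnegative similarity matrix A.

    Update: x_i <- x_i * (A x)_i / (x' A x). Starting from the uniform
    distribution, the mass concentrates on a strongly intra-similar
    group of nodes; the support of the limit indicates a cluster.
    """
    n = len(A)
    x = [1.0 / n] * n  # barycenter of the probability simplex
    for _ in range(steps):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        xAx = sum(x[i] * Ax[i] for i in range(n))
        if xAx == 0:
            break
        x = [x[i] * Ax[i] / xAx for i in range(n)]
    return x

# Toy graph: nodes 0 and 1 are strongly similar (weight 5),
# nodes 2 and 3 only weakly (weight 1), no cross-group similarity.
A = [
    [0.0, 5.0, 0.0, 0.0],
    [5.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 0.0],
]
x = replicator_dynamics(A)
# The mass converges onto the tighter pair {0, 1}
```

Since nodes with above-average "payoff" (A x)_i gain mass at each step, the dynamics act as a selection process that suppresses loosely connected nodes.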
