Learning, decision-theoretic analysis, search, and visualization in EDA:

Here are some projects I'm interested in with the WELD group here at EECS UC Berkeley. For a more comprehensive list by the group, see Prof. Newton's page.

Statistical and decision-theoretic analysis of very large search problems: perform context-sensitive data analysis of the time complexity and cost reduction of different search operators, so that decision theory can be applied to the task of searching with those operators.
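As a minimal sketch of what such an analysis might feed into (the names `OperatorStats` and `pick_operator`, and the linear `time_price` trade-off between time and solution quality, are all invented for illustration), a search controller could keep running cost and payoff estimates per operator and always apply whichever operator currently has the best expected net utility:

```python
class OperatorStats:
    """Running estimates of a search operator's average cost and payoff."""

    def __init__(self):
        self.trials = 0
        self.total_cost = 0.0   # e.g. time spent applying the operator
        self.total_gain = 0.0   # e.g. reduction in the objective achieved

    def update(self, cost, gain):
        self.trials += 1
        self.total_cost += cost
        self.total_gain += gain

    def expected_net(self, time_price):
        """Expected gain minus the (priced) expected time cost."""
        if self.trials == 0:
            return float("inf")  # untried operators get explored first
        return (self.total_gain - time_price * self.total_cost) / self.trials


def pick_operator(stats, time_price=1.0):
    """Choose the operator with the best estimated net utility."""
    return max(stats, key=lambda op: stats[op].expected_net(time_price))
```

A real system would also need to discount stale statistics and condition on context as described above; this sketch treats each operator's statistics as context-free.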

Learning and data analysis (or mining) applied to approximate and/or hierarchical search spaces: in the context of a complex search problem, how can a useful abstraction space be represented so that an adaptive system can optimize the hierarchies or approximations made during search, and allow interaction between levels of the hierarchy?
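A very small example of searching through an abstraction hierarchy (a sketch only; the grid-refinement scheme and parameter names are invented): a coarse grid abstracts a continuous space, and the search greedily commits to the best coarse cell and refines within it. An adaptive system of the kind proposed here would instead learn how many levels to use and when to back up across levels:

```python
def coarse_to_fine(cost, lo, hi, levels=4, samples=9):
    """Hierarchical search over [lo, hi]: evaluate a coarse grid of
    `samples` points, then zoom into the best cell, `levels` times."""
    best = lo
    for _ in range(levels):
        step = (hi - lo) / (samples - 1)
        grid = [lo + i * step for i in range(samples)]
        best = min(grid, key=cost)
        lo, hi = best - step, best + step  # refine around the winner
    return best
```

The weakness is deliberate and visible: committing greedily at a coarse level can lose the true optimum, which is exactly why interaction between hierarchy levels matters.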

Learning and data analysis (or mining) applied to stochastic search: the choice of the "next move" in simulated annealing and other stochastic systems, the choice of cooling schedule, and so forth could be guided by information gleaned from data collected by the runtime search system.
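The cooling-schedule half of the idea can be sketched in a few lines (the sliding-window acceptance rate and the 0.8/0.95 cooling factors are illustrative assumptions, not a recommended schedule): the annealer watches its own acceptance statistics at runtime and cools faster while moves are still being accepted freely.

```python
import math
import random

def anneal(cost, neighbor, x0, t0=10.0, steps=2000, window=100, seed=0):
    """Simulated annealing whose cooling rate adapts to the observed
    acceptance rate over a sliding window of recent moves."""
    rng = random.Random(seed)
    x, t = x0, t0
    accepted = 0
    for i in range(1, steps + 1):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        # Metropolis rule: always take improvements, sometimes take losses.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
            accepted += 1
        if i % window == 0:
            rate = accepted / window
            # Cool quickly while most moves are accepted, slowly otherwise.
            t *= 0.8 if rate > 0.5 else 0.95
            accepted = 0
    return x
```

The same hook (statistics gathered per window) is where a learned move-selection policy would plug in.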

Visualization as a tool for understanding decision theoretic search: What is happening during a complex search? In attempting to optimize the performance of a search system, one needs to understand at a global level the time cost of information gathered during the search and its potential value to the search system. Visualization is perhaps our only hope of understanding this mass of information.

Automated programming of decision theoretic search systems: a decision theoretic system for search needs an efficient set of data structures and procedures as the basis for its search. How might we generate these from a high-level specification, and subsequently allow decision analysis to choose an optimal configuration?

Compiling probabilistic algorithms: Algorithms implemented in silicon are becoming more and more complex, especially with the "billion gate chip" on the horizon. Adaptive probabilistic algorithms can be used for data mining, speaker-independent speech recognition systems, and complex instrumentation (for instance, medical or scientific). Develop a framework so that a compiler can generate probabilistic modules from a high-level specification.

Data mining in silicon: Reconfigurable computing requires the generation of optimized designs. Algorithms for data mining come in many forms, and the fundamental limitation at the moment (given the sizes of databases) is the speed of the computer. Develop a framework that allows different data analysis algorithms to be mapped down to silicon, using for instance the Bayesian network and the dataflow graph as intermediate forms.
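To make the Bayesian-network-as-intermediate-form idea concrete, here is a toy sketch (the sprinkler-style network and all its numbers are invented): the network is literally a dataflow graph of conditional probability tables, and a query is a traversal of that graph, which is the kind of regular structure one might hope to map onto silicon.

```python
from itertools import product

# A Bayesian network as an explicit dataflow graph: each node names its
# parents and stores a conditional probability table keyed by parent values.
network = {
    "rain":      {"parents": [], "cpt": {(): 0.2}},
    "sprinkler": {"parents": ["rain"],
                  "cpt": {(True,): 0.01, (False,): 0.4}},
    "wet":       {"parents": ["rain", "sprinkler"],
                  "cpt": {(True, True): 0.99, (True, False): 0.8,
                          (False, True): 0.9, (False, False): 0.0}},
}

def joint(assignment):
    """P(full assignment) = product over nodes of P(node | parents)."""
    p = 1.0
    for name, node in network.items():
        parent_vals = tuple(assignment[q] for q in node["parents"])
        p_true = node["cpt"][parent_vals]
        p *= p_true if assignment[name] else 1.0 - p_true
    return p

def posterior(query, evidence):
    """P(query=True | evidence) by brute-force enumeration."""
    hidden = [v for v in network if v not in evidence and v != query]
    num = den = 0.0
    for qv in (True, False):
        for vals in product([True, False], repeat=len(hidden)):
            a = dict(evidence, **dict(zip(hidden, vals)), **{query: qv})
            p = joint(a)
            den += p
            if qv:
                num += p
    return num / den
```

Enumeration is exponential in the number of hidden variables; the point of a silicon mapping would be to exploit the graph's structure (and massive parallelism) instead.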


Last change: Fri, Nov 8th, 10:38am, 1996.
wray@ic.eecs.berkeley.edu