
Efficient Model Reduction Methods for PDEs with Random Inputs
Qifeng Liao (廖奇峰), Assistant Professor, School of Information Science and Technology, ShanghaiTech University
Thursday, January 5th, 2017, 10:00 AM, Lecture Hall 102, Mathematics Building

Title and abstract:
Efficient Model Reduction Methods for PDEs with Random Inputs
Abstract:
Over the past few decades there has been rapid development in numerical methods for solving partial differential equations (PDEs) with random inputs. This explosion of interest has been driven by the need to conduct uncertainty quantification for practical problems; in particular, uncertainty quantification for problems with high-dimensional random inputs has attracted considerable attention. Traditional Monte Carlo methods are known to converge slowly. Newer spectral methods, such as polynomial chaos and collocation methods, can converge quickly but suffer from the so-called "curse of dimensionality". Taking the sparse grid collocation method as an example: when the probability space is high-dimensional, the number of points required for accurate collocation solutions can be large, and constructing the solution may be costly. We first show that this process can be made more efficient by combining collocation with reduced basis methods, in which a greedy algorithm is used to identify a reduced problem to which the collocation method can be applied. We demonstrate with numerical experiments that this is achieved with essentially no loss of accuracy. To further address problems with very high-dimensional parameters, we then develop hierarchical reduced basis techniques based on an ANOVA (analysis of variance) decomposition of the parameter space.
This is joint work with Professor Howard Elman of the University of Maryland and Professor Guang Lin of Purdue University.
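As a rough illustration of the greedy reduced-basis idea mentioned in the abstract, the sketch below selects basis vectors from a set of candidate snapshots of a toy one-parameter function family. The snapshot family, the use of the exact projection error as the selection criterion (practical methods use a cheap residual-based error estimator), and all tolerances here are illustrative assumptions, not the specific method presented in the talk.

```python
import numpy as np

def greedy_reduced_basis(snapshots, tol=1e-6, max_basis=25):
    """Greedy basis selection: repeatedly pick the candidate snapshot that is
    worst-approximated by the span of the current basis, then add its
    normalized residual as a new orthonormal basis vector."""
    n = snapshots.shape[0]
    Q = np.empty((n, 0))          # orthonormal basis, grows one column at a time
    chosen = []
    for _ in range(max_basis):
        # projection error of every candidate onto span(Q)
        residuals = snapshots - Q @ (Q.T @ snapshots)
        errs = np.linalg.norm(residuals, axis=0)
        k = int(np.argmax(errs))
        if errs[k] < tol:         # all candidates already well-approximated
            break
        chosen.append(k)
        r = residuals[:, k]
        r = r - Q @ (Q.T @ r)     # second projection improves orthogonality
        Q = np.column_stack([Q, r / np.linalg.norm(r)])
    return Q, chosen

# Toy parametric "solutions": u(x; mu) = exp(-mu * x) on a grid, mu in [1, 5].
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(1.0, 5.0, 50)
S = np.exp(-np.outer(x, mus))     # 200 x 50 snapshot matrix, one column per mu
Q, chosen = greedy_reduced_basis(S)
```

Because this snapshot family depends smoothly on the parameter, the greedy loop terminates with far fewer basis vectors than candidate parameters, which is the efficiency gain such methods aim for.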



