
Efficient Model Reduction Methods for PDEs with Random Inputs

Qifeng Liao, Assistant Professor (School of Information Science and Technology, ShanghaiTech University)
Thursday, January 5th, 2017, 10:00 AM
Lecture Hall 102, Mathematics Building

Abstract: Over the past few decades there has been rapid development in numerical methods for solving partial differential equations (PDEs) with random inputs. This explosion of interest has been driven by the need to conduct uncertainty quantification for practical problems. In particular, uncertainty quantification for problems with high-dimensional random inputs has attracted considerable interest. Traditional Monte Carlo methods are known to converge slowly. Newer spectral methods, such as polynomial chaos and collocation methods, can converge quickly but suffer from the so-called "curse of dimensionality". Taking the sparse grid collocation method as an example: when the probability space has high dimensionality, the number of points required for accurate collocation solutions can be large, and constructing the solution may be costly. We first show that this process can be made more efficient by combining collocation with reduced basis methods, in which a greedy algorithm is used to identify a reduced problem to which the collocation method can be applied. We demonstrate with numerical experiments that this is achieved with essentially no loss of accuracy. To further address problems with very high-dimensional parameters, we next develop hierarchical reduced basis techniques based on an ANOVA (analysis of variance) decomposition of parameter spaces. This is joint work with Professor Howard Elman of the University of Maryland and Professor Guang Lin of Purdue University.
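To illustrate the greedy reduced basis idea mentioned in the abstract, the following is a minimal sketch (not the speaker's implementation) for a toy affinely parametrized linear system A(mu) u = b with A(mu) = A0 + mu*A1. The matrices, training set, and residual-based error indicator are all illustrative assumptions; real PDE applications would use a discretized operator and certified error estimators.

```python
# Hedged sketch of a greedy reduced basis method for a toy
# parametrized system A(mu) u = b, A(mu) = A0 + mu * A1.
# All problem data below is synthetic and chosen for illustration only.
import numpy as np

n = 50
A0 = np.diag(2.0 + np.arange(n))          # fixed part of the operator
A1 = np.diag(np.sin(np.arange(n)) + 1.5)  # parameter-dependent part
b = np.ones(n)

def solve_full(mu):
    """High-fidelity ("truth") solve for one parameter value."""
    return np.linalg.solve(A0 + mu * A1, b)

def residual_norm(mu, V):
    """Error indicator: residual of the Galerkin reduced solution."""
    A = A0 + mu * A1
    ur = np.linalg.solve(V.T @ A @ V, V.T @ b)   # reduced solve
    return np.linalg.norm(b - A @ (V @ ur))

# Training set in parameter space; initialize basis from one snapshot.
train = np.linspace(0.0, 1.0, 101)
V = solve_full(train[0])[:, None]
V /= np.linalg.norm(V)

# Greedy loop: repeatedly add the snapshot where the indicator is worst.
tol = 1e-6
for _ in range(20):
    errs = [residual_norm(mu, V) for mu in train]
    k = int(np.argmax(errs))
    if errs[k] < tol:
        break
    u = solve_full(train[k])
    u -= V @ (V.T @ u)                # orthogonalize against basis
    V = np.hstack([V, (u / np.linalg.norm(u))[:, None]])

print("reduced basis size:", V.shape[1])
```

After the loop, every parameter in the training set can be handled by a small reduced solve instead of a full one; this is the kind of reduced problem to which, per the abstract, the collocation method can then be applied.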

