IEEE NAS 2010

JULY 15-17
Macau, Macau SAR, China

Invited Speakers

Dr. Peter Braam

Founder of Cluster File Systems, Inc. and ClusterStor, Inc., USA

Dr. Peter Braam is a thought leader in scalable storage and runs ClusterStor, a company focused on distributed storage software.

He was the founder and CEO of Cluster File Systems, Inc., developer of the Lustre file system, which powers almost half of the world's top 100 computers; the company was acquired by Sun in October 2007. Peter served as a vice president at Sun, advising on Sun's entire storage software strategy. Peter has broad interests in storage software, numerical mathematics, and software engineering. He is an adjunct professor of the Chinese Academy of Sciences and a Fellow of Merton College, Oxford University.

He received his PhD in pure mathematics in 1987 and subsequently worked with world-leading scientists at Oxford, where he began teaching Computer Science in 1992. He joined Carnegie Mellon's faculty in 1996, where he led the Coda project for three years. In 1999 he moved to the Linux industry as a Cluster and File Systems Architect for Red Hat. Dr. Braam will give the following talk:



Modularity for clustered file data management:

In this talk, inspired by ten years of work on the Lustre cluster file system, we will take a tour of the internal features required for recovery and data management to work reliably and with horizontal scaling. Scale now means handling hundreds of servers and tens of thousands of clients in large data centers, often with replicas spanning the globe.

We will look at search, striping, clustering of metadata services and its recovery, caching and replication, as well as HSM, migration, and other data management features. Currently such features are implemented in an ad hoc manner, deep in the guts of many systems. We will demonstrate that there is an opportunity to define concise semantics that enable all such features, including their recovery, with reasonable algorithmic complexity. This can be a first step towards a more modular, interoperable approach to file-based data management, spanning file servers, middleware, backends for data and metadata, and data management applications, similar to what relational algebra did for database applications 30 years ago.
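To make the opportunity concrete, here is a minimal, hypothetical sketch in Python (not Lustre's actual interfaces; all type and function names are invented for illustration) of how striping, replication, and HSM migration might be expressed as composable transformations over one shared layout abstraction, much as relational operators compose over a common algebra.

# Hypothetical sketch: data management features as composable layout operations.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Extent:
    """A contiguous byte range of a file mapped onto one storage target."""
    target: str      # e.g. an object storage server id
    offset: int
    length: int


@dataclass
class Layout:
    """Where a file's data lives: striping, replica count, HSM tier."""
    extents: List[Extent] = field(default_factory=list)
    replicas: int = 1
    tier: str = "online"   # "online" vs. "archive" for HSM


# Each feature is a pure transformation Layout -> Layout.
LayoutOp = Callable[[Layout], Layout]


def stripe(targets: List[str], stripe_size: int, file_size: int) -> LayoutOp:
    """Round-robin the file's byte ranges across the given targets."""
    def op(layout: Layout) -> Layout:
        extents = [
            Extent(targets[i % len(targets)], off, min(stripe_size, file_size - off))
            for i, off in enumerate(range(0, file_size, stripe_size))
        ]
        return Layout(extents, layout.replicas, layout.tier)
    return op


def replicate(copies: int) -> LayoutOp:
    """Set the desired number of replicas for every extent."""
    return lambda layout: Layout(layout.extents, copies, layout.tier)


def migrate(tier: str) -> LayoutOp:
    """Move the file to another HSM tier without changing its striping."""
    return lambda layout: Layout(layout.extents, layout.replicas, tier)


def apply_ops(layout: Layout, ops: List[LayoutOp]) -> Layout:
    """Apply operations in order; a recovery log could persist and replay this list."""
    for op in ops:
        layout = op(layout)
    return layout


if __name__ == "__main__":
    plan = [stripe(["ost0", "ost1", "ost2"], 1 << 20, 5 << 20),
            replicate(2),
            migrate("archive")]
    print(apply_ops(Layout(), plan))

Because each feature is a pure layout transformation, a log of such operations could, at least in principle, be persisted and replayed for recovery, which is the kind of concise, composable semantics the talk argues for.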


Almadena Y. Chtchelkanova

Program Director, Division of Computing and Communication Foundations, Computer & Information Science & Engineering Directorate, National Science Foundation, USA.
Dr. Almadena Chtchelkanova is a Program Director at the Directorate for Computer and Information Science and Engineering at the National Science Foundation, where she is in charge of the areas of High Performance Computing, Compilers, and Parallel and Distributed Algorithms. She is a Lead Program Director and inter-agency coordinator for the High End Computing University Research Activity (HECURA).

Before joining NSF in 2005, Dr. Chtchelkanova worked for Strategic Analysis, Inc. as a Senior Scientist, providing technical support to the Defense Advanced Research Projects Agency (DARPA).

Dr. Chtchelkanova spent four years working at the Laboratory for Computational Physics and Fluid Dynamics at the Naval Research Laboratory in Washington, DC, and has considerable experience with High Performance Computing (HPC) applications. She developed and implemented portable, scalable, parallel adaptive mesh generation algorithms for computational fluid dynamics, weather forecasting, combustion, and contaminant transport.

Dr. Chtchelkanova holds an MA degree from the Department of Computer Sciences at the University of Texas at Austin (1996) and a Ph.D. degree in physics from Moscow State University in Russia (1988). Dr. Chtchelkanova will give the following talk:



Cyberinfrastructure Framework for Scientific Discovery and Engineering Innovations

Computational Science and Engineering is an intellectual discipline that brings together core areas of science and engineering, computer science, and computational and applied mathematics in a concerted effort to use cyberinfrastructure (CI) for scientific discovery and engineering innovations.

At the heart of every Grand Challenge Problem (Prediction of Climate Change, Assembling the Tree of Life, CO2 Sequestration, Water Sustainability, Advanced New Materials, Cancer Detection and Therapy, etc.) is the need for more faithful computational models and more stable and robust algorithms that will adapt to emerging manycore and hybrid architectures. Of critical importance are methods that are informed by observational data in a way that can cope with uncertainty in data and quantify uncertainties in predictions. New methods need to be developed to facilitate multiscale modeling, scalable solvers for multiphysics and stochastic problems, and large-scale data-intensive simulations. Grand Challenges cannot be solved by advances in HPC alone: they also require extraordinary breakthroughs in computational models, algorithms, data and visualization technologies, software, and collaborative organizations uniting diverse disciplines.
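As a toy illustration of what quantifying uncertainties in predictions can mean, the sketch below propagates assumed uncertainty in two input parameters through a stand-in model by Monte Carlo sampling; the model and the parameter distributions are invented for illustration and are not taken from the talk.

# Toy uncertainty quantification: propagate input uncertainty by Monte Carlo sampling.
import random
import statistics


def model(diffusivity: float, source: float) -> float:
    """A stand-in 'simulation': peak concentration after unit time (illustrative only)."""
    return source / (4.0 * 3.141592653589793 * diffusivity) ** 0.5


def propagate(n_samples: int = 10_000) -> tuple:
    """Sample uncertain inputs, run the model, and summarize the output spread."""
    outputs = []
    for _ in range(n_samples):
        # Observational data constrains, but does not pin down, the inputs.
        diffusivity = random.gauss(1.0, 0.1)   # assumed mean 1.0, 10% uncertainty
        source = random.gauss(5.0, 0.5)
        if diffusivity > 0:
            outputs.append(model(diffusivity, source))
    return statistics.mean(outputs), statistics.stdev(outputs)


if __name__ == "__main__":
    mean, std = propagate()
    print(f"predicted peak: {mean:.3f} +/- {std:.3f}")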

This talk examines a host of broad issues faced in addressing the Grand Challenges of science and technology, and explores how they can be met by advances in CI, including cutting-edge research on networking, high-performance computer architecture, I/O, and parallel and distributed data storage technologies.