Biological Science Data
1.
Building the US National Biological Information Infrastructure: Synergy between Regional and National Initiatives
In 1994, the U.S. President signed Executive Order 12906, "Coordinating Geographic Data Acquisition and Access: the National Spatial Data Infrastructure (NSDI)." The NSDI deals with the acquisition, processing, storage, and distribution of geospatial data, and is implemented by the Federal Geographic Data Committee (FGDC). At the same time, the national biotic resource information system became the National Biological Information Infrastructure, or NBII (http://www.nbii.gov), implemented under the auspices of the U.S. Geological Survey (USGS). The NBII works with the FGDC to increase access to and dissemination of biological geospatial data through the NBII and the NSDI. The NBII biological metadata standard is an approved "profile," or extension, of the FGDC's geospatial metadata standard. In 1998, the Biodiversity and Ecosystems Panel of the President's Committee of Advisors on Science and Technology (PCAST) released the report "Teaming With Life: Investing in Science to Understand and Use America's Living Capital." The PCAST report recommended that the federal government develop the "next generation NBII," or NBII-2, to be accomplished through a system of nodes (interconnected entry points to the NBII). In 2001, the U.S. Congress allocated funds for the development and promotion of the node-based NBII-2. Development and implementation of the NBII nodes is underway and is being conducted in collaboration with every sector of society. There are three types of nodes. "Regional" nodes have a geographic area of responsibility and represent a regional approach to local data, environmental issues, and data collectors; twelve regional nodes are required to cover the entire U.S. "Thematic" nodes focus on a particular biological issue (e.g., bird conservation, fisheries and aquatic resources, invasive species, urban biodiversity, wildlife disease/human health). Such issues cross regional, national, and even international boundaries. "Infrastructure" nodes focus on issues such as the creation, adoption, and implementation of standards through the development of common tool suites, hardware and software protocols, and geospatial technologies to achieve interoperability and transparent retrieval across the entire NBII network. This presentation will highlight NBII development, implementation, lessons learned, and successful user applications of two regional nodes, the Southern Appalachian Information Node (SAIN) and the Central Southwest/Gulf Coast Node (CSGCN). Specific NBII applications will include multiple country-, regional-, county-, and local-level (site-specific) biological, environmental, and natural resource management issues.
2.
Building a Biodiversity Information Network in India
Biodiversity Informatics and the Developing World: Status and Potentials
The most striking feature of Earth is the existence of life, and the most striking feature of life is its diversity. Biodiversity, and the ecosystems that support it, contribute trillions of dollars to national and global economies. The basis of all efforts to conserve biodiversity and natural ecosystems effectively lies in efficient access to knowledge bases on biodiversity and ecosystem resources and processes. Most developed countries are well ahead in the race to take advantage of new electronic information opportunities to manage and build their biodiversity knowledge bases, the recognized cornerstone of their future economic, social and environmental well-being. For developing nations, which harbor rich and diversified natural resources, much biodiversity information is neither available nor accessible. Hence there is a need for an organized, well-resourced, national approach to building and managing biodiversity information through collaborative efforts by this group of Third World nations. This paper reviews the state of information technology applications in the field of biodiversity informatics in these nations, with India as the model nation. India is one of the 12 mega-biodiversity countries, endowed with rich floral and faunal diversity. Given the deteriorating status of its natural resources and its developmental activities, India is one of the best model nations for such a review. Attempts by the author's group to develop and implement cost-efficient, easy-to-use tools for biological data management are described in brief. The feasibility of employing available tools, techniques and standards for biological data acquisition, organization, analysis, modeling and forecasting is discussed, keeping in view the informatics awareness among biologists and ecologists as well as planners. With specific reference to Indian biodiversity, the authors suggest a framework for building a national information infrastructure to correlate, analyze and communicate biological information, to help these nations generate sustainable wealth from nature.
3.
Developing and Integrating Data Resources from a North American Perspective
Biodiversity information denotes a very heterogeneous set of data formats, updating regimes, quality levels, and users. The data on the labels of biological specimens provide a natural organizing framework, because the georeference and the taxonomic name can be used to link to geographically organized data (remote sensing, cartography) and to a variety of points of view (ecological or genetic data, legislation, traffic, etc.). Label data, however, are widely distributed over hundreds of institutions. In this talk, we describe the technical and organizational problems that were solved to create REMIB (the World Network of Biodiversity Information), which links nearly 5 million specimens from 61 collections in 16 institutions in three countries. We also give an example of the use that such a system may have.
4.
Scientists within the Long-Term Ecological Research (LTER) Network have provided leadership in ecological informatics since the inception of LTER in 1980. The success of LTER, where research projects span wide temporal and spatial scales, depends on the quality and longevity of the data collected. Scientists have devised data collection, data entry, data access, QA/QC and archiving strategies to ensure that high-quality data are appropriately managed to meet the needs of a broad user base for decades to come. The LTER cross-site Network Information System (NIS) is being developed to foster data sharing and collaboration among sites. Recent and important milestones for LTER include the adoption of Ecological Metadata Language as a standard, as well as supporting metadata software. Current and future foci include developing data standardization protocols and semantic mediation engines, both of which will facilitate LTER modeling efforts.
 5. 
The Global Biodiversity Information Facility (GBIF): Challenges and Opportunities from a Global Perspective
The Global Biodiversity Information Facility (GBIF) is a new international scientific cooperative project based on an agreement between countries, economies, and international organizations. The primary goal of GBIF is to establish an interoperable, distributed network of databases containing scientific biodiversity information, in order to make the world's scientific biodiversity data freely available to all. GBIF will play a crucial role in promoting the standardization, digitization and global dissemination of the world's scientific biodiversity data within an appropriate framework for property rights and due attribution. Initially, GBIF will focus on species- and specimen-level data in four priority areas: data access and data interoperability; digitization of natural history collection data; an electronic catalogue of the names of known organisms; and outreach and capacity building. With an expected staff of only 14, GBIF will work mostly with others in order to catalyse synergistic activities between participants, generate new investments and eliminate barriers to cooperation. In its first year of activity, GBIF has been concentrating on organisational logistics, staffing, and consultations with Scientific and Technical Advisory Groups (STAGs). Initial work plans are being drafted by the Science Committee and its four subcommittees. Once functional, GBIF will unlock and liberate vast amounts of biodiversity occurrence data for use in research and environmental decision-making. Life itself, in all its diversity (from molecules, to species, to ecosystems), will provide numerous additional sets of data layers for integrated environmental analysis, modelling and forecasting.
 1. 
A Proteomic Approach to the Study of Cancer
During the past 20 years, high-resolution two-dimensional polyacrylamide gel electrophoresis (2D PAGE) has been the technique of choice for analysing the protein composition of cell types, tissues and fluids, as well as for studying changes in protein expression profiles elicited by various effectors. The technique, originally described by O'Farrell and Klose, separates proteins both in terms of their isoelectric point (pI) and molecular weight. Usually, one chooses a condition of interest and lets the cell reveal the global protein behavioural response, as all detected proteins can be analyzed both qualitatively (post-translational modifications) and quantitatively (relative abundance, co-regulated proteins) in relation to each other [http://biobase.dk/cgi-bin/celis]. Presently, high-resolution 2D PAGE provides the highest resolution for protein analysis and is a key technique in proteomics, an emerging area of research of the post-genomic era that deals with the global analysis of gene expression, using a plethora of technologies to resolve (2D PAGE), identify (mass spectrometry, Western immunoblotting, etc.), quantitate and characterize proteins, to identify interacting partners, and to store (comprehensive 2D PAGE databases), communicate and interlink protein and DNA mapping and sequence information from ongoing genome projects. Proteomics, together with genomics, cDNA arrays, phage antibody libraries and transgenic models, belongs to the armamentarium of technologies comprising functional genomics. Here I will report on our efforts to apply proteomic technologies to the study of bladder cancer.
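As a side note on the two gel coordinates named above (pI and molecular weight), the sketch below estimates both from a protein sequence with Biopython. It is an illustration added here, not part of the abstract; the sequence is a toy fragment, and real spot positions also shift with the post-translational modifications the abstract discusses.

```python
# Estimate the two 2D PAGE coordinates, isoelectric point (first dimension)
# and molecular weight (second dimension), from sequence alone.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"  # toy sequence
protein = ProteinAnalysis(seq)

print(f"predicted pI: {protein.isoelectric_point():.2f}")        # gel x-axis
print(f"molecular weight: {protein.molecular_weight():.0f} Da")  # gel y-axis
```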
 2. 
A Proposition of an XML Format for Proteomics Databases
We propose an XML (eXtensible Markup Language) format for proteomics databases, to exchange proteome analysis data. XML-based data are highly machine-readable and can readily represent information hierarchies and relationships. There have been several XML formats for proteome data, mainly representing the sequence information stored in the Protein Information Resource (PIR) and the Protein Data Bank (PDB).
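The abstract does not reproduce the proposed schema, so the sketch below is only a guess at the general shape of such a record: it shows how XML can nest a gel spot, a protein, and cross-references to PIR and PDB. All element and attribute names here are invented for illustration.

```python
# Build and print a small hierarchical proteome record with the standard
# library's ElementTree; the schema is hypothetical.
import xml.etree.ElementTree as ET

entry = ET.Element("proteomeEntry", id="EXAMPLE-0001")
spot = ET.SubElement(entry, "gelSpot", gel="2D-PAGE-plasma", pI="5.9", mw="66500")
protein = ET.SubElement(spot, "protein")
ET.SubElement(protein, "name").text = "Serum albumin"
xref = ET.SubElement(protein, "crossReferences")
ET.SubElement(xref, "dbxref", db="PIR", accession="A31091")  # sequence database
ET.SubElement(xref, "dbxref", db="PDB", accession="1AO6")    # structure database

print(ET.tostring(entry, encoding="unicode"))
```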
 3. 
Proteomics: An Important Post-Genomic Tool for Understanding Gene Function
While the term proteomics is often synonymous with high-throughput
                  protein profiling of normal versus diseased tissue by 2-D gel 
                  analysis, this definition is very limiting. Increasingly, the 
                  power of proteomics is being recognized for its ability to unravel 
                  intricate protein-protein interactions associated with intracellular 
                  protein trafficking and signaling pathways (i.e., cell-mapping 
                  proteomics). The technology issues associated with expression 
                  proteomics (the study of global changes in protein expression) 
                  and cell-mapping proteomics (the systematic study of protein-protein 
                  interactions through the isolation of protein complexes) are 
                  almost identical and only differ in front-end scale-up processes. 
                  The application of proteomics for studying various biological 
                  problems will be presented with representative examples of (a) 
                  differential protein expression for identifying surrogate markers 
                  for colon cancer progression, (b) a non-2D gel approach for 
                  dissecting complex mixtures of membrane proteins, (c) proteins 
that inhibit cytokine signal transduction, and (d) proteins that
                  are involved in the intricate pathway that leads to programmed 
                  cell death (apoptosis). 
4.
Human Kidney Glomerulus Proteome and a Proposed Method for Native Protein Profiling
To elucidate the molecular mechanisms of chronic nephritis, the following proteome research on kidney glomeruli has been initiated. Pieces of renal cortex with normal appearance were obtained from patients who underwent surgical nephrectomy for renal tumors. Glomeruli were prepared from the cortex by a standard sieving process using four sieves; the glomeruli retained on the 150 µm sieve were collected and further purified by picking under a phase-contrast microscope. The glomeruli were spun down, homogenized in 2-DE lysis buffer and incubated.
1.
Genetic diversity in food legumes of Pakistan as revealed through characterization, evaluation and biochemical markers
2.
Visualization and Correction of Prokaryotic Taxonomy Using Techniques from Exploratory Data Analysis
There are, at present, over 5,700 named prokaryotic species. There has long been a need to organize these species within a comprehensive taxonomy that relates each species to all the others. For some years, researchers have been sequencing the small-subunit ribosomal RNA genes of many prokaryotes, initially to try to establish the evolutionary relationships among all prokaryotes and subsequently to aid in the identification of prokaryotes both known and unknown. These sequences have become an almost universal feature in the description of new species. Thus, for the purposes of classification, the sequences are probably the most useful, universally described characteristic of the prokaryotes. Small-subunit rRNA gene sequences were used by the staff of the Bergey's Manual Trust to establish the prokaryotic taxonomy above the family level only recently. This effort was facilitated by the application of techniques drawn from the field of exploratory data analysis to visualize the evolutionary relationships among large numbers of sequences and, hence, among the organisms they represent. We describe the techniques used to develop the first maps of sequence space and the techniques we are currently using to ease the placement of new organisms in the taxonomy and to uncover errors in the taxonomy or in sequence annotation. A key advantage of these techniques is that they allow us to see and use the complete data set of over 9,200 sequences. We also present plans for the development of a tool that will allow all interested researchers to participate in the maintenance and modification of the taxonomy.
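The abstract leaves the mapping technique unspecified; one standard exploratory-data-analysis recipe for a "map of sequence space" is to embed a matrix of pairwise evolutionary distances in two dimensions with multidimensional scaling. The sketch below, offered only as an illustration, does this for a fabricated 4 x 4 distance matrix; a real analysis would start from the roughly 9,200 aligned small-subunit rRNA sequences.

```python
# Embed a pairwise distance matrix into 2-D with metric MDS (scikit-learn).
import numpy as np
from sklearn.manifold import MDS

labels = ["Bacillus", "Clostridium", "Escherichia", "Pseudomonas"]
d = np.array([[0.00, 0.18, 0.30, 0.31],    # toy evolutionary distances
              [0.18, 0.00, 0.32, 0.33],
              [0.30, 0.32, 0.00, 0.12],
              [0.31, 0.33, 0.12, 0.00]])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(d)
for name, (x, y) in zip(labels, coords):
    print(f"{name:12s} {x:+.3f} {y:+.3f}")   # coordinates on the 2-D map
```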
Quantitative information on the types of inter-atomic interactions at the MHC-peptide interface will provide insights into backbone/side-chain atom preferences during binding. Protein crystallographers have documented qualitative descriptions of such interactions in each complex. However, no comprehensive report is available to account for the common types of inter-atomic interactions in a set of MHC-peptide complexes characterized by MHC allele variation and peptide sequence diversity. The available X-ray crystallography data for MHC-peptide complexes in the Protein Data Bank (PDB) provide an opportunity to identify the prevalent types of inter-atomic interactions at the binding interface. The dominance of SB (MHC side chain to peptide backbone) interactions at the interface suggests the importance of peptide backbone conformation during MHC-peptide binding. Currently available algorithms are well developed for protein side-chain prediction upon fixed backbone templates. This study shows the preference for backbone atoms in MHC-peptide binding and hence emphasizes the need for accurate peptide backbone prediction in quantitative MHC-peptide binding calculations.
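A census of this kind can be sketched in a few lines with Biopython; the code below is an illustration, not the study's pipeline. The chain assignments, the 4 Å contact cutoff, and the example file name are all assumptions the reader would adjust.

```python
# Count interface contacts by atom class: B = backbone (N, CA, C, O),
# S = side chain; e.g. "SB" = MHC side-chain atom near a peptide backbone atom.
from itertools import product
from Bio.PDB import PDBParser

BACKBONE = {"N", "CA", "C", "O"}
kind = lambda atom: "B" if atom.get_id() in BACKBONE else "S"

structure = PDBParser(QUIET=True).get_structure("mhc", "1HHK.pdb")  # example ID
model = structure[0]
mhc_atoms = [a for res in model["A"] for a in res]  # assumed: chain A = MHC
pep_atoms = [a for res in model["C"] for a in res]  # assumed: chain C = peptide

counts = {"BB": 0, "BS": 0, "SB": 0, "SS": 0}
for m, p in product(mhc_atoms, pep_atoms):
    if m - p <= 4.0:              # Bio.PDB overloads '-' as distance (angstroms)
        counts[kind(m) + kind(p)] += 1
print(counts)
```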
Eukaryotes have both intron-containing and intronless genes, and their proportions vary from species to species. Most eukaryotic genes are multi-exonic, their structure being interrupted by introns. Introns account for a major proportion of many eukaryotic genomes; for example, the human genome is proposed to contain 24% introns and only 1.1% exons (Venter et al. 2001). Although most genes in eukaryotes contain introns, there are a substantial number of reports of intronless genes. We recently created a database (SEGE) of intronless genes in eukaryotes using GenBank release 128 sequence data (http://intron.bic.nus.edu.sg/seg/). The eukaryotic subdivision files from GenBank were used to create a dataset containing entries that are conservatively considered single-exon genes according to the CDS FEATURE convention. Single-exon genes with prokaryotic architectures are of particular interest in gene evolution. Our analysis of this set of genes shows that structures are known for nearly 14% of their gene products. The characteristics and structural features of such proteins are discussed in this presentation.
Reference: Venter, J. C., et al. (2001) The sequence of the human genome. Science 291, 1304-1351.
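The CDS-based screen can be illustrated with a short Biopython sketch; this is a simplification of the convention the abstract names, and the input file name is a placeholder for a GenBank eukaryotic division file.

```python
# Flag CDS features whose location is one contiguous span (no join of exons)
# as candidate single-exon genes.
from Bio import SeqIO

single_exon = []
for record in SeqIO.parse("gbpri1.seq", "genbank"):   # placeholder division file
    for feat in record.features:
        if feat.type != "CDS":
            continue
        if len(feat.location.parts) == 1:     # one span => no annotated introns
            gene = feat.qualifiers.get("gene", ["?"])[0]
            single_exon.append((record.id, gene))

print(f"{len(single_exon)} candidate single-exon CDS features")
```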
1.
Shell Biodiversity Using Animation Technology
In the frog dissection system (http://ruby.kisti.re.kr/~museumfs), virtual dissection is provided in order to eliminate these undesired effects, and the realistic appearance of the organs is softened using Photoshop to minimize students' dislike of and aversion to the dissection process. In addition, the system was designed so that, once a student replaces the dissected organs after observation is complete, the frog is reanimated and jumps around, so that the student does not treat the subject carelessly but instead treats it with respect for its life.
Antarctica, the
southernmost continent, is a landmass of around 13.6 million square kilometers, 98 percent covered by ice up to 4.7 kilometers thick. The continent remained neglected for decades after its discovery; scientific research was initiated in the early 1940s. Two species of phanerogams have been reported, whereas most studies have been carried out on cryptogams such as algae, lichens and bryophytes. There are about 700 species of terrestrial and aquatic algae in Antarctica, 250 lichens and 130 species of bryophytes, including 100 species of mosses and 25-30 species of liverworts. Species composition and abundance are controlled by many environmental variables, such as nutrients, availability of water and the increased ultraviolet radiation resulting from depletion of the ozone layer. These cryptogams can be found in almost all areas capable of supporting plant life in Antarctica and exhibit a number of adaptations to the Antarctic environment. There is a need to apply molecular and cellular techniques to study the biodiversity and genetic characteristics of the flora of this region. Biochemical techniques, including DNA sequencing and microsatellite markers, are being used to obtain information about the genetic structure of plant populations. These analyses are designed to assess levels of biodiversity and to provide information on origins, evolutionary relationships and dispersal patterns. The flora of Antarctica needs to be evaluated genetically for characters related to survival in that unique environment, which could then be incorporated into economically important plants using transformation.
4.
Automatic Mapping and Monitoring of Invasive Alien Plant Species: the South African Experience
Chinese Biodiversity
Information System (CBIS) is a nation-wide distributed information system that collects, organizes, stores and disseminates data and information related to biodiversity in China. It consists of a center system, 5 disciplinary divisions and dozens of data sources. The Center System of CBIS is located in the Institute of Botany, Chinese Academy of Sciences (CAS), Beijing. The 5 divisions are the Zoological Division (Institute of Zoology, CAS, Beijing), Botanical Division (Institute of Botany), Microbiological Division (Institute of Microbiology, CAS, Beijing), Inland Wetland Biological Division (Institute of Hydrobiology, CAS, Wuhan) and Marine Biological Division (South China Sea Institute of Oceanology, CAS, Guangzhou). The data sources cover 15 CAS institutes and include botanical gardens, field research stations, museums, cell banks, seed banks, culture collections and research groups. The Center System is responsible for building and maintaining the integrated, national-scale biodiversity database; the environmental factor and vegetation databases; the model base and expert systems at the ecosystem level; and the platform and tools for modeling and expert systems. The Disciplinary Divisions are responsible for building and maintaining the databases, model bases and expert systems in their fields, focused on data and information at the species level. The Data Sources are responsible for building and maintaining databases based on their local situation and disciplinary character, combining them with GIS technology to present biodiversity information and data in both tabular and graphic form.
In order to conserve
and protect the very rich biological resources that have evolved in a unique natural environment, the government in Taiwan has set up a special committee and assigned a government agency, both at the cabinet level, to be in charge of planning and implementing relevant programs, respectively. Convening the Prospects of Biodiversity, Biodiversity-1999 and Biodiversity in the 21st Century symposia has been the main means of building national consensus and identifying issues to be studied, which has motivated scientists to undertake this challenging task with the support of research funding from related agencies. There are 6 national parks, 18 nature reserves, 13 wildlife protection areas and 24 nature protection areas, together covering 12.2% of the land area. The Policy Formulating Committee for Climate Changes has recommended the enforcement of education on biodiversity (at all levels of schooling and in general public education) and has formulated working plans for national biodiversity preservation and bioresource surveys. The research programs in progress, supported by national funding, include surveys of species, habitats, ecosystems and genetic diversity, long-term monitoring of diversity, sustainable bioresource utilization and compilation of the flora of Taiwan. The increase in the number of scientific publications and the increased emphasis placed by the news media show the growing concern of both the academic and public domains with biodiversity issues. In addition, material and information databases related to biological resources of various categories have been established and are revised regularly. The following bioscience databases have been established in Taiwan: National plant genetic resources information system; Multimedia databank of Taiwan wildlife; Taiwan Agricultural Institute plant information system; Distribution and resources of fishes in Taiwan; Herbaria at many sites; Cell bank; Asian vegetable genetic resources and seeds; Database of pig production; Registry of pure-bred swine; Mating, farrowing, performance and transfer of ownership of pure-bred swine; Food marketing information system database; Food composition table in Taiwan; Database on heavy metals in Taiwan soils; Greenhouse gases emission from agriculture; and Global change database generated in Taiwan.
 1. 
                  Unweaving regulatory networks: automated extraction from literature 
                  and 
 
 3. 
PIR Integrated Databases and Data-Mining Tools for Genomic and Proteomic Research
 4. 
Extraction of Phylogenetic Information from Gene Order Data
Molecular phylogeny is frequently inferred from comparisons of nucleotide or amino acid sequences of a single gene or protein family from different organisms. It is now known that there are a number of difficulties with this approach, for instance, correct alignment of sequence data, biased base (or amino acid) compositions among species, rate variation among sites and/or species, mutational saturation, and long-branch attraction artifacts. Thus, the development of new methods that can produce a reliable phylogenetic tree is an important issue. Here we present a simple method of reconstructing branching orders among genomes based on gene transpositions. We demonstrate that the occurrence or absence of a gene transposition event can provide empirical evidence for branching orders, in contrast to phenetic approaches based on overall similarity or minimum distance. This approach is applied to the evolutionary relationships among the completely sequenced Gram-positive bacteria. Complete genomic sequence data allow one to search for target gene transpositions comprehensively.
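The authors' method is only summarized above; as a toy illustration of how gene order carries phylogenetic signal, the sketch below flags genes whose immediate neighbourhoods differ between two genomes. The gene orders are fabricated, and a real analysis would work with orthologue sets from complete genomes.

```python
# Flag genes whose (left, right) neighbours differ between two circular
# genome orders, a crude signature of a transposition event.
def neighbours(order):
    """Map each gene to its (left, right) neighbours on a circular genome."""
    n = len(order)
    return {g: (order[i - 1], order[(i + 1) % n]) for i, g in enumerate(order)}

genome_a = ["dnaA", "dnaN", "recF", "gyrB", "gyrA", "rpoB"]
genome_b = ["dnaA", "dnaN", "gyrB", "gyrA", "recF", "rpoB"]  # recF transposed

na, nb = neighbours(genome_a), neighbours(genome_b)
moved = [g for g in na if na[g] != nb.get(g)]
print("genes with changed neighbourhoods:", moved)
```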
 
 1. 
Frameworks for Sustainability of GIS Development in Low-Income Countries
This presentation discusses the development of Geographic Information System (GIS) software and the technological approaches pursued in Brazil. Issues encountered in sustaining a complex technology in a large low-income country (LIC) are outlined. In describing the Brazilian experience, the prevalent assumption that LICs do not possess the complex technical and human resources required to develop and support GIS and similar technologies is challenged. The challenges, benefits and drawbacks of developing GIS software capabilities locally are examined, and a number of important applications where local technology development has contributed to better understanding and cost-effective solutions are highlighted. Finally, some of the potential long-term benefits of a "learning-by-doing" approach, and how other countries might benefit from the Brazilian experience, are discussed.
 2. 
The Geography Network
Many now see the Internet as the most effective means of meeting the accelerating demand for geographically referenced information. Launched by ESRI in June 2000 with the support of the National Geographic Society and many data publishers (EarthSat, GDT, WRI, US EPA, Tele Atlas, Space Imaging, etc.), the Geography Network <www.geographynetwork.com> is a global, collaborative, multi-participant network of geographic information users and providers, including government agencies, commercial organizations, data publishers, and service providers, who use the Internet to share, publish, and use geographically referenced information. The Geography Network can be thought of as a large online library of distributed GIS information available to everyone. Users consult the Geography Network catalog, a searchable index of all information and services available to Geography Network users. A wide spectrum of simple to advanced GIS and visualization technologies and online tools allows users to define areas of interest, search for specific geographic content, and be guided to mapping services. Using any Internet browser, users access data physically located on servers around the globe and can connect to one or more sites at the same time. They can use digital map overlay and visualization, and combine and analyze many types of data from different sources. These data can be delivered immediately to browsers or to desktop GIS software. Thousands of data layers are already available, and Geography Network content is constantly increasing. Much of the content is accessible for free. Commercial content is also provided and maintained by its owners; viewing or downloading commercial content, or using commercial services, is charged through the Geography Network's e-commerce system. Becoming a provider is free and simple. The Geography Network uses open GIS standards and communication protocols, and serves as a test bed for data providers and the Open GIS Consortium. This presentation will show how the system works, explain the facilities provided, indicate the range of providers, describe the genesis of the system and its progress, and discuss future plans and directions.
 3. 
Geospatial Information One-Stop
The Geospatial One-Stop is part of a Presidential Initiative to improve effectiveness, efficiency, and customer service throughout the U.S. Federal Government. It builds upon the National Spatial Data Infrastructure (NSDI) and will accelerate its development and implementation. Geospatial One-Stop is classified as a Government-to-Government (G2G) project because it focuses on sharing and integrating Federal, State, local, and tribal data, enabling more effective management of government business. The vision is to spatially enable the delivery of government services. The goals of Geospatial One-Stop include providing fast, low-cost, reliable access to geospatial data for government operations; facilitating the G2G interactions needed for vertical missions such as Homeland Security; supporting the alignment of roles, responsibilities and resources; and establishing a methodology for obtaining multi-sector input for coordinating, developing and implementing geographic data and service standards, to create the consistency needed for interoperability and to stimulate market development of tools. The five major tasks identified in the Project Plan are:
1. Develop and implement data standards for NSDI Framework Data.
2. Fulfill and maintain an operational inventory (based on standardized documentation, using the FGDC Metadata Standard) of NSDI Framework Data from Federal agencies, and publish the metadata records in the NSDI Clearinghouse network.
3. Publish metadata for planned acquisition and update activities for NSDI Framework Data from Federal agencies in the NSDI Clearinghouse network.
4. Prototype and deploy data access and web mapping services for NSDI Framework Data from Federal agencies.
5. Establish a comprehensive Federal portal to the resources described in the first four components (standards, priority data, planning information, and products and services), as a logical extension of the NSDI Clearinghouse network.
 4. 
The National Map - Sharing Geospatial Data in the 21st Century
Over the last century, the United States has invested on the order of $1.6 billion and 33 million person-hours in the standard (1:24,000-scale) topographic map series. These maps and associated digital data are the country's most extensive geospatial data infrastructure. They are also the only coast-to-coast, border-to-border coverage of our Nation's critical infrastructure - highways, bridges, dams, power plants, airports, etc. It is, however, an asset that is becoming increasingly outdated. These maps range in age from one year (those updated last year) to 57 years (those that have never been updated). The average age of these 55,000 maps is 23 years.
 1. 
Application of methods of space-distributed systems modeling in ecology
A review of the studies carried out at NTUU "KPI" and the Institute of Cybernetics of the National Academy of Sciences of Ukraine is presented. Two-dimensional and three-dimensional equations of diffusion and heat and mass transfer are used as the mathematical models. The models make it possible to account for the spatial distribution, structural non-uniformity and anomalous properties of the physical processes by which harmful impurities spread in the atmosphere, open water bodies and groundwater. The processes considered are characterized by substantial distribution in space; therefore, efficient methods for the numerical solution of the two- and three-dimensional model equations are presented, together with program packages that efficiently solve problems of modeling, prognosis and estimation of ecological processes in various environments.
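As a concrete instance of the model class named here (the notation is assumed for illustration, not quoted from the studies), a two-dimensional advection-diffusion equation for a pollutant concentration c(x, y, t) in a wind or current field (u, v) reads:

```latex
% c: concentration; u, v: transport velocities; D_x, D_y: diffusivities;
% sigma: decay rate; Q: sources. All symbols are illustrative choices.
\[
\frac{\partial c}{\partial t}
  + u\,\frac{\partial c}{\partial x}
  + v\,\frac{\partial c}{\partial y}
  = \frac{\partial}{\partial x}\!\left(D_x \frac{\partial c}{\partial x}\right)
  + \frac{\partial}{\partial y}\!\left(D_y \frac{\partial c}{\partial y}\right)
  - \sigma c + Q(x, y, t)
\]
```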
  2. 
A Geographic and Ethnopharmacological Mission on the Toxic Plants of Mauritius (Ile Maurice)
 3. 
Structural Map of the Indian Ocean
Within the framework of the activities of the CCGM (Commission de la Carte Géologique du Monde / Commission for the Geological Map of the World), under the supervision of UNESCO, it was decided to produce a number of geological, tectonic and structural maps encompassing the marine domain, for which a great deal of information is now available; the Commission for the Mapping of the Sea Floor is in charge of this latter domain.
All of the currently accessible data will appear on this map:
 4. 
Information Gateway on Biological Collections, Specimens and Observations (ICSOB)
The ICSOB Gateway is a prototype search and mapping engine specializing in observation data and biological specimens from natural history collections. ICSOB indexes the data available through biodiversity networks accessible on the Internet via distributed queries, such as The Species Analyst (TSA), the World Network of Biodiversity Information (REMIB) and the European Natural History Specimen Information Network (ENHSIN). In the same way that search engines (such as Google or AltaVista) help locate hypertext documents, ICSOB harvests names from the collections distributed across these Internet networks and connects users directly to the original data sources. Data records pass directly from the authorized custodians of the primary data to end users in real time. In addition, records carrying geographic coordinates (longitude, latitude) are plotted dynamically on a world map on which each distribution point is linked directly to the original data. The ICSOB Gateway provides a single point of access to millions of individual records from several distinct biodiversity networks. ICSOB is fully integrated with the multilingual version of the Integrated Taxonomic Information System (ITIS), facilitating access to the data through common names, scientific names or synonyms.
 
 1. 
Interactive Information System for Irrigation Management
Irrigation management is key to efficient and timely water distribution in canal command areas, taking crop factors into account, and it requires adequate, continuously updated information about the irrigation system. This paper illustrates a GIS tool for irrigation management that provides information interactively for the decision-making process. The Interactive Information System (IIS) has been developed to facilitate the operation and management of command area development and to calculate irrigation efficiency at the field level. At the basis of this development are geographic information systems (GIS), but gradually the system is being adapted to the kinds of decision and management functions that lie at the heart of the planning process of any irrigation project. It also supports design engineers in assessing the impact of the system's design parameters. The tool is ArcView-based, developed in Avenue code, and integrates GIS with a relational database management system (RDBMS); effective integration of GIS with the RDBMS enhances performance evaluation and diagnostic analysis capabilities. The application requires real-time topographic data, stored as spatially distributed datasets, while a back-end RDBMS stores the related attribute information; this lets an irrigation manager do real-time calculation and analysis which covers
An easy updating system for the associated database keeps the system current with the real field situation. A user-friendly graphical user interface at the front end helps the manager operate the application easily. Using the point-and-click functions of this application, an irrigation manager can generate outputs in the form of maps, tables and graphs that guide prompt and appropriate decisions within a few minutes.
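The tool itself is built in ArcView/Avenue; purely as a language-neutral sketch of the RDBMS side of such a GIS integration, the snippet below stores outlet attributes in SQLite and computes a per-outlet delivery-performance ratio. The table layout and figures are invented for illustration.

```python
# Store canal-outlet attributes and compute delivered/planned discharge ratios.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE outlet (
    id TEXT PRIMARY KEY, canal TEXT,
    planned_lps REAL, delivered_lps REAL)""")   # discharges in litres/second
con.executemany("INSERT INTO outlet VALUES (?, ?, ?, ?)", [
    ("O-1", "Main-East", 120.0, 104.0),
    ("O-2", "Main-East", 80.0, 79.0),
    ("O-3", "Main-West", 150.0, 96.0),
])

query = """SELECT canal, id, ROUND(delivered_lps / planned_lps, 2)
           FROM outlet ORDER BY canal, id"""
for row in con.execute(query):
    print(row)   # a ratio well below 1.0 flags an under-served outlet
```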
 2. 
                  Results of a Workshop on Scientific Data for Decision Making 
                  Toward Sustainable Development: Senegal River Basin Case Study 
The construction of spatial databases for Chinese ecosystems is based on the Chinese Ecosystem Research Network (CERN) of the Chinese Academy of Sciences (CAS). To meet the challenges of understanding and solving resource and environmental issues at regional and larger scales, and with the support of the Chinese Academy of Sciences, construction of CERN began in 1988. CERN consists of 35 ecological stations covering agriculture, forest, grassland, lake and bay ecosystems, which produce a large volume of monitoring and measurement data every day. The quality of these data is controlled by 5 CERN sub-centers: water, soil, atmosphere, biology and aquatic systems. Finally, all of these calibrated data, including spatial data, are collected in the synthesis center. We constructed the spatial databases to connect the enormous body of monitoring data with ecological spatial information. This study of the spatial databases includes:
Keywords: ecosystem network; Geographic Information System; data sharing
 4. 
Development of the Global Map: National and Cross-National Coordination
The Global Map is geospatial framework data for the Earth's land areas. This framework will be used to place environmental, economic and social data in their geographic context. The Global Map concept permits individual countries to determine how they will be represented in a global database consisting of 8 layers of standardized data: administrative boundaries, drainage, transportation, population centres, elevation, land cover, land use and vegetation cover, at a data density suitable for presentation at a scale of 1:1M. Usually it is the national mapping organizations that contribute their country's data to the Global Map, which is then made available at marginal or no cost. At present, 94 nations have agreed to contribute information to the Global Map and an additional 42 are considering participation. To date, coverage has been completed and is available for 11 countries. While there is a wealth of source data available for this undertaking, not all nations have the capacity to evaluate the source data sets, make corrections and transform them into a contribution to the Global Map. A proposal to relax the specifications in order to hasten the completion of the Global Map will have to be balanced against the problems of dealing with heterogeneous databases, particularly in integration, analysis and modeling.
1.
Application of Artificial Intelligence and Telematics in the Earth and Environmental Sciences
Presentation of the book Artificial Intelligence and Dynamic Systems in Geophysical Applications, by A. Gvishiani and J.O. Dubois, Schmidt United Institute of Physics of the Earth RAS, CGDS, and Institut de Physique du Globe de Paris. This volume is the second of a two-volume series written by A. Gvishiani and J.O. Dubois. The book introduces geometrical clustering and fuzzy logic approaches to geophysical data analysis. A significant part of the volume is devoted to applying the artificial intelligence techniques introduced in the two volumes to fields such as seismology, geodynamics, geoelectricity, geomagnetism, aeromagnetics, topography and bathymetry. As in the first volume, this volume consists of two parts, describing complementary approaches to the analysis of natural systems. The first part, written by A. Gvishiani, deals with new ideas and methods in geometrical clustering and the fuzzy logic approach to geophysical data classification. It lays out the mathematical theory and formalized algorithms that form the basis for classification and clustering of the vector objects under consideration, and it lays the foundation for the second part of the book, which uses this classification in the study of dynamical systems. The second part, written by J.O. Dubois, is concerned with various theoretical tools and their applications to the modeling of natural systems using large geophysical data sets. Fractals and dynamic systems are used to analyse geomorphological (continental and marine), hydrological, bathymetric, gravimetric, seismological, geomagnetic and volcanological data. The first volume is devoted to the mathematical and algorithmic basis of the proposed artificial intelligence techniques; this volume presents a wide range of applications of those techniques to geophysical data processing and research problems, along with an algorithmic approach based on fuzzy logic and geometrical illumination models. Many readers will be interested in the two volumes (vol. 1, J.O. Dubois and A. Gvishiani, "Dynamic Systems and Dynamic Classification Problems in Geophysical Applications," and the present vol. 2, A. Gvishiani and J.O. Dubois, "Artificial Intelligence and Dynamic Systems in Geophysical Applications") as a package.
 2. 
The Environmental Scenario Generator (ESG): a Distributed Environmental Data Mining Tool
The Environmental Scenario Generator (ESG) is a network-distributed software system designed to allow a user running a simulation to intelligently access distributed environmental data archives for inclusion and integration with model runs. The ESG is built to solve several key problems for the modeler. The first is to provide access to an intelligent "data mining" tool so that key environmental data can not only be retrieved and visualized, but user-defined conditions can also be searched for and discovered. As an example, a user modeling a hurricane's landfall might want to model the result of an extreme rain event prior to the hurricane's arrival. Without a tool such as ESG, the simulation coordinator would be required to know:
If we consider combining these questions across multiple parameters, such as temperature, pressure and wind speed, and then add multiple regions and seasons, the problem reveals itself to be quite daunting.
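A minimal sketch of such a condition search follows; the data, threshold and window are invented, and this is not the ESG's actual interface.

```python
# Scan a daily rainfall archive for an "extreme rain event" inside a window
# before a chosen landfall date.
from datetime import date, timedelta

rain_mm = {date(1999, 9, d): r for d, r in
           [(1, 4.0), (2, 0.0), (3, 61.0), (4, 88.0), (5, 2.0), (6, 0.0)]}

def extreme_rain_before(landfall, window_days=7, threshold_mm=50.0):
    """Return dates in the window before landfall exceeding the threshold."""
    start = landfall - timedelta(days=window_days)
    return [d for d, r in sorted(rain_mm.items())
            if start <= d < landfall and r >= threshold_mm]

print(extreme_rain_before(date(1999, 9, 7)))   # the 3rd and 4th qualify
```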
 3. 
                  Satellite Imagery As a Multi-Disciplinary Tool for Environmental 
                  Applications 
4.
SPIDR 2 is a distributed resource for accessing space physics data, designed and constructed jointly at NGDC and CGDS to support the requirements of the Global Observation and Information Network (GOIN) project. SPIDR is designed to allow users to search, browse, retrieve, and display Solar Terrestrial Physics (STP) and DMSP satellite digital data. SPIDR consists of a WWW interface, online data and information, interactive display programs, and advanced data mining and data retrieval programs.
5.
An Automatic Analysis of Long Geoelectromagnetic Time Series: Determination of Volcanic Activity Precursors
New methods have been developed for the analysis of long geophysical time series, based on the fuzzy logic approach. These methods include algorithms for the detection of anomalous signals. They are specially designed for, and very efficient in, problems where the definition of an anomalous signal is fuzzy, i.e., where the general signature, amplitude and frequency of the signal cannot be prescribed a priori, as when searching for precursors of natural disasters in geophysical records. The developed algorithms are able to determine the intervals of a record that are anomalous with respect to the background signal present in the record. Other algorithms deal with the morphological analysis of signals. These algorithms were applied to the analysis of the electromagnetic records over La Fournaise volcano (Reunion Island). For several years, five stations measured the electric field along different directions. The signals specific to eruption events were determined and correlated across several stations. Other types of signals, corresponding to storms and other sources, were also detected and classified. Software has been designed that helps analyze the spatial distribution of activity across the stations.
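The algorithms themselves are not given in the abstract; the sketch below shows only the general flavour of fuzzy anomaly scoring, in which each sample receives a soft membership in "anomalous" rather than passing a hard threshold. The signal and all parameters are toy choices.

```python
# Score each sample's deviation from a robust local baseline (median/MAD)
# and map it to a fuzzy membership in [0, 1].
import statistics

def fuzzy_anomaly(signal, half_window=5):
    scores = []
    for i, x in enumerate(signal):
        lo, hi = max(0, i - half_window), min(len(signal), i + half_window + 1)
        window = signal[lo:hi]
        med = statistics.median(window)
        mad = statistics.median(abs(v - med) for v in window) or 1e-9
        z = abs(x - med) / (1.4826 * mad)   # robust deviation score
        scores.append(z / (z + 3.0))        # soft membership, 0.5 at z == 3
    return scores

signal = [0.1, 0.0, 0.2, -0.1, 5.0, 4.8, 0.1, 0.0, -0.2, 0.1]
print([round(s, 2) for s in fuzzy_anomaly(signal)])
```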
6.
Application of telematics approaches for solving the problems of distributed environmental monitoring
The results of research carried out at the Glushkov Cybernetics Center of the National Academy of Sciences of Ukraine are presented, with a review of advanced developments in the field of distributed environmental monitoring. Among the developments presented is an interactive system for the modeling and prognosis of ecological, economic and other processes on the basis of observations, to support rapid control decisions. The system is based on the inductive method of group accounting of arguments, used for automatic extraction of the substantial information from measurement data. The efficiency of the system is demonstrated in applications modeling and forecasting the dynamics of animal plankton concentrations, the numbers of microorganisms in contaminated soil, and others. The designs of a mobile laboratory for rapid radiation monitoring (RAMON) and of an automated system for the study of subsoil water processes (NADRA) are presented. Problems of user interface intellectualization in geophysical software are also considered.
 
 1. 
Clustering of Geophysical Data by New Fuzzy Logic Based Algorithms
A new system of clustering algorithms, based on a geometrical model of illumination in finite-dimensional space, has been developed recently using a fuzzy sets approach. The two major components of the system are the RODIN and CRYSTAL algorithms. These two efficient clustering tools will be presented along with their applications to seismological, gravity and geomagnetic data analysis. The regions of the Molucca Sea (Indonesia) and the Gulf of Saint-Malo (France) are under consideration. In the course of studying the very complicated geodynamics of the Molucca Sea region, earthquake hypocenters were clustered with respect to their position, type of faulting and horizontal displacement strike. The results of this procedure clarified the stress pattern and hence the geodynamical structure of the region. The RODIN algorithm was also applied to cluster the results of anomalous gravity field pseudo-inversion over this region. It improved the solution considerably and helped to determine the depths and horizontal positions of the sources of the gravity anomalies. The results obtained correlate well with the results of local seismic tomography and gravity inversion. In the Gulf of Saint-Malo region, the developed algorithms were successfully used to investigate the structure of quasi-linear magnetic anomalies onshore and offshore.
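RODIN and CRYSTAL themselves are not published in this abstract; as a stand-in illustration of fuzzy-set clustering of hypocenter-like data, here is a compact fuzzy c-means loop (a standard fuzzy clustering method, not the authors' algorithms) over toy 2-D coordinates.

```python
# Fuzzy c-means: alternate between weighted centroids and soft memberships.
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))     # soft memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # standard FCM membership update
    return centers, U

X = np.array([[0.1, 0.2], [0.0, 0.1], [0.2, 0.0],   # toy "hypocenter" cluster 1
              [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])  # toy "hypocenter" cluster 2
centers, U = fuzzy_c_means(X)
print(np.round(centers, 2))
print(np.round(U, 2))
```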
 2. 
                  Artificial Intelligence Methods in the Analysis of Large Geophysical 
                  Data Bases 
 3. 
Geo-Environmental Assessment of Flash Flood Hazard of the Safaga Terrain, Egypt, Using Remote Sensing Imagery
 4. 
                  On the Modeling of Fast Variations of the Mode of Deformation 
                  of Lithospheric Plates 
 5. 
New Mathematical Approach to Seismotectonic Data Studies
The paper discusses possible applications of new, recently obtained exact solutions of some classical problems of elasticity theory for domains containing ruptures. Analysis of the solutions obtained demonstrates that the solution for domains with ruptures is non-unique. The explanation lies in the fact that the properties of crack apexes differ considerably from the properties of the domain they belong to. The stress distribution strongly depends on the work of the surface forces released at these points; practically, it is a question of the work released at the micro level. Thus the effect of crack apexes can be calculated only as additional work released there.
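For orientation only, the standard linear-elastic background (not the paper's new solutions) is the square-root singularity of the near-tip stress field; it is this singular behaviour that lets work done by surface forces concentrated at a crack apex enter the solution as a separate finite term.

```latex
% K: stress intensity factor; f_ij: the usual angular functions; the O(1)
% part collects the bounded terms. Standard notation, assumed for illustration.
\[
\sigma_{ij}(r,\theta) \;=\; \frac{K}{\sqrt{2\pi r}}\, f_{ij}(\theta) \;+\; O(1),
\qquad r \to 0 .
\]
```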