    Author · Title · Year · Journal/Proceedings · Reftype · DOI/URL

    Adjiman, P., Chatalic, P., Goasdoue, F., Rousset, M. & Simon, L. Distributed reasoning in a peer-to-peer setting: Application to the Semantic Web {2006} JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH
    Vol. {25}, pp. {269-314} 
    article  
    Abstract: In a peer-to-peer inference system, each peer can reason locally but can also solicit some of its acquaintances, which are peers sharing part of its vocabulary. In this paper, we consider peer-to-peer inference systems in which the local theory of each peer is a set of propositional clauses defined upon a local vocabulary. An important characteristic of peer-to-peer inference systems is that the global theory (the union of all peer theories) is not known (as opposed to partition-based reasoning systems). The main contribution of this paper is to provide the first consequence finding algorithm in a peer-to-peer setting: DeCA. It is anytime and computes consequences gradually from the solicited peer to peers that are more and more distant. We exhibit a sufficient condition on the acquaintance graph of the peer-to-peer inference system for guaranteeing the completeness of this algorithm. Another important contribution is to apply this general distributed reasoning setting to the setting of the Semantic Web through the Somewhere semantic peer-to-peer data management system. The last contribution of this paper is to provide an experimental analysis of the scalability of the peer-to-peer infrastructure that we propose, on large networks of 1000 peers.
    BibTeX:
    @article{Adjiman2006,
      author = {Adjiman, P and Chatalic, P and Goasdoue, F and Rousset, MC and Simon, L},
      title = {Distributed reasoning in a peer-to-peer setting: Application to the Semantic Web},
      journal = {JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH},
      year = {2006},
      volume = {25},
      pages = {269-314}
    }
    
    Agarwal, P. Ontological considerations in GIScience {2005} INTERNATIONAL JOURNAL OF GEOGRAPHICAL INFORMATION SCIENCE
    Vol. {19}({5}), pp. {501-536} 
    article DOI  
    Abstract: Ontology is a significant research theme in GIScience. While some researchers believe that the progress in GIScience is being directed through an engagement with the concept of ontology, some dismiss it as irrelevant. This paper is aimed at (i) exploring the theoretical and practical roles of ontologies; (ii) making the definitions and terminology explicit; (iii) assessing the applicability of ontology to problems in the geographical domain; and (iv) assessing whether a unified approach to ontology exists in GIScience. The results will be helpful for GIScientists in (i) understanding the validity of employing ontology within their own work, (ii) assessing what operational framework of terms and methods to use for developing their own ontology, and (iii) assessing what existing ontological models are available and applicable within their domain or application. A comprehensive and critical review will also help in identifying the significant issues and directing the future research agenda in GIScience.
    BibTeX:
    @article{Agarwal2005,
      author = {Agarwal, P},
      title = {Ontological considerations in GIScience},
      journal = {INTERNATIONAL JOURNAL OF GEOGRAPHICAL INFORMATION SCIENCE},
      year = {2005},
      volume = {19},
      number = {5},
      pages = {501-536},
      doi = {{10.1080/13658810500032321}}
    }
    
    Ahmed, T., Mehaoua, A., Boutaba, R. & Iraqi, Y. Adaptive packet video streaming over IP networks: A cross-layer approach {2005} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {23}({2}), pp. {385-401} 
    article DOI  
    Abstract: There is an increasing demand for supporting real-time audiovisual services over next-generation wired and wireless networks. Various link/network characteristics make the deployment of such demanding services more challenging than traditional data applications like e-mail and the Web. These audiovisual applications are bandwidth adaptive but have stringent delay, jitter, and packet loss requirements. Consequently, one of the major requirements for the successful and wide deployment of such services is the efficient transmission of sensitive content (audio, video, image) over a broad range of bandwidth-constrained access networks. These media will be typically compressed according to the emerging ISO/IEC MPEG-4 standard to achieve high bandwidth efficiency and content-based interactivity. MPEG-4 provides an integrated object-oriented representation and coding of natural and synthetic audiovisual content for its manipulation and transport over a broad range of communication infrastructures. In this paper, we leverage the characteristics of MPEG-4 and Internet protocol (IP) differentiated service frameworks to propose an innovative cross-layer content delivery architecture that is capable of receiving information from the network and adaptively tuning transport parameters, bit rates, and QoS mechanisms according to the underlying network conditions. This service-aware IP transport architecture is composed of: 1) an automatic content-level audiovisual object classification model; 2) a reliable application-level framing protocol with fine-grained TCP-friendly rate control and adaptive unequal error protection; and 3) a service-level QoS matching/packet tagging algorithm for seamless IP differentiated service delivery. The obtained results demonstrate that breaking the OSI protocol layer isolation paradigm and injecting content-level semantics and service-level requirements within the transport and traffic control protocols lead to intelligent and efficient support of multimedia services over complex network architectures.
    BibTeX:
    @article{Ahmed2005,
      author = {Ahmed, T and Mehaoua, A and Boutaba, R and Iraqi, Y},
      title = {Adaptive packet video streaming over IP networks: A cross-layer approach},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {2005},
      volume = {23},
      number = {2},
      pages = {385-401},
      doi = {{10.1109/JSAC.2004.839425}}
    }
    
    Alameda, J., Christie, M., Fox, G., Futrelle, J., Gannon, D., Hategan, M., Kandaswamy, G., von Laszewski, G., Nacar, M.A., Pierce, M., Roberts, E., Severance, C. & Thomas, M. The Open Grid Computing environments collaboration: portlets and services for science gateways {2007} CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE
    Vol. {19}({6}), pp. {921-942} 
    article DOI  
    Abstract: We review the efforts of the Open Grid Computing Environments collaboration. By adopting a general three-tiered architecture based on common standards for portlets and Grid Web services, we can deliver numerous capabilities to science gateways from our diverse constituent efforts. In this paper, we discuss our support for standards-based Grid portlets using the Velocity development environment. Our Grid portlets are based on abstraction layers provided by the Java CoG kit, which hide the differences of different Grid toolkits. Sophisticated services are decoupled from the portal container using Web service strategies. We describe advance information, semantic data, collaboration, and science application services developed by our consortium. Copyright (c) 2006 John Wiley & Sons, Ltd.
    BibTeX:
    @article{Alameda2007,
      author = {Alameda, Jay and Christie, Marcus and Fox, Geoffrey and Futrelle, Joe and Gannon, Dennis and Hategan, Mihael and Kandaswamy, Gopi and von Laszewski, Gregor and Nacar, Mehmet A. and Pierce, Marlon and Roberts, Eric and Severance, Charles and Thomas, Mary},
      title = {The Open Grid Computing environments collaboration: portlets and services for science gateways},
      journal = {CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE},
      year = {2007},
      volume = {19},
      number = {6},
      pages = {921-942},
      note = {14th Global Grid Forum Science Gateway Workshop, Chicago, IL, JUN, 2005},
      doi = {{10.1002/cpe.1078}}
    }
    
    Aleman-Meza, B., Halaschek-Wiener, C., Arpinar, I., Ramakrishnan, C. & Sheth, A. Ranking complex relationships on the semantic Web {2005} IEEE INTERNET COMPUTING
    Vol. {9}({3}), pp. {37-44} 
    article  
    Abstract: Industry and academia are both focusing their attention on information retrieval over semantic metadata extracted from the Web, and it is increasingly possible to analyze such metadata to discover interesting relationships. However, just as document ranking is a critical component in today's search engines, the ranking of complex relationships will be an important component in tomorrow's Semantic Web engines. This article presents a flexible ranking approach to identify interesting and relevant relationships in the Semantic Web. The authors demonstrate the scheme's effectiveness through an empirical evaluation over a real-world data set.
    BibTeX:
    @article{Aleman-Meza2005,
      author = {Aleman-Meza, B and Halaschek-Wiener, C and Arpinar, IB and Ramakrishnan, C and Sheth, AP},
      title = {Ranking complex relationships on the semantic Web},
      journal = {IEEE INTERNET COMPUTING},
      year = {2005},
      volume = {9},
      number = {3},
      pages = {37-44}
    }
    
    Amiri, K., Park, S., Tewari, R. & Padmanabhan, S. DBProxy: A dynamic data cache for Web applications {2003} 19TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING, PROCEEDINGS, pp. {821-831}  inproceedings  
    Abstract: The majority of web pages served today are generated dynamically, usually by an application server querying a back-end database. To enhance the scalability of dynamic content serving in large sites, application servers are offloaded to front-end nodes, called edge servers. The improvement from such application offloading is marginal, however, if data is still fetched from the origin database system. To further improve scalability and cut response times, data must be effectively cached on such edge servers. The scale of deployment of edge servers and the rising costs of their administration demand that such caches be self-managing and adaptive. In this paper, we describe DBProxy, an edge-of-network semantic data cache for web applications. DBProxy is designed to adapt to changes in the workload in a transparent and graceful fashion by caching a large number of overlapping and dynamically changing ``materialized views''. New ``views'' are added automatically while others may be discarded to save space. In this paper, we discuss the challenges of designing and implementing such a dynamic edge data cache, and describe our proposed solutions.
    BibTeX:
    @inproceedings{Amiri2003,
      author = {Amiri, K and Park, S and Tewari, R and Padmanabhan, S},
      title = {DBProxy: A dynamic data cache for Web applications},
      booktitle = {19TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING, PROCEEDINGS},
      year = {2003},
      pages = {821-831},
      note = {19th International Conference on Data Engineering, BANGALORE, INDIA, MAR 05-08, 2003}
    }
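
    A toy sketch of the caching idea: the abstract above describes a self-managing edge cache for query results. The Python below is an invented illustration, not DBProxy's design; it shows only the simplest possible variant, a cache keyed by normalized query text, whereas the real system caches overlapping materialized views and answers new queries by containment matching.

    # Toy edge cache in the spirit of DBProxy (illustration only).
    class EdgeCache:
        def __init__(self, backend):
            self.backend = backend      # function: SQL text -> result rows
            self.store = {}             # normalized SQL -> cached rows

        @staticmethod
        def _normalize(sql):
            return " ".join(sql.lower().split())

        def query(self, sql):
            key = self._normalize(sql)
            if key not in self.store:   # miss: fetch from the origin database
                self.store[key] = self.backend(sql)
            return self.store[key]

    # Hypothetical stand-in for the origin database.
    def origin_db(sql):
        print("origin executed:", sql)
        return [("row1",), ("row2",)]

    cache = EdgeCache(origin_db)
    cache.query("SELECT * FROM items WHERE price < 10")  # goes to the origin
    cache.query("select * from items where price < 10")  # served from the cache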
    
    Ankolekar, A., Burstein, M., Hobbs, J., Lassila, O., Martin, D., McDermott, D., McIlraith, S., Narayanan, S., Paolucci, M., Payne, T., Sycara, K. & DAML S Coalition DAML-S: Web Service description for the Semantic Web {2002}
    Vol. {2342}SEMANTIC WEB - ISWC 2002, pp. {348-363} 
    inproceedings  
    Abstract: In this paper we present DAML-S, a DAML+OIL ontology for describing the properties and capabilities of Web Services. Web Services - Web-accessible programs and devices - are garnering a great deal of interest from industry, and standards are emerging for low-level descriptions of Web Services. DAML-S complements this effort by providing Web Service descriptions at the application layer, describing what a service can do, and not just how it does it. In this paper we describe three aspects of our ontology: the service profile, the process model, and the service grounding. The paper focuses on the grounding, which connects our ontology with low-level XML-based descriptions of Web Services.
    BibTeX:
    @inproceedings{Ankolekar2002,
      author = {Ankolekar, A and Burstein, M and Hobbs, JR and Lassila, O and Martin, D and McDermott, D and McIlraith, SA and Narayanan, S and Paolucci, M and Payne, T and Sycara, K and DAML S Coalition},
      title = {DAML-S: Web Service description for the Semantic Web},
      booktitle = {SEMANTIC WEB - ISWC 2002},
      year = {2002},
      volume = {2342},
      pages = {348-363},
      note = {1st International Semantic Web Conference (ISWC), SARDINIA, ITALY, JUN 09-12, 2002}
    }
    
    Antoniou, G. Nonmonotonic rule systems on top of ontology layers {2002}
    Vol. {2342}SEMANTIC WEB - ISWC 2002, pp. {394-398} 
    inproceedings  
    Abstract: The development of the Semantic Web proceeds in layers. Currently the most advanced layer that has reached maturity is the ontology layer, in the form of the DAML+OIL language, which corresponds to a rich description logic. The next step will be the realization of logical rule systems on top of the ontology layer. Computationally simple nonmonotonic rule systems show promise to play an important role in electronic commerce on the Semantic Web. In this paper we outline how nonmonotonic rule systems, in the form of defeasible reasoning, can be built on top of description logics.
    BibTeX:
    @inproceedings{Antoniou2002,
      author = {Antoniou, G},
      title = {Nonmonotonic rule systems on top of ontology layers},
      booktitle = {SEMANTIC WEB - ISWC 2002},
      year = {2002},
      volume = {2342},
      pages = {394-398},
      note = {1st International Semantic Web Conference (ISWC), SARDINIA, ITALY, JUN 09-12, 2002}
    }
    
    Antoniou, G. & Bikakis, A. DR-Prolog: A system for defeasible reasoning with rules and ontologies on the Semantic Web {2007} IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
    Vol. {19}({2}), pp. {233-245} 
    article  
    Abstract: Nonmonotonic rule systems are expected to play an important role in the layered development of the Semantic Web. Defeasible reasoning is a direction in nonmonotonic reasoning that is based on the use of rules that may be defeated by other rules. It is a simple, but often more efficient approach than other nonmonotonic rule systems for reasoning with incomplete and inconsistent information. This paper reports on the implementation of a system for defeasible reasoning on the Web. The system 1) is syntactically compatible with RuleML, 2) features strict and defeasible rules, priorities, and two kinds of negation, 3) is based on a translation to logic programming with declarative semantics, 4) is flexible and adaptable to different intuitions within defeasible reasoning, and 5) can reason with rules, RDF, RDF Schema, and (parts of) OWL ontologies.
    BibTeX:
    @article{Antoniou2007,
      author = {Antoniou, Grigoris and Bikakis, Antonis},
      title = {DR-Prolog: A system for defeasible reasoning with rules and ontologies on the Semantic Web},
      journal = {IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING},
      year = {2007},
      volume = {19},
      number = {2},
      pages = {233-245}
    }
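
    The rule features listed in the abstract above (strict and defeasible rules, priorities, conflicting conclusions) can be made concrete with a toy example. The Python sketch below illustrates only the flavor of defeasible reasoning; it is not DR-Prolog's syntax, proof theory, or implementation, and the bird/penguin rule set is invented for the example.

    # Toy defeasible reasoning: rules are (premises, conclusion, priority);
    # "~" marks negation, and an applicable higher-priority rule defeats a
    # lower-priority rule with the conflicting conclusion.
    rules = [
        ({"bird"}, "flies", 1),      # defeasible: birds typically fly
        ({"penguin"}, "~flies", 2),  # stronger rule: penguins do not fly
        ({"penguin"}, "bird", 3),    # penguins are birds
    ]

    def conclude(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, concl, prio in rules:
                if not premises <= facts or concl in facts:
                    continue
                neg = concl[1:] if concl.startswith("~") else "~" + concl
                defeated = any(p <= facts and c == neg and pr > prio
                               for p, c, pr in rules)
                if not defeated and neg not in facts:
                    facts.add(concl)
                    changed = True
        return facts

    print(conclude({"penguin"}, rules))  # {'penguin', '~flies', 'bird'}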
    
    Aranguren, M.E., Bechhofer, S., Lord, P., Sattler, U. & Stevens, R. Understanding and using the meaning of statements in a bio-ontology: recasting the Gene Ontology in OWL {2007} BMC BIOINFORMATICS
    Vol. {8} 
    article DOI  
    Abstract: The bio-ontology community falls into two camps: first we have biology domain experts, who actually hold the knowledge we wish to capture in ontologies; second, we have ontology specialists, who hold knowledge about techniques and best practice on ontology development. In the bio-ontology domain, these two camps have often come into conflict, especially where pragmatism comes into conflict with perceived best practice. One of these areas is the insistence of computer scientists on a well-defined semantic basis for the Knowledge Representation language being used. In this article, we will first describe why this community is so insistent. Second, we will illustrate this by examining the semantics of the Web Ontology Language and the semantics placed on the Directed Acyclic Graph as used by the Gene Ontology. Finally we will reconcile the two representations, including the broader Open Biomedical Ontologies format. The ability to exchange between the two representations means that we can capitalise on the features of both languages. Such utility can only arise by the understanding of the semantics of the languages being used. By this illustration of the usefulness of a clear, well-defined language semantics, we wish to promote a wider understanding of the computer science perspective amongst potential users within the biological community.
    BibTeX:
    @article{Aranguren2007,
      author = {Aranguren, Mikel Egana and Bechhofer, Sean and Lord, Phillip and Sattler, Ulrike and Stevens, Robert},
      title = {Understanding and using the meaning of statements in a bio-ontology: recasting the Gene Ontology in OWL},
      journal = {BMC BIOINFORMATICS},
      year = {2007},
      volume = {8},
      doi = {{10.1186/1471-2105-8-57}}
    }
    
    Arndt, R., Troncy, R., Staab, S., Hardman, L. & Vacura, M. COMM: Designing a well-founded multimedia ontology for the web {2007}
    Vol. {4825}SEMANTIC WEB, PROCEEDINGS, pp. {30-43} 
    inproceedings  
    Abstract: Semantic descriptions of non-textual media available on the web can be used to facilitate retrieval and presentation of media assets and documents containing them. While technologies for multimedia semantic descriptions already exist, there is as yet no formal description of a high quality multimedia ontology that is compatible with existing (semantic) web technologies. We explain the complexity of the problem using an annotation scenario. We then derive a number of requirements for specifying a formal multimedia ontology before we present the developed ontology, COMM, and evaluate it with respect to our requirements. We provide an API for generating multimedia annotations that conform to COMM.
    BibTeX:
    @inproceedings{Arndt2007,
      author = {Arndt, Richard and Troncy, Raphael and Staab, Steffen and Hardman, Lynda and Vacura, Miroslav},
      title = {COMM: Designing a well-founded multimedia ontology for the web},
      booktitle = {SEMANTIC WEB, PROCEEDINGS},
      year = {2007},
      volume = {4825},
      pages = {30-43},
      note = {6th International Semantic Web Conference/2nd Asian Semantic Web Conference (ISWC 2007/ASWC 2007), Busan, SOUTH KOREA, NOV 11-15, 2007}
    }
    
    Arotaritei, D. & Mitra, S. Web mining: a survey in the fuzzy framework {2004} FUZZY SETS AND SYSTEMS
    Vol. {148}({1}), pp. {5-19} 
    article DOI  
    Abstract: This article provides a survey of the available literature on fuzzy Web mining. The different aspects of Web mining, like clustering, association rule mining, navigation, personalization, Semantic Web, information retrieval, text and image mining are considered under the existing taxonomy. The role of fuzzy sets in handling the different types of uncertainties/impreciseness is highlighted. A hybridization of fuzzy sets with genetic algorithms (another soft computing tool) is described for information retrieval. An extensive bibliography is also included. (C) 2004 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Arotaritei2004,
      author = {Arotaritei, D and Mitra, S},
      title = {Web mining: a survey in the fuzzy framework},
      journal = {FUZZY SETS AND SYSTEMS},
      year = {2004},
      volume = {148},
      number = {1},
      pages = {5-19},
      doi = {{10.1016/j.fss.2004.03.003}}
    }
    
    Aroyo, L. & Dicheva, D. The new challenges for e-learning: The Educational Semantic Web {2004} EDUCATIONAL TECHNOLOGY & SOCIETY
    Vol. {7}({4}), pp. {59-69} 
    article  
    Abstract: The big question for many researchers in the area of educational systems now is: what is the next step in the evolution of e-learning? Are we finally moving from a scattered intelligence to a coherent space of collaborative intelligence? How close are we to the vision of the Educational Semantic Web, and what do we need to do in order to realize it? Two main challenges can be seen in this direction: on the one hand, to achieve interoperability among various educational systems, and on the other hand, to have automated, structured and unified authoring support for their creation. In the spirit of the Semantic Web, a key to enabling interoperability is to capitalize on (1) semantic conceptualization and ontologies, (2) common standardized communication syntax, and (3) large-scale service-based integration of educational content and functionality provision and usage. The process-awareness of authoring tools, which should reflect the semantic evolution of e-learning systems, plays a central role in achieving unified authoring support. The purpose of this paper is to outline the state-of-the-art research along those lines and to suggest a realistic way towards the Educational Semantic Web. With regard to the latter, we first propose a modular semantic-driven and service-based interoperability framework, in order to open up, share and reuse educational systems' content and knowledge components. Then we focus on content creation by proposing ontology-driven authoring tools that reflect the modularization in the educational systems, maintain a consistent view on the entire authoring process, and provide wide (semi-)automation of the complex authoring tasks.
    BibTeX:
    @article{Aroyo2004,
      author = {Aroyo, L and Dicheva, D},
      title = {The new challenges for e-learning: The Educational Semantic Web},
      journal = {EDUCATIONAL TECHNOLOGY & SOCIETY},
      year = {2004},
      volume = {7},
      number = {4},
      pages = {59-69}
    }
    
    Aroyo, L., Dolog, P., Houben, G.-J., Kravcik, M., Naeve, A., Nilsson, M. & Wild, F. Interoperability in personalized adaptive learning {2006} EDUCATIONAL TECHNOLOGY & SOCIETY
    Vol. {9}({2}), pp. {4-18} 
    article  
    Abstract: Personalized adaptive learning requires semantic-based and context-aware systems to manage the Web knowledge efficiently as well as to achieve semantic interoperability between heterogeneous information resources and services. The technological and conceptual differences can be bridged either by means of standards or via approaches based on the Semantic Web. This article deals with the issue of semantic interoperability of educational contents on the Web by considering the integration of learning standards, Semantic Web, and adaptive technologies to meet the requirements of learners. Discussion is made on the state of the art and the main challenges in this field, including metadata access and design issues relating to adaptive learning. Additionally, a way how to integrate several original approaches is proposed.
    BibTeX:
    @article{Aroyo2006,
      author = {Aroyo, Lora and Dolog, Peter and Houben, Geert-Jan and Kravcik, Milos and Naeve, Ambjorn and Nilsson, Mikael and Wild, Fridolin},
      title = {Interoperability in personalized adaptive learning},
      journal = {EDUCATIONAL TECHNOLOGY & SOCIETY},
      year = {2006},
      volume = {9},
      number = {2},
      pages = {4-18},
      note = {14th International World Wide Web Conference (WWW2005), Chiba, JAPAN, MAY 10-14, 2005}
    }
    
    Artz, D. & Gil, Y. A survey of trust in computer science and the Semantic Web {2007} JOURNAL OF WEB SEMANTICS
    Vol. {5}({2}), pp. {58-71} 
    article DOI  
    Abstract: Trust is an integral component in many kinds of human interaction, allowing people to act under uncertainty and with the risk of negative consequences. For example, exchanging money for a service, giving access to your property, and choosing between conflicting sources of information all may utilize some form of trust. In computer science, trust is a widely used term whose definition differs among researchers and application areas. Trust is an essential component of the vision for the Semantic Web, where both new problems and new applications of trust are being studied. This paper gives an overview of existing trust research in computer science and the Semantic Web. (C) 2007 Published by Elsevier B. V.
    BibTeX:
    @article{Artz2007,
      author = {Artz, Donovan and Gil, Yolanda},
      title = {A survey of trust in computer science and the Semantic Web},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2007},
      volume = {5},
      number = {2},
      pages = {58-71},
      doi = {{10.1016/j.websem.2007.03.002}}
    }
    
    Astrova, I. Reverse engineering of relational Databases to ontologies {2004}
    Vol. {3053}SEMANTIC WEB: RESEARCH AND APPLICATIONS, pp. {327-341} 
    inproceedings  
    Abstract: A majority of the work on reverse engineering has been done on extracting entity-relationship and object models from relational databases. There exist only a few approaches that consider ontologies as the target for reverse engineering. Moreover, the existing approaches can extract only a small subset of the semantics embedded within a relational database, or they require much user interaction for semantic annotation. In our opinion, the source of these problems lies in the fact that the primary focus has been on analyzing key correlations. Data and attribute correlations are rarely considered and thus have received little or no analysis. As an attempt to resolve these problems, we propose a novel approach, which is based on an analysis of key, data and attribute correlations, as well as their combination. Our approach can be applied to migrating data-intensive Web pages, which are usually based on relational databases, into the ontology-based Semantic Web.
    BibTeX:
    @inproceedings{Astrova2004,
      author = {Astrova, I},
      title = {Reverse engineering of relational Databases to ontologies},
      booktitle = {SEMANTIC WEB: RESEARCH AND APPLICATIONS},
      year = {2004},
      volume = {3053},
      pages = {327-341},
      note = {1st European Semantic Web Symposium, Heraklion, GREECE, MAY 10-12, 2004}
    }
    
    Auer, S., Dietzold, S. & Riechert, T. OntoWiki - A tool for social, semantic collaboration {2006}
    Vol. {4273}Semantic Web - ISEC 2006, Proceedings, pp. {736-749} 
    inproceedings  
    Abstract: We present OntoWiki, a tool providing support for agile, distributed knowledge engineering scenarios. OntoWiki facilitates the visual presentation of a knowledge base as an information map, with different views on instance data. It enables intuitive authoring of semantic content, with an inline editing mode for editing RDF content, similar to WYSIWYG for text documents. It fosters social collaboration aspects by keeping track of changes, allowing users to comment on and discuss every single part of a knowledge base, enabling them to rate and measure the popularity of content, and honoring the activity of users. OntoWiki enhances browsing and retrieval by offering semantically enhanced search strategies. All these techniques are applied with the ultimate goal of decreasing the entrance barrier for projects and domain experts to collaborate using semantic technologies. In the spirit of Web 2.0, OntoWiki implements an ``architecture of participation'' that allows users to add value to the application as they use it. It is available as open-source software and a demonstration platform can be accessed at http://3ba.se.
    BibTeX:
    @inproceedings{Auer2006,
      author = {Auer, Soren and Dietzold, Sebastian and Riechert, Thomas},
      title = {OntoWiki - A tool for social, semantic collaboration},
      booktitle = {Semantic Web - ISEC 2006, Proceedings},
      year = {2006},
      volume = {4273},
      pages = {736-749},
      note = {5th International Semantic Web Conference (ISWC 2006), Athens, GA, NOV 05-09, 2006}
    }
    
    Avraham, S., Tung, C.-W., Ilic, K., Jaiswal, P., Kellogg, E.A., McCouch, S., Pujar, A., Reiser, L., Rhee, S.Y., Sachs, M.M., Schaeffer, M., Stein, L., Stevens, P., Vincent, L., Zapata, F. & Ware, D. The Plant Ontology Database: a community resource for plant structure and developmental stages controlled vocabulary and annotations {2008} NUCLEIC ACIDS RESEARCH
    Vol. {36}({Sp. Iss. SI}), pp. {D449-D454} 
    article DOI  
    Abstract: The Plant Ontology Consortium (POC, http://www.plantontology.org) is a collaborative effort among model plant genome databases and plant researchers that aims to create, maintain and facilitate the use of a controlled vocabulary (ontology) for plants. The ontology allows users to ascribe attributes of plant structure (anatomy and morphology) and developmental stages to data types, such as genes and phenotypes, to provide a semantic framework for making meaningful cross-species and database comparisons. The POC builds upon groundbreaking work by the Gene Ontology Consortium (GOC) by adopting and extending the GOC's principles, existing software and database structure. Over the past year, the POC has added hundreds of ontology terms to associate with thousands of genes and gene products from Arabidopsis, rice and maize, which are available through a newly updated web-based browser (http://www.plantontology.org/amigo/go.cgi) for viewing, searching and querying. The Consortium has also implemented new functionalities to facilitate the application of the PO in genomic research and updated the website to keep the contents current. In this report, we present a brief description of resources available from the website, changes to the interfaces, data updates, community activities and future enhancements.
    BibTeX:
    @article{Avraham2008,
      author = {Avraham, Shulamit and Tung, Chih-Wei and Ilic, Katica and Jaiswal, Pankaj and Kellogg, Elizabeth A. and McCouch, Susan and Pujar, Anuradha and Reiser, Leonore and Rhee, Seung Y. and Sachs, Martin M. and Schaeffer, Mary and Stein, Lincoln and Stevens, Peter and Vincent, Leszek and Zapata, Felipe and Ware, Doreen},
      title = {The Plant Ontology Database: a community resource for plant structure and developmental stages controlled vocabulary and annotations},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {2008},
      volume = {36},
      number = {Sp. Iss. SI},
      pages = {D449-D454},
      doi = {{10.1093/nar/gkm908}}
    }
    
    Aziz, H., Gao, J., Maropoulos, P. & Cheung, W. Open standard, open source and peer-to-peer tools and methods for collaborative product development {2005} COMPUTERS IN INDUSTRY
    Vol. {56}({3}), pp. {260-271} 
    article DOI  
    Abstract: This paper reports on a collaborative product development and knowledge management platform for small to medium enterprises. It has been recognised that current product lifecycle management (PLM) implementations are document oriented, have a non-customisable data model, and suffer from inter-enterprise integration difficulties. To overcome these, an ontological knowledge management methodology utilising the semantic web initiative's data formats was added to a PLM system and an open source alternative. Shortcomings of centralised architectures are highlighted and a solution using a de-centralised architecture is proposed. This is implementable at low cost; the scalability increases in line with user numbers. Ontologies, rules and workflows are reusable and extendable. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Aziz2005,
      author = {Aziz, H and Gao, J and Maropoulos, P and Cheung, WM},
      title = {Open standard, open source and peer-to-peer tools and methods for collaborative product development},
      journal = {COMPUTERS IN INDUSTRY},
      year = {2005},
      volume = {56},
      number = {3},
      pages = {260-271},
      doi = {{10.1016/j.compind.2004.12.002}}
    }
    
    Baader, F., Horrocks, I. & Sattler, U. Description logics as ontology languages for the semantic web {2005}
    Vol. {2605}MECHANIZING MATHEMATICAL REASONING: ESSAYS IN HONOUR OF JORG H SIEKMANN ON THE OCCASION OF HIS 60TH BIRTHDAY, pp. {228-248} 
    incollection  
    BibTeX:
    @incollection{Baader2005,
      author = {Baader, F and Horrocks, I and Sattler, U},
      title = {Description logics as ontology languages for the semantic web},
      booktitle = {MECHANIZING MATHEMATICAL REASONING: ESSAYS IN HONOUR OF JORG H SIEKMANN ON THE OCCASION OF HIS 60TH BIRTHDAY},
      year = {2005},
      volume = {2605},
      pages = {228-248}
    }
    
    Bailey, J., Bry, F., Furche, T. & Schaffert, S. Web and Semantic Web query languages: A survey {2005}
    Vol. {3564}REASONING WEB, pp. {35-133} 
    inproceedings  
    Abstract: A number of techniques have been developed to facilitate powerful data retrieval on the Web and Semantic Web. Three categories of Web query languages can be distinguished, according to the format of the data they can retrieve: XML, RDF and Topic Maps. This article introduces the spectrum of languages falling into these categories and summarises their salient aspects. The languages are introduced using common sample data and query types. Key aspects of the query languages considered are stressed in a conclusion.
    BibTeX:
    @inproceedings{Bailey2005,
      author = {Bailey, J and Bry, F and Furche, T and Schaffert, S},
      title = {Web and Semantic Web query languages: A survey},
      booktitle = {REASONING WEB},
      year = {2005},
      volume = {3564},
      pages = {35-133},
      note = {1st International Summer School on Reasoning Web, Msida, MALTA, JUL 25-29, 2005}
    }
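
    To make the RDF branch of this taxonomy concrete, here is a minimal RDF query in Python. The choice of the rdflib library and of SPARQL is mine (SPARQL was only being standardized around the time of the survey); the survey itself compares many languages across the XML, RDF and Topic Maps categories.

    from rdflib import Graph

    # A tiny RDF graph in Turtle syntax (invented sample data).
    turtle = """
    @prefix ex: <http://example.org/> .
    ex:alice ex:knows ex:bob .
    ex:bob   ex:knows ex:carol .
    """

    g = Graph()
    g.parse(data=turtle, format="turtle")

    # SPARQL: who knows whom?
    for a, b in g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?a ?b WHERE { ?a ex:knows ?b . }
    """):
        print(a, b)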
    
    Bailin, S. & Truszkowski, W. Ontology negotiation between intelligent information agents {2002} KNOWLEDGE ENGINEERING REVIEW
    Vol. {17}({1}), pp. {7-19} 
    article DOI  
    Abstract: This paper describes an approach to ontology negotiation between agents supporting intelligent information management. Ontologies are declarative (data-driven) expressions of an agent's ``world'': the objects, operations, facts and rules that constitute the logical space within which an agent performs. Ontology negotiation enables agents to cooperate in performing a task, even if they are based on different ontologies. Our objective is to increase the opportunities for ``strange agents'' - that is, agents not necessarily developed within the same framework or with the same contextual operating assumptions - to communicate in solving tasks when they encounter each other on the web. In particular, we have focused on information search tasks. We have developed a protocol that allows agents to discover ontology conflicts and then, through incremental interpretation, clarification and explanation, establish a common basis for communicating with each other. We have implemented this protocol in a set of Java classes that can be added to a variety of agents, irrespective of their underlying ontological assumptions. We have demonstrated the use of the protocol, through this implementation, in a test-bed that includes two large scientific archives: NASA's Global Change Master Directory and NOAA's Wind and Sea Index. This paper presents an overview of different methods for resolving ontology mismatches and motivates the Ontology Negotiation Protocol (ONP) as a method that addresses some problems with other approaches. Much remains to be done. The protocol must be tested in larger and less familiar contexts (for example, numerous archives that have not been preselected) and it must be extended to accommodate additional forms of clarification and ontology evolution.
    BibTeX:
    @article{Bailin2002,
      author = {Bailin, SC and Truszkowski, W},
      title = {Ontology negotiation between intelligent information agents},
      journal = {KNOWLEDGE ENGINEERING REVIEW},
      year = {2002},
      volume = {17},
      number = {1},
      pages = {7-19},
      note = {13th International Conference on Knowledge Engineering and Knowledge Management (EKAW 2002), SIGUENZA, SPAIN, 2002},
      doi = {{10.1017/S0269888902000292}}
    }
    
    Baker, P., Goble, C., Bechhofer, S., Paton, N., Stevens, R. & Brass, A. An ontology for bioinformatics applications {1999} BIOINFORMATICS
    Vol. {15}({6}), pp. {510-520} 
    article  
    Abstract: Motivation: An ontology of biological terminology provides a model of biological concepts that can be used to form a semantic framework for many data storage, retrieval and analysis tasks. Such a semantic framework could be used to underpin a range of important bioinformatics tasks, such as the querying of heterogeneous bioinformatics sources or the systematic annotation of experimental results. Results: This paper provides an overview of an ontology [the Transparent Access to Multiple Biological Information Sources (TAMBIS) ontology or TaO] that describes a wide range of bioinformatics concepts. The present paper describes the mechanisms used for delivering the ontology and discusses the ontology's design and organization, which are crucial for maintaining the coherence of a large collection of concepts and their relationships. Availability: The TAMBIS system, which uses a subset of the TaO described here, is accessible over the Web via http://img.cs.man.ac.uk/tambis (although in the first instance, we will use a password mechanism to limit the load on our server). The complete model is also available on the Web at the above URL.
    BibTeX:
    @article{Baker1999,
      author = {Baker, PG and Goble, CA and Bechhofer, S and Paton, NW and Stevens, R and Brass, A},
      title = {An ontology for bioinformatics applications},
      journal = {BIOINFORMATICS},
      year = {1999},
      volume = {15},
      number = {6},
      pages = {510-520}
    }
    
    Bar-Ilan, J. Informetrics at the beginning of the 21st century - A review {2008} JOURNAL OF INFORMETRICS
    Vol. {2}({1}), pp. {1-52} 
    article DOI  
    Abstract: This paper reviews developments in informetrics between 2000 and 2006. At the beginning of the 21st century we witness considerable growth in webometrics, mapping and visualization and open access. A new topic is comparison between citation databases, as a result of the introduction of two new citation databases, Scopus and Google Scholar. There is renewed interest in indicators as a result of the introduction of the h-index. Traditional topics like citation analysis and informetric theory also continue to develop. The impact factor debate, especially outside the informetric literature, continues to thrive. Ranked lists (of journals, highly cited papers, or educational institutions) are of great public interest. (C) 2007 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Bar-Ilan2008,
      author = {Bar-Ilan, Judit},
      title = {Informetrics at the beginning of the 21st century - A review},
      journal = {JOURNAL OF INFORMETRICS},
      year = {2008},
      volume = {2},
      number = {1},
      pages = {1-52},
      doi = {{10.1016/j.joi.2007.11.001}}
    }
    
    Bar-Ilan, J. The use of Web search engines in information science research {2004} ANNUAL REVIEW OF INFORMATION SCIENCE AND TECHNOLOGY
    Vol. {38}, pp. {231-288} 
    article  
    BibTeX:
    @article{Bar-Ilan2004,
      author = {Bar-Ilan, J},
      title = {The use of Web search engines in information science research},
      journal = {ANNUAL REVIEW OF INFORMATION SCIENCE AND TECHNOLOGY},
      year = {2004},
      volume = {38},
      pages = {231-288}
    }
    
    Barca, L., Burani, C. & Arduino, L. Word naming times and psycholinguistic norms for Italian nouns {2002} BEHAVIOR RESEARCH METHODS INSTRUMENTS & COMPUTERS
    Vol. {34}({3}), pp. {424-434} 
    article  
    Abstract: The present study describes normative measures for 626 Italian simple nouns. The database (LEXVAR.XLS) is freely available for downloading from the Web site http://wwwistc.ip.rm.cnr.it/material/database/. For each of the 626 nouns, values for the following variables are reported: age of acquisition, familiarity, imageability, concreteness, adult written frequency, child written frequency, adult spoken frequency, number of orthographic neighbors, mean bigram frequency, length in syllables, and length in letters. A classification of lexical stress and of the type of word-initial phoneme is also provided. The intercorrelations among the variables, a factor analysis, and the effects of variables and of the extracted factors on word naming are reported. Naming latencies were affected primarily by a factor including word length and neighborhood size and by a word frequency factor. Neither a semantic factor including imageability, concreteness, and age of acquisition nor a factor defined by mean bigram frequency had significant effects on pronunciation times. These results hold for a language with shallow orthography, like Italian, for which lexical nonsemantic properties have been shown to affect reading aloud. These norms are useful in a variety of research areas involving the manipulation and control of stimulus attributes.
    BibTeX:
    @article{Barca2002,
      author = {Barca, L and Burani, C and Arduino, LS},
      title = {Word naming times and psycholinguistic norms for Italian nouns},
      journal = {BEHAVIOR RESEARCH METHODS INSTRUMENTS & COMPUTERS},
      year = {2002},
      volume = {34},
      number = {3},
      pages = {424-434}
    }
    
    Bassiliades, N., Antoniou, G. & Vlahavas, I. A defeasible logic reasoner for the semantic web {2006} INTERNATIONAL JOURNAL ON SEMANTIC WEB AND INFORMATION SYSTEMS
    Vol. {2}({1}), pp. {1-41} 
    article  
    Abstract: Defeasible reasoning is a rule-based approach for efficient reasoning with incomplete and inconsistent information. Such reasoning is, among others, useful for ontology integration, where conflicting information arises naturally, and for the modeling of business rules and policies, where rules with exceptions are often used. This paper describes these scenarios and reports on the implementation of a system for defeasible reasoning on the Web. The system, DR-DEVICE, is capable of reasoning about RDF metadata over multiple Web sources using defeasible logic rules. It is implemented on top of the CLIPS production rule system and builds upon R-DEVICE, an earlier deductive rule system over RDF metadata that also supports derived attribute and aggregate attribute rules. Rules can be expressed either in a native CLIPS-like language, or in an extension of the OO-RuleML syntax. The operational semantics of defeasible logic are implemented through compilation into the generic rule language of R-DEVICE. The paper also presents a full semantic Web broker example for apartment renting.
    BibTeX:
    @article{Bassiliades2006,
      author = {Bassiliades, Nick and Antoniou, Grigoris and Vlahavas, Ioannis},
      title = {A defeasible logic reasoner for the semantic web},
      journal = {INTERNATIONAL JOURNAL ON SEMANTIC WEB AND INFORMATION SYSTEMS},
      year = {2006},
      volume = {2},
      number = {1},
      pages = {1-41}
    }
    
    Bassiliades, N., Antoniou, G. & Vlahavas, L. DR-DEVICE: A defeasible logic system for the Semantic Web {2004}
    Vol. {3208}PRINCIPLES AND PRACTICE OF SEMANTIC WEB REASONING, PROCEEDINGS, pp. {134-148} 
    inproceedings  
    Abstract: This paper presents DR-DEVICE, a system for defeasible reasoning on the Web. Defeasible reasoning is a rule-based approach for efficient reasoning with incomplete and inconsistent information. Such reasoning is, among others, useful for ontology integration, where conflicting information arises naturally, and for the modeling of business rules and policies, where rules with exceptions are often used. In this paper we describe these scenarios in more detail along with the implementation of the DR-DEVICE system, which is capable of reasoning about RDF data over multiple Web sources using defeasible logic rules. The system is implemented on top of the CLIPS production rule system and builds upon R-DEVICE, an earlier deductive rule system over RDF data that also supports derived attribute and aggregate attribute rules. Rules can be expressed either in a native CLIPS-like language, or in an extension of the OO-RuleML syntax. The operational semantics of defeasible logic are implemented through compilation into the generic rule language of R-DEVICE. The paper includes a use case of a semantic web broker that reasons defeasibly about renting apartments based on a buyer's requirements (expressed in RuleML defeasible logic rules) and a seller's advertisements (expressed in RDF).
    BibTeX:
    @inproceedings{Bassiliades2004,
      author = {Bassiliades, N and Antoniou, G and Vlahavas, L},
      title = {DR-DEVICE: A defeasible logic system for the Semantic Web},
      booktitle = {PRINCIPLES AND PRACTICE OF SEMANTIC WEB REASONING, PROCEEDINGS},
      year = {2004},
      volume = {3208},
      pages = {134-148},
      note = {2nd International Workshop on Principles and Practice of Semantic Web Reasoning, St Malo, FRANCE, SEP 06-10, 2004}
    }
    
    Belleau, F., Nolin, M.-A., Tourigny, N., Rigault, P. & Morissette, J. Bio2RDF: Towards a mashup to build bioinformatics knowledge systems {2008} JOURNAL OF BIOMEDICAL INFORMATICS
    Vol. {41}({5, Sp. Iss. SI}), pp. {706-716} 
    article DOI  
    Abstract: Presently, there are numerous bioinformatics databases available on different websites. Although RDF was proposed as a standard format for the web, these databases are still available in various formats. With the increasing popularity of semantic web technologies and the ever-growing number of databases in bioinformatics, there is a pressing need to develop mashup systems to help the process of bioinformatics knowledge integration. Bio2RDF is such a system, built from rdfizer programs written in JSP, the Sesame open-source triplestore technology and an OWL ontology. With Bio2RDF, documents from public bioinformatics databases such as Kegg, PDB, MGI, HGNC and several of NCBI's databases can now be made available in RDF format through a unique URL in the form of http://bio2rdf.org/namespace:id. The Bio2RDF project has successfully applied the semantic web technology to publicly available databases by creating a knowledge space of RDF documents linked together with normalized URIs and sharing a common ontology. Bio2RDF is based on a three-step approach to build mashups of bioinformatics data. The present article details this new approach and illustrates the building of a mashup used to explore the implication of four transcription factor genes in Parkinson's disease. The Bio2RDF repository can be queried at http://bio2rdf.org. (C) 2008 Elsevier Inc. All rights reserved.
    BibTeX:
    @article{Belleau2008,
      author = {Belleau, Francois and Nolin, Marc-Alexandre and Tourigny, Nicole and Rigault, Philippe and Morissette, Jean},
      title = {Bio2RDF: Towards a mashup to build bioinformatics knowledge systems},
      journal = {JOURNAL OF BIOMEDICAL INFORMATICS},
      year = {2008},
      volume = {41},
      number = {5, Sp. Iss. SI},
      pages = {706-716},
      doi = {{10.1016/j.jbi.2008.03.004}}
    }
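
    The URI scheme described in the abstract above (http://bio2rdf.org/namespace:id) makes retrieving an RDF document a one-liner. A minimal Python sketch follows, assuming the public endpoint is live and serves RDF/XML under content negotiation; the namespace:id pair used is a hypothetical example.

    import urllib.request

    def bio2rdf_uri(namespace, identifier):
        # Normalized URI pattern from the abstract above.
        return f"http://bio2rdf.org/{namespace}:{identifier}"

    uri = bio2rdf_uri("pdb", "1ABC")  # hypothetical namespace:id pair
    req = urllib.request.Request(uri, headers={"Accept": "application/rdf+xml"})
    with urllib.request.urlopen(req) as resp:  # assumes the endpoint resolves
        print(resp.read()[:200])               # first bytes of the RDF document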
    
    Benatallah, B., Hacid, M., Leger, A., Rey, C. & Toumani, F. On automating Web services discovery {2005} VLDB JOURNAL
    Vol. {14}({1}), pp. {84-96} 
    article DOI  
    Abstract: One of the challenging problems that Web service technology faces is the ability to effectively discover services based on their capabilities. We present an approach to tackling this problem in the context of description logics (DLs). We formalize service discovery as a new instance of the problem of rewriting concepts using terminologies. We call this new instance the best covering problem. We provide a formalization of the best covering problem in the framework of DL-based ontologies and propose a hypergraph-based algorithm to effectively compute best covers of a given request. We propose a novel matchmaking algorithm that takes as input a service request (or query) Q and an ontology T of services and finds a set of services called a ``best cover'' of Q whose descriptions contain as much common information with Q as possible and as little extra information with respect to Q as possible. We have implemented the proposed discovery technique and used the developed prototype in the context of the Multilingual Knowledge Based European Electronic Marketplace (MKBEEM) project.
    BibTeX:
    @article{Benatallah2005,
      author = {Benatallah, B and Hacid, MS and Leger, A and Rey, C and Toumani, F},
      title = {On automating Web services discovery},
      journal = {VLDB JOURNAL},
      year = {2005},
      volume = {14},
      number = {1},
      pages = {84-96},
      doi = {{10.1007/s00778-003-0117-x}}
    }
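
    The ``best cover'' criterion in the abstract admits a compact statement. The following is a sketch in ad-hoc LaTeX notation, writing \bigsqcap E for the conjunction of the descriptions in a candidate set E; the paper's actual description-logic definitions are more precise than this paraphrase.

    \text{For a request } Q \text{ and an ontology of services } T:\quad
    E^{*} \;=\; \operatorname*{arg\,min}_{E \subseteq T}
    \bigl( \mathrm{miss}(E,Q),\; \mathrm{rest}(E,Q) \bigr)
    \quad \text{(compared lexicographically)},
    \text{where } \mathrm{miss}(E,Q) \text{ is the information of } Q
    \text{ not provided by } \textstyle\bigsqcap E
    \text{ and } \mathrm{rest}(E,Q) \text{ is the information of }
    \textstyle\bigsqcap E \text{ beyond } Q.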
    
    Benatallah, B., Hacid, M., Rey, C. & Toumani, F. Request rewriting-based web service discovery {2003}
    Vol. {2870}SEMANTIC WEB - ISWC 2003, pp. {242-257} 
    inproceedings  
    Abstract: One of the challenging problems that Web service technology faces is the ability to effectively discover services based on their capabilities. We present an approach to tackle this problem in the context of DAML-S ontologies of services. The proposed approach enables to select the combinations of Web services that best match a given request Q and effectively computes the extra information with respect to Q (e.g., the information required by a service request but not provided by any existing service). We study the reasoning problem associated with such a matching process and propose an algorithm derived from hypergraphs theory.
    BibTeX:
    @inproceedings{Benatallah2003,
      author = {Benatallah, B and Hacid, MS and Rey, C and Toumani, F},
      title = {Request rewriting-based web service discovery},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {242-257},
      note = {2nd International Semantic Web Conference, SANIBEL, FL, OCT 20-23, 2003}
    }
    
    Berardi, D., Calvanese, D., De Giacomo, G. & Mecella, M. Composition of services with nondeterministic observable behavior {2005}
    Vol. {3826}SERVICE-ORIENTED COMPUTING - ICSOC 2005, pp. {520-526} 
    inproceedings  
    Abstract: In [3] we started studying an advanced form of service composition where available services were modeled as deterministic finite transition systems, describing the possible conversations they can have with clients, and where the client request was itself expressed as a (virtual) service making use of the same alphabet of actions. In [4] we extended our studies by considering the case in which the client request was loosened by allowing don't-care nondeterminism in expressing the required target service. In the present paper we complete this line of investigation by considering the case in which the available services are only partially controllable and must be modeled as nondeterministic finite transition systems, possibly because of our lack of information on their exact behavior. Notably, such services display a ``devilish'' form of nondeterminism, since we want to model the inability of the orchestrator to actually choose between different executions of the same action. We investigate how to automatically perform the synthesis of the composition under these circumstances.
    BibTeX:
    @inproceedings{Berardi2005,
      author = {Berardi, D and Calvanese, D and De Giacomo, G and Mecella, M},
      title = {Composition of services with nondeterministic observable behavior},
      booktitle = {SERVICE-ORIENTED COMPUTING - ICSOC 2005},
      year = {2005},
      volume = {3826},
      pages = {520-526},
      note = {3rd International Conference on Service-Oriented Computing, Amsterdam, NETHERLANDS, DEC 12-15, 2005}
    }
    
    Berendt, B., Hotho, A. & Stumme, G. Towards Semantic Web Mining {2002}
    Vol. {2342}SEMANTIC WEB - ISWC 2002, pp. {264-278} 
    inproceedings  
    Abstract: Semantic Web Mining aims at combining the two fast-developing research areas Semantic Web and Web Mining. The idea is to improve, on the one hand, the results of Web Mining by exploiting the new semantic structures in the Web; and to make use of Web Mining, on the other hand, for building up the Semantic Web. This paper gives an overview of where the two areas meet today, and sketches ways of how a closer integration could be profitable.
    BibTeX:
    @inproceedings{Berendt2002,
      author = {Berendt, B and Hotho, A and Stumme, G},
      title = {Towards Semantic Web Mining},
      booktitle = {SEMANTIC WEB - ISWC 2002},
      year = {2002},
      volume = {2342},
      pages = {264-278},
      note = {1st International Semantic Web Conference (ISWC), SARDINIA, ITALY, JUN 09-12, 2002}
    }
    
    Berners-Lee, T. & Hendler, J. Publishing on the semantic web - The coming Internet revolution will profoundly affect scientific information. {2001} NATURE
    Vol. {410}({6832}), pp. {1023-1024} 
    article  
    BibTeX:
    @article{Berners-Lee2001a,
      author = {Berners-Lee, T and Hendler, J},
      title = {Publishing on the semantic web - The coming Internet revolution will profoundly affect scientific information.},
      journal = {NATURE},
      year = {2001},
      volume = {410},
      number = {6832},
      pages = {1023-1024}
    }
    
    Berners-Lee, T., Hendler, J. & Lassila, O. The Semantic Web - A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities {2001} SCIENTIFIC AMERICAN
    Vol. {284}({5}), pp. {34+} 
    article  
    BibTeX:
    @article{Berners-Lee2001,
      author = {Berners-Lee, T and Hendler, J and Lassila, O},
      title = {The Semantic Web - A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities},
      journal = {SCIENTIFIC AMERICAN},
      year = {2001},
      volume = {284},
      number = {5},
      pages = {34+}
    }
    
    Bernholdt, D., Bharathi, S., Brown, D., Chanchio, K., Chen, M., Chervenak, A., Cinquini, L., Drach, B., Foster, I., Fox, P., Garcia, J., Kesselman, C., Markel, R., Middleton, D., Nefedova, V., Pouchard, L., Shoshani, A., Sim, A., Strand, G. & Williams, D. The Earth System Grid: Supporting the next generation of climate modeling research {2005} PROCEEDINGS OF THE IEEE
    Vol. {93}({3}), pp. {485-495} 
    article DOI  
    Abstract: Understanding the earth's climate system and how it might be changing is a preeminent scientific challenge. Global climate models are used to simulate past, present, and future climates, and experiments are executed continuously on an array of distributed supercomputers. The resulting data archive, spread over several sites, currently contains upwards of 100 TB of simulation data and is growing rapidly. Looking toward mid-decade and beyond, we must anticipate and prepare for distributed climate research data holdings of many petabytes. The Earth System Grid (ESG) is a collaborative interdisciplinary project aimed at addressing the challenge of enabling management, discovery, access, and analysis of these critically important datasets in a distributed and heterogeneous computational environment. The problem is fundamentally a Grid problem. Building upon the Globus toolkit and a variety of other technologies, ESG is developing an environment that addresses authentication, authorization for data access, large-scale data transport and management, services and abstractions for high-performance remote data access, mechanisms for scalable data replication, cataloging with rich semantic and syntactic information, data discovery, distributed monitoring, and Web-based portals for using the system.
    BibTeX:
    @article{Bernholdt2005,
      author = {Bernholdt, D and Bharathi, S and Brown, D and Chanchio, K and Chen, ML and Chervenak, A and Cinquini, L and Drach, B and Foster, I and Fox, P and Garcia, J and Kesselman, C and Markel, R and Middleton, D and Nefedova, V and Pouchard, L and Shoshani, A and Sim, A and Strand, G and Williams, D},
      title = {The Earth System Grid: Supporting the next generation of climate modeling research},
      journal = {PROCEEDINGS OF THE IEEE},
      year = {2005},
      volume = {93},
      number = {3},
      pages = {485-495},
      doi = {{10.1109/JPROC.2004.842745}}
    }
    
    Bernstein, P., Halevy, A. & Pottinger, R. A vision for management of complex models {2000} SIGMOD RECORD
    Vol. {29}({4}), pp. {55-63} 
    article  
    Abstract: Many problems encountered when building applications of database systems involve the manipulation of models. By ``model,'' we mean a complex structure that represents a design artifact, such as a relational schema, object-oriented interface, UML model, XML DTD, web-site schema, semantic network, complex document, or software configuration. Many uses of models involve managing changes in models and transformations of data from one model into another. These uses require an explicit representation of ``mappings'' between models. We propose to make database systems easier to use for these applications by making ``model'' and ``model mapping'' first-class objects with special operations that simplify their use. We call this capability model management. In addition to making the case for model management, our main contribution is a sketch of a proposed data model. The data model consists of formal, object-oriented structures for representing models and model mappings, and of high-level algebraic operations on those structures, such as matching, differencing, merging, selection, inversion and instantiation. We focus on structure and semantics, not implementation.
    BibTeX:
    @article{Bernstein2000,
      author = {Bernstein, PA and Halevy, AY and Pottinger, RA},
      title = {A vision for management of complex models},
      journal = {SIGMOD RECORD},
      year = {2000},
      volume = {29},
      number = {4},
      pages = {55-63},
      note = {3rd Workshop of the Engineering Federated Information Systems (EFIS), DUBLIN, IRELAND, JUN, 2000}
    }
    
    Bertino, E. Data security {1998} DATA & KNOWLEDGE ENGINEERING
    Vol. {25}({1-2}), pp. {199-216} 
    article  
    Abstract: Maintaining data quality is an important requirement in any organization. It requires measures for access control, semantic integrity, fault tolerance and recovery. Access control regulates the access to the system by users to ensure that all accesses are authorized according to some specified policy. In this paper, we survey the state of the art in access control for database systems, discuss the main research issues, and outline possible directions for future research.
    BibTeX:
    @article{Bertino1998,
      author = {Bertino, E},
      title = {Data security},
      journal = {DATA & KNOWLEDGE ENGINEERING},
      year = {1998},
      volume = {25},
      number = {1-2},
      pages = {199-216}
    }
    
    Bertino, E., Khan, L., Sandhu, R. & Thuraisingham, B. Secure knowledge management: Confidentiality, trust, and privacy {2006} IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART A-SYSTEMS AND HUMANS
    Vol. {36}({3}), pp. {429-438} 
    article DOI  
    Abstract: Knowledge management enhances the value of a corporation by identifying the assets and expertise as well as efficiently managing the resources. Security for knowledge management is critical as organizations have to protect their intellectual assets. Therefore, only authorized individuals must be permitted to execute various operations and functions in an organization. In this paper, secure knowledge management will be discussed, focusing on confidentiality, trust, and privacy. In particular, certain access-control techniques will be investigated, and trust management as well as privacy control for knowledge management will be explored.
    BibTeX:
    @article{Bertino2006,
      author = {Bertino, E and Khan, LR and Sandhu, R and Thuraisingham, B},
      title = {Secure knowledge management: Confidentiality, trust, and privacy},
      journal = {IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART A-SYSTEMS AND HUMANS},
      year = {2006},
      volume = {36},
      number = {3},
      pages = {429-438},
      doi = {{10.1109/TSMCA.2006.871796}}
    }
    
    Bicer, V., Laleci, G., Dogac, A. & Kabak, Y. Artemis message exchange framework: Semantic interoperability of exchanged messages in the healthcare domain {2005} SIGMOD RECORD
    Vol. {34}({3}), pp. {71-76} 
    article  
    Abstract: One of the most challenging problems in the healthcare domain is providing interoperability among healthcare information systems. In order to address this problem, we propose the semantic mediation of exchanged messages. Given that most of the messages exchanged in the healthcare domain are in EDI (Electronic Data Interchange) or XML format, we describe how to transform these messages into OWL (Web Ontology Language) ontology instances. The OWL message instances are then mediated through an ontology mapping tool that we developed, namely, OWLmt. OWLmt uses an OWL-QL engine, which enables the mapping tool to reason over the source ontology instances while generating the target ontology instances according to the mapping patterns defined through a GUI. Through a prototype implementation, we demonstrate how to mediate between HL7 Version 2 and HL7 Version 3 messages. However, the framework proposed is generic enough to mediate between any incompatible healthcare standards that are currently in use.
    BibTeX:
    @article{Bicer2005,
      author = {Bicer, V and Laleci, GB and Dogac, A and Kabak, Y},
      title = {Artemis message exchange framework: Semantic interoperability of exchanged messages in the healthcare domain},
      journal = {SIGMOD RECORD},
      year = {2005},
      volume = {34},
      number = {3},
      pages = {71-76}
    }
    
    Blake, J.A., Eppig, J.T., Bult, C.J., Kadin, J.A., Richardson, J.E. & Mouse Genome Database Grp The Mouse Genome Database (MGD): updates and enhancements {2006} NUCLEIC ACIDS RESEARCH
    Vol. {34}({Sp. Iss. SI}), pp. {D562-D567} 
    article DOI  
    Abstract: The Mouse Genome Database (MGD) integrates genetic and genomic data for the mouse in order to facilitate the use of the mouse as a model system for understanding human biology and disease processes. A core component of the MGD effort is the acquisition and integration of genomic, genetic, functional and phenotypic information about mouse genes and gene products. MGD works within the broader bioinformatics community to define referential and semantic standards to facilitate data exchange between resources including the incorporation of information from the biomedical literature. MGD is also a platform for computational assessment of integrated biological data with the goal of identifying candidate genes associated with complex phenotypes. MGD is web accessible at http://www.informatics.jax.org. Recent improvements in MGD described here include the incorporation of an interactive genome browser, the enhancement of phenotype resources and the further development of functional annotation resources.
    BibTeX:
    @article{Blake2006,
      author = {Blake, Judith A. and Eppig, Janan T. and Bult, Carol J. and Kadin, James A. and Richardson, Joel E. and Mouse Genome Database Grp},
      title = {The Mouse Genome Database (MGD): updates and enhancements},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {2006},
      volume = {34},
      number = {Sp. Iss. SI},
      pages = {D562-D567},
      doi = {{10.1093/nar/gkj085}}
    }
    
    Blake, M. & Gomaa, H. Agent-oriented compositional approaches to services-based cross-organizational workflow {2005} DECISION SUPPORT SYSTEMS
    Vol. {40}({1}), pp. {31-50} 
    article DOI  
    Abstract: With the sophistication and maturity of distributed component-based services and semantic web services, the idea of specification-driven service composition is becoming a reality. One such approach is workflow composition of services that span multiple, distributed web-accessible locations. Given the dynamic nature of this domain, the adaptation of software agents represents a possible solution for the composition and enactment of cross-organizational services. This paper details design aspects of an architecture that would support this evolvable service-based workflow composition. The internal coordination and control aspects of such an architecture are addressed. These agent developmental processes are aligned with industry-standard software engineering processes. (c) 2004 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Blake2005,
      author = {Blake, MB and Gomaa, H},
      title = {Agent-oriented compositional approaches to services-based cross-organizational workflow},
      journal = {DECISION SUPPORT SYSTEMS},
      year = {2005},
      volume = {40},
      number = {1},
      pages = {31-50},
      doi = {{10.1016/j.dss.2004.04.003}}
    }
    
    Boddy, S., Rezgul, Y., Cooper, G. & Wetherill, M. Computer integrated construction: A review and proposals for future direction {2007} ADVANCES IN ENGINEERING SOFTWARE
    Vol. {38}({10}), pp. {677-687} 
    article DOI  
    Abstract: We present a review of the computer integrated construction (CIC) research space spanning approximately 20 years. This review reveals a strong focus on data and application integration for most of that time. We argue that whilst valuable in its own right, such research and the software solutions it yields fall short of the potential for CIC, giving our rationale for these beliefs. Thus we propose a re-focussing of CIC research on the relatively under-represented area of semantically described and coordinated process oriented systems to better support the kind of short term virtual organisation that typifies the working environment in the construction sector. Finally we present an outline vision for such a system, supported by a generic system architecture and a simple business model for its deployment, noting opportunities for future work in its realisation. (c) 2006 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Boddy2007,
      author = {Boddy, Stefan and Rezgul, Yacine and Cooper, Grahame and Wetherill, Matthew},
      title = {Computer integrated construction: A review and proposals for future direction},
      journal = {ADVANCES IN ENGINEERING SOFTWARE},
      year = {2007},
      volume = {38},
      number = {10},
      pages = {677-687},
      doi = {{10.1016/j.advensoft.2006.10.007}}
    }
    
    Borgida, A. & Serafini, L. Distributed Description Logics: Directed domain correspondences in federated information sources {2002}
    Vol. {2519}ON THE MOVE TO MEANINGFUL INTERNET SYSTEMS 2002: COOPLS, DOA, AND ODBASE, pp. {36-53} 
    inproceedings  
    Abstract: A central problem of co-operative information systems is the ability to integrate information from multiple sources. Although this problem has been studied for several decades, there is a need for a more refined approach in those cases where the original sources form a loose federation, each maintaining its own independent view of the world. In particular, we motivate with examples the utility of directed non-injective mappings between the individuals in the domains of multiple IS. We then extend the logical formalism of Description Logics, which has previously served successfully in IS integration and is currently being used in semantic-web ontologies, to handle such mappings. The result is called Distributed Description Logics, and we consider some of its desirable properties, as well as some theorems concerning its computational aspects.
    BibTeX:
    @inproceedings{Borgida2002,
      author = {Borgida, A and Serafini, L},
      title = {Distributed Description Logics: Directed domain correspondences in federated information sources},
      booktitle = {ON THE MOVE TO MEANINGFUL INTERNET SYSTEMS 2002: COOPLS, DOA, AND ODBASE},
      year = {2002},
      volume = {2519},
      pages = {36-53},
      note = {Confederated Conferences CoopIS, DOA and ODBASE, IRVINE, CA, OCT 28-NOV 01, 2002}
    }
    
    Boulos, M.N.K. & Wheeler, S. The emerging Web 2.0 social software: an enabling suite of sociable technologies in health and health care education {2007} HEALTH INFORMATION AND LIBRARIES JOURNAL
    Vol. {24}({1}), pp. {2-23} 
    article  
    Abstract: Web 2.0 sociable technologies and social software are presented as enablers in health and health care, for organizations, clinicians, patients and laypersons. They include social networking services, collaborative filtering, social bookmarking, folksonomies, social search engines, file sharing and tagging, mashups, instant messaging, and online multi-player games. The more popular Web 2.0 applications in education, namely wikis, blogs and podcasts, are but the tip of the social software iceberg. Web 2.0 technologies represent a quite revolutionary way of managing and repurposing/remixing online information and knowledge repositories, including clinical and research information, in comparison with the traditional Web 1.0 model. The paper also offers a glimpse of future software, touching on Web 3.0 (the Semantic Web) and how it could be combined with Web 2.0 to produce the ultimate architecture of participation. Although the tools presented in this review look very promising and potentially fit for purpose in many health care applications and scenarios, careful thinking, testing and evaluation research are still needed in order to establish `best practice models' for leveraging these emerging technologies to boost our teaching and learning productivity, foster stronger `communities of practice', and support continuing medical education/professional development (CME/CPD) and patient education.
    BibTeX:
    @article{Boulos2007,
      author = {Boulos, Maged N. Kamel and Wheeler, Steve},
      title = {The emerging Web 2.0 social software: an enabling suite of sociable technologies in health and health care education},
      journal = {HEALTH INFORMATION AND LIBRARIES JOURNAL},
      year = {2007},
      volume = {24},
      number = {1},
      pages = {2-23}
    }
    
    Bouquet, P., Serafini, L. & Zanobini, S. Semantic coordination: A new approach and an application {2003}
    Vol. {2870}SEMANTIC WEB - ISWC 2003, pp. {130-145} 
    inproceedings  
    Abstract: Semantic coordination, namely the problem of finding an agreement on the meaning of heterogeneous semantic models, is one of the key issues in the development of the Semantic Web. In this paper, we propose a new algorithm for discovering semantic mappings across hierarchical classifications based on a new approach to semantic coordination. This approach shifts the problem of semantic coordination from the problem of computing linguistic or structural similarities (what most other proposed approaches do) to the problem of deducing relations between sets of logical formulae that represent the meaning of concepts belonging to different models. We show how to apply the approach and the algorithm to an interesting family of semantic models, namely hierarchical classifications, and present the results of preliminary tests on two types of hierarchical classifications, web directories and catalogs. Finally, we argue why this is a significant improvement on previous approaches.
    BibTeX:
    @inproceedings{Bouquet2003,
      author = {Bouquet, P and Serafini, L and Zanobini, S},
      title = {Semantic coordination: A new approach and an application},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {130-145},
      note = {2nd International Semantic Web Conference, SANIBEL, FLORIDA, OCT 20-23, 2003}
    }
    
    Bozsak, E., Ehrig, M., Handschuh, S., Hotho, A., Maedche, A., Motik, B., Oberle, D., Schmitz, C., Staab, S., Stojanovic, L., Stojanovic, N., Studer, R., Stumme, G., Sure, Y., Tane, J., Volz, R. & Zacharias, V. KAON - Towards a large scale Semantic Web {2002}
    Vol. {2455}E-COMMERCE AND WEB TECHNOLOGIES, PROCEEDINGS, pp. {304-313} 
    inproceedings  
    Abstract: The Semantic Web will bring structure to the content of Web pages, being an extension of the current Web, in which information is given a well-defined meaning. Especially within e-commerce applications, Semantic Web technologies in the form of ontologies and metadata are becoming increasingly prevalent and important. This paper introduces KAON - the Karlsruhe Ontology and Semantic Web Tool Suite. KAON is developed jointly within several EU-funded projects and specifically designed to provide the ontology and metadata infrastructure needed for building, using and accessing semantics-driven applications on the Web and on your desktop.
    BibTeX:
    @inproceedings{Bozsak2002,
      author = {Bozsak, E and Ehrig, M and Handschuh, S and Hotho, A and Maedche, A and Motik, B and Oberle, D and Schmitz, C and Staab, S and Stojanovic, L and Stojanovic, N and Studer, R and Stumme, G and Sure, Y and Tane, J and Volz, R and Zacharias, V},
      title = {KAON - Towards a large scale Semantic Web},
      booktitle = {E-COMMERCE AND WEB TECHNOLOGIES, PROCEEDINGS},
      year = {2002},
      volume = {2455},
      pages = {304-313},
      note = {3rd International Conference on E-Commerce and Web Technologies, AIX PROVENCE, FRANCE, SEP 02-06, 2002}
    }
    
    Brasethvik, T. & Gulla, J. Natural language analysis for semantic document modeling {2001} DATA & KNOWLEDGE ENGINEERING
    Vol. {38}({1}), pp. {45-62} 
    article  
    Abstract: To ease the retrieval of documents published on the Web, the documents should be classified in a way that users find helpful and meaningful. This paper presents an approach to semantic document classification and retrieval based on natural language analysis and conceptual modeling. Users may define their own conceptual domain model, which is then used in combination with linguistic tools to define a controlled vocabulary for a document collection. Users may browse this domain model and interactively classify documents by selecting model fragments that describe the contents of the documents. Natural language tools are used to analyze the text of the documents and propose relevant model fragments in terms of selected domain model concepts and named relations. The proposed fragments are refined by the users and stored as document descriptions in RDF-XML format. For document retrieval, lexical analysis is used to preprocess search expressions and map these to the domain model for manual query-refinement. A prototype of the system is described, and the approach is illustrated with examples from a document collection published by the Norwegian Center for Medical Informatics (KITH). (C) 2001 published by Elsevier Science B.V.
    BibTeX:
    @article{Brasethvik2001,
      author = {Brasethvik, T and Gulla, JA},
      title = {Natural language analysis for semantic document modeling},
      journal = {DATA & KNOWLEDGE ENGINEERING},
      year = {2001},
      volume = {38},
      number = {1},
      pages = {45-62},
      note = {4th International Workshop on Natural Language for Data Bases (NLDB 00), VERSAILLES, FRANCE, JUN, 2000}
    }
    
    Brazhnik, O. & Jones, J.F. Anatomy of data integration {2007} JOURNAL OF BIOMEDICAL INFORMATICS
    Vol. {40}({3}), pp. {252-269} 
    article DOI  
    Abstract: Producing reliable information is the ultimate goal of data processing. The ocean of data created with the advances of science and technologies calls for integration of data coming from heterogeneous sources that are diverse in their purposes, business rules, underlying models and enabling technologies. Reference models, Semantic Web, standards, ontology, and other technologies enable fast and efficient merging of heterogeneous data, while the reliability of produced information is largely defined by how well the data represent the reality. In this paper, we initiate a framework for assessing the informational value of data that includes data dimensions; aligning data quality with business practices; identifying authoritative sources and integration keys; merging models; uniting updates of varying frequency and overlapping or gapped data sets. Published by Elsevier Inc.
    BibTeX:
    @article{Brazhnik2007,
      author = {Brazhnik, Olga and Jones, John F.},
      title = {Anatomy of data integration},
      journal = {JOURNAL OF BIOMEDICAL INFORMATICS},
      year = {2007},
      volume = {40},
      number = {3},
      pages = {252-269},
      doi = {{10.1016/j.jbi.2006.09.001}}
    }
    
    Breslin, J., Harth, A., Bojars, U. & Decker, S. Towards semantically-interlinked online communities {2005}
    Vol. {3532}SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS, pp. {500-514} 
    inproceedings  
    Abstract: Online community sites have replaced the traditional means of keeping a community informed via libraries and publishing. At present, online communities are islands that are not interlinked. We describe different types of online communities and tools that are currently used to build and support such communities. Ontologies and Semantic Web technologies offer an upgrade path to providing more complex services. Fusing information and inferring links between the various applications and types of information provides relevant insights that make the available information on the Internet more valuable. We present the SIOC ontology which combines terms from vocabularies that already exist with new terms needed to describe the relationships between concepts in the realm of online community sites.
    BibTeX:
    @inproceedings{Breslin2005,
      author = {Breslin, JG and Harth, A and Bojars, U and Decker, S},
      title = {Towards semantically-interlinked online communities},
      booktitle = {SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS},
      year = {2005},
      volume = {3532},
      pages = {500-514},
      note = {2nd European Semantic Web Conference, Heraklion, GREECE, MAY 29-JUN 01, 2005}
    }
    
    Bressan, S., Goh, C., Levina, N., Madnick, S., Shah, A. & Siegel, M. Context knowledge representation and reasoning in the Context Interchange system {2000} APPLIED INTELLIGENCE
    Vol. {13}({2}), pp. {165-180} 
    article  
    Abstract: The Context Interchange Project presents a unique approach to the problem of semantic conflict resolution among multiple heterogeneous data sources. The system presents a semantically meaningful view of the data to the receivers (e.g. user applications) for all the available data sources. The semantic conflicts are automatically detected and reconciled by a Context Mediator using the context knowledge associated with both the data sources and the data receivers. The results are collated and presented in the receiver context. The current implementation of the system provides access to flat files, classical relational databases, on-line databases, and web services. An example application, using actual financial information sources, is described along with a detailed description of the operation of the system for an example query.
    BibTeX:
    @article{Bressan2000,
      author = {Bressan, S and Goh, C and Levina, N and Madnick, S and Shah, A and Siegel, M},
      title = {Context knowledge representation and reasoning in the Context Interchange system},
      journal = {APPLIED INTELLIGENCE},
      year = {2000},
      volume = {13},
      number = {2},
      pages = {165-180}
    }
    
    Broekstra, J., Kampman, A. & van Harmelen, F. Sesame: A generic architecture for storing and querying RDF and RDF schema {2002}
    Vol. {2342}SEMANTIC WEB - ISWC 2002, pp. {54-68} 
    inproceedings  
    Abstract: RDF and RDF Schema are two W3C standards aimed at enriching the Web with machine-processable semantic data. We have developed Sesame, an architecture for efficient storage and expressive querying of large quantities of metadata in RDF and RDF Schema. Sesame's design and implementation are independent of any specific storage device. Thus, Sesame can be deployed on top of a variety of storage devices, such as relational databases, triple stores, or object-oriented databases, without having to change the query engine or other functional modules. Sesame offers support for concurrency control, independent export of RDF and RDFS information, and a query engine for RQL, a query language for RDF that offers native support for RDF Schema semantics. We present an overview of Sesame as a generic architecture, as well as its implementation and our first experiences with this implementation.
    BibTeX:
    @inproceedings{Broekstra2002,
      author = {Broekstra, J and Kampman, A and van Harmelen, F},
      title = {Sesame: A generic architecture for storing and querying RDF and RDF schema},
      booktitle = {SEMANTIC WEB - ISWC 2002},
      year = {2002},
      volume = {2342},
      pages = {54-68},
      note = {1st International Semantic Web Conference (ISWC), SARDINIA, ITALY, JUN 09-12, 2002}
    }
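
    A minimal Python sketch of the store-then-query workflow the Sesame entry above describes, using rdflib as a stand-in (Sesame itself is a Java framework, so the library, namespace, and triples below are illustrative assumptions, not the authors' code):

      # Store a small RDF Schema hierarchy in an in-memory graph, then run a
      # query that exploits rdfs:subClassOf semantics. rdflib stands in for
      # Sesame; the EX namespace and triples are hypothetical examples.
      from rdflib import Graph, Namespace, RDF, RDFS

      EX = Namespace("http://example.org/")
      g = Graph()  # in-memory store; Sesame would let us swap storage backends
      g.add((EX.Report, RDF.type, RDFS.Class))
      g.add((EX.AnnualReport, RDFS.subClassOf, EX.Report))
      g.add((EX.doc1, RDF.type, EX.AnnualReport))

      q = """
      PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
      SELECT ?doc WHERE {
        ?cls rdfs:subClassOf* <http://example.org/Report> .
        ?doc a ?cls .
      }"""
      for row in g.query(q):
          print(row.doc)  # finds doc1 via the subclass chain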
    
    Broekstra, J., Klein, M., Decker, S., Fensel, D., van Harmelen, F. & Horrocks, I. Enabling knowledge representation on the Web by extending RDF Schema {2002} COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING
    Vol. {39}({5}), pp. {609-634} 
    article  
    Abstract: Recently, a widespread interest has emerged in using ontologies on the Web. Resource Description Framework Schema (RDFS) is a basic tool that enables users to define vocabulary, structure and constraints for expressing meta data about Web resources. However, it includes no provisions for formal semantics, and its expressivity is not sufficient for full-fledged ontological modeling and reasoning. In this paper, we will show how RDFS can be extended to include a more expressive knowledge representation language. That, in turn, would enrich it with the required additional expressivity and the semantics of that language. We do this by describing the ontology language Ontology Inference Layer (OIL) as an extension of RDFS. An important advantage to our approach is that it ensures maximal sharing of meta data on the Web: even partial interpretation of an OIL ontology by less semantically aware processors will yield a correct partial interpretation of the meta data. (C) 2002 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Broekstra2002a,
      author = {Broekstra, J and Klein, M and Decker, S and Fensel, D and van Harmelen, F and Horrocks, I},
      title = {Enabling knowledge representation on the Web by extending RDF Schema},
      journal = {COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING},
      year = {2002},
      volume = {39},
      number = {5},
      pages = {609-634},
      note = {10th International World Wide Web Conference, HONG KONG, MAY 01-05, 2001}
    }
    
    de Bruijn, J., Lausen, H., Polleres, A. & Fensel, D. The Web Service Modeling Language WSML: An overview {2006}
    Vol. {4011}SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS, pp. {590-604} 
    inproceedings  
    Abstract: The Web Service Modeling Language (WSML) is a language for the specification of different aspects of Semantic Web Services. It provides a formal language for the Web Service Modeling Ontology WSMO which is based on well-known logical formalisms, specifying one coherent language framework for the semantic description of Web Services, starting from the intersection of Datalog and the Description Logic SHIQ. This core language is extended in the directions of Description Logics and Logic Programming in a principled manner with strict layering. WSML distinguishes between conceptual and logical modeling in order to support users who are not familiar with formal logic, while not restricting the expressive power of the language for the expert user. IRIs play a central role in WSML as identifiers. Furthermore, WSML defines XML and RDF serializations for inter-operation over the Semantic Web.
    BibTeX:
    @inproceedings{Bruijn2006,
      author = {de Bruijn, Jos and Lausen, Holger and Polleres, Axel and Fensel, Dieter},
      title = {The Web Service Modeling Language WSML: An overview},
      booktitle = {SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS},
      year = {2006},
      volume = {4011},
      pages = {590-604},
      note = {3rd European Semantic Web Conference, Budva, SERBIA MONTENEG, JUN 11-14, 2006}
    }
    
    Bubak, M., Gubala, T., Kapalka, M., Malawski, M. & Rycerz, K. Workflow composer and service registry for grid applications {2005} FUTURE GENERATION COMPUTER SYSTEMS
    Vol. {21}({1}), pp. {79-86} 
    article DOI  
    Abstract: Automatic composition of workflows from Web and Grid services is an important challenge in today's distributed applications. The system presented in this paper supports the user in composing an application workflow from existing Grid services. The flow composition system builds workflows on an abstract level with semantic and syntactic descriptions of services available on the Grid. Two main modules of the system are the flow composer and the distributed Grid service registry. We present motivation, the concept of the overall system architecture and the results of a feasibility study. (C) 2004 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Bubak2005,
      author = {Bubak, M and Gubala, T and Kapalka, M and Malawski, M and Rycerz, K},
      title = {Workflow composer and service registry for grid applications},
      journal = {FUTURE GENERATION COMPUTER SYSTEMS},
      year = {2005},
      volume = {21},
      number = {1},
      pages = {79-86},
      doi = {{10.1016/j.future.2004.09.021}}
    }
    
    Buffa, M., Gandon, F., Ereteo, G., Sander, P. & Faron, C. SweetWiki: A semantic wiki {2008} JOURNAL OF WEB SEMANTICS
    Vol. {6}({1}), pp. {84-97} 
    article DOI  
    Abstract: Everyone agrees that user interactions and social networks are among the cornerstones of ``Web 2.0''. Web 2.0 applications generally run in a web browser, propose dynamic content with rich user interfaces, offer means to easily add or edit content of the web site they belong to and present social network aspects. Well-known applications that have helped spread Web 2.0 are blogs, wikis, and image/video sharing sites; they have dramatically increased sharing and participation among web users. It is possible to build knowledge using tools that can help analyze users' behavior behind the scenes: what they do, what they know, what they want. Tools that help share this knowledge across a network, and that can reason on that knowledge, will lead to users who can better use the knowledge available, i.e., to smarter users. Wikipedia, a wildly successful example of web technology, has helped knowledge-sharing between people by letting individuals freely create and modify its content. But Wikipedia is designed for people; today's software cannot understand and reason on Wikipedia's content. In parallel, the ``semantic web'', a set of technologies that help knowledge-sharing across the web between different applications, is starting to gain traction. Researchers have only recently started working on the concept of a ``semantic wiki'', mixing the advantages of the wiki and the technologies of the semantic web. In this paper we will present a state-of-the-art of semantic wikis, and we will introduce SweetWiki, an example of an application reconciling two trends of the future web: a semantically augmented web and a web of social applications where every user is an active provider as well as a consumer of information. SweetWiki makes heavy use of semantic web concepts and languages, and demonstrates how the use of such paradigms can improve navigation, search, and usability. (c) 2007 Published by Elsevier B.V.
    BibTeX:
    @article{Buffa2008,
      author = {Buffa, Michel and Gandon, Fabien and Ereteo, Guillaume and Sander, Peter and Faron, Catherine},
      title = {SweetWiki: A semantic wiki},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2008},
      volume = {6},
      number = {1},
      pages = {84-97},
      doi = {{10.1016/j.websem.2007.11.003}}
    }
    
    Bult, C.J., Eppig, J.T., Kadin, J.A., Richardson, J.E., Blake, J.A. & Mouse Genome Database Grp The Mouse Genome Database (MGD): mouse biology and model systems {2008} NUCLEIC ACIDS RESEARCH
    Vol. {36}({Sp. Iss. SI}), pp. {D724-D728} 
    article DOI  
    Abstract: The Mouse Genome Database, (MGD, http://www.informatics.jax.org/), integrates genetic, genomic and phenotypic information about the laboratory mouse, a primary animal model for studying human biology and disease. MGD data content includes comprehensive characterization of genes and their functions, standardized descriptions of mouse phenotypes, extensive integration of DNA and protein sequence data, normalized representation of genome and genome variant information including comparative data on mammalian genes. Data within MGD are obtained from diverse sources including manual curation of the biomedical literature, direct contributions from individual investigators' laboratories and major informatics resource centers such as Ensembl, UniProt and NCBI. MGD collaborates with the bioinformatics community on the development of data and semantic standards such as the Gene Ontology (GO) and the Mammalian Phenotype (MP) Ontology. MGD provides a data-mining platform that enables the development of translational research hypotheses based on comparative genotype, phenotype and functional analyses. Both web-based querying and computational access to data are provided. Recent improvements in MGD described here include the association of gene trap data with mouse genes and a new batch query capability for customized data access and retrieval.
    BibTeX:
    @article{Bult2008,
      author = {Bult, Carol J. and Eppig, Janan T. and Kadin, James A. and Richardson, Joel E. and Blake, Judith A. and Mouse Genome Database Grp},
      title = {The Mouse Genome Database (MGD): mouse biology and model systems},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {2008},
      volume = {36},
      number = {Sp. Iss. SI},
      pages = {D724-D728},
      doi = {{10.1093/nar/gkm961}}
    }
    
    Burstein, M. Dynamic invocation of Semantic Web services that use unfamiliar ontologies {2004} IEEE INTELLIGENT SYSTEMS
    Vol. {19}({4}), pp. {67-73} 
    article  
    BibTeX:
    @article{Burstein2004,
      author = {Burstein, MH},
      title = {Dynamic invocation of Semantic Web services that use unfamiliar ontologies},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2004},
      volume = {19},
      number = {4},
      pages = {67-73}
    }
    
    Burstein, M., Bussler, C., Zaremba, M., Finin, T., Huhns, M., Paolucci, M., Sheth, A. & Williams, S. A Semantic Web Services Architecture {2005} IEEE INTERNET COMPUTING
    Vol. {9}({5}), pp. {72-81} 
    article  
    Abstract: The Semantic Web Services Initiative Architecture (SWSA) committee has created a set of architectural and protocol abstractions that serve as a foundation for Semantic Web service technologies. This article summarizes the committee's findings, emphasizing its review of requirements gathered from several different environments. The authors also identify the scope and potential requirements for a Semantic Web services architecture.
    BibTeX:
    @article{Burstein2005,
      author = {Burstein, M and Bussler, C and Zaremba, M and Finin, T and Huhns, MN and Paolucci, M and Sheth, AP and Williams, S},
      title = {A Semantic Web Services Architecture},
      journal = {IEEE INTERNET COMPUTING},
      year = {2005},
      volume = {9},
      number = {5},
      pages = {72-81}
    }
    
    Burton-Jones, A., Storey, V., Sugumaran, V. & Ahluwalia, P. A semiotic metrics suite for assessing the quality of ontologies {2005} DATA & KNOWLEDGE ENGINEERING
    Vol. {55}({1}), pp. {84-102} 
    article DOI  
    Abstract: A suite of metrics is proposed to assess the quality of an ontology. Drawing upon semiotic theory, the metrics assess the syntactic, semantic, pragmatic, and social aspects of ontology quality. We operationalize the metrics and implement them in a prototype tool called the Ontology Auditor. An initial validation of the Ontology Auditor on the DARPA Agent Markup Language (DAML) library of domain ontologies indicates that the metrics are feasible and highlights the wide variation in quality among ontologies in the library. The contribution of the research is to provide a theory-based framework that developers can use to develop high quality ontologies and that applications can use to choose appropriate ontologies for a given task. (c) 2004 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Burton-Jones2005,
      author = {Burton-Jones, A and Storey, VC and Sugumaran, V and Ahluwalia, P},
      title = {A semiotic metrics suite for assessing the quality of ontologies},
      journal = {DATA & KNOWLEDGE ENGINEERING},
      year = {2005},
      volume = {55},
      number = {1},
      pages = {84-102},
      note = {Conference on Natural Language and Databases and Information Systems (NLDB 03), Burg, GERMANY, JUN, 2003},
      doi = {{10.1016/j.datak.2004.11.010}}
    }
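
    The semiotic metrics themselves are formalized in the paper; as a loose illustration of the layered idea only (these toy formulas are my own, not the Ontology Auditor's), two of the layers can be approximated as simple ratios over an ontology's term list:

      # Toy approximations of two semiotic layers: syntactic "lawfulness"
      # (share of terms that parse as well-formed identifiers) and semantic
      # "interpretability" (share of terms found in a reference lexicon).
      # The terms and lexicon below are hypothetical.
      def lawfulness(terms):
          return sum(t.isidentifier() for t in terms) / len(terms)

      def interpretability(terms, lexicon):
          return sum(t.lower() in lexicon for t in terms) / len(terms)

      terms = ["Person", "hasAge", "foo$bar"]
      lexicon = {"person", "hasage", "organization"}
      print(lawfulness(terms))                  # 2/3: "foo$bar" is not a valid identifier
      print(interpretability(terms, lexicon))   # 2/3: "foo$bar" is not in the lexicon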
    
    Bussler, C., Fensel, D. & Maedche, A. A conceptual architecture for Semantic Web Enabled Web Services {2002} SIGMOD RECORD
    Vol. {31}({4}), pp. {24-29} 
    article  
    Abstract: Semantic Web Enabled Web Services (SWWS) will transform the web from a static collection of information into a distributed device of computation on the basis of Semantic Web technology, making content within the World Wide Web machine-processable and machine-interpretable. Semantic Web Enabled Web Services will allow the automatic discovery, selection and execution of inter-organization business logic, making areas like dynamic supply chain composition a reality. In this paper we introduce the vision of Semantic Web Enabled Web Services, describe requirements for building semantics-driven web services and sketch a first draft of a conceptual architecture for implementing semantic web enabled web services.
    BibTeX:
    @article{Bussler2002,
      author = {Bussler, C and Fensel, D and Maedche, A},
      title = {A conceptual architecture for Semantic Web Enabled Web Services},
      journal = {SIGMOD RECORD},
      year = {2002},
      volume = {31},
      number = {4},
      pages = {24-29},
      note = {Amicalola Workshop on DB-IS Research for Semantic Web and Enterprises, GEORGIA, APR 03-05, 2002}
    }
    
    Canales, A., Pena, A., Peredo, R., Sossa, H. & Gutierrez, A. Adaptive and intelligent web based education system: Towards an integral architecture and framework {2007} EXPERT SYSTEMS WITH APPLICATIONS
    Vol. {33}({4}), pp. {1076-1089} 
    article DOI  
    Abstract: This paper presents our contribution to building adaptive and intelligent Web-based Education Systems (WBES) that take into account individual student learning requirements, by means of a holistic architecture and framework for developing WBES. In addition, three basic modules of the proposed WBES are outlined: an Authoring tool, a Semantic Web-based Evaluation, and a Cognitive Maps-based Student Model. We also describe a Service Oriented Architecture (SOA) aimed at deploying reusable, accessible, durable and interoperable services. The approach enhances the Learning Technology Standard Architecture proposed by IEEE-LTSA (Learning Technology System Architecture) [IEEE 1484.1/D9 LTSA (2001). Draft standard for learning technology learning technology systems architecture (LTSA). New York, USA. URL: http://ieee.Itsc.org/wgl], and the Sharable Content Object Reference Model (SCORM), promoted by Advanced Distributed Learning (ADL) [Advanced Distributed Learning Initiative (2004). URL: http://www.adlnet.org]. (c) 2006 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Canales2007,
      author = {Canales, Alejandro and Pena, Alejandro and Peredo, Ruben and Sossa, Humberto and Gutierrez, Agustin},
      title = {Adaptive and intelligent web based education system: Towards an integral architecture and framework},
      journal = {EXPERT SYSTEMS WITH APPLICATIONS},
      year = {2007},
      volume = {33},
      number = {4},
      pages = {1076-1089},
      doi = {{10.1016/j.eswa.2006.08.034}}
    }
    
    Cannata, N., Merelli, E. & Altman, R.B. Time to organize the bioinformatics resourceome {2005} PLOS COMPUTATIONAL BIOLOGY
    Vol. {1}({7}), pp. {531-533} 
    article DOI  
    BibTeX:
    @article{Cannata2005,
      author = {Cannata, Nicola and Merelli, Emanuela and Altman, Russ B.},
      title = {Time to organize the bioinformatics resourceome},
      journal = {PLOS COMPUTATIONAL BIOLOGY},
      year = {2005},
      volume = {1},
      number = {7},
      pages = {531-533},
      doi = {{10.1371/journal.pcbi.0010076}}
    }
    
    Cardoso, J. & Sheth, A. Semantic e-workflow composition {2003} JOURNAL OF INTELLIGENT INFORMATION SYSTEMS
    Vol. {21}({3}), pp. {191-225} 
    article  
    Abstract: Systems and infrastructures are currently being developed to support Web services. The main idea is to encapsulate an organization's functionality within an appropriate interface and advertise it as Web services. While in some cases Web services may be utilized in an isolated form, it is normal to expect Web services to be integrated as part of workflow processes. The composition of workflow processes that model e-service applications differs from the design of traditional workflows, in terms of the number of tasks (Web services) available to the composition process, in their heterogeneity, and in their autonomy. Therefore, two problems need to be solved: how to efficiently discover Web services, based on functional and operational requirements, and how to facilitate the interoperability of heterogeneous Web services. In this paper, we present a solution within the context of the emerging Semantic Web that includes use of ontologies to overcome some of these problems. We describe a prototype that has been implemented to illustrate how discovery and interoperability functions are achieved more efficiently.
    BibTeX:
    @article{Cardoso2003,
      author = {Cardoso, J and Sheth, A},
      title = {Semantic e-workflow composition},
      journal = {JOURNAL OF INTELLIGENT INFORMATION SYSTEMS},
      year = {2003},
      volume = {21},
      number = {3},
      pages = {191-225}
    }
    
    Carroll, J., Bizer, C., Hayes, P. & Stickler, P. Named graphs {2005} JOURNAL OF WEB SEMANTICS
    Vol. {3}({4}), pp. {247-267} 
    article DOI  
    Abstract: The Semantic Web consists of many RDF graphs nameable by URIs. This paper extends the syntax and semantics of RDF to cover such named graphs. This enables RDF statements that describe graphs, which is beneficial in many Semantic Web application areas. Named graphs are given an abstract syntax, a formal semantics, an XML syntax, and a syntax based on N3. SPARQL is a query language applicable to named graphs. A specific application area discussed in detail is that of describing provenance information. This paper provides a formally defined framework suited to being a foundation for the Semantic Web trust layer. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Carroll2005,
      author = {Carroll, JJ and Bizer, C and Hayes, P and Stickler, P},
      title = {Named graphs},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2005},
      volume = {3},
      number = {4},
      pages = {247-267},
      note = {Semantic Web Track held at the World Wide Web Conference, Chiba, JAPAN, MAY, 2005},
      doi = {{10.1016/j.websem.2005.09.001}}
    }
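
    To make the named-graphs idea from the Carroll et al. entry concrete, here is a minimal sketch using rdflib's Dataset (an assumed toolchain; the graph name, namespace and data are illustrative), where the graph URI serves as the provenance handle the abstract mentions:

      # Place statements in a named graph and query them with SPARQL's GRAPH
      # keyword; the bound graph name ?g identifies where each triple came from.
      from rdflib import Dataset, Literal, Namespace, URIRef

      EX = Namespace("http://example.org/")
      ds = Dataset()
      g1 = ds.graph(URIRef("http://example.org/graphs/source1"))
      g1.add((EX.alice, EX.email, Literal("alice@example.org")))

      q = """
      SELECT ?g ?who ?mail WHERE {
        GRAPH ?g { ?who <http://example.org/email> ?mail }
      }"""
      for g, who, mail in ds.query(q):
          print(g, who, mail)  # ?g acts as a provenance handle for the triple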
    
    Castells, P., Fernandez, M. & Vallet, D. An adaptation of the vector-space model for ontology-based information retrieval {2007} IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
    Vol. {19}({2}), pp. {261-272} 
    article  
    Abstract: Semantic search has been one of the motivations of the Semantic Web since it was envisioned. We propose a model for the exploitation of ontology-based knowledge bases to improve search over large document repositories. In our view of Information Retrieval on the Semantic Web, a search engine returns documents rather than, or in addition to, exact values in response to user queries. For this purpose, our approach includes an ontology-based scheme for the semiautomatic annotation of documents and a retrieval system. The retrieval model is based on an adaptation of the classic vector-space model, including an annotation weighting algorithm, and a ranking algorithm. Semantic search is combined with conventional keyword-based retrieval to achieve tolerance to knowledge base incompleteness. Experiments are shown where our approach is tested on corpora of significant scale, showing clear improvements with respect to keyword-based search.
    BibTeX:
    @article{Castells2007,
      author = {Castells, Pablo and Fernandez, Miriam and Vallet, David},
      title = {An adaptation of the vector-space model for ontology-based information retrieval},
      journal = {IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING},
      year = {2007},
      volume = {19},
      number = {2},
      pages = {261-272}
    }
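
    The weighting and ranking formulas are developed in the Castells et al. paper itself; the sketch below is only a schematic of the combination idea (the weight alpha and the fallback rule are my own assumptions): blend an ontology-annotation score with a keyword score, falling back to keywords when a document has no semantic annotations, which is how the model tolerates knowledge-base incompleteness:

      # Combine a semantic (annotation-based) score with a keyword score; when
      # the knowledge base knows nothing about a document (sem == 0), rely on
      # keywords alone. alpha and the example scores are illustrative.
      import numpy as np

      def combined_ranking(sem_scores, kw_scores, alpha=0.7):
          sem = np.asarray(sem_scores, dtype=float)
          kw = np.asarray(kw_scores, dtype=float)
          score = np.where(sem > 0, alpha * sem + (1 - alpha) * kw, kw)
          return np.argsort(-score)  # document indices, best first

      print(combined_ranking([0.9, 0.0, 0.4], [0.2, 0.8, 0.5]))
      # [1 0 2]: doc 1 has no annotations but wins on its keyword score alone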
    
    Chaisorn, L., Chua, T. & Lee, C. A multi-modal approach to story segmentation for news video {2003} WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS
    Vol. {6}({2}), pp. {187-208} 
    article  
    Abstract: This research proposes a two-level, multi-modal framework to perform the segmentation and classification of news video into single-story semantic units. The video is analyzed at the shot and story unit (or scene) levels using a variety of features and techniques. At the shot level, we employ Decision Trees technique to classify the shots into one of 13 predefined categories or mid-level features. At the scene/story level, we perform the HMM (Hidden Markov Models) analysis to locate story boundaries. Our initial results indicate that we could achieve a high accuracy of over 95% for shot classification, and over 89% in F-1 measure on scene/story boundary detection. Detailed analysis reveals that HMM is effective in identifying dominant features, which helps in locating story boundaries. Our eventual goal is to support the retrieval of news video at story unit level, together with associated texts retrieved from related news sites on the web.
    BibTeX:
    @article{Chaisorn2003,
      author = {Chaisorn, L and Chua, TS and Lee, CH},
      title = {A multi-modal approach to story segmentation for news video},
      journal = {WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS},
      year = {2003},
      volume = {6},
      number = {2},
      pages = {187-208},
      note = {6th IFIP 2.6 Working Conference on Visual Database Systems (VDB6), BRISBANE, AUSTRALIA, MAY 29-31, 2002}
    }
    
    Chakraborty, D., Joshi, A., Yesha, Y. & Finin, T. Toward distributed service discovery in pervasive computing environments {2006} IEEE TRANSACTIONS ON MOBILE COMPUTING
    Vol. {5}({2}), pp. {97-112} 
    article  
    Abstract: The paper proposes a novel distributed service discovery protocol for pervasive environments. The protocol is based on the concepts of peer-to-peer caching of service advertisements and group-based intelligent forwarding of service requests. It does not require a service to be registered with a registry or lookup server. Services are described using the Web Ontology Language (OWL). We exploit the semantic class/subClass hierarchy of OWL to describe service groups and use this semantic information to selectively forward service requests. OWL-based service description also enables increased flexibility in service matching. We present simulation results that show that our protocol achieves increased efficiency in discovering services (compared to traditional broadcast-based mechanisms) by efficiently utilizing bandwidth via controlled forwarding of service requests.
    BibTeX:
    @article{Chakraborty2006,
      author = {Chakraborty, D and Joshi, A and Yesha, Y and Finin, T},
      title = {Toward distributed service discovery in pervasive computing environments},
      journal = {IEEE TRANSACTIONS ON MOBILE COMPUTING},
      year = {2006},
      volume = {5},
      number = {2},
      pages = {97-112}
    }
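
    As a hedged sketch of the class/subclass matching the Chakraborty et al. protocol exploits (a hand-rolled simplification; a real implementation would run an OWL reasoner over actual service descriptions), a request for a service class can be matched against advertisements whose class is the same or more specific:

      # A request for "Printer" matches any advertised service whose class is
      # Printer or a subclass of it. The tiny ontology and adverts are made up.
      SUBCLASS_OF = {
          "ColorPrinter": "Printer",
          "Printer": "HardwareService",
          "Scanner": "HardwareService",
      }

      def subsumes(ancestor, cls):
          while cls is not None:
              if cls == ancestor:
                  return True
              cls = SUBCLASS_OF.get(cls)
          return False

      adverts = {"node7": "ColorPrinter", "node9": "Scanner"}
      print([n for n, c in adverts.items() if subsumes("Printer", c)])  # ['node7']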
    
    Chandrasekaran, S., Silver, G., Miller, J., Cardoso, J. & Sheth, A. Web service technologies and their synergy with simulation {2002} PROCEEDINGS OF THE 2002 WINTER SIMULATION CONFERENCE, VOLS 1 AND 2, pp. {606-615}  inproceedings  
    Abstract: The World Wide Web has had a huge influence on the computing field in general as well as simulation in particular (e.g., Web-Based Simulation). A new wave of development based upon XML has started. Two of the most interesting aspects of this development are the Semantic Web and Web Services. This paper examines the synergy between Web service technology and simulation. In one direction, Web service processes can be simulated for the purpose of correcting/improving the design. In the other direction, simulation models/components can be built out of Web services. Work on seamlessly using simulation as a part of Web service composition and process design, as well as on using Web services to re-build the JSIM Web-based simulation environment is highlighted.
    BibTeX:
    @inproceedings{Chandrasekaran2002,
      author = {Chandrasekaran, S and Silver, G and Miller, JA and Cardoso, J and Sheth, AP},
      title = {Web service technologies and their synergy with simulation},
      booktitle = {PROCEEDINGS OF THE 2002 WINTER SIMULATION CONFERENCE, VOLS 1 AND 2},
      year = {2002},
      pages = {606-615},
      note = {35th Winter Simulation Conference, SAN DIEGO, CA, DEC 08-11, 2002}
    }
    
    Chen, H., Chung, Y., Ramsey, M. & Yang, C. An intelligent personal spider (agent) for dynamic Internet/Intranet searching {1998} DECISION SUPPORT SYSTEMS
    Vol. {23}({1}), pp. {41-58} 
    article  
    Abstract: As Internet services based on the World-Wide Web become more popular, information overload has become a pressing research problem. Difficulties with search on the Internet will worsen as the amount of on-line information increases. A scalable approach to Internet search is critical to the success of Internet services and other current and future National Information Infrastructure (NII) applications. As part of the ongoing Illinois Digital Library Initiative project, this research proposes an intelligent personal spider (agent) approach to Internet searching. The approach, which is grounded on automatic textual analysis and general-purpose search algorithms, is expected to be an improvement over the current static and inefficient Internet searches. In this experiment, we implemented Internet personal spiders based on best first search and genetic algorithm techniques. These personal spiders can dynamically take a user's selected starting homepages and search for the most closely related homepages in the web, based on the links and keyword indexing. A plain, static CGI/HTML-based interface was developed earlier, followed by a recent enhancement of a graphical, dynamic Java-based interface. Preliminary evaluation results and two working prototypes (available for Web access) are presented. Although the examples and evaluations presented are mainly based on Internet applications, the applicability of the proposed techniques to the potentially more rewarding Intranet applications should be obvious. In particular, we believe the proposed agent design can be used to locate organization-wide information, to gather new, time-critical organizational information, and to support team-building and communication in Intranets. (C) 1998 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Chen1998,
      author = {Chen, HC and Chung, YM and Ramsey, M and Yang, CC},
      title = {An intelligent personal spider (agent) for dynamic Internet/Intranet searching},
      journal = {DECISION SUPPORT SYSTEMS},
      year = {1998},
      volume = {23},
      number = {1},
      pages = {41-58}
    }
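
    A best-first personal spider of the kind described in the Chen et al. entry can be sketched in a few lines (the scoring function, link graph and budget below are stand-ins; a real spider would fetch pages and extract links and keywords):

      # Best-first crawl: repeatedly expand the most promising known URL.
      # heapq is a min-heap, so scores are negated to pop the best first.
      import heapq

      def best_first_crawl(start_urls, score, neighbors, budget=10):
          frontier = [(-score(u), u) for u in start_urls]
          heapq.heapify(frontier)
          seen, visited = set(start_urls), []
          while frontier and len(visited) < budget:
              _, url = heapq.heappop(frontier)
              visited.append(url)
              for nxt in neighbors(url):  # links found on the fetched page
                  if nxt not in seen:
                      seen.add(nxt)
                      heapq.heappush(frontier, (-score(nxt), nxt))
          return visited

      links = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
      relevance = {"a": 1, "b": 3, "c": 2, "d": 5}  # hypothetical query closeness
      print(best_first_crawl(["a"], relevance.get, links.get, budget=3))
      # ['a', 'b', 'd']: the crawl follows the highest-scoring links first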
    
    Chen, H., Finin, T., Joshi, A., Kagal, L., Perich, F. & Chakraborty, D. Intelligent agents meet the semantic Web in smart spaces {2004} IEEE INTERNET COMPUTING
    Vol. {8}({6}), pp. {69-79} 
    article  
    BibTeX:
    @article{Chen2004,
      author = {Chen, H and Finin, T and Joshi, A and Kagal, L and Perich, F and Chakraborty, D},
      title = {Intelligent agents meet the semantic Web in smart spaces},
      journal = {IEEE INTERNET COMPUTING},
      year = {2004},
      volume = {8},
      number = {6},
      pages = {69-79}
    }
    
    Chen, H., Ma, J., Wang, Y. & Wu, Z. A survey on semantic e-science applications {2008} COMPUTING AND INFORMATICS
    Vol. {27}({1}), pp. {5-20} 
    article  
    Abstract: This paper gives a survey of the state of the art in applying semantic web technologies to typical e-science applications. It provides an overview of applications from a range of science communities, including chemistry, earth science, energy, life science and health care, and scientific publishing. An analysis summarizes the trends and future directions.
    BibTeX:
    @article{Chen2008,
      author = {Chen, Huajun and Ma, Jun and Wang, Yimin and Wu, Zhaohui},
      title = {A survey on semantic e-science applications},
      journal = {COMPUTING AND INFORMATICS},
      year = {2008},
      volume = {27},
      number = {1},
      pages = {5-20}
    }
    
    Chen, J., Bardes, E.E., Aronow, B.J. & Jegga, A.G. ToppGene Suite for gene list enrichment analysis and candidate gene prioritization {2009} NUCLEIC ACIDS RESEARCH
    Vol. {37}({Suppl. S}), pp. {W305-W311} 
    article DOI  
    Abstract: ToppGene Suite (http://toppgene.cchmc.org; this web site is free and open to all users and does not require a login to access) is a one-stop portal for (i) gene list functional enrichment, (ii) candidate gene prioritization using either functional annotations or network analysis and (iii) identification and prioritization of novel disease candidate genes in the interactome. Functional annotation-based disease candidate gene prioritization uses a fuzzy-based similarity measure to compute the similarity between any two genes based on semantic annotations. The similarity scores from individual features are combined into an overall score using statistical meta-analysis. A P-value of each annotation of a test gene is derived by random sampling of the whole genome. The protein-protein interaction network (PPIN)-based disease candidate gene prioritization uses social and Web networks analysis algorithms (extended versions of the PageRank and HITS algorithms, and the K-Step Markov method). We demonstrate the utility of ToppGene Suite using 20 recently reported GWAS-based gene-disease associations (including novel disease genes) representing five diseases. ToppGene ranked 19 of 20 (95%) candidate genes within the top 20%, while ToppNet ranked 12 of 16 (75%) candidate genes among the top 20%.
    BibTeX:
    @article{Chen2009,
      author = {Chen, Jing and Bardes, Eric E. and Aronow, Bruce J. and Jegga, Anil G.},
      title = {ToppGene Suite for gene list enrichment analysis and candidate gene prioritization},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {2009},
      volume = {37},
      number = {Suppl. S},
      pages = {W305-W311},
      doi = {{10.1093/nar/gkp427}}
    }
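
    The actual ToppGene similarity measure is fuzzy, feature-weighted and combined by meta-analysis; as a loose illustration of the annotation-overlap idea only (the gene IDs and GO terms below are invented), a candidate can be scored by the Jaccard overlap between its annotation set and the training genes' annotations:

      # Rank candidate genes by Jaccard overlap of their annotations with the
      # training set's annotations -- a crude stand-in for the fuzzy measure.
      def jaccard(a, b):
          return len(a & b) / len(a | b) if a | b else 0.0

      training_annotations = {"GO:0001", "GO:0002", "GO:0003"}
      candidates = {
          "geneA": {"GO:0002", "GO:0003"},
          "geneB": {"GO:0009"},
      }
      ranked = sorted(candidates,
                      key=lambda g: -jaccard(training_annotations, candidates[g]))
      print(ranked)  # ['geneA', 'geneB']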
    
    Chen, L., Shadbolt, N., Goble, C., Tao, F., Cox, S., Puleston, C. & Smart, P. Towards a knowledge-based approach to semantic service composition {2003}
    Vol. {2870}SEMANTIC WEB - ISWC 2003, pp. {319-334} 
    inproceedings  
    Abstract: The successful application of Grid and Web Service technologies to real-world problems, such as e-Science [1], requires not only the development of a common vocabulary and meta-data framework as the basis for inter-agent communication and service integration but also the access and use of a rich repository of domain-specific knowledge for problem solving. Both requirements are met by the respective outcomes of ontological and knowledge engineering initiatives. In this paper we discuss a novel, knowledge-based approach to resource synthesis (service composition), which draws on the functionality of semantic web services to represent and expose available resources. The approach we use exploits domain knowledge to guide the service composition process and provide advice on service selection and instantiation. The approach has been implemented in a prototype workflow construction environment that supports the runtime recommendation of a service solution, service discovery via semantic service descriptions, and knowledge-based configuration of selected services. The use of knowledge provides a basis for full automation of service composition via conventional planning algorithms. Workflows produced by this system can be executed through a domain-specific direct mapping mechanism or via a more fluid approach such as WSDL-based service grounding. The approach and prototype have been used to demonstrate practical benefits in the context of the Geodise initiative [2].
    BibTeX:
    @inproceedings{Chen2003,
      author = {Chen, LM and Shadbolt, NR and Goble, C and Tao, F and Cox, SJ and Puleston, C and Smart, PR},
      title = {Towards a knowledge-based approach to semantic service composition},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {319-334},
      note = {2nd International Semantic Web Conference, SANIBEL, FLORIDA, OCT 20-23, 2003}
    }
    
    Chen, R. & Hsieh, C. Web page classification based on a support vector machine using a weighted vote schema {2006} EXPERT SYSTEMS WITH APPLICATIONS
    Vol. {31}({2}), pp. {427-435} 
    article DOI  
    Abstract: Traditional information retrieval methods use keywords occurring in documents to determine the class of the documents, but usually retrieve unrelated web pages. In order to effectively classify web pages while solving the synonymous keyword problem, we propose a web page classification method based on a support vector machine using a weighted vote schema for various features. The system uses both latent semantic analysis and web page feature selection for training and recognition by the SVM model. Latent semantic analysis is used to find the semantic relations between keywords, and between documents. The latent semantic analysis method projects terms and a document into a vector space to find latent information in the document. At the same time, we also extract text features from web page content. Through text features, web pages are classified into a suitable category. These two features are sent to the SVM for training and testing respectively. Based on the output of the SVM, a voting schema is used to determine the category of the web page. Experimental results indicate our method is more effective than traditional methods. (C) 2005 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Chen2006,
      author = {Chen, RC and Hsieh, CH},
      title = {Web page classification based on a support vector machine using a weighted vote schema},
      journal = {EXPERT SYSTEMS WITH APPLICATIONS},
      year = {2006},
      volume = {31},
      number = {2},
      pages = {427-435},
      doi = {{10.1016/j.eswa.2005.09.079}}
    }
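
    To make the weighted-vote schema described above concrete, here is a minimal sketch in Python with scikit-learn; the corpus, labels, weights, and feature dimensions are illustrative assumptions, not the authors' implementation:
    
      # Two feature views of the same pages: raw TF-IDF text features and an
      # LSA (truncated-SVD) projection; one SVM per view, combined by a
      # weighted vote.
      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.svm import LinearSVC
    
      docs = ["stock market finance news", "soccer match football score",
              "bank loan interest rate finance", "league goal player team score"]
      labels = np.array([0, 1, 0, 1])            # 0 = finance, 1 = sports
    
      tfidf = TfidfVectorizer()
      X_text = tfidf.fit_transform(docs)          # text-feature view
      svd = TruncatedSVD(n_components=2)
      X_lsa = svd.fit_transform(X_text)           # latent-semantic view
    
      svm_text = LinearSVC().fit(X_text, labels)
      svm_lsa = LinearSVC().fit(X_lsa, labels)
    
      def classify(page, w_text=0.5, w_lsa=0.5):
          v = tfidf.transform([page])
          vote = (w_text * svm_text.decision_function(v)[0]
                  + w_lsa * svm_lsa.decision_function(svd.transform(v))[0])
          return "sports" if vote > 0 else "finance"
    
      print(classify("interest rate news from the stock market"))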
    
    Chen, Y., Zhou, L. & Zhang, D. Ontology-supported web service composition: An approach to service-oriented knowledge management in corporate financial services {2006} JOURNAL OF DATABASE MANAGEMENT
    Vol. {17}({1}), pp. {67-84} 
    article  
    Abstract: Web service composition can enhance the efficiency and agility of knowledge management by composing individual Web services together for complex business requirements. There are two main research streams in knowledge representation for Web service composition: the syntactic-based approach and the semantic-based approach. Despite the promises of each approach, the two streams are largely separated from each other. In this article, we propose an integrated ontology-supported Web service composition framework, which provides a novel solution to organizational knowledge management. By synergistically leveraging both syntactic-based and semantic-based approaches, this framework provides dual modes to perform service composition. Ontologies are employed to enrich semantics at both the service description and composition levels. The proposed conceptual framework has been implemented in the corporate financial services domain. It is demonstrated that the shared ontology helps to fulfill automated and on-the-fly service composition in particular and knowledge management in general.
    BibTeX:
    @article{Chen2006a,
      author = {Chen, Y and Zhou, L and Zhang, DS},
      title = {Ontology-supported web service composition: An approach to service-oriented knowledge management in corporate financial services},
      journal = {JOURNAL OF DATABASE MANAGEMENT},
      year = {2006},
      volume = {17},
      number = {1},
      pages = {67-84}
    }
    
    Cheung, K., Yip, K., Smith, A., deKnikker, R., Masiar, A. & Gerstein, M. YeastHub: a semantic web use case for integrating data in the life sciences domain {2005} BIOINFORMATICS
    Vol. {21}({Suppl. 1}), pp. {I85-I96} 
    article DOI  
    Abstract: Motivation: As semantic web technology is maturing and the need for life sciences data integration over the web is growing, it is important to explore how data integration needs can be addressed by the semantic web. The main problem that we face in data integration is a lack of widely accepted standards for expressing the syntax and semantics of the data. We address this problem by exploring the use of semantic web technologies - including the resource description framework (RDF), RDF site summary (RSS), relational-database-to-RDF mapping (D2RQ) and a native RDF data repository - to represent, store and query both metadata and data across life sciences datasets. Results: As many biological datasets are presently available in tabular format, we introduce an RDF structure into which they can be converted. Also, we develop a prototype web-based application called YeastHub that demonstrates how a life sciences data warehouse can be built using a native RDF data store (Sesame). This data warehouse allows integration of different types of yeast genome data provided by different resources in different formats, including the tabular and RDF formats. Once the data are loaded into the data warehouse, RDF-based queries can be formulated to retrieve and query the data in an integrated fashion.
    BibTeX:
    @article{Cheung2005,
      author = {Cheung, KH and Yip, KY and Smith, A and deKnikker, R and Masiar, A and Gerstein, M},
      title = {YeastHub: a semantic web use case for integrating data in the life sciences domain},
      journal = {BIOINFORMATICS},
      year = {2005},
      volume = {21},
      number = {Suppl. 1},
      pages = {I85-I96},
      note = {13th International Conference on Intelligent Systems for Molecular Biology, Detroit, MI, JUN 25-29, 2005},
      doi = {{10.1093/bioinformatics/bti1026}}
    }
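
    To make the tabular-to-RDF conversion idea concrete, a minimal sketch using rdflib; the namespace, property names, and row contents are illustrative assumptions, not YeastHub's actual schema:
    
      # Convert one row of a (hypothetical) tabular yeast dataset into RDF
      # triples, then emit Turtle; YeastHub itself stores such RDF in Sesame.
      from rdflib import Graph, Literal, Namespace, RDF
    
      EX = Namespace("http://example.org/yeast/")   # hypothetical namespace
      g = Graph()
      row = {"orf": "YAL001C", "gene": "TFC3", "chromosome": "I"}
    
      subj = EX[row["orf"]]
      g.add((subj, RDF.type, EX.ORF))
      g.add((subj, EX.geneName, Literal(row["gene"])))
      g.add((subj, EX.chromosome, Literal(row["chromosome"])))
    
      print(g.serialize(format="turtle"))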
    
    Cheung, K.-H., Yip, K.Y., Townsend, J.P. & Scotch, M. HCLS 2.0/3.0: Health care and life sciences data mashup using Web 2.0/3.0 {2008} JOURNAL OF BIOMEDICAL INFORMATICS
    Vol. {41}({5, Sp. Iss. SI}), pp. {694-705} 
    article DOI  
    Abstract: We describe the potential of current Web 2.0 technologies to achieve data mashup in the health care and life sciences (HCLS) domains, and compare that potential to the nascent trend of performing semantic mashup. After providing an overview of Web 2.0, we demonstrate two scenarios of data mashup, facilitated by the following Web 2.0 tools and sites: Yahoo! Pipes, Dapper, Google Maps and GeoCommons. In the first scenario, we exploited Dapper and Yahoo! Pipes to implement a challenging data integration task in the context of DNA microarray research. In the second scenario, we exploited Yahoo! Pipes, Google Maps, and GeoCommons to create a geographic information system (GIS) interface that allows visualization and integration of diverse categories of public health data, including cancer incidence and pollution prevalence data. Based on these two scenarios, we discuss the strengths and weaknesses of these Web 2.0 mashup technologies. We then describe Semantic Web, the mainstream Web 3.0 technology that enables more powerful data integration over the Web. We discuss the areas of intersection of Web 2.0 and Semantic Web, and describe the potential benefits that can be brought to HCLS research by combining these two sets of technologies. (C) 2008 Elsevier Inc. All rights reserved.
    BibTeX:
    @article{Cheung2008,
      author = {Cheung, Kei-Hoi and Yip, Kevin Y. and Townsend, Jeffrey P. and Scotch, Matthew},
      title = {HCLS 2.0/3.0: Health care and life sciences data mashup using Web 2.0/3.0},
      journal = {JOURNAL OF BIOMEDICAL INFORMATICS},
      year = {2008},
      volume = {41},
      number = {5, Sp. Iss. SI},
      pages = {694-705},
      doi = {{10.1016/j.jbi.2008.04.001}}
    }
    
    Chidlovskii, B. & Borghoff, U. Semantic caching of Web queries {2000} VLDB JOURNAL
    Vol. {9}({1}), pp. {2-17} 
    article  
    Abstract: In meta-searchers accessing distributed Web-based information repositories, performance is a major issue. Efficient query processing requires an appropriate caching mechanism. Unfortunately, standard page-based as well as tuple-based caching mechanisms designed for conventional databases are not efficient on the Web, where keyword-based querying is often the only way to retrieve data. In this work, we study the problem of semantic caching of Web queries and develop a caching mechanism for conjunctive Web queries based on signature files. Our algorithms cope with both relations of semantic containment and intersection between a query and the corresponding cache items. We also develop a cache replacement strategy to handle situations where cached items differ in size and in their contribution when providing partial query answers. We report results of experiments and show how the caching mechanism is realized in the Knowledge Broker system.
    BibTeX:
    @article{Chidlovskii2000,
      author = {Chidlovskii, B and Borghoff, UM},
      title = {Semantic caching of Web queries},
      journal = {VLDB JOURNAL},
      year = {2000},
      volume = {9},
      number = {1},
      pages = {2-17}
    }
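
    A minimal sketch of the signature-file test for semantic containment between conjunctive keyword queries, assuming the usual superimposed-coding scheme; the bit width and hashing are illustrative, not the paper's exact design:
    
      # Each keyword hashes to a few bits; a query signature is the OR of its
      # keyword signatures. If every bit of a cached query's signature occurs
      # in a new query's signature, the cached (broader) conjunctive query may
      # subsume the new one; verify on the actual keyword sets to rule out the
      # false positives inherent to signatures.
      import hashlib
    
      def word_sig(word, bits=64):
          h = int(hashlib.md5(word.encode()).hexdigest(), 16)
          return (1 << (h % bits)) | (1 << ((h >> 8) % bits))
    
      def query_sig(words, bits=64):
          s = 0
          for w in words:
              s |= word_sig(w, bits)
          return s
    
      cached, new = ["semantic", "web"], ["semantic", "web", "caching"]
      if (query_sig(cached) & query_sig(new)) == query_sig(cached) \
              and set(cached) <= set(new):
          print("new query can be answered from the cached result set")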
    
    Choi, N., Song, I.-Y. & Han, H. A survey on ontology mapping {2006} SIGMOD RECORD
    Vol. {35}({3}), pp. {34-41} 
    article  
    Abstract: Ontology is increasingly seen as a key factor for enabling interoperability across heterogeneous systems and semantic web applications. Ontology mapping is required for combining distributed and heterogeneous ontologies. Developing such ontology mapping has been a core issue of recent ontology research. This paper presents ontology mapping categories, describes the characteristics of each category, compares these characteristics, and surveys tools, systems, and related work based on each category of ontology mapping. We believe this paper provides readers with a comprehensive understanding of ontology mapping and points to various research topics about the specific roles of ontology mapping.
    BibTeX:
    @article{Choi2006,
      author = {Choi, Namyoun and Song, Il-Yeol and Han, Hyoil},
      title = {A survey on ontology mapping},
      journal = {SIGMOD RECORD},
      year = {2006},
      volume = {35},
      number = {3},
      pages = {34-41}
    }
    
    Christopoulou, E. & Kameas, A. GAS Ontology: An ontology for collaboration among ubiquitous computing devices {2005} INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES
    Vol. {62}({5}), pp. {664-685} 
    article DOI  
    Abstract: The vision of ubiquitous computing is that the addition of computation and communication abilities to the artefacts that surround people will enable users to set up their living spaces in a way that serves them best, while minimising the required human intervention. Ontologies can help us address some key issues of ubiquitous computing environments, such as knowledge representation, semantic interoperability and service discovery. The GAS Ontology is an ontology that was developed to describe the semantics of the basic concepts of a ubiquitous computing environment and define their inter-relations. The basic goal of this ontology is to provide a common language for the communication and collaboration among the heterogeneous devices that constitute these environments. The GAS Ontology also supports the service discovery mechanism that a ubiquitous computing environment requires. In this paper, we present the GAS Ontology as well as the design challenges that we faced and the way that we handled them. In order to select the language and the tool that we used for the development of the GAS Ontology, we designed a prototype ontology and evaluated a number of languages and tools. The ontology development tool that proved to be the most suitable from this evaluation was Protege-2000. We also present how we use the GAS Ontology in our eGadgets project to achieve semantic interoperability and service discovery. Finally, we present the GAS Ontology manager, which runs on each device, manages the device's ontology and processes the knowledge that each device acquires over time. (c) 2005 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Christopoulou2005,
      author = {Christopoulou, E and Kameas, A},
      title = {GAS Ontology: An ontology for collaboration among ubiquitous computing devices},
      journal = {INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES},
      year = {2005},
      volume = {62},
      number = {5},
      pages = {664-685},
      note = {6th International Protege Users Conference, Manchester, ENGLAND, 2003},
      doi = {{10.1016/j.ijhcs.2005.02.007}}
    }
    
    Cilibrasi, R.L. & Vitanyi, P.M.B. The Google similarity distance {2007} IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
    Vol. {19}({3}), pp. {370-383} 
    article  
    Abstract: Words and phrases acquire meaning from the way they are used in society, from their relative semantics to other words and phrases. For computers, the equivalent of "society" is "database," and the equivalent of "use" is "a way to search the database." We present a new theory of similarity between words and phrases based on information distance and Kolmogorov complexity. To fix thoughts, we use the World Wide Web (WWW) as the database, and Google as the search engine. The method is also applicable to other search engines and databases. This theory is then applied to construct a method to automatically extract similarity, the Google similarity distance, of words and phrases from the WWW using Google page counts. The WWW is the largest database on earth, and the context information entered by millions of independent users averages out to provide automatic semantics of useful quality. We give applications in hierarchical clustering, classification, and language translation. We give examples to distinguish between colors and numbers, to cluster names of paintings by 17th century Dutch masters and names of books by English novelists, and to understand emergencies and primes, and we demonstrate the ability to do a simple automatic English-Spanish translation. Finally, we use the WordNet database as an objective baseline against which to judge the performance of our method. We conduct a massive randomized trial in binary classification using support vector machines to learn categories based on our Google distance, resulting in a mean agreement of 87 percent with the expert-crafted WordNet categories.
    BibTeX:
    @article{Cilibrasi2007,
      author = {Cilibrasi, Rudi L. and Vitanyi, Paul M. B.},
      title = {The Google similarity distance},
      journal = {IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING},
      year = {2007},
      volume = {19},
      number = {3},
      pages = {370-383},
      note = {IEEE Information Theory Workshop on Coding and Complexity, Rotorua, NEW ZEALAND, AUG-SEP, 2005}
    }
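
    For reference, the Google similarity distance the abstract refers to is the normalized Google distance: with f(x) the number of pages containing term x, f(x,y) the number containing both x and y, and N the (scaled) total number of pages indexed, it reads
    
      \[
      \mathrm{NGD}(x,y) \;=\;
        \frac{\max\{\log f(x),\,\log f(y)\} \;-\; \log f(x,y)}
             {\log N \;-\; \min\{\log f(x),\,\log f(y)\}}
      \]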
    
    Clark, T., Martin, S. & Liefeld, T. Globally distributed object identification for biological knowledgebases {2004} BRIEFINGS IN BIOINFORMATICS
    Vol. {5}({1}), pp. {59-70} 
    article  
    Abstract: The World-Wide Web provides a globally distributed communication framework that is essential for almost all scientific collaboration, including bioinformatics. However, several limits and inadequacies have become apparent, one of which is the inability to programmatically identify locally named objects that may be widely distributed over the network. This shortcoming limits our ability to integrate multiple knowledgebases, each of which gives partial information of a shared domain, as is commonly seen in bioinformatics. The Life Science Identifier (LSID) and LSID Resolution System (LSRS) provide simple and elegant solutions to this problem, based on the extension of existing internet technologies. LSID and LSRS are consistent with next-generation semantic web and semantic grid approaches. This article describes the syntax, operations, infrastructure compatibility considerations, use cases and potential future applications of LSID and LSRS. We see the adoption of these methods as important steps toward simpler, more elegant and more reliable integration of the world's biological knowledgebases, and as facilitating stronger global collaboration in biology.
    BibTeX:
    @article{Clark2004,
      author = {Clark, T and Martin, S and Liefeld, T},
      title = {Globally distributed object identification for biological knowledgebases},
      journal = {BRIEFINGS IN BIOINFORMATICS},
      year = {2004},
      volume = {5},
      number = {1},
      pages = {59-70}
    }
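
    The LSID syntax discussed in the article follows the URN scheme below; the SWISS-PROT line is an example of the kind commonly used in LSID documentation, shown here for illustration:
    
      urn:lsid:<authority>:<namespace>:<object>[:<revision>]
      urn:lsid:ebi.ac.uk:SWISS-PROT.accession:P34355:3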
    
    Coles, S., Day, N., Murray-Rust, P., Rzepa, H. & Zhang, Y. Enhancement of the chemical semantic web through the use of InChI identifiers {2005} ORGANIC & BIOMOLECULAR CHEMISTRY
    Vol. {3}({10}), pp. {1832-1834} 
    article  
    Abstract: Molecules, as defined by connectivity specified via the International Chemical Identifier (InChI), are precisely indexed by major web search engines so that Internet tools can be transparently used for unique structure searches.
    BibTeX:
    @article{Coles2005,
      author = {Coles, SJ and Day, NE and Murray-Rust, P and Rzepa, HS and Zhang, Y},
      title = {Enhancement of the chemical semantic web through the use of InChI identifiers},
      journal = {ORGANIC & BIOMOLECULAR CHEMISTRY},
      year = {2005},
      volume = {3},
      number = {10},
      pages = {1832-1834}
    }
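
    For illustration, an InChI encodes molecular connectivity in layers (formula, connections, hydrogens); for benzene, in the current standard-InChI form (which postdates the identifiers used in this 2005 paper), it is:
    
      InChI=1S/C6H6/c1-2-4-6-5-3-1/h1-6H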
    
    Colucci, S., Di Noia, T., Di Sciascio, E., Donini, F.M. & Mongiello, M. Concept abduction and contraction for semantic-based discovery of matches and negotiation spaces in an e-marketplace {2005} ELECTRONIC COMMERCE RESEARCH AND APPLICATIONS
    Vol. {4}({4}), pp. {345-361} 
    article DOI  
    Abstract: In this paper, we present a Description Logic approach - fully compliant with the Semantic Web vision and technologies - to extended matchmaking between demands and supplies in a semantic-enabled Electronic Marketplace, which allows the semantic-based treatment of negotiable and strict requirements in the demand/supply descriptions. To this aim, we exploit two novel non-standard Description Logic inference services, Concept Contraction - which extends satisfiability - and Concept Abduction - which extends subsumption. Based on these services, we devise algorithms that make it possible to find negotiation spaces and to determine the quality of a possible match, also in the presence of a distinction between strictly required and optional elements. Both the algorithms and the semantic-based approach are novel, and enable a mechanism to boost logic-based discovery and negotiation stages within an e-marketplace. A set of simple experiments confirms the validity of the approach. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Colucci2005,
      author = {Colucci, Simona and Di Noia, Tommaso and Di Sciascio, Eugenio and Donini, Francesco M. and Mongiello, Marina},
      title = {Concept abduction and contraction for semantic-based discovery of matches and negotiation spaces in an e-marketplace},
      journal = {ELECTRONIC COMMERCE RESEARCH AND APPLICATIONS},
      year = {2005},
      volume = {4},
      number = {4},
      pages = {345-361},
      note = {6th International Conference on Electronic Commerce, Delft, NETHERLANDS, OCT, 2004},
      doi = {{10.1016/j.elerap.2005.06.004}}
    }
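
    Roughly, the two non-standard inferences named in this abstract (and formalized in the Di Noia et al. entry further below) can be stated as follows, for a supply C, a demand D and a TBox T:
    
      \[
      \begin{aligned}
      \text{Concept Contraction:}\quad & \text{if } C \sqcap D \text{ is unsatisfiable in } \mathcal{T},
        \text{ find } \langle G, K \rangle \text{ with } C \equiv G \sqcap K \\
      & \text{and } K \sqcap D \text{ satisfiable in } \mathcal{T}
        \quad (G = \text{give up},\ K = \text{keep});\\[2pt]
      \text{Concept Abduction:}\quad & \text{find } H \text{ with } C \sqcap H \text{ satisfiable in } \mathcal{T}
        \text{ and } \mathcal{T} \models C \sqcap H \sqsubseteq D.
      \end{aligned}
      \]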
    
    Corby, O., Dieng-Kuntz, R., Gandon, F. & Faron-Zucker, C. Searching the Semantic Web: Approximate query processing based on ontologies {2006} IEEE INTELLIGENT SYSTEMS
    Vol. {21}({1}), pp. {20-27} 
    article  
    BibTeX:
    @article{Corby2006,
      author = {Corby, O and Dieng-Kuntz, R and Gandon, F and Faron-Zucker, C},
      title = {Searching the Semantic Web: Approximate query processing based on ontologies},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2006},
      volume = {21},
      number = {1},
      pages = {20-27}
    }
    
    Corcho, O., Gomez-Perez, A., Lopez-Cima, A., Lopez-Garcia, V. & Suarez-Figueroa, M. ODESeW. Automatic generation of knowledge portals for Intranets and Extranets {2003}
    Vol. {2870}SEMANTIC WEB - ISWC 2003, pp. {802-817} 
    inproceedings  
    Abstract: This paper presents ODESeW (Semantic Web Portal based on WebODE platform [1]) as an ontology-based application that automatically generates and manages a knowledge portal for Intranets and Extranets. ODESeW is designed on the top of WebODE ontology engineering platform. This paper shows the service architecture that allows configuring the visualization of ontology-based information for different kinds of users, establishing reading and updating access policies to its content, and performing consistency checking between the portal information and the ontologies underlying it.
    BibTeX:
    @inproceedings{Corcho2003,
      author = {Corcho, O and Gomez-Perez, A and Lopez-Cima, A and Lopez-Garcia, V and Suarez-Figueroa, MDC},
      title = {ODESeW. Automatic generation of knowledge portals for Intranets and Extranets},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {802-817},
      note = {2nd International Semantic Web Conference, SANIBEL, FLORIDA, OCT 20-23, 2003}
    }
    
    de Coronado, S., Haber, M., Sioutos, N., Tuttle, M. & Wright, L. NCI thesaurus: Using science-based terminology to integrate cancer research results {2004}
    Vol. {107}MEDINFO 2004: PROCEEDINGS OF THE 11TH WORLD CONGRESS ON MEDICAL INFORMATICS, PT 1 AND 2, pp. {33-37} 
    inproceedings  
    Abstract: Cancer researchers need to be able to organize and report their results in a way that others can find, build upon, and relate to the specific clinical conditions of individual patients. NCI Thesaurus (TM) is a description logic terminology based on current science that helps individuals and software applications connect and organize the results of cancer research, e.g., by disease and underlying biology. Currently containing some 34,000 concepts covering chemicals, drugs and other therapies, diseases, genes and gene products, anatomy, organisms, animal models, techniques, biologic processes, and administrative categories - NCI Thesaurus serves applications and the Web from a terminology server. As a scalable, formal terminology, the deployed Thesaurus, and associated applications and interfaces, are a model for some of the standards required for the NHII (National Health Information Infrastructure) and the Semantic Web.
    BibTeX:
    @inproceedings{Coronado2004,
      author = {de Coronado, S and Haber, MW and Sioutos, N and Tuttle, MS and Wright, LW},
      title = {NCI thesaurus: Using science-based terminology to integrate cancer research results},
      booktitle = {MEDINFO 2004: PROCEEDINGS OF THE 11TH WORLD CONGRESS ON MEDICAL INFORMATICS, PT 1 AND 2},
      year = {2004},
      volume = {107},
      pages = {33-37},
      note = {11th World Congress on Medical Informatics, San Francisco, CA, SEP 07-11, 2004}
    }
    
    Courbis, C. & Finkelstein, A. Towards aspect weaving applications {2005} ICSE 05: 27th International Conference on Software Engineering, Proceedings, pp. {69-77}  inproceedings  
    Abstract: Software must be adapted to accommodate new features in the context of changing requirements. In this paper, we illustrate how applications with aspect weaving capabilities can be easily and dynamically adapted with unforeseen features. Aspects were used at three levels: in the context of semantic analysers, within a BPEL engine that orchestrates Web Services, and finally within BPEL processes themselves. Each level uses its own tailored domain-specific aspect language that is easier to manipulate than a general-purpose one (close to the programming language), and the pointcuts are independent of the implementation.
    BibTeX:
    @inproceedings{Courbis2005,
      author = {Courbis, C and Finkelstein, A},
      title = {Towards aspect weaving applications},
      booktitle = {ICSE 05: 27th International Conference on Software Engineering, Proceedings},
      year = {2005},
      pages = {69-77},
      note = {27th International Conference on Software Engineering (ICSE 2005), St Louis, MO, MAY 15-21, 2005}
    }
    
    Crestani, F. & Lee, P. Searching the web by constrained spreading activation {2000} INFORMATION PROCESSING & MANAGEMENT
    Vol. {36}({4}), pp. {585-605} 
    article  
    Abstract: Intelligent Information Retrieval is concerned with the application of intelligent techniques, such as semantic networks, neural networks and inference nets, to Information Retrieval. This field of research has seen a number of applications of Constrained Spreading Activation (CSA) techniques on domain knowledge networks. However, there has never been any application of these techniques to the World Wide Web. The Web is a very important information resource, but users find that looking for a relevant piece of information in the Web can be like `looking for a needle in a haystack'. We were therefore motivated to design and develop a prototype system, WebSCSA (Web Search by CSA), that applied a CSA technique to retrieve information from the Web using an ostensive approach to querying similar to query-by-example. In this paper we describe the system and its underlying model. Furthermore, we report on an experiment carried out with human subjects to evaluate the effectiveness of WebSCSA. We tested whether WebSCSA improves retrieval of relevant information on top of Web search engine results and how well WebSCSA serves as an agent browser for the user. The results of the experiments are promising, and show that there is much potential for further research on the use of CSA techniques to search the Web. (C) 2000 Elsevier Science Ltd. All rights reserved.
    BibTeX:
    @article{Crestani2000,
      author = {Crestani, F and Lee, PL},
      title = {Searching the web by constrained spreading activation},
      journal = {INFORMATION PROCESSING & MANAGEMENT},
      year = {2000},
      volume = {36},
      number = {4},
      pages = {585-605}
    }
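
    A minimal sketch of constrained spreading activation over a weighted association network; the graph, weights, and the three constraints (distance, fan-out, activation threshold) illustrate the general CSA technique rather than WebSCSA itself:
    
      def spread(graph, seeds, threshold=0.1, max_dist=2, fanout_limit=3):
          # graph: node -> list of (neighbour, weight); seeds: node -> activation
          activation = dict(seeds)
          frontier = dict(seeds)
          for _ in range(max_dist):                      # distance constraint
              nxt = {}
              for node, act in frontier.items():
                  nbrs = graph.get(node, [])
                  if len(nbrs) > fanout_limit:           # fan-out constraint
                      continue
                  for nbr, w in nbrs:
                      nxt[nbr] = nxt.get(nbr, 0.0) + act * w
              frontier = {n: a for n, a in nxt.items()
                          if a >= threshold}             # activation threshold
              for n, a in frontier.items():
                  activation[n] = activation.get(n, 0.0) + a
          return sorted(activation.items(), key=lambda kv: -kv[1])
    
      graph = {"semantic": [("web", 0.8), ("ontology", 0.6)],
               "web": [("search", 0.7)],
               "ontology": [("owl", 0.9)]}
      print(spread(graph, {"semantic": 1.0}))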
    
    Crow, L. & Shadbolt, N. Extracting focused knowledge from the semantic web {2001} INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES
    Vol. {54}({1}), pp. {155-184} 
    article DOI  
    Abstract: Ontologies are increasingly being recognized as a critical component in making networked knowledge accessible. Software architectures which can assemble knowledge from networked sources coherently according to the requirements of a particular task or perspective will be at a premium in the next generation of web services. We argue that the ability to generate task-relevant ontologies efficiently and relate them to web resources will be essential for creating a machine-inferenceable ``semantic web''. The Internet-based multi-agent problem solving (IMPS) architecture described here is designed to facilitate the retrieval, restructuring, integration and formalization of task-relevant ontological knowledge from the web. There are rich structured and semi-structured sources of knowledge available on the web that present implicit or explicit ontologies of domains. Knowledge-level models of tasks have an important role to play in extracting and structuring useful focused problem-solving knowledge from these web sources. IMPS uses a multi-agent architecture to combine these models with a selection of web knowledge extraction heuristics to provide clean syntactic integration of ontological knowledge from diverse sources and support a range of ontology merging operations at the semantic level. Whilst our specific aim is to enable on-line knowledge acquisition from web sources to support knowledge-based problem solving by a community of software agents encapsulating problem-solving inferences, the techniques described here can be applied to more general task-based integration of knowledge from diverse web sources, and the provision of services such as the critical comparison, fusion, maintenance and update of both formal and informal ontologies. (C) 2001 Academic Press.
    BibTeX:
    @article{Crow2001,
      author = {Crow, L and Shadbolt, N},
      title = {Extracting focused knowledge from the semantic web},
      journal = {INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES},
      year = {2001},
      volume = {54},
      number = {1},
      pages = {155-184},
      doi = {{10.1006/ijhc.2000.0453}}
    }
    
    Daquin, M., Motta, E., Sabou, M., Angeletou, S., Gridinoc, L., Lopez, V. & Guidi, D. Toward a new generation of Semantic Web applications {2008} IEEE INTELLIGENT SYSTEMS
    Vol. {23}({3}), pp. {20-28} 
    article  
    BibTeX:
    @article{Daquin2008,
      author = {Daquin, Mathieu and Motta, Enrico and Sabou, Marta and Angeletou, Sofia and Gridinoc, Laurian and Lopez, Vanessa and Guidi, Davide},
      title = {Toward a new generation of Semantic Web applications},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2008},
      volume = {23},
      number = {3},
      pages = {20-28}
    }
    
    Dasiopoulou, S., Mezaris, V., Kompatsiaris, I., Papastathis, V. & Strintzis, M. Knowledge-assisted semantic video object detection {2005} IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
    Vol. {15}({10}), pp. {1210-1224} 
    article DOI  
    Abstract: An approach to knowledge-assisted semantic video object detection based on a multimedia ontology infrastructure is presented. Semantic concepts in the context of the examined domain are defined in an ontology, enriched with qualitative attributes (e.g., color homogeneity), low-level features (e.g., color model components distribution), object spatial relations, and multimedia processing methods (e.g., color clustering). Semantic Web technologies are used for knowledge representation in the RDF(S) metadata standard. Rules in F-logic are defined to describe how tools for multimedia analysis should be applied, depending on concept attributes and low-level features, for the detection of video objects corresponding to the semantic concepts defined in the ontology. This supports flexible and managed execution of various application- and domain-independent multimedia analysis tasks. Furthermore, this semantic analysis approach can be used in semantic annotation and transcoding systems, which take into consideration the user's environment, including preferences, devices used, available network bandwidth and content identity. The proposed approach was tested for the detection of semantic objects on video data of three different domains.
    BibTeX:
    @article{Dasiopoulou2005,
      author = {Dasiopoulou, S and Mezaris, V and Kompatsiaris, I and Papastathis, VK and Strintzis, MG},
      title = {Knowledge-assisted semantic video object detection},
      journal = {IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY},
      year = {2005},
      volume = {15},
      number = {10},
      pages = {1210-1224},
      doi = {{10.1109/TCSVT.2005.854238}}
    }
    
    Davies, J., Studer, R., Sure, Y. & Warren, P. Next generation knowledge management {2005} BT TECHNOLOGY JOURNAL
    Vol. {23}({3}), pp. {175-190} 
    article DOI  
    Abstract: Despite its explosive growth over the last decade, the Web remains essentially a tool to allow humans to access information. The next generation of the Web, dubbed the `Semantic Web', will extend the Web's capability through the increased availability of machine-processable information. These machine-processable descriptions of Web information resources are called meta-data and are associated with ontologies, or conceptualisations of the domain of application. Meta-data and associated ontologies then allow more intelligent software systems to be written, automating the analysis and exploitation of Web-based information. This paper describes how knowledge management can be improved through the adoption of Semantic Web technology. To realise this, a number of different technologies need to be brought together. Their fusion provides the infrastructure which makes semantic knowledge management possible. Specifically, the paper discusses the use of knowledge discovery and human language technology to (semi-)automatically derive the required ontologies and meta-data, along with a methodology to support this process. We describe techniques for management and controlled evolution of ontologies and a set of semantic knowledge access tools for enhanced information access. Finally, a set of application scenarios for the technology is sketched.
    BibTeX:
    @article{Davies2005,
      author = {Davies, J and Studer, R and Sure, Y and Warren, PW},
      title = {Next generation knowledge management},
      journal = {BT TECHNOLOGY JOURNAL},
      year = {2005},
      volume = {23},
      number = {3},
      pages = {175-190},
      doi = {{10.1007/s10550-005-0040-3}}
    }
    
    Davies, N., Fensel, D. & Richardson, M. The future of Web Services {2004} BT TECHNOLOGY JOURNAL
    Vol. {22}({1}), pp. {118-130} 
    article  
    Abstract: Much of the Web's success can be attributed to its simplicity. It offers a straightforward means by which static information could be published and interconnected on a global basis. The Web Services initiative effectively adds computational objects to the static information of yesterday's Web and as such offers a distributed services capability over a network. Web Services have the potential to create new paradigms for both the delivery of software capabilities and the models by which networked enterprises will trade. Today's Web Services technology, useful though it is, will be enhanced over the next 2-5 years by the harnessing of Semantic Web technology to deliver a step change in capability. Web Services provide an easy way to make existing (or indeed new) components available to applications via the Internet. However, currently, Web Services are essentially described using semi-structured natural language mechanisms, which means that considerable human intervention is needed to find and combine Web Services into an end application. The Semantic Web will enable the accessing of Web resources by semantic content rather than just by keywords. Resources (in this case Web Services) are defined in such a way that they can be automatically `understood' and processed by machine. This will enable the realisation of Semantic Web Services, involving the automation of service discovery, acquisition, composition and monitoring. Software agents will be able automatically to create new services from already published services, with potentially huge implications for models of eBusiness. Having identified limitations in current Web Services technology, this paper will survey existing research in Semantic Web Services, most notably USA's DAML-S initiative and the European WSMF work, and describe BT's research into creating a set of tools to support next-generation Semantic Web Services.
    BibTeX:
    @article{Davies2004,
      author = {Davies, NJ and Fensel, D and Richardson, M},
      title = {The future of Web Services},
      journal = {BT TECHNOLOGY JOURNAL},
      year = {2004},
      volume = {22},
      number = {1},
      pages = {118-130}
    }
    
    De Roure, D. & Hendler, J. E-science: The grid and the semantic Web {2004} IEEE INTELLIGENT SYSTEMS
    Vol. {19}({1}), pp. {65-71} 
    article  
    BibTeX:
    @article{DeRoure2004,
      author = {De Roure, D and Hendler, JA},
      title = {E-science: The grid and the semantic Web},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2004},
      volume = {19},
      number = {1},
      pages = {65-71}
    }
    
    De Roure, D., Jennings, N. & Shadbolt, N. The Semantic Grid: Past, present, and future {2005} PROCEEDINGS OF THE IEEE
    Vol. {93}({3}), pp. {669-681} 
    article DOI  
    Abstract: Grid computing offers significant enhancements to our capabilities for computation, information processing, and collaboration, and has exciting ambitions in many fields of endeavor. In this paper we argue that the full richness of the Grid vision, with its application in e-Science, e-Research, or e-Business, requires the ``Semantic Grid.'' The Semantic Grid is an extension of the current Grid in which information and services are given well-defined meaning, better enabling computers and people to work in cooperation. To this end, we outline the requirements of the Semantic Grid, discuss the state of the art in achieving them, and identify the key research challenges in realizing this vision.
    BibTeX:
    @article{DeRoure2005,
      author = {De Roure, D and Jennings, NR and Shadbolt, NR},
      title = {The Semantic Grid: Past, present, and future},
      journal = {PROCEEDINGS OF THE IEEE},
      year = {2005},
      volume = {93},
      number = {3},
      pages = {669-681},
      doi = {{10.1109/JPROC.2004.842781}}
    }
    
    Decker, S., Melnik, S., Van Harmelen, F., Fensel, D., Klein, M., Broekstra, J., Erdmann, M. & Horrocks, I. The semantic Web: The roles of XML and RDF {2000} IEEE INTERNET COMPUTING
    Vol. {4}({5}), pp. {63-74} 
    article  
    BibTeX:
    @article{Decker2000,
      author = {Decker, S and Melnik, S and Van Harmelen, F and Fensel, D and Klein, M and Broekstra, J and Erdmann, M and Horrocks, I},
      title = {The semantic Web: The roles of XML and RDF},
      journal = {IEEE INTERNET COMPUTING},
      year = {2000},
      volume = {4},
      number = {5},
      pages = {63-74}
    }
    
    Decker, S., Mitra, P. & Melnik, S. Framework for the semantic Web: An RDF tutorial {2000} IEEE INTERNET COMPUTING
    Vol. {4}({6}), pp. {68-73} 
    article  
    BibTeX:
    @article{Decker2000a,
      author = {Decker, S and Mitra, P and Melnik, S},
      title = {Framework for the semantic Web: An RDF tutorial},
      journal = {IEEE INTERNET COMPUTING},
      year = {2000},
      volume = {4},
      number = {6},
      pages = {68-73}
    }
    
    Demirkan, H., Kauffman, R.J., Vayghan, J.A., Fill, H.-G., Karagiannis, D. & Maglio, P.P. Service-oriented technology and management: Perspectives on research and practice for the coming decade {2008} ELECTRONIC COMMERCE RESEARCH AND APPLICATIONS
    Vol. {7}({4}), pp. {356-376} 
    article DOI  
    Abstract: Service-oriented technologies and management have gained attention in the past few years, promising a way to create the basis for agility so that companies can deliver new, more flexible business processes that harness the value of the services approach from a customer's perspective. Service-oriented approaches are used for developing software applications and software-as-a-service that can be sourced as virtual hardware resources, including on-demand and utility computing. The driving forces come from the software engineering community and the e-business community. Service-oriented architecture promotes the loose coupling of software components so that interoperability across programming languages and platforms, and dynamic choreography of business processes can be achieved. Nevertheless, one of today's most pervasive and perplexing challenges for senior managers deals with how and when to make a commitment to the new practices. The purpose of this article is to shed light on multiple issues associated with service-oriented technologies and management by examining several interrelated questions: why is it appropriate now to study the related business problems from the point of view of services research? What new conceptual frameworks and theoretical perspectives are appropriate for studying service-oriented technologies and management? What value will a service science and business process modeling offer to the firms that adopt them? And, how can these approaches be implemented so as to address the major challenges that organizations face with technology, information and strategy? We contribute new knowledge in this area by tying the economics and information technology strategy perspectives to the semantic and design science perspectives for a broader audience. Usually the more technical perspective is offered on a standalone basis, and confined to the systems space - even when the discussion is about business processes. This article also offers insights on these issues from the multiple perspectives of industry and academic thought leaders. (C) 2008 Published by Elsevier B. V.
    BibTeX:
    @article{Demirkan2008,
      author = {Demirkan, Haluk and Kauffman, Robert J. and Vayghan, Jamshid A. and Fill, Hans-Georg and Karagiannis, Dimitris and Maglio, Paul P.},
      title = {Service-oriented technology and management: Perspectives on research and practice for the coming decade},
      journal = {ELECTRONIC COMMERCE RESEARCH AND APPLICATIONS},
      year = {2008},
      volume = {7},
      number = {4},
      pages = {356-376},
      note = {International Conference on Electronic Commerce, Minneapolis, MN, AUG 21, 2007},
      doi = {{10.1016/j.elerap.2008.07.002}}
    }
    
    Denker, G., Kagal, L., Finin, T., Paolucci, M. & Sycara, K. Security for DAML web services: Annotation and matchmaking {2003}
    Vol. {2870}SEMANTIC WEB - ISWC 2003, pp. {335-350} 
    inproceedings  
    Abstract: In the next generation of the Internet, semantic annotations will enable software agents to extract and interpret web content more quickly than is possible with current techniques. The focus of this paper is to develop security annotations for web services that are represented in DAML-S and used by agents. We propose several security-related ontologies that are designed to represent well-known security concepts. These ontologies are used to describe the security requirements and capabilities of web service providers and requesting agents. A reasoning engine decides whether agents and web services have comparable security characteristics. Our prototypical implementation uses the Java Theorem Prover from Stanford for deciding the degree to which the requirements and capabilities match, based on our matching algorithm. The security reasoner is integrated with the Semantic Matchmaker from CMU, giving it the ability to provide security brokering between agents and services.
    BibTeX:
    @inproceedings{Denker2003,
      author = {Denker, G and Kagal, L and Finin, T and Paolucci, M and Sycara, K},
      title = {Security for DAML web services: Annotation and matchmaking},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {335-350},
      note = {2nd International Semantic Web Conference, SANIBEL, FLORIDA, OCT 20-23, 2003}
    }
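
    The core of such matchmaking can be pictured as comparing a requester's required security mechanisms against a service's advertised capabilities. A toy sketch with plain set semantics follows; the concept names and match degrees are illustrative assumptions, whereas the paper's matcher reasons over DAML ontologies:
    
      def match_degree(required, offered):
          if required == offered:
              return "exact"
          if required <= offered:        # everything required is offered
              return "subsumed"
          if required & offered:
              return "partial"
          return "fail"
    
      print(match_degree({"authentication", "encryption"},
                         {"authentication", "encryption", "signature"}))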
    
    Devedzic, V. Key issues in next-generation Web-based education {2003} IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART C-APPLICATIONS AND REVIEWS
    Vol. {33}({3}), pp. {339-349} 
    article DOI  
    Abstract: This paper analyzes and categorizes limitations and weaknesses of current Web-based educational technology, suggests the steps to overcome them, and presents a framework for developing next-generation Web-based educational systems. It suggests developing Web-based educational applications with more theory- and content-oriented intelligence, more semantic interoperation between two or more educational applications, and realistic technological support to achieve such kinds of flexibility.
    BibTeX:
    @article{Devedzic2003,
      author = {Devedzic, VB},
      title = {Key issues in next-generation Web-based education},
      journal = {IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART C-APPLICATIONS AND REVIEWS},
      year = {2003},
      volume = {33},
      number = {3},
      pages = {339-349},
      doi = {{10.1109/TSMCC.2003.817361}}
    }
    
    Di Giacomo, E., Didimo, W., Grilli, L. & Liotta, G. Graph visualization techniques for web clustering engines {2007} IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS
    Vol. {13}({2}), pp. {294-304} 
    article  
    Abstract: One of the most challenging issues in mining information from the World Wide Web is the design of systems that present the data to the end user by clustering them into meaningful semantic categories. We show that the analysis of the results of a clustering engine can significantly take advantage of enhanced graph drawing and visualization techniques. We propose a graph-based user interface for Web clustering engines that makes it possible for the user to explore and visualize the different semantic categories and their relationships at the desired level of detail.
    BibTeX:
    @article{DiGiacomo2007,
      author = {Di Giacomo, Emilio and Didimo, Walter and Grilli, Luca and Liotta, Giuseppe},
      title = {Graph visualization techniques for web clustering engines},
      journal = {IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS},
      year = {2007},
      volume = {13},
      number = {2},
      pages = {294-304}
    }
    
    Di Noia, T., Di Sciascio, E. & Donini, F.M. Semantic matchmaking as non-monotonic reasoning: A description logic approach {2007} JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH
    Vol. {29}, pp. {269-307} 
    article  
    Abstract: Matchmaking arises when supply and demand meet in an electronic marketplace, or when agents search for a web service to perform some task, or even when recruiting agencies match curricula and job profiles. In such open environments, the objective of a matchmaking process is to discover the best available offers to a given request. We address the problem of matchmaking from a knowledge representation perspective, with a formalization based on Description Logics. We devise Concept Abduction and Concept Contraction as non-monotonic inferences in Description Logics suitable for modeling matchmaking in a logical framework, and prove some related complexity results. We also present reasonable algorithms for semantic matchmaking based on the devised inferences, and prove that they obey some commonsense properties. Finally, we report on the implementation of the proposed matchmaking framework, which has been used both as a mediator in e-marketplaces and for semantic web services discovery.
    BibTeX:
    @article{DiNoia2007,
      author = {Di Noia, Tommaso and Di Sciascio, Eugenio and Donini, Francesco M.},
      title = {Semantic matchmaking as non-monotonic reasoning: A description logic approach},
      journal = {JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH},
      year = {2007},
      volume = {29},
      pages = {269-307}
    }
    
    Dieng-Kuntz, R., Minier, D., Ruzicka, M., Corby, F., Corby, O. & Alamarguy, L. Building and using a medical ontology for knowledge management and cooperative work in a health care network {2006} COMPUTERS IN BIOLOGY AND MEDICINE
    Vol. {36}({7-8}), pp. {871-892} 
    article DOI  
    Abstract: In the context of a health care network, we describe our method for reconstituting a medical ontology by translating a medical database (DB) into the RDF(S) language. We then show how we extended this ontology, among other means, through natural language processing of a textual corpus. Finally, we present the construction of a tool called ``Virtual Staff'', which enables a cooperative diagnosis by some of the health care network actors by relying on this medical ontology and on the creation of SOAP and QOC graphs. (c) 2005 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Dieng-Kuntz2006,
      author = {Dieng-Kuntz, Rose and Minier, David and Ruzicka, Marek and Corby, Frederic and Corby, Olivier and Alamarguy, Laurent},
      title = {Building and using a medical ontology for knowledge management and cooperative work in a health care network},
      journal = {COMPUTERS IN BIOLOGY AND MEDICINE},
      year = {2006},
      volume = {36},
      number = {7-8},
      pages = {871-892},
      doi = {{10.1016/j.compbiomed.2005.04.015}}
    }
    
    Ding, L., Pan, R., Finin, T., Joshi, A., Peng, Y. & Kolari, P. Finding and ranking knowledge on the Semantic Web {2005}
    Vol. {3729}SEMANTIC WEB - ISWC 2005, PROCEEDINGS, pp. {156-170} 
    inproceedings  
    Abstract: Swoogle helps software agents and knowledge engineers find Semantic Web knowledge encoded in RDF and OWL documents on the Web. Navigating such a Semantic Web on the Web is difficult due to the paucity of explicit hyperlinks beyond the namespaces in URIrefs and the few inter-document links like rdfs:seeAlso and owl:imports. To address this issue, this paper proposes a novel Semantic Web navigation model providing additional navigation paths through Swoogle's search services, such as the Ontology Dictionary. Using this model, we have developed algorithms for ranking the importance of Semantic Web objects at three levels of granularity: documents, terms and RDF graphs. Experiments show that Swoogle outperforms conventional web search engines and other ontology libraries in finding more ontologies, ranking their importance, and thus promoting the use and emergence of consensus ontologies.
    BibTeX:
    @inproceedings{Ding2005,
      author = {Ding, L and Pan, R and Finin, T and Joshi, A and Peng, Y and Kolari, P},
      title = {Finding and ranking knowledge on the Semantic Web},
      booktitle = {SEMANTIC WEB - ISWC 2005, PROCEEDINGS},
      year = {2005},
      volume = {3729},
      pages = {156-170},
      note = {4th International Semantic Web Conference (ISWC 2005), Galway, IRELAND, NOV 06-10, 2005}
    }
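
    The document-level ranking belongs to the family of PageRank-style link analysis over inter-document references (owl:imports, rdfs:seeAlso, namespace use). A generic sketch of such an iteration follows; it is not Swoogle's exact weighting, and the link graph is hypothetical:
    
      def rank(links, damping=0.85, iters=50):
          # links: document -> list of referenced documents; mass reaching
          # documents with no outgoing links is simply dropped in this sketch
          nodes = set(links) | {t for ts in links.values() for t in ts}
          r = {n: 1.0 / len(nodes) for n in nodes}
          for _ in range(iters):
              nr = {n: (1 - damping) / len(nodes) for n in nodes}
              for src, targets in links.items():
                  for t in targets:
                      nr[t] += damping * r[src] / len(targets)
              r = nr
          return r
    
      print(rank({"docA": ["ontoX"], "docB": ["ontoX", "docA"], "ontoX": []}))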
    
    Ding, Y., Fensel, D., Klein, M. & Omelayenko, B. The semantic web: yet another hip? {2002} DATA & KNOWLEDGE ENGINEERING
    Vol. {41}({2-3}), pp. {205-227} 
    article  
    Abstract: Currently, computers are changing from single, isolated devices into entry points to a worldwide network of information exchange and business transactions called the World Wide Web (WWW). For this reason, support in data, information, and knowledge exchange has become a key issue in current computer technology. The success of the WWW has made it increasingly difficult to find, access, present, and maintain the information required by a wide variety of users. In response to this problem, many new research initiatives and commercial enterprises have been set up to enrich available information with machine processable semantics. This semantic web will provide intelligent access to heterogeneous, distributed information, enabling software products (agents) to mediate between user needs and the information sources available. This paper summarizes ongoing research in the area of the semantic web, focusing especially on ontology technology. (C) 2002 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Ding2002,
      author = {Ding, Y and Fensel, D and Klein, M and Omelayenko, B},
      title = {The semantic web: yet another hip?},
      journal = {DATA & KNOWLEDGE ENGINEERING},
      year = {2002},
      volume = {41},
      number = {2-3},
      pages = {205-227}
    }
    
    Ding, Y. & Foo, S. Ontology research and development. Part 2 - a review of ontology mapping and evolving {2002} JOURNAL OF INFORMATION SCIENCE
    Vol. {28}({5}), pp. {375-388} 
    article  
    Abstract: This is the second of a two-part paper to review ontology research and development, in particular, ontology mapping and evolving. Ontology is defined as a formal explicit specification of a shared conceptualization. Ontology itself is not a static model, so it must have the potential to capture changes of meanings and relations. As such, mapping and evolving ontologies is an essential part of ontology learning and development. Ontology mapping is concerned with reusing existing ontologies, expanding and combining them by some means and enabling a larger pool of information and knowledge in different domains to be integrated to support new communication and use. Ontology evolving, likewise, is concerned with maintaining existing ontologies and extending them as appropriate when new information or knowledge is acquired. It is apparent from the reviews that current research into semi-automatic or automatic ontologies, in all three aspects of generation, mapping and evolving, has so far achieved limited success. Expert human input is essential in almost all cases. Achievements have been made largely in the form of tools and aids to assist the human expert. Many research challenges remain in this field, and many such challenges need to be overcome if the next generation of the Semantic Web is to be realized.
    BibTeX:
    @article{Ding2002a,
      author = {Ding, Y and Foo, S},
      title = {Ontology research and development. Part 2 - a review of ontology mapping and evolving},
      journal = {JOURNAL OF INFORMATION SCIENCE},
      year = {2002},
      volume = {28},
      number = {5},
      pages = {375-388}
    }
    
    Ding, Y. & Foo, S. Ontology research and development. Part I - a review of ontology generation {2002} JOURNAL OF INFORMATION SCIENCE
    Vol. {28}({2}), pp. {123-136} 
    article  
    Abstract: Ontology is an important emerging discipline that has the huge potential to improve information organization, management and understanding. It has a crucial role to play in enabling content-based access, interoperability, communications, and providing qualitatively new levels of services on the next wave of web transformation in the form of the Semantic Web. The issues pertaining to ontology generation, mapping and maintenance are critical key areas that need to be understood and addressed. This survey is presented in two parts. The first part reviews the state-of-the-art techniques and work done on semi-automatic and automatic ontology generation, as well as the problems facing such research. The second complementary survey is dedicated to ontology mapping and ontology `evolving'. Through this survey, we have identified that shallow information extraction and natural language processing techniques are deployed to extract concepts or classes from free-text or semi-structured data. However, relation extraction is a very complex and difficult issue to resolve and it has turned out to be the main impediment to ontology learning and applicability. Further research is encouraged to find appropriate and efficient ways to detect or identify relations through semi-automatic and automatic means.
    BibTeX:
    @article{Ding2002b,
      author = {Ding, Y and Foo, S},
      title = {Ontology research and development. Part I - a review of ontology generation},
      journal = {JOURNAL OF INFORMATION SCIENCE},
      year = {2002},
      volume = {28},
      number = {2},
      pages = {123-136}
    }
    
    Ding, Y., Sun, H. & Hao, K. A bio-inspired emergent system for intelligent Web service composition and management {2007} KNOWLEDGE-BASED SYSTEMS
    Vol. {20}({5}), pp. {457-465} 
    article DOI  
    Abstract: Important mechanisms of the neuroendocrine-immune (NEI) system inspire the design of a decentralized, evolutionary, scalable, and adaptive system for Web service composition and management. We first abstract a novel intelligent network model inspired by the NEI system. Based on this model, we then propose a method for Web service emergence by designing a bio-entity as an autonomous agent to represent a Web service. As such, automatic composition and dynamic management of Web services can be achieved. We also build a computation platform which allows the bio-entities to cooperate over Web services and exploit the capabilities of their partners. Finally, the simulation results on the platform show that Web service emergence can be achieved through self-organizing, cooperating, and compositing. The proposed method provides a novel solution for intelligent composition and management of Web services. (C) 2007 Published by Elsevier B.V.
    BibTeX:
    @article{Ding2007,
      author = {Ding, Yongsheng and Sun, Hongbin and Hao, Kuangrong},
      title = {A bio-inspired emergent system for intelligent Web service composition and management},
      journal = {KNOWLEDGE-BASED SYSTEMS},
      year = {2007},
      volume = {20},
      number = {5},
      pages = {457-465},
      note = {International Conference on Intelligent Systems and Knowledge Engineering, Shanghai, PEOPLES R CHINA, APR 06-07, 2006},
      doi = {{10.1016/j.knosys.2007.01.007}}
    }
    
    Doan, A., Madhavan, J., Dhamankar, R., Domingos, P. & Halevy, A. Learning to match ontologies on the Semantic Web {2003} VLDB JOURNAL
    Vol. {12}({4}), pp. {303-319} 
    article DOI  
    Abstract: On the Semantic Web, data will inevitably come from many different ontologies, and information processing across ontologies is not possible without knowing the semantic mappings between them. Manually finding such mappings is tedious, error-prone, and clearly not possible on the Web scale. Hence the development of tools to assist in the ontology mapping process is crucial to the success of the Semantic Web. We describe GLUE, a system that employs machine learning techniques to find such mappings. Given two ontologies, for each concept in one ontology GLUE finds the most similar concept in the other ontology. We give well-founded probabilistic definitions to several practical similarity measures and show that GLUE can work with all of them. Another key feature of GLUE is that it uses multiple learning strategies, each of which exploits well a different type of information either in the data instances or in the taxonomic structure of the ontologies. To further improve matching accuracy, we extend GLUE to incorporate commonsense knowledge and domain constraints into the matching process. Our approach is thus distinguished in that it works with a variety of well-defined similarity notions and that it efficiently incorporates multiple types of knowledge. We describe a set of experiments on several real-world domains and show that GLUE proposes highly accurate semantic mappings. Finally, we extend GLUE to find complex mappings between ontologies and describe experiments that show the promise of the approach.
    BibTeX:
    @article{Doan2003,
      author = {Doan, A and Madhavan, J and Dhamankar, R and Domingos, P and Halevy, A},
      title = {Learning to match ontologies on the Semantic Web},
      journal = {VLDB JOURNAL},
      year = {2003},
      volume = {12},
      number = {4},
      pages = {303-319},
      doi = {{10.1007/s00778-003-0104-2}}
    }
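
    One of the well-founded probabilistic similarity measures the paper discusses is the Jaccard coefficient, P(A and B) / P(A or B), estimated from concept instance sets. A minimal sketch, with hypothetical instance names:
    
      def jaccard_sim(instances_a, instances_b, universe_size):
          a, b = set(instances_a), set(instances_b)
          p_joint = len(a & b) / universe_size      # estimate of P(A and B)
          p_union = len(a | b) / universe_size      # estimate of P(A or B)
          return p_joint / p_union if p_union else 0.0
    
      print(jaccard_sim({"i1", "i2", "i3"}, {"i2", "i3", "i4"}, universe_size=10))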
    
    Dogac, A., Laleci, G., Kirbas, S., Kabak, Y., Sinir, S., Yildz, A. & Gurcan, Y. Artemis: Deploying semantically enriched Web services in the healthcare domain {2006} INFORMATION SYSTEMS
    Vol. {31}({4-5}), pp. {321-339} 
    article DOI  
    Abstract: An essential element in defining the semantics of Web services is the domain knowledge. Medical informatics is one of the few domains to have considerable domain knowledge exposed through standards. These standards offer significant value in terms of expressing the semantics of Web services in the healthcare domain. In this paper, we describe the architecture of the Artemis project, which exploits ontologies based on the domain knowledge exposed by the healthcare information standards through standard bodies like HL7, CEN TC251, ISO TC215 and GEHR. We use these standards for two purposes: first, to describe the Web service functionality semantics, that is, the meaning associated with what a Web service does; and second, to describe the meaning associated with the messages or documents exchanged through Web services. The Artemis Web service architecture uses ontologies to describe semantics but it does not propose globally agreed ontologies; rather, healthcare institutes reconcile their semantic differences through a mediator component. The mediator component uses ontologies based on prominent healthcare standards as references to facilitate semantic mediation among involved institutes. Mediators have a P2P communication architecture to provide scalability and to facilitate the discovery of other mediators. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Dogac2006,
      author = {Dogac, A and Laleci, GB and Kirbas, S and Kabak, Y and Sinir, SS and Yildz, A and Gurcan, Y},
      title = {Artemis: Deploying semantically enriched Web services in the healthcare domain},
      journal = {INFORMATION SYSTEMS},
      year = {2006},
      volume = {31},
      number = {4-5},
      pages = {321-339},
      doi = {{10.1016/j.is.2005.02.006}}
    }
    
    Donderler, M., Saykol, E., Arslan, U., Ulusoy, O. & Gudukbay, U. BilVideo: Design and implementation of a video database management system {2005} MULTIMEDIA TOOLS AND APPLICATIONS
    Vol. {27}({1}), pp. {79-104} 
    article  
    Abstract: With the advances in information technology, the amount of multimedia data captured, produced, and stored is increasing rapidly. As a consequence, multimedia content is widely used for many applications in today's world, and hence a need for organizing this data and accessing it from repositories with vast amounts of information has been a driving stimulus both commercially and academically. In compliance with this inevitable trend, first image and especially later video database management systems have attracted a great deal of attention, since traditional database systems are designed to deal with alphanumeric information only, thereby not being suitable for multimedia data. In this paper, a prototype video database management system, which we call BilVideo, is introduced. The system architecture of BilVideo is original in that it provides full support for spatio-temporal queries that contain any combination of spatial, temporal, object-appearance, external-predicate, trajectory-projection, and similarity-based object-trajectory conditions by a rule-based system built on a knowledge-base, while utilizing an object-relational database to respond to semantic (keyword, event/activity, and category-based), color, shape, and texture queries. The parts of BilVideo (Fact-Extractor, Video-Annotator, its Web-based visual query interface, and its SQL-like textual query language) are presented, as well. Moreover, our query processing strategy is also briefly explained.
    BibTeX:
    @article{Donderler2005,
      author = {Donderler, ME and Saykol, E and Arslan, U and Ulusoy, O and Gudukbay, U},
      title = {BilVideo: Design and implementation of a video database management system},
      journal = {MULTIMEDIA TOOLS AND APPLICATIONS},
      year = {2005},
      volume = {27},
      number = {1},
      pages = {79-104}
    }
    
    Dou, D., McDermott, D. & Qi, P. Ontology translation on the semantic Web {2003}
    Vol. {2888}ON THE MOVE TO MEANINGFUL INTERNET SYSTEMS 2003: COOPIS, DOA, AND ODBASE, pp. {952-969} 
    inproceedings  
    Abstract: Ontologies as means for formally specifying the vocabulary and relationship of concepts are seen playing a key role on the Semantic Web. However, the Web's distributed nature makes ontology translation one of the most difficult problems that web-based agents must cope with when they share information. Ontology translation is required when translating datasets, generating ontology extensions and querying through different ontologies. OntoMerge, an online system based on ontology merging and automated reasoning, can implement ontology translation with inputs and outputs in DAML+OIL or other web languages. The merge of two related ontologies is obtained by taking the union of the concepts and the axioms defining them. We add bridging axioms not only as ``bridges'' between concepts in two related ontologies but also to make this merge into a new ontology for further merging with other ontologies. Our uniform internal representation, Web-PDDL, is a strongly typed first-order logic language for web applications, used to separate ontology translation into syntactic translation and semantic translation. Syntactic translation is done by an automatic translator between Web-PDDL and DAML+OIL or other web languages. Semantic translation is implemented using an inference engine (OntoEngine) which processes assertions and queries in Web-PDDL syntax, running in either a data-driven (forward chaining) or demand-driven (backward chaining) way.
    BibTeX:
    @inproceedings{Dou2003,
      author = {Dou, DJ and McDermott, D and Qi, PS},
      title = {Ontology translation on the semantic Web},
      booktitle = {ON THE MOVE TO MEANINGFUL INTERNET SYSTEMS 2003: COOPIS, DOA, AND ODBASE},
      year = {2003},
      volume = {2888},
      pages = {952-969},
      note = {OTM Confederated International Conference CoopIS, DOA and ODBASE, CATANIA, ITALY, NOV 03-07, 2003}
    }
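
    The ``bridging axioms'' of the abstract are, in the simplest case, implications between the concepts and relations of the two merged ontologies. A schematic first-order example, with hypothetical namespaces ont1 and ont2 (an illustration, not an axiom from the paper):
    Sketch (LaTeX):
      \forall x\,\big(\mathrm{ont1{:}Car}(x) \rightarrow \mathrm{ont2{:}Automobile}(x)\big)
      \qquad
      \forall x\,\forall y\,\big(\mathrm{ont1{:}price}(x,y) \rightarrow \mathrm{ont2{:}cost}(x,y)\big)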
    
    Douglis, F., Feldmann, A., Krishnamurthy, B. & Mogul, J. Rate of change and other metrics: a live study of the World Wide Web {1997} PROCEEDINGS OF THE USENIX SYMPOSIUM ON INTERNET TECHNOLOGIES AND SYSTEMS, pp. {147-158}  inproceedings  
    Abstract: Caching in the World Wide Web is based on two critical assumptions: that a significant fraction of requests reaccess resources that have already been retrieved; and that those resources do not change between accesses. We tested the validity of these assumptions, and their dependence on characteristics of Web resources, including access rate, age at time of reference, content type, resource size, and Internet top-level domain. We also measured the rate at which resources change, and the prevalence of duplicate copies in the Web. We quantified the potential benefit of a shared proxy-caching server in a large environment by using traces that were collected at the Internet connection points for two large corporations, representing significant numbers of references. Only 22% of the resources referenced in the traces we analyzed were accessed more than once, but about half of the references were to those multiply-referenced resources. Of this half, 13% were to a resource that had been modified since the previous traced reference to it. We found that the content type and rate of access have a strong influence on these metrics, the domain has a moderate influence, and size has little effect. In addition, we studied other aspects of the rate of change, including semantic differences such as the insertion or deletion of anchors, phone numbers, and email addresses.
    BibTeX:
    @inproceedings{Douglis1997,
      author = {Douglis, F and Feldmann, A and Krishnamurthy, B and Mogul, J},
      title = {Rate of change and other metrics: a live study of the World Wide Web},
      booktitle = {PROCEEDINGS OF THE USENIX SYMPOSIUM ON INTERNET TECHNOLOGIES AND SYSTEMS},
      year = {1997},
      pages = {147-158},
      note = {USENIX Symposium on Internet Technologies and Systems, MONTEREY, CA, DEC 08-11, 1997}
    }
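
    The headline numbers in this study are all computable from a trace of (url, access time, last-modified) records: the share of resources accessed more than once, the share of references going to those resources, and the fraction of re-references that hit a modified resource. The sketch below reconstructs those definitions; the record layout is an assumption, not the paper's trace format.
    Sketch (Python):
      from collections import defaultdict

      def trace_metrics(records):
          # records: iterable of (url, access_time, last_modified).
          by_url = defaultdict(list)
          for url, t, last_modified in sorted(records, key=lambda r: r[1]):
              by_url[url].append(last_modified)
          multi = {u for u, v in by_url.items() if len(v) > 1}
          rerefs = changed = 0
          for u in multi:
              stamps = by_url[u]
              for prev, cur in zip(stamps, stamps[1:]):
                  rerefs += 1
                  changed += prev != cur  # modified since previous reference
          return {
              "resources accessed more than once": len(multi) / len(by_url),
              "references to those resources": sum(len(by_url[u]) for u in multi) / len(records),
              "re-references hitting modified": changed / rerefs if rerefs else 0.0,
          }

      trace = [("a", 1, "v1"), ("a", 2, "v1"), ("a", 3, "v2"), ("b", 4, "v1")]
      print(trace_metrics(trace))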
    
    Duke, A., Davies, J. & Richardson, M. Enabling a scalable service-oriented architecture with semantic Web Services {2005} BT TECHNOLOGY JOURNAL
    Vol. {23}({3}), pp. {191-201} 
    article DOI  
    Abstract: Service-oriented architectures (SOAs) aim to improve the ability of organisations to quickly create and reconfigure IT systems to support new and rapidly changing customer services. The key idea is to move away from monolithic systems, towards systems which are designed as a number of interoperable components, for enhanced flexibility and reuse. This paper will describe how semantic descriptions of such services can improve the service discovery and composition processes and move towards an SOA that is more dynamic and scalable. An account of a case study involving BT Wholesale's B2B gateway is given.
    BibTeX:
    @article{Duke2005,
      author = {Duke, A and Davies, J and Richardson, M},
      title = {Enabling a scalable service-oriented architecture with semantic Web Services},
      journal = {BT TECHNOLOGY JOURNAL},
      year = {2005},
      volume = {23},
      number = {3},
      pages = {191-201},
      doi = {{10.1007/s10550-005-0041-2}}
    }
    
    Dzbor, M., Domingue, J. & Motta, E. Magpie - Towards a Semantic Web browser {2003}
    Vol. {2870}SEMANTIC WEB - ISWC 2003, pp. {690-705} 
    inproceedings  
    Abstract: Web browsing involves two tasks: finding the right web page and then making sense of its content. So far, research has focused on supporting the task of finding web resources through `standard' information retrieval mechanisms, or semantics-enhanced search. Much less attention has been paid to the second problem. In this paper we describe Magpie, a tool which supports the interpretation of web pages. Magpie offers complementary knowledge sources, which a reader can call upon to quickly gain access to any background knowledge relevant to a web resource. Magpie automatically associates an ontology-based semantic layer to web resources, allowing relevant services to be invoked within a standard web browser. Hence, Magpie may be seen as a step towards a semantic web browser. The functionality of Magpie is illustrated using examples of how it has been integrated with our lab's web resources.
    BibTeX:
    @inproceedings{Dzbor2003,
      author = {Dzbor, M and Domingue, J and Motta, E},
      title = {Magpie - Towards a Semantic Web browser},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {690-705},
      note = {2nd International Semantic Web Conference, SANIBEL, FLORIDA, OCT 20-23, 2003}
    }
    
    Dzitac, I. & Barbat, B.E. Artificial Intelligence plus Distributed Systems = Agents {2009} INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL
    Vol. {4}({1}), pp. {17-26} 
    article  
    Abstract: The connection with Wirth's book goes beyond the title, albeit confining the area to modern Artificial Intelligence (AI). Whereas thirty years ago, to devise effective programs, it became necessary to enhance the classical algorithmic framework with approaches applied to limited and focused subdomains, in the context of broadband technology and the semantic web, with applications running in open, heterogeneous, dynamic and uncertain environments, current paradigms are not enough, because of the shift from programs to processes. Besides its structure as a position paper, to give more weight to some basic assertions, results of recent research are abridged and commented upon in line with new paradigms. Among the conclusions: a) Non-deterministic software is unavoidable; its development entails not just new design principles but new computing paradigms. b) Agent-oriented systems, to be effectual, should merge conventional agent design with approaches employed in advanced distributed systems (where parallelism is intrinsic to the problem, not just a means to speed it up).
    BibTeX:
    @article{Dzitac2009,
      author = {Dzitac, Ioan and Barbat, Boldur E.},
      title = {Artificial Intelligence plus Distributed Systems = Agents},
      journal = {INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL},
      year = {2009},
      volume = {4},
      number = {1},
      pages = {17-26}
    }
    
    Eiter, T., Ianni, G., Lukasiewicz, T., Schindlauer, R. & Tompits, H. Combining answer set programming with description logics for the semantic Web {2008} ARTIFICIAL INTELLIGENCE
    Vol. {172}({12-13}), pp. {1495-1539} 
    article DOI  
    Abstract: We propose a combination of logic programming under the answer set semantics with the description logics SHIF(D) and SHOIN(D), which underlie the Web ontology languages OWL Lite and OWL DL, respectively. To this end, we introduce description logic programs (or dl-programs), which consist of a description logic knowledge base L and a finite set P of description logic rules (or dl-rules). Such rules are similar to usual rules in nonmonotonic logic programs, but they may also contain queries to L, possibly under default negation, in their bodies. They allow for building rules on top of ontologies but also, to a limited extent, building ontologies on top of rules. We define a suite of semantics for various classes of dl-programs, which conservatively extend the standard semantics of the respective classes and coincide with it in absence of a description logic knowledge base. More concretely, we generalize positive, stratified, and arbitrary normal logic programs to dl-programs, and define a Herbrand model semantics for them. We show that they have similar properties as ordinary logic programs, and also provide fixpoint characterizations in terms of (iterated) consequence operators. For arbitrary dl-programs, we define answer sets by generalizing Gelfond and Lifschitz's notion of a transform, leading to a strong and a weak answer set semantics, which are based on reductions to the semantics of positive dl-programs and ordinary positive logic programs, respectively. We also show how the weak answer sets can be computed utilizing answer sets of ordinary normal logic programs. Furthermore, we show how some advanced reasoning tasks for the Semantic Web, including different forms of closed-world reasoning and default reasoning, as well as DL-safe rules, can be realized on top of dl-programs. Finally, we give a precise picture of the computational complexity of dl-programs, and we describe efficient algorithms and a prototype implementation of dl-programs which is available on the Web. (c) 2008 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Eiter2008,
      author = {Eiter, Thomas and Ianni, Giovambattista and Lukasiewicz, Thomas and Schindlauer, Roman and Tompits, Hans},
      title = {Combining answer set programming with description logics for the semantic Web},
      journal = {ARTIFICIAL INTELLIGENCE},
      year = {2008},
      volume = {172},
      number = {12-13},
      pages = {1495-1539},
      note = {9th International Conference on Principles of Knowledge Representation and Reasoning, Whistler, CANADA, JUN 02-05, 2004},
      doi = {{10.1016/j.artint.2008.04.002}}
    }
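
    The answer set semantics that dl-programs generalize is Gelfond and Lifschitz's: a set of atoms is an answer set iff it equals the least model of the program's reduct with respect to itself. A brute-force sketch for tiny ground normal programs, without the description logic queries the paper adds:
    Sketch (Python):
      from itertools import chain, combinations

      def answer_sets(atoms, rules):
          # rules: list of (head, positive_body, negative_body) over ground atoms.
          def minimal_model(definite_rules):
              model = set()  # least model via fixpoint iteration
              while True:
                  new = {h for h, pos in definite_rules if pos <= model}
                  if new <= model:
                      return model
                  model |= new

          def powerset(s):
              s = list(s)
              return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

          found = []
          for cand in map(set, powerset(atoms)):
              # Gelfond-Lifschitz reduct: drop rules whose negative body
              # intersects the candidate; strip negation from the rest.
              reduct = [(h, pos) for h, pos, neg in rules if not (neg & cand)]
              if minimal_model(reduct) == cand:
                  found.append(cand)
          return found

      # p :- not q.   q :- not p.   Answer sets: {p} and {q}.
      rules = [("p", set(), {"q"}), ("q", set(), {"p"})]
      print(answer_sets({"p", "q"}, rules))  # [{'p'}, {'q'}] (order may vary)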
    
    Eiter, T., Ianni, G., Schindlauer, R. & Tompits, H. Effective integration of declarative rules with external evaluations for Semantic-Web reasoning {2006}
    Vol. {4011}SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS, pp. {273-287} 
    inproceedings  
    Abstract: Towards providing a suitable tool for building the Rule Layer of the Semantic Web, HEX-programs have been introduced as a special kind of logic programs featuring capabilities for higher-order reasoning, interfacing with external sources of computation, and default negation. Their semantics is based on the notion of answer sets, providing a transparent interoperability with the Ontology Layer of the Semantic Web and full declarativity. In this paper, we identify classes of HEX-programs feasible for implementation yet keeping the desirable advantages of the full language. A general method for combining and evaluating sub-programs belonging to arbitrary classes is introduced, thus enlarging the variety of programs whose execution is practicable. Implementation activity on the current prototype is also reported.
    BibTeX:
    @inproceedings{Eiter2006,
      author = {Eiter, Thomas and Ianni, Giovambattista and Schindlauer, Roman and Tompits, Hans},
      title = {Effective integration of declarative rules with external evaluations for Semantic-Web reasoning},
      booktitle = {SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS},
      year = {2006},
      volume = {4011},
      pages = {273-287},
      note = {3rd European Semantic Web Conference, Budva, SERBIA MONTENEG, JUN 11-14, 2006}
    }
    
    Eiter, T., Lukasiewicz, T., Schindlauer, R. & Tompits, H. Well-founded semantics for description logic programs in the Semantic Web {2004}
    Vol. {3323}RULES AND RULE MARKUP LANGUAGES FOR THE SEMANTIC WEB, PROCEEDINGS, pp. {81-97} 
    inproceedings  
    Abstract: In previous work, towards the integration of rules and ontologies in the Semantic Web, we have proposed a combination of logic programming under the answer set semantics with the description logics SHIF(D) and SHOIN(D), which underlie the Web ontology languages OWL Lite and OWL DL, respectively. More precisely, we have introduced description logic programs (or dl-programs), which consist of a description logic knowledge base L and a finite set of description logic rules P, and we have defined their answer set semantics. In this paper, we continue this line of research. Here, as a central contribution, we present the well-founded semantics for dl-programs, and we analyze its semantic properties. In particular, we show that it generalizes the well-founded semantics for ordinary normal programs. Furthermore, we show that in the general case, the well-founded semantics of dl-programs is a partial model that approximates the answer set semantics, whereas in the positive and the stratified case, it is a total model that coincides with the answer set semantics. Finally, we also provide complexity results for dl-programs under the well-founded semantics.
    BibTeX:
    @inproceedings{Eiter2004,
      author = {Eiter, T and Lukasiewicz, T and Schindlauer, R and Tompits, H},
      title = {Well-founded semantics for description logic programs in the Semantic Web},
      booktitle = {RULES AND RULE MARKUP LANGUAGES FOR THE SEMANTIC WEB, PROCEEDINGS},
      year = {2004},
      volume = {3323},
      pages = {81-97},
      note = {3rd International Workshop on Rules and Rule Markup Languages for the Semantic Web, Hiroshima, JAPAN, NOV 08, 2004}
    }
    
    El-Diraby, T., Lima, C. & Feis, B. Domain taxonomy for construction concepts: Toward a formal ontology for construction knowledge {2005} JOURNAL OF COMPUTING IN CIVIL ENGINEERING
    Vol. {19}({4}), pp. {394-406} 
    article DOI  
    Abstract: With the advancement of the semantic web, the construction industry is at a stage where intelligent knowledge management systems can be used. Such systems support more effective collaboration, where virtual teams of skilled users, not software, exchange ideas, decisions, and best practice. To achieve that, there is a need to create consistent semantic representation of construction knowledge. Existing representations, in the form of classification systems and product data models, lack effective modeling of concept semantics-a fundamental requirement for human-based exchange of knowledge. Toward this objective, this paper presents a domain taxonomy that was developed as part of the e-COGNOS project. The taxonomy was developed as a first step in the establishment of domain ontology for construction. The taxonomy was developed to be process-centered and to allow for utilization of already existing classification systems (BS6100, Master Format, and UniClass, for example). The taxonomy uses seven major domains to classify construction concepts: Process, Product, Project, Actor, Resource, Technical Topics, and Systems. The taxonomy was developed and validated through extensive interaction with domain experts. The taxonomy was used to develop a prototype ontology for the construction domain including semantic relationships and axioms. The ontology was used to support several applications in semantic knowledge management as part of the e-COGNOS portal, including semantic indexing and retrieval of information and ontology-based collaborative project development.
    BibTeX:
    @article{El-Diraby2005,
      author = {El-Diraby, TA and Lima, C and Feis, B},
      title = {Domain taxonomy for construction concepts: Toward a formal ontology for construction knowledge},
      journal = {JOURNAL OF COMPUTING IN CIVIL ENGINEERING},
      year = {2005},
      volume = {19},
      number = {4},
      pages = {394-406},
      doi = {{10.1061/(ASCE)0887-3801(2005)19:4(394)}}
    }
    
    Elenius, D., Denker, G., Martin, D., Gilham, F., Khouri, J., Sadaati, S. & Senanayake, R. The OWL-S editor - A development tool for semantic web services {2005}
    Vol. {3532}SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS, pp. {78-92} 
    inproceedings  
    Abstract: The power of Web Service (WS) technology lies in the fact that it establishes a common, vendor-neutral platform for integrating distributed computing applications, in intranets as well as the Internet at large. Semantic Web Services (SWSs) promise to provide solutions to the challenges associated with automated discovery, dynamic composition, enactment, and other tasks associated with managing and using service-based systems. One of the barriers to a wider adoption of SWS technology is the lack of tools for creating SWS specifications. OWL-S is one of the major SWS description languages. This paper presents an OWL-S Editor, whose objective is to allow easy, intuitive OWL-S service development and to provide a variety of special-purpose capabilities to facilitate SWS design. The editor is implemented as a plugin to the Protege OWL ontology editor, and is being developed as open-source software.
    BibTeX:
    @inproceedings{Elenius2005,
      author = {Elenius, D and Denker, G and Martin, D and Gilham, F and Khouri, J and Sadaati, S and Senanayake, R},
      title = {The OWL-S editor - A development tool for semantic web services},
      booktitle = {SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS},
      year = {2005},
      volume = {3532},
      pages = {78-92},
      note = {2nd European Semantic Web Conference, Heraklion, GREECE, MAY 29-JUN 01, 2005}
    }
    
    Eppig, J.T., Blake, J.A., Bult, C.J., Kadin, J.A., Richardson, J.E. & Mouse Genome Database Grp The mouse genome database (MGD): new features facilitating a model system {2007} NUCLEIC ACIDS RESEARCH
    Vol. {35}({Sp. Iss. SI}), pp. {D630-D637} 
    article DOI  
    Abstract: The mouse genome database (MGD, http://www.informatics.jax.org/), the international community database for mouse, provides access to extensive integrated data on the genetics, genomics and biology of the laboratory mouse. The mouse is an excellent and unique animal surrogate for studying normal development and disease processes in humans. Thus, MGD's primary goals are to facilitate the use of mouse models for studying human disease and enable the development of translational research hypotheses based on comparative genotype, phenotype and functional analyses. Core MGD data content includes gene characterization and functions, phenotype and disease model descriptions, DNA and protein sequence data, polymorphisms, gene mapping data and genome coordinates, and comparative gene data focused on mammals. Data are integrated from diverse sources, ranging from major resource centers to individual investigator laboratories and the scientific literature, using a combination of automated processes and expert human curation. MGD collaborates with the bioinformatics community on the development of data and semantic standards, and it incorporates key ontologies into the MGD annotation system, including the Gene Ontology (GO), the Mammalian Phenotype Ontology, and the Anatomical Dictionary for Mouse Development and the Adult Anatomy. MGD is the authoritative source for mouse nomenclature for genes, alleles, and mouse strains, and for GO annotations to mouse genes. MGD provides a unique platform for data mining and hypothesis generation where one can express complex queries simultaneously addressing phenotypic effects, biochemical function and process, sub-cellular location, expression, sequence, polymorphism and mapping data. Both web-based querying and computational access to data are provided. Recent improvements in MGD described here include the incorporation of single nucleotide polymorphism data and search tools, the addition of PIR gene superfamily classifications, phenotype data for NIH-acquired knockout mice, images for mouse phenotypic genotypes, new functional graph displays of GO annotations, and new orthology displays including sequence information and graphic displays.
    BibTeX:
    @article{Eppig2007,
      author = {Eppig, Janan T. and Blake, Judith A. and Bult, Carol J. and Kadin, James A. and Richardson, Joel E. and Mouse Genome Database Grp},
      title = {The mouse genome database (MGD): new features facilitating a model system},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {2007},
      volume = {35},
      number = {Sp. Iss. SI},
      pages = {D630-D637},
      doi = {{10.1093/nar/gkl940}}
    }
    
    Estrada, E. & Rodriguez-Velazquez, J. Spectral measures of bipartivity in complex networks {2005} PHYSICAL REVIEW E
    Vol. {72}({4, Part 2}) 
    article DOI  
    Abstract: We introduce a quantitative measure of network bipartivity as a proportion of even to total number of closed walks in the network. Spectral graph theory is used to quantify how close to bipartite a network is and the extent to which individual nodes and edges contribute to the global network bipartivity. It is shown that the bipartivity characterizes the network structure and can be related to the efficiency of semantic or communication networks, trophic interactions in food webs, construction principles in metabolic networks, or communities in social networks.
    BibTeX:
    @article{Estrada2005,
      author = {Estrada, E and Rodriguez-Velazquez, JA},
      title = {Spectral measures of bipartivity in complex networks},
      journal = {PHYSICAL REVIEW E},
      year = {2005},
      volume = {72},
      number = {4, Part 2},
      doi = {{10.1103/PhysRevE.72.046105}}
    }
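
    The measure in this abstract has a closed spectral form: writing l_j for the eigenvalues of the graph's adjacency matrix, the proportion of even closed walks is b(G) = sum_j cosh(l_j) / sum_j exp(l_j), which equals 1 exactly when the graph is bipartite. A short numpy sketch (the example graphs are invented for illustration):
    Sketch (Python):
      import numpy as np

      def bipartivity(adj):
          """Spectral bipartivity b = sum(cosh(l)) / sum(exp(l)) over the
          eigenvalues l of the adjacency matrix; b = 1 iff bipartite."""
          lam = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
          return np.sum(np.cosh(lam)) / np.sum(np.exp(lam))

      # A 4-cycle is bipartite; adding a chord creates an odd cycle.
      c4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
      chord = [[0, 1, 1, 1], [1, 0, 1, 0], [1, 1, 0, 1], [1, 0, 1, 0]]
      print(bipartivity(c4))     # 1.0
      print(bipartivity(chord))  # < 1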
    
    Eysenbach, G. Medicine 2.0: Social Networking, Collaboration, Participation, Apomediation, and Openness {2008} JOURNAL OF MEDICAL INTERNET RESEARCH
    Vol. {10}({3}) 
    article DOI  
    Abstract: In a very significant development for eHealth, a broad adoption of Web 2.0 technologies and approaches coincides with the more recent emergence of Personal Health Application Platforms and Personally Controlled Health Records such as Google Health, Microsoft HealthVault, and Dossia. ``Medicine 2.0'' applications, services, and tools are defined as Web-based services for health care consumers, caregivers, patients, health professionals, and biomedical researchers, that use Web 2.0 technologies and/or semantic web and virtual reality approaches to enable and facilitate specifically 1) social networking, 2) participation, 3) apomediation, 4) openness, and 5) collaboration, within and between these user groups. The Journal of Medical Internet Research (JMIR) publishes a Medicine 2.0 theme issue and sponsors a conference on ``How Social Networking and Web 2.0 changes Health, Health Care, Medicine, and Biomedical Research'', to stimulate and encourage research in these five areas.
    BibTeX:
    @article{Eysenbach2008,
      author = {Eysenbach, Gunther},
      title = {Medicine 2.0: Social Networking, Collaboration, Participation, Apomediation, and Openness},
      journal = {JOURNAL OF MEDICAL INTERNET RESEARCH},
      year = {2008},
      volume = {10},
      number = {3},
      doi = {{10.2196/jmir.1030}}
    }
    
    Feigenbaum, L., Herman, I., Hongsermeier, T., Neumann, E. & Stephens, S. The semantic web in action {2007} SCIENTIFIC AMERICAN
    Vol. {297}({6}), pp. {90-97} 
    article  
    Abstract: The Semantic Web is a set of formats and languages for finding and analyzing data on the World Wide Web. Applications built on it already include a system that pinpoints genetic causes of heart disease and another that reveals early stages of influenza outbreaks. The companies working through the World Wide Web Consortium are developing standards that are making the Semantic Web more accessible and easier to use.
    BibTeX:
    @article{Feigenbaum2007,
      author = {Feigenbaum, Lee and Herman, Ivan and Hongsermeier, Tonya and Neumann, Eric and Stephens, Susie},
      title = {The semantic web in action},
      journal = {SCIENTIFIC AMERICAN},
      year = {2007},
      volume = {297},
      number = {6},
      pages = {90-97}
    }
    
    Felfernig, A., Friedrich, G., Jannach, D., Stumptner, M. & Zanker, M. Configuration knowledge representations for Semantic Web applications {2003} AI EDAM-ARTIFICIAL INTELLIGENCE FOR ENGINEERING DESIGN ANALYSIS AND MANUFACTURING
    Vol. {17}({1}), pp. {31-50} 
    article DOI  
    Abstract: Today's economy exhibits a growing trend toward highly specialized solution providers cooperatively offering configurable products and services to their customers. This paradigm shift requires the extension of current standalone configuration technology with capabilities of knowledge sharing and distributed problem solving. In this context a standardized configuration knowledge representation language with formal semantics is needed in order to support knowledge interchange between different configuration environments. Languages such as Ontology Inference Layer (OIL) and DARPA Agent Markup Language (DAML+OIL) are based on such formal semantics (description logic) and are very popular for knowledge representation in the Semantic Web. In this paper we analyze the applicability of those languages with respect to configuration knowledge representation and discuss additional demands on expressivity. For joint configuration problem solving it is necessary to agree on a common problem definition. Therefore, we give a description logic based definition of a configuration problem and show its equivalence with existing consistency-based definitions, thus joining the two major streams in knowledge-based configuration (description logics and predicate logic/constraint-based configuration).
    BibTeX:
    @article{Felfernig2003,
      author = {Felfernig, A and Friedrich, G and Jannach, D and Stumptner, M and Zanker, M},
      title = {Configuration knowledge representations for Semantic Web applications},
      journal = {AI EDAM-ARTIFICIAL INTELLIGENCE FOR ENGINEERING DESIGN ANALYSIS AND MANUFACTURING},
      year = {2003},
      volume = {17},
      number = {1},
      pages = {31-50},
      doi = {{10.1017/S0890060403171041}}
    }
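
    The consistency-based definition of a configuration problem that the abstract proves equivalent to its description-logic one can be stated compactly: given a domain description DD and customer requirements SRS, a configuration CONF is consistent iff the union of the three theories is satisfiable. Schematically (a paraphrase of the standard consistency-based definition, not the paper's exact formalization):
    Sketch (LaTeX):
      DD \cup SRS \cup CONF \not\models \bot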
    
    Fensel, D. Triple-space computing: Semantic web services based on persistent publication of information {2004}
    Vol. {3283}INTELLIGENCE IN COMMUNICATION SYSTEMS, pp. {43-53} 
    inproceedings  
    Abstract: This paper discusses possible routes to moving the web from a collection of human-readable pieces of information connecting humans, to a web that connects computing devices based on machine-processable semantics of data and distributed computing. The current shortcomings of web service technology are analyzed and a new paradigm for fully enabled semantic web services is proposed, which is called triple-based or triple-space computing.
    BibTeX:
    @inproceedings{Fensel2004,
      author = {Fensel, D},
      title = {Triple-space computing: Semantic web services based on persistent publication of information},
      booktitle = {INTELLIGENCE IN COMMUNICATION SYSTEMS},
      year = {2004},
      volume = {3283},
      pages = {43-53},
      note = {IFIP International Conference on Intelligence in Communication Systems (INTELLCOMM 2004), Bangkok, THAILAND, NOV 23-26, 2004}
    }
    
    Fensel, D. The semantic Web and its languages {2000} IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS
    Vol. {15}({6}), pp. {67} 
    article  
    BibTeX:
    @article{Fensel2000,
      author = {Fensel, D},
      title = {The semantic Web and its languages},
      journal = {IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS},
      year = {2000},
      volume = {15},
      number = {6},
      pages = {67}
    }
    
    Fensel, D., van Harmelen, F., Horrocks, I., McGuinness, D. & Patel-Schneider, P. OIL: An ontology infrastructure for the Semantic Web {2001} IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS
    Vol. {16}({2}), pp. {38-45} 
    article  
    BibTeX:
    @article{Fensel2001,
      author = {Fensel, D and van Harmelen, F and Horrocks, I and McGuinness, DL and Patel-Schneider, PF},
      title = {OIL: An ontology infrastructure for the Semantic Web},
      journal = {IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS},
      year = {2001},
      volume = {16},
      number = {2},
      pages = {38-45}
    }
    
    Fensel, D. & Musen, M. The semantic web: A brain for humankind {2001} IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS
    Vol. {16}({2}), pp. {24-25} 
    article  
    BibTeX:
    @article{Fensel2001a,
      author = {Fensel, D and Musen, MA},
      title = {The semantic web: A brain for humankind},
      journal = {IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS},
      year = {2001},
      volume = {16},
      number = {2},
      pages = {24-25}
    }
    
    Fenza, G., Loia, V. & Senatore, S. A hybrid approach to semantic web services matchmaking {2008} INTERNATIONAL JOURNAL OF APPROXIMATE REASONING
    Vol. {48}({3}), pp. {808-828} 
    article DOI  
    Abstract: Deploying the semantics embedded in web services is a mandatory step in the automation of discovery, invocation and composition activities. The semantic annotation is the ``add-on'' to cope with the actual interoperability limitations and to assure valid support for the interpretation of service capabilities. Nevertheless, many issues have to be addressed to support semantics in web services and to guarantee accurate functionality descriptions. Early efforts address automatic matchmaking tasks, in order to find eligible advertised services which appropriately meet the consumer's demand. In most approaches, this activity is entrusted to software agents, able to drive reasoning/planning activities, to discover the required service, which can be single or composed of more atomic services. This paper presents a hybrid framework which achieves fuzzy matchmaking of semantic web services. A central role is entrusted to task-oriented agents that, given a service request, interact to discover an approximate reply when no exact match occurs among the available web services. The matchmaking activity exploits a mathematical model, the fuzzy multiset, to suitably represent the multi-granular information enclosed in an OWL-S-based description of a semantic web service. (C) 2008 Elsevier Inc. All rights reserved.
    BibTeX:
    @article{Fenza2008,
      author = {Fenza, Giuseppe and Loia, Vincenzo and Senatore, Sabrina},
      title = {A hybrid approach to semantic web services matchmaking},
      journal = {INTERNATIONAL JOURNAL OF APPROXIMATE REASONING},
      year = {2008},
      volume = {48},
      number = {3},
      pages = {808-828},
      note = {Session on Theory of NMR and Uncertainty, Lake Dist, ENGLAND, APR 01, 2006},
      doi = {{10.1016/j.ijar.2008.01.005}}
    }
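
    A hedged toy version of the matchmaking step: advertised capabilities carry membership grades in [0,1], and the degree of match for a request is aggregated with a min t-norm. The capability names and grades below are invented, and the paper's fuzzy multisets carry more structure than this.
    Sketch (Python):
      def degree_of_match(requested_caps, advertisement):
          """advertisement maps capability -> membership grade in [0, 1];
          the overall match is the worst (min) grade over requested caps."""
          return min((advertisement.get(cap, 0.0) for cap in requested_caps),
                     default=1.0)

      ad = {"book_flight": 0.9, "book_hotel": 0.6}
      print(degree_of_match({"book_flight", "book_hotel"}, ad))  # 0.6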
    
    Ferdinand, M., Zirpins, C. & Trastour, D. Lifting XML Schema to OWL {2004}
    Vol. {3140}WEB ENGINEERING, PROCEEDINGS, pp. {354-358} 
    inproceedings  
    Abstract: The Semantic Web will allow software agents to understand and reason about data provided by Web applications. Unfortunately, formal ontologies, needed to express data semantics, are often not readily available. However, common data schemas can help to create ontologies. We propose mappings from XML Schema to OWL as well as XML to RDF and show how web engineering can benefit from the gained expressiveness as well as the use of inference services.
    BibTeX:
    @inproceedings{Ferdinand2004,
      author = {Ferdinand, M and Zirpins, C and Trastour, D},
      title = {Lifting XML Schema to OWL},
      booktitle = {WEB ENGINEERING, PROCEEDINGS},
      year = {2004},
      volume = {3140},
      pages = {354-358},
      note = {4th International Conference on Web Engineering (ICWE 2004), Munich, GERMANY, JUL 26-30, 2004}
    }
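
    One direction of the proposed lifting can be pictured with a deliberately simplified rule pair (a sketch under assumed rules, not the paper's full mapping): every named xs:complexType becomes an owl:Class, and every xs:element of a simple type becomes an owl:DatatypeProperty.
    Sketch (Python):
      import xml.etree.ElementTree as ET

      XS = "{http://www.w3.org/2001/XMLSchema}"

      def lift(xsd_text, base="http://example.org/onto#"):
          # Emit Turtle-like triples for the two simplified lifting rules.
          root = ET.fromstring(xsd_text)
          out = []
          for ct in root.iter(XS + "complexType"):
              if ct.get("name"):
                  out.append(f"<{base}{ct.get('name')}> a owl:Class .")
              for el in ct.iter(XS + "element"):
                  if el.get("type", "").startswith("xs:"):
                      out.append(f"<{base}{el.get('name')}> a owl:DatatypeProperty .")
          return "\n".join(out)

      xsd = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <xs:complexType name="Person">
          <xs:sequence><xs:element name="age" type="xs:int"/></xs:sequence>
        </xs:complexType>
      </xs:schema>"""
      print(lift(xsd))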
    
    Fileto, R., Liu, L., Pu, C., Assad, E. & Medeiros, C. POESIA: An ontological workflow approach for composing Web services in agriculture {2003} VLDB JOURNAL
    Vol. {12}({4}), pp. {352-367} 
    article DOI  
    Abstract: This paper describes the POESIA approach to systematic composition of Web services. This pragmatic approach is strongly centered in the use of domain-specific multidimensional ontologies. Inspired by applications needs and founded on ontologies, workflows, and activity models, POESIA provides well-defined operations (aggregation, specialization, and instantiation) to support the composition of Web services. POESIA complements current proposals for Web services definition and composition by providing a higher degree of abstraction with verifiable consistency properties. We illustrate the POESIA approach using a concrete application scenario in agroenvironmental planning.
    BibTeX:
    @article{Fileto2003,
      author = {Fileto, R and Liu, L and Pu, C and Assad, ED and Medeiros, CB},
      title = {POESIA: An ontological workflow approach for composing Web services in agriculture},
      journal = {VLDB JOURNAL},
      year = {2003},
      volume = {12},
      number = {4},
      pages = {352-367},
      doi = {{10.1007/s00778-003-0103-3}}
    }
    
    Finkelstein, L., Gabrilovich, E., Matias, Y., Rivlin, E., Solan, Z., Wolfman, G. & Ruppin, E. Placing search in context: The concept revisited {2002} ACM TRANSACTIONS ON INFORMATION SYSTEMS
    Vol. {20}({1}), pp. {116-131} 
    article  
    Abstract: Keyword-based search engines are in widespread use today as a popular means for Web-based information retrieval. Although such systems seem deceptively simple, a considerable amount of skill is required in order to satisfy non-trivial information needs. This paper presents a new conceptual paradigm for performing search in context that largely automates the search process, providing even non-professional users with highly relevant results. This paradigm is implemented in practice in the IntelliZap system, where search is initiated from a text query marked by the user in a document she views, and is guided by the text surrounding the marked query in that document (``the context''). The context-driven information retrieval process involves semantic keyword extraction and clustering to automatically generate new, augmented queries. The latter are submitted to a host of general and domain-specific search engines. Search results are then semantically reranked using context. Experimental results testify that using context to guide search effectively offers even inexperienced users an advanced search tool on the Web.
    BibTeX:
    @article{Finkelstein2002,
      author = {Finkelstein, L and Gabrilovich, E and Matias, Y and Rivlin, E and Solan, Z and Wolfman, G and Ruppin, E},
      title = {Placing search in context: The concept revisited},
      journal = {ACM TRANSACTIONS ON INFORMATION SYSTEMS},
      year = {2002},
      volume = {20},
      number = {1},
      pages = {116-131},
      note = {10th International World Wide Web Conference (WWW10), HONG KONG, PEOPLES R CHINA, MAY, 2001}
    }
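
    The reranking step the abstract describes can be pictured with a bag-of-words stand-in: score each result snippet by cosine similarity to the text surrounding the marked query and sort. IntelliZap's actual extraction and clustering are richer; this sketch and its example strings are assumptions for illustration.
    Sketch (Python):
      from collections import Counter
      from math import sqrt

      def cosine(a, b):
          # Bag-of-words cosine similarity between two strings.
          ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
          dot = sum(ca[w] * cb[w] for w in ca)
          na = sqrt(sum(v * v for v in ca.values()))
          nb = sqrt(sum(v * v for v in cb.values()))
          return dot / (na * nb) if na and nb else 0.0

      def rerank(context, snippets):
          return sorted(snippets, key=lambda s: cosine(context, s), reverse=True)

      context = "the jaguar reached 60 mph on the track"
      print(rerank(context, ["jaguar habitat in the rainforest",
                             "jaguar sports car top speed on the track"]))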
    
    Fisler, K., Krishnamurthi, S., Meyerovich, L. & Tschantz, M. Verification and change-impact analysis of access-control policies {2005} ICSE 05: 27th International Conference on Software Engineering, Proceedings, pp. {196-205}  inproceedings  
    Abstract: Sensitive data are increasingly available on-line through the Web and other distributed protocols. This heightens the need to carefully control access to data. Control means not only preventing the leakage of data but also permitting access to necessary information. Indeed, the same datum is often treated differently depending on context. System designers create policies to express conditions on the access to data. To reduce source clutter and improve maintenance, developers increasingly use domain-specific, declarative languages to express these policies. In turn, administrators need to analyze policies relative to properties, and to understand the effect of policy changes even in the absence of properties. This paper presents Margrave, a software suite for analyzing role-based access-control policies. Margrave includes a verifier that analyzes policies written in the XACML language, translating them into a form of decision-diagram to answer queries. It also provides semantic differencing information between versions of policies. We have implemented these techniques and applied them to policies from a working software application.
    BibTeX:
    @inproceedings{Fisler2005,
      author = {Fisler, K and Krishnamurthi, S and Meyerovich, LA and Tschantz, MC},
      title = {Verification and change-impact analysis of access-control policies},
      booktitle = {ICSE 05: 27th International Conference on Software Engineering, Proceedings},
      year = {2005},
      pages = {196-205},
      note = {27th International Conference on Software Engineering (ICSE 2005), St Louis, MO, MAY 15-21, 2005}
    }
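
    What ``change-impact analysis'' means can be shown in miniature: enumerate a small request space and report every request whose decision differs between two policy versions. Margrave does this symbolically with decision diagrams; the brute-force sketch below, with invented roles and actions, only illustrates the question being answered.
    Sketch (Python):
      from itertools import product

      ROLES, ACTIONS, RESOURCES = ["student", "faculty"], ["assign", "view"], ["grade"]

      def policy_v1(role, action, resource):
          return role == "faculty" and resource == "grade"

      def policy_v2(role, action, resource):
          # Amended policy: students may now view grades.
          return policy_v1(role, action, resource) or (
              role == "student" and action == "view" and resource == "grade")

      for request in product(ROLES, ACTIONS, RESOURCES):
          if policy_v1(*request) != policy_v2(*request):
              print("decision changed for", request)  # ('student', 'view', 'grade')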
    
    Fitter, A. Darkness visible: reflections on underground ecology {2005} JOURNAL OF ECOLOGY
    Vol. {93}({2}), pp. {231-243} 
    article DOI  
    Abstract: 1 Soil science and ecology have developed independently, making it difficult for ecologists to contribute to urgent current debates on the destruction of the global soil resource and its key role in the global carbon cycle. Soils are believed to be exceptionally biodiverse parts of ecosystems, a view confirmed by recent data from the UK Soil Biodiversity Programme at Sourhope, Scotland, where high diversity was a characteristic of small organisms, but not of larger ones. Explaining this difference requires knowledge that we currently lack about the basic biology and biogeography of micro-organisms. 2 It seems inherently plausible that the high levels of biological diversity in soil play some part in determining the ability of soils to undertake ecosystem-level processes, such as carbon and mineral cycling. However, we lack conceptual models to address this issue, and debate about the role of biodiversity in ecosystem processes has centred around the concept of functional redundancy, and has consequently been largely semantic. More precise construction of our experimental questions is needed to advance understanding. 3 These issues are well illustrated by the fungi that form arbuscular mycorrhizas, the Glomeromycota. This ancient symbiosis of plants and fungi is responsible for phosphate uptake in most land plants, and the phylum is generally held to be species-poor and non-specific, with most members readily colonizing any plant species. Molecular techniques have shown both those assumptions to be unsafe, raising questions about what factors have promoted diversification in these fungi. One source of this genetic diversity may be functional diversity. 4 Specificity of the mycorrhizal interaction between plants and fungi would have important ecosystem consequences. One example would be in the control of invasiveness in introduced plant species: surprisingly, naturalized plant species in Britain are disproportionately from mycorrhizal families, suggesting that these fungi may play a role in assisting invasion. 5 What emerges from an attempt to relate biodiversity and ecosystem processes in soil is our extraordinary ignorance about the organisms involved. There are fundamental questions that are now answerable with new techniques and sufficient will, such as how biodiverse are natural soils? Do microbes have biogeography? Are there rare or even endangered microbes?
    BibTeX:
    @article{Fitter2005,
      author = {Fitter, AH},
      title = {Darkness visible: reflections on underground ecology},
      journal = {JOURNAL OF ECOLOGY},
      year = {2005},
      volume = {93},
      number = {2},
      pages = {231-243},
      doi = {{10.1111/j.0022-0477.2005.00990.x}}
    }
    
    Fodor, O. & Werthner, H. Harmonise: A step toward an interoperable e-tourism marketplace {2004} INTERNATIONAL JOURNAL OF ELECTRONIC COMMERCE
    Vol. {9}({2}), pp. {11-39} 
    article  
    Abstract: Travel and tourism comprise the leading application field in business-to-consumer (B2C) e-commerce, representing approximately half of the total worldwide B2C turnover. Even in the 1960s, travel applications (i.e., computerized airline reservation systems) were at the forefront of information technology (IT). Several facts explain this-the product is a confidence good, consumer decisions rely on information available beforehand, and the industry is highly networked, based on worldwide cooperation between stakeholders of different types. The latter factor and the related problem of interoperability represent a major challenge for IT solutions. Harmonise, a European project based on a Semantic Web approach and utilizing a Web services infrastructure, deals with business-to-business (B2B) integration on the ``information'' layer by means of an ontology-based mediation. It allows tourism organizations with different data standards to exchange information seamlessly without having to change their proprietary data schemas. The ``weak'' coupling takes into consideration the specific industry context, with its majority of small or medium-sized enterprises (SMEs) and with many different, also legacy, solutions. Real-world business tests show that this approach meets industry expectations and can facilitate the necessary network effect in order to create an interoperable e-tourism marketplace.
    BibTeX:
    @article{Fodor2004,
      author = {Fodor, O and Werthner, H},
      title = {Harmonise: A step toward an interoperable e-tourism marketplace},
      journal = {INTERNATIONAL JOURNAL OF ELECTRONIC COMMERCE},
      year = {2004},
      volume = {9},
      number = {2},
      pages = {11-39}
    }
    
    Formica, A. Concept similarity in formal concept analysis: An information content approach {2008} KNOWLEDGE-BASED SYSTEMS
    Vol. {21}({1}), pp. {80-87} 
    article DOI  
    Abstract: Formal Concept Analysis (FCA) is proving valuable in supporting difficult activities that are becoming fundamental in the development of the Semantic Web. Assessing concept similarity is one such activity, since it allows the identification of different concepts that are semantically close. In this paper, a method for measuring the similarity of FCA concepts is presented, which is a refinement of a previous proposal of the author. The refinement consists in determining the similarity of concept descriptors (attributes) by using the information content approach, rather than relying on human domain expertise. The information content approach which has been adopted allows a higher correlation with human judgement than other proposals for evaluating concept similarity in a taxonomy defined in the literature. (c) 2007 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Formica2008,
      author = {Formica, Anna},
      title = {Concept similarity in formal concept analysis: An information content approach},
      journal = {KNOWLEDGE-BASED SYSTEMS},
      year = {2008},
      volume = {21},
      number = {1},
      pages = {80-87},
      doi = {{10.1016/j.knosys.2007.02.001}}
    }
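
    The information content approach the abstract adopts is of the Resnik/Lin family: IC(c) = -log p(c) from corpus frequencies, and a Lin-style similarity sim(c1, c2) = 2 IC(lcs) / (IC(c1) + IC(c2)), where lcs is the least common subsumer in the taxonomy. The sketch below shows that generic measure, not Formica's exact refinement; the probabilities are invented.
    Sketch (Python):
      from math import log

      def lin_similarity(p_c1, p_c2, p_lcs):
          """p_* are corpus probabilities of two concepts and of their
          least common subsumer in the taxonomy."""
          ic = lambda p: -log(p)
          return 2 * ic(p_lcs) / (ic(p_c1) + ic(p_c2))

      # ``car'' vs ``bicycle'' under a common subsumer ``vehicle''.
      print(lin_similarity(p_c1=0.01, p_c2=0.02, p_lcs=0.1))  # ~0.54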
    
    Formica, A. Ontology-based concept similarity in Formal Concept Analysis {2006} INFORMATION SCIENCES
    Vol. {176}({18}), pp. {2624-2641} 
    article DOI  
    Abstract: Both domain ontologies and Formal Concept Analysis (FCA) aim at modeling concepts, although with different purposes. In the literature, a promising research area concerns the role of FCA in ontology engineering, in particular, in supporting the critical task of reusing independently developed domain ontologies. In this regard, the possibility of evaluating concept similarity is acquiring an increasing relevance, since it allows the identification of different concepts that are semantically close. In this paper, an ontology-based method for assessing similarity between FCA concepts is proposed. Such a method is intended to support the ontology engineer in difficult activities that are becoming fundamental in the development of the Semantic Web, such as ontology merging and ontology mapping, and, in particular, it can be used in parallel with existing semi-automatic tools relying on FCA. (C) 2005 Elsevier Inc. All rights reserved.
    BibTeX:
    @article{Formica2006,
      author = {Formica, Anna},
      title = {Ontology-based concept similarity in Formal Concept Analysis},
      journal = {INFORMATION SCIENCES},
      year = {2006},
      volume = {176},
      number = {18},
      pages = {2624-2641},
      doi = {{10.1016/j.ins.2005.11.014}}
    }
    
    Frankewitsch, T. & Prokosch, U. Navigation in medical Internet image databases {2001} MEDICAL INFORMATICS AND THE INTERNET IN MEDICINE
    Vol. {26}({1}), pp. {1-15} 
    article  
    Abstract: The world wide web (WWW) changes common ideas of database access. Hypertext Markup Language allows the simultaneous presentation of information from different sources such as static pages, results of queries from databases or dynamically generated pages. Therefore, the metaphor of the WWW itself as a database was proposed by Mendelzon and Milo in 1998. Against this background the techniques of navigation within WWW databases and the semantic types of their queries have been analysed. Forty-eight image repositories of different types and content, but all concerning medical matter, were found by search engines. Many different techniques are offered to enable navigation, ranging from simple HTML link lists to complex applets. The applets in particular promise an improvement for navigation. Within the meta-information for querying, only ACR- and UMLS-encoding were found, but not standardized vocabularies like ICD10 or Terminologia Anatomica. UMLS especially shows that a well-defined thesaurus can improve navigation. However, of the analysed databases only the UMLS `metathesaurus' is currently implemented without providing additional navigation support based on the UMLS `semantic network'. Including the information about relationships between the concepts of the metathesaurus or using the UMLS semantic network could provide much easier navigation within a network of concepts pointing to multimedia files stored somewhere in the WWW.
    BibTeX:
    @article{Frankewitsch2001,
      author = {Frankewitsch, T and Prokosch, U},
      title = {Navigation in medical Internet image databases},
      journal = {MEDICAL INFORMATICS AND THE INTERNET IN MEDICINE},
      year = {2001},
      volume = {26},
      number = {1},
      pages = {1-15}
    }
    
    Gal, A., Anaby-Tavor, A., Trombetta, A. & Montesi, D. A framework for modeling and evaluating automatic semantic reconciliation {2005} VLDB JOURNAL
    Vol. {14}({1}), pp. {50-67} 
    article DOI  
    Abstract: The introduction of the Semantic Web vision and the shift toward machine understandable Web resources has unearthed the importance of automatic semantic reconciliation. Consequently, new tools for automating the process were proposed. In this work we present a formal model of semantic reconciliation and analyze in a systematic manner the properties of the process outcome, primarily the inherent uncertainty of the matching process and how it reflects on the resulting mappings. An important feature of this research is the identification and analysis of factors that impact the effectiveness of algorithms for automatic semantic reconciliation, leading, it is hoped, to the design of better algorithms by reducing the uncertainty of existing algorithms. Against this background we empirically study the aptitude of two algorithms to correctly match concepts. This research is both timely and practical in light of recent attempts to develop and utilize methods for automatic semantic reconciliation.
    BibTeX:
    @article{Gal2005,
      author = {Gal, A and Anaby-Tavor, A and Trombetta, A and Montesi, D},
      title = {A framework for modeling and evaluating automatic semantic reconciliation},
      journal = {VLDB JOURNAL},
      year = {2005},
      volume = {14},
      number = {1},
      pages = {50-67},
      doi = {{10.1007/s00778-003-0115-z}}
    }
    
    Gal, A., Modica, G., Jamil, H. & Eyal, A. Automatic ontology matching using application semantics {2005} AI MAGAZINE
    Vol. {26}({1}), pp. {21-31} 
    article  
    Abstract: We propose the use of application semantics to enhance the process of semantic reconciliation. Application semantics involves those elements of business reasoning that affect the way concepts are presented to users: their layout, and so on. In particular, we pursue in this article the notion of precedence, in which temporal constraints determine the order in which concepts are presented to the user. Existing matching algorithms use either syntactic means (such as term matching and domain matching) or model semantic means (the use of structural information that is provided by the specific data model) to enhance the matching process. The novelty of our approach lies in proposing a class of matching techniques that takes advantage of ontological structures and application semantics. As an example, the use of precedence to reflect business rules has not been applied elsewhere, to the best of our knowledge. We have tested the process for a variety of web sites in domains such as car rentals and airline reservations, and we share our experiences with precedence and its limitations.
    BibTeX:
    @article{Gal2005a,
      author = {Gal, A and Modica, G and Jamil, H and Eyal, A},
      title = {Automatic ontology matching using application semantics},
      journal = {AI MAGAZINE},
      year = {2005},
      volume = {26},
      number = {1},
      pages = {21-31}
    }
    
    Gangemi, A. Ontology design patterns for Semantic Web content {2005}
    Vol. {3729}SEMANTIC WEB - ISWC 2005, PROCEEDINGS, pp. {262-276} 
    inproceedings  
    Abstract: The paper presents a framework for introducing design patterns that facilitate or improve the techniques used during ontology lifecycle. Some distinctions are drawn between kinds of ontology design patterns. Some content-oriented patterns are presented in order to illustrate their utility at different degrees of abstraction, and how they can be specialized or composed. The proposed framework and the initial set of patterns are designed in order to function as a pipeline connecting domain modelling, user requirements, and ontology-driven tasks/queries to be executed.
    BibTeX:
    @inproceedings{Gangemi2005,
      author = {Gangemi, A},
      title = {Ontology design patterns for Semantic Web content},
      booktitle = {SEMANTIC WEB - ISWC 2005, PROCEEDINGS},
      year = {2005},
      volume = {3729},
      pages = {262-276},
      note = {4th International Semantic Web Conference (ISWC 2005), Galway, IRELAND, NOV 06-10, 2005}
    }
    
    Garcia-Barriocanal, E., Sicilia, M. & Sanchez-Alonso, S. Usability evaluation of ontology editors {2005} KNOWLEDGE ORGANIZATION
    Vol. {32}({1}), pp. {1-9} 
    article  
    Abstract: Ontology editors are software tools that allow the creation and maintenance of ontologies through a graphical user interface. As the semantic web effort grows, a larger community of users for this kind of tool is expected. New users include people not specifically skilled in the use of ontology formalisms. In consequence, the usability of ontology editors can be viewed as a key adoption precondition for semantic web technologies. In this paper, the usability evaluation of several representative ontology editors is described. This evaluation is carried out by combining a heuristic pre-assessment with a subsequent user-testing phase. The target population is comprised of people with no specific ontology-creation skills that have a general knowledge about domain modelling. For this kind of user, current editors are adequate for the creation and maintenance of simple ontologies. Also, there is room for improvement, especially in browsing mechanisms, help systems, and visualization metaphors.
    BibTeX:
    @article{Garcia-Barriocanal2005,
      author = {Garcia-Barriocanal, E and Sicilia, MA and Sanchez-Alonso, S},
      title = {Usability evaluation of ontology editors},
      journal = {KNOWLEDGE ORGANIZATION},
      year = {2005},
      volume = {32},
      number = {1},
      pages = {1-9}
    }
    
    Gavriloaie, R., Nejdl, W., Olmedilla, D., Seamons, K. & Winslett, M. No registration needed: How to use declarative policies and negotiation to access sensitive resources on the semantic web {2004}
    Vol. {3053}SEMANTIC WEB: RESEARCH AND APPLICATIONS, pp. {342-356} 
    inproceedings  
    Abstract: Gaining access to sensitive resources on the Web usually involves an explicit registration step, where the client has to provide a predetermined set of information to the server. The registration process yields a login/password combination, a cookie, or something similar that can be used to access the sensitive resources. In this paper we show how an explicit registration step can be avoided on the Semantic Web by using appropriate semantic annotations, rule-oriented access control policies, and automated trust negotiation. After presenting the PeerTrust language for policies and trust negotiation, we describe our implementation of implicit registration and authentication that runs under the Java-based MINERVA Prolog engine. The implementation includes a PeerTrust policy applet and evaluator, facilities to import local metadata, policies and credentials, and secure communication channels between all parties.
    BibTeX:
    @inproceedings{Gavriloaie2004,
      author = {Gavriloaie, R and Nejdl, W and Olmedilla, D and Seamons, KE and Winslett, M},
      title = {No registration needed: How to use declarative policies and negotiation to access sensitive resources on the semantic web},
      booktitle = {SEMANTIC WEB: RESEARCH AND APPLICATIONS},
      year = {2004},
      volume = {3053},
      pages = {342-356},
      note = {1st European Semantic Web Symposium, Heraklion, GREECE, MAY 10-12, 2004}
    }
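
    The negotiation loop itself can be sketched in a few lines of Python (the policies below are hypothetical stand-ins, not PeerTrust syntax): each party discloses an item only once the other party has shown the prerequisites its policy demands, and disclosure alternates until the target resource is released or no further progress is possible.

        # Prerequisites each party requires before disclosing an item
        # (hypothetical policies in the spirit of PeerTrust's guarded rules).
        policy = {
            "server": {"resource": {"memberCard"}, "bbbSeal": set()},
            "client": {"memberCard": {"bbbSeal"}},
        }

        def negotiate(goal="resource"):
            disclosed = {"server": set(), "client": set()}
            progress = True
            while progress and goal not in disclosed["server"]:
                progress = False
                for party, other in (("server", "client"), ("client", "server")):
                    for item, needs in policy[party].items():
                        if item not in disclosed[party] and needs <= disclosed[other]:
                            disclosed[party].add(item)   # safe to disclose now
                            progress = True
            return goal in disclosed["server"]

        print(negotiate())   # True: bbbSeal -> memberCard -> resource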
    
    Gennari, J., Musen, M., Fergerson, R., Grosso, W., Crubezy, M., Eriksson, H., Noy, N. & Tu, S. The evolution of Protege: an environment for knowledge-based systems development {2003} INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES
    Vol. {58}({1}), pp. {89-123} 
    article DOI  
    Abstract: The Protege project has come a long way since Mark Musen first built the Protege meta-tool for knowledge-based systems in 1987. The original tool was a small application, aimed at building knowledge-acquisition tools for a few specialized programs in medical planning. From this initial tool, the Protege system has evolved into a durable, extensible platform for knowledge-based systems development and research. The current version, Protege-2000, can be run on a variety of platforms, supports customized user-interface extensions, incorporates the Open Knowledge-Base Connectivity (OKBC) knowledge model, interacts with standard storage formats such as relational databases, XML, and RDF, and has been used by hundreds of individuals and research groups. In this paper, we follow the evolution of the Protege project through three distinct re-implementations. We describe our overall methodology, our design decisions, and the lessons we have learned over the duration of the project. We believe that our success is one of infrastructure: Protege is a flexible, well-supported, and robust development environment. Using Protege, developers and domain experts can easily build effective knowledge-based systems, and researchers can explore ideas in a variety of knowledge-based domains. (C) 2002 Elsevier Science Ltd. All rights reserved.
    BibTeX:
    @article{Gennari2003,
      author = {Gennari, JH and Musen, MA and Fergerson, RW and Grosso, WE and Crubezy, M and Eriksson, H and Noy, NF and Tu, SW},
      title = {The evolution of Protege: an environment for knowledge-based systems development},
      journal = {INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES},
      year = {2003},
      volume = {58},
      number = {1},
      pages = {89-123},
      doi = {{10.1016/S1071-5819(02)00127-1}}
    }
    
    Gilchrist, A. Thesauri, taxonomies and ontologies - an etymological note {2003} JOURNAL OF DOCUMENTATION
    Vol. {59}({1}), pp. {7-18} 
    article DOI  
    Abstract: The amount of work to be done in rendering the digital information space more efficient and effective has attracted a wide range of disciplines which, in turn, has given rise to a degree of confusion in the terminology applied to information problems. This note seeks to shed some light on the three terms thesauri, taxonomies and ontologies as they are currently being used by, among others, information scientists, AI practitioners, and those working on the foundations of the semantic Web. The paper is not a review of the techniques themselves.
    BibTeX:
    @article{Gilchrist2003,
      author = {Gilchrist, A},
      title = {Thesauri, taxonomies and ontologies - an etymological note},
      journal = {JOURNAL OF DOCUMENTATION},
      year = {2003},
      volume = {59},
      number = {1},
      pages = {7-18},
      doi = {{10.1108/00220410310457984}}
    }
    
    Giugno, R. & Lukasiewicz, T. P-SHOQ(D): a Probabilistic extension of SHOQ(D) for Probabilistic ontologies in the semantic web {2002}
    Vol. {2424}LOGICS IN ARTIFICIAL INTELLIGENCE 8TH, pp. {86-97} 
    inproceedings  
    Abstract: Ontologies play a central role in the development of the semantic web, as they provide precise definitions of shared terms in web resources. One important web ontology language is DAML+OIL; it has a formal semantics and a reasoning support through a mapping to the expressive description logic SHOQ(D) with the addition of inverse roles. In this paper, we present a probabilistic extension of SHOQ(D), called P-SHOQ(D), to allow for dealing with probabilistic ontologies in the semantic web. The description logic P-SHOQ(D) is based on the notion of probabilistic lexicographic entailment from probabilistic default reasoning. It allows one to express rich probabilistic knowledge about concepts and instances, as well as default knowledge about concepts. We also present sound and complete reasoning techniques for P-SHOQ(D), which are based on reductions to classical reasoning in SHOQ(D) and to linear programming, and which show in particular that reasoning in P-SHOQ(D) is decidable.
    BibTeX:
    @inproceedings{Giugno2002,
      author = {Giugno, R and Lukasiewicz, T},
      title = {P-SHOQ(D): a Probabilistic extension of SHOQ(D) for Probabilistic ontologies in the semantic web},
      booktitle = {LOGICS IN ARTIFICIAL INTELLIGENCE 8TH},
      year = {2002},
      volume = {2424},
      pages = {86-97},
      note = {8th European Conference on Logics in Artificial Intelligence (JELIA 02), COSENZA, ITALY, SEP 23-26, 2002}
    }
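
    The reduction to linear programming can be illustrated with a toy Python example (the knowledge base and scipy encoding are invented for illustration, not the paper's algorithm): probabilities of the eight truth assignments over concepts A, B, C are the LP variables, conditional-probability bounds become linear constraints, and tight bounds on a query probability come from minimizing and maximizing a linear objective.

        from itertools import product
        from scipy.optimize import linprog

        worlds = list(product([0, 1], repeat=3))   # truth assignments for (A, B, C)

        def indicator(pred):
            return [1.0 if pred(w) else 0.0 for w in worlds]

        # Toy knowledge base: P(A) = 0.5, P(B|A) >= 0.9, P(C|B) >= 0.8, encoded
        # linearly over world probabilities, e.g. 0.9*P(A) - P(A and B) <= 0.
        A_eq = [indicator(lambda w: w[0]), [1.0] * len(worlds)]
        b_eq = [0.5, 1.0]
        A_ub = [
            [0.9 * a - ab for a, ab in zip(indicator(lambda w: w[0]),
                                           indicator(lambda w: w[0] and w[1]))],
            [0.8 * b - bc for b, bc in zip(indicator(lambda w: w[1]),
                                           indicator(lambda w: w[1] and w[2]))],
        ]
        b_ub = [0.0, 0.0]

        for sign, label in ((1.0, "min"), (-1.0, "max")):
            c = [sign * x for x in indicator(lambda w: w[2])]   # objective: P(C)
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0, 1)] * len(worlds))
            print(label, "P(C) =", round(sign * res.fun, 4))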
    
    Gkoutos, G., Murray-Rust, P., Rzepa, H. & Wright, M. Chemical markup, XML, and the World-Wide Web. 3. Toward a signed semantic chemical web of trust {2001} JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES
    Vol. {41}({5}), pp. {1124-1130} 
    article DOI  
    Abstract: We describe how a collection of documents expressed in XML-conforming languages such as CML and XHTML can be authenticated and validated against digital signatures which make use of established X.509 certificate technology. These can be associated either with specific nodes in the XML document or with the entire document. We illustrate this with two examples. An entire journal article expressed in XML has its individual components digitally signed by separate authors, and the collection is placed in an envelope and again signed. The second example involves using a software robot agent to acquire a collection of documents from a specified URL, to perform various operations and transformations on the content, including expressing molecules in CML, and to automatically sign the various components and deposit the result in a repository. We argue that these operations can be used as components for building what we term an authenticated and semantic chemical web of trust.
    BibTeX:
    @article{Gkoutos2001,
      author = {Gkoutos, GV and Murray-Rust, P and Rzepa, HS and Wright, M},
      title = {Chemical markup, XML, and the World-Wide Web. 3. Toward a signed semantic chemical web of trust},
      journal = {JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES},
      year = {2001},
      volume = {41},
      number = {5},
      pages = {1124-1130},
      doi = {{10.1021/ci000406v}}
    }
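
    A rough Python sketch of the nested signing the abstract describes, using generic RSA primitives from the cryptography package rather than the XML-Signature and X.509 tooling the authors used (the CML fragment is hypothetical):

        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

        # Sign an individual XML component, then the enclosing "envelope",
        # mirroring the nested author/collection signatures described above.
        component = b"<molecule id='m1'>...</molecule>"   # hypothetical CML node
        comp_sig = key.sign(component, padding.PKCS1v15(), hashes.SHA256())
        envelope = component + comp_sig
        env_sig = key.sign(envelope, padding.PKCS1v15(), hashes.SHA256())

        # verify() raises InvalidSignature if either layer was tampered with.
        pub = key.public_key()
        pub.verify(comp_sig, component, padding.PKCS1v15(), hashes.SHA256())
        pub.verify(env_sig, envelope, padding.PKCS1v15(), hashes.SHA256())
        print("both signatures verify")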
    
    Goble, C. & De Roure, D. The Grid: An application of the Semantic Web {2002} SIGMOD RECORD
    Vol. {31}({4}), pp. {65-70} 
    article  
    Abstract: The Grid is an emerging platform to support on-demand ``virtual organisations'' for coordinated resource sharing and problem solving on a global scale. The application thrust is large-scale scientific endeavour, and the scale and complexity of scientific data presents challenges for databases. The Grid is beginning to exploit technologies developed for Web Services and to realise its potential it also stands to benefit from Semantic Web technologies; conversely, the Grid and its scientific users provide application pull which will benefit the Semantic Web.
    BibTeX:
    @article{Goble2002,
      author = {Goble, C and De Roure, D},
      title = {The Grid: An application of the Semantic Web},
      journal = {SIGMOD RECORD},
      year = {2002},
      volume = {31},
      number = {4},
      pages = {65-70},
      note = {Amicalola Workshop on DB-IS Research for Semantic Web and Enterprises, GEORGIA, APR 03-05, 2002}
    }
    
    Goble, C. & Stevens, R. State of the nation in data integration for bioinformatics {2008} JOURNAL OF BIOMEDICAL INFORMATICS
    Vol. {41}({5, Sp. Iss. SI}), pp. {687-693} 
    article DOI  
    Abstract: Data integration is a perennial issue in bioinformatics, with many systems being developed and many technologies offered as a panacea for its resolution. The fact that it is still a problem indicates a persistence of underlying issues. Progress has been made, but we should ask ``what lessons have been learnt?'' and ``what still needs to be done?'' Semantic Web and Web 2.0 technologies are the latest to find traction within bioinformatics data integration. Now we can ask whether the Semantic Web, mashups, or their combination have the potential to help. This paper is based on the opening invited talk by Carole Goble given at the Health Care and Life Sciences Data Integration for the Semantic Web Workshop collocated with WWW2007. The paper expands on that talk. We attempt to place some perspective on past efforts, highlight the reasons for success and failure, and indicate some pointers to the future. (C) 2008 Elsevier Inc. All rights reserved.
    BibTeX:
    @article{Goble2008,
      author = {Goble, Carole and Stevens, Robert},
      title = {State of the nation in data integration for bioinformatics},
      journal = {JOURNAL OF BIOMEDICAL INFORMATICS},
      year = {2008},
      volume = {41},
      number = {5, Sp. Iss. SI},
      pages = {687-693},
      doi = {{10.1016/j.jbi.2008.01.008}}
    }
    
    Godoy, D. & Amandi, A. Modeling user interests by conceptual clustering {2006} INFORMATION SYSTEMS
    Vol. {31}({4-5}), pp. {247-265} 
    article DOI  
    Abstract: As more information becomes available on the Web, there has been a growing interest in effective personalization techniques. Personal agents providing assistance based on the content of Web documents and the user interests emerged as a viable alternative to this problem. Given that these agents rely on knowledge about users contained in user profiles, i.e., models of user preferences and interests gathered by observation of user behavior, the capacity of acquiring and modeling user interest categories has become a critical component in personal agent design. User profiles have to summarize categories corresponding to diverse user information interests at different levels of abstraction in order to allow agents to decide on the relevance of new pieces of information. In accomplishing this goal, document clustering offers the advantage that a priori knowledge of categories is not needed; the categorization is therefore completely unsupervised. In this paper we present a document clustering algorithm, named WebDCC (Web Document Conceptual Clustering), that carries out incremental, unsupervised concept learning over Web documents in order to acquire user profiles. Unlike most user profiling approaches, this algorithm offers comprehensible clustering solutions that can be easily interpreted and explored by both users and other agents. By extracting semantics from Web pages, this algorithm also produces intermediate results that can be finally integrated in a machine-understandable format such as an ontology. Empirical results of using this algorithm in the context of an intelligent Web search agent showed that it can reach high levels of accuracy in suggesting Web pages. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Godoy2006,
      author = {Godoy, D and Amandi, A},
      title = {Modeling user interests by conceptual clustering},
      journal = {INFORMATION SYSTEMS},
      year = {2006},
      volume = {31},
      number = {4-5},
      pages = {247-265},
      doi = {{10.1016/j.is.2005.02.008}}
    }
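
    The incremental flavor of such clustering can be sketched as follows (a simplification assuming bag-of-words cosine similarity and an invented threshold, not the published WebDCC algorithm): each document joins the most similar existing cluster or seeds a new one.

        import math
        from collections import Counter

        def cosine(a, b):
            dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
            norm = math.sqrt(sum(v * v for v in a.values())) * \
                   math.sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        def incremental_cluster(docs, threshold=0.3):
            clusters = []   # each cluster: {"centroid": Counter, "docs": [...]}
            for doc in docs:
                vec = Counter(doc.lower().split())
                best = max(clusters, key=lambda c: cosine(vec, c["centroid"]),
                           default=None)
                if best and cosine(vec, best["centroid"]) >= threshold:
                    best["centroid"].update(vec)   # drift the concept centroid
                    best["docs"].append(doc)
                else:
                    clusters.append({"centroid": vec, "docs": [doc]})
            return clusters

        for c in incremental_cluster(["semantic web ontology",
                                      "web ontology language",
                                      "tennis grand slam",
                                      "grand slam final"]):
            print(c["docs"])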
    
    Goh, C., Bressan, S., Madnick, S. & Siegel, M. Context interchange: New features and formalisms for the intelligent integration of information {1999} ACM TRANSACTIONS ON INFORMATION SYSTEMS
    Vol. {17}({3}), pp. {270-293} 
    article  
    Abstract: The Context Interchange strategy presents a novel perspective for mediated data access in which semantic conflicts among heterogeneous systems are not identified a priori, but are detected and reconciled by a context mediator through comparison of the context axioms corresponding to the systems engaged in data exchange. In this article, we show that queries formulated on shared views, export schema, and shared ``ontologies'' can be mediated in the same way using the Context Interchange framework. The proposed framework provides a logic-based object-oriented formalism for representing and reasoning about data semantics in disparate systems, and has been validated in a prototype implementation providing mediated data access to both traditional and web-based information sources.
    BibTeX:
    @article{Goh1999,
      author = {Goh, CH and Bressan, S and Madnick, S and Siegel, M},
      title = {Context interchange: New features and formalisms for the intelligent integration of information},
      journal = {ACM TRANSACTIONS ON INFORMATION SYSTEMS},
      year = {1999},
      volume = {17},
      number = {3},
      pages = {270-293}
    }
    
    Golbreich, C. Combining rule and ontology reasoners for the Semantic Web {2004}
    Vol. {3323}RULES AND RULE MARKUP LANGUAGES FOR THE SEMANTIC WEB, PROCEEDINGS, pp. {6-22} 
    inproceedings  
    Abstract: Using rules in conjunction with ontologies is a major challenge for the Semantic Web. We propose a pragmatic approach for reasoning with ontologies and rules, based on the Semantic Web standards and tools currently available. We first achieved an implementation of SWRL, the emerging OWL/RuleML-combining rule standard, using the Protege OWL plugin. We then developed a Protege plugin, SWRLJessTab, which makes it possible to compute inferences with the Racer classifier and the Jess inference engine, in order to reason with rules and ontologies, both represented in OWL. A small example, including an OWL ontology and a SWRL rule base, shows that all the domain knowledge, i.e. the SWRL rule base and the OWL ontology, is required to obtain complete inferences. It illustrates that some reasoning support must be provided to interoperate between SWRL and OWL, not only syntactically and semantically, but also inferentially.
    BibTeX:
    @inproceedings{Golbreich2004,
      author = {Golbreich, C},
      title = {Combining rule and ontology reasoners for the Semantic Web},
      booktitle = {RULES AND RULE MARKUP LANGUAGES FOR THE SEMANTIC WEB, PROCEEDINGS},
      year = {2004},
      volume = {3323},
      pages = {6-22},
      note = {3rd International Workshop on Rules and Rule Markup Languages for the Semantic Web, Hiroshima, JAPAN, NOV 08, 2004}
    }
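
    The point that rule and ontology reasoning must interoperate can be made with a toy Python fixpoint computation (hypothetical vocabulary, not the paper's example): the SWRL-style uncle rule only fires after an OWL-style subproperty axiom has been applied, so neither component alone derives all consequences.

        facts = {("Ann", "hasFather", "Bob"), ("Bob", "hasBrother", "Carl")}

        # OWL-style axiom: hasFather is a subproperty of hasParent.
        def owl_step(fs):
            return {(s, "hasParent", o) for s, p, o in fs if p == "hasFather"}

        # SWRL-style rule: hasParent(x,y) & hasBrother(y,z) -> hasUncle(x,z).
        def swrl_step(fs):
            return {(x, "hasUncle", z)
                    for x, p1, y in fs if p1 == "hasParent"
                    for y2, p2, z in fs if p2 == "hasBrother" and y2 == y}

        # Neither step alone derives hasUncle; chaining both to a fixpoint does.
        derived = set(facts)
        while True:
            new = (owl_step(derived) | swrl_step(derived)) - derived
            if not new:
                break
            derived |= new
        print([f for f in derived if f[1] == "hasUncle"])
        # [('Ann', 'hasUncle', 'Carl')]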
    
    Gomez-Perez, A. & Corcho, O. Ontology languages for the Semantic Web {2002} IEEE INTELLIGENT SYSTEMS
    Vol. {17}({1}), pp. {54-60} 
    article  
    BibTeX:
    @article{Gomez-Perez2002,
      author = {Gomez-Perez, A and Corcho, O},
      title = {Ontology languages for the Semantic Web},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2002},
      volume = {17},
      number = {1},
      pages = {54-60}
    }
    
    Gomez-Perez, A., Gonzalez-Cabero, R. & Lama, M. ODE SWS: A framework for designing and composing Semantic Web Services {2004} IEEE INTELLIGENT SYSTEMS
    Vol. {19}({4}), pp. {24-31} 
    article  
    BibTeX:
    @article{Gomez-Perez2004,
      author = {Gomez-Perez, A and Gonzalez-Cabero, R and Lama, M},
      title = {ODE SWS: A framework for designing and composing Semantic Web Services},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2004},
      volume = {19},
      number = {4},
      pages = {24-31}
    }
    
    Good, B.M. & Wilkinson, M.D. The Life Sciences Semantic Web is full of creeps! {2006} BRIEFINGS IN BIOINFORMATICS
    Vol. {7}({3}), pp. {275-286} 
    article DOI  
    Abstract: The Semantic Web for the Life Sciences (SWLS), when realized, will dramatically improve our ability to conduct bioinformatics analyses using the vast and growing stores of web-accessible resources. This ability will be achieved through the widespread acceptance and application of standards for naming, representing, describing and accessing biological information. The W3C-led Semantic Web initiative has established most, if not all, of the standards and technologies needed to achieve a unified, global SWLS. Unfortunately, the bioinformatics community has, thus far, appeared reluctant to fully adopt them. Rather, we are seeing what could be described as `semantic creep'-timid, piecemeal and ad hoc adoption of parts of standards by groups that should be stridently taking a leadership role for the community. We suggest that, at this point, the primary hindrances to the creation of the SWLS may be social rather than technological in nature, and that, like the original Web, the establishment of the SWLS will depend primarily on the will and participation of its consumers.
    BibTeX:
    @article{Good2006,
      author = {Good, Benjamin M. and Wilkinson, Mark D.},
      title = {The Life Sciences Semantic Web is full of creeps!},
      journal = {BRIEFINGS IN BIOINFORMATICS},
      year = {2006},
      volume = {7},
      number = {3},
      pages = {275-286},
      doi = {{10.1093/bib/bbl025}}
    }
    
    Governatori, G. Representing business contracts in ruleML {2005} INTERNATIONAL JOURNAL OF COOPERATIVE INFORMATION SYSTEMS
    Vol. {14}({2-3}), pp. {181-216} 
    article  
    Abstract: This paper presents an approach for the specification and implementation of translating contracts from a human-oriented form into an executable representation for monitoring. This will be done in the setting of RuleML. The task of monitoring contract execution and performance requires a logical account of deontic and defeasible aspects of legal language; currently such aspects are not covered by RuleML; accordingly, we show how to extend it to cover such notions. From its logical form, the contract will thus be transformed into a machine readable rule notation and eventually implemented as executable semantics via any mark-up language depending on the client's preference, for contract monitoring purposes.
    BibTeX:
    @article{Governatori2005,
      author = {Governatori, G},
      title = {Representing business contracts in ruleML},
      journal = {INTERNATIONAL JOURNAL OF COOPERATIVE INFORMATION SYSTEMS},
      year = {2005},
      volume = {14},
      number = {2-3},
      pages = {181-216}
    }
    
    Grau, B.C., Horrocks, I., Motik, B., Parsia, B., Patel-Schneider, P. & Sattler, U. OWL 2: The next step for OWL {2008} JOURNAL OF WEB SEMANTICS
    Vol. {6}({4}), pp. {309-322} 
    article DOI  
    Abstract: Since achieving W3C recommendation status in 2004, the Web Ontology Language (OWL) has been successfully applied to many problems in computer science. Practical experience with OWL has been quite positive in general; however, it has also revealed room for improvement in several areas. We systematically analyze the identified shortcomings of OWL, such as expressivity issues, problems with its syntaxes, and deficiencies in the definition of OWL species. Furthermore, we present an overview of OWL 2-an extension to and revision of OWL that is currently being developed within the W3C OWL Working Group. Many aspects of OWL have been thoroughly reengineered in OWL 2, thus producing a robust platform for future development of the language. (C) 2008 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Grau2008,
      author = {Grau, Bernardo Cuenca and Horrocks, Ian and Motik, Boris and Parsia, Bijan and Patel-Schneider, Peter and Sattler, Ulrike},
      title = {OWL 2: The next step for OWL},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2008},
      volume = {6},
      number = {4},
      pages = {309-322},
      doi = {{10.1016/j.websem.2008.05.001}}
    }
    
    Grau, B.C., Parsia, B. & Sirin, E. Combining OWL ontologies using epsilon-Connections {2006} JOURNAL OF WEB SEMANTICS
    Vol. {4}({1}), pp. {40-59} 
    article DOI  
    Abstract: The standardization of the Web Ontology Language (OWL) leaves (at least) two crucial issues for Web-based ontologies unsatisfactorily resolved, namely how to represent and reason with multiple distinct, but linked ontologies, and how to enable effective knowledge reuse and sharing on the Semantic Web. In this paper, we present a solution for these fundamental problems based on E-Connections. We aim to use E-Connections to provide modelers with suitable means for developing Web ontologies in a modular way and to provide an alternative to the owl:imports construct. With such motivation, we present in this paper a syntactic and semantic extension of the Web Ontology Language that covers E-Connections of OWL-DL ontologies. We show how to use such an extension as an alternative to the owl:imports construct in many modeling situations. We investigate different combinations of the logics SHIN(D), SHON(D) and SHIO(D) for which it is possible to design and implement reasoning algorithms, well-suited for optimization. Finally, we provide support for E-Connections in both an ontology editor, SWOOP, and an OWL reasoner, Pellet. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Grau2006,
      author = {Grau, Bernardo Cuenca and Parsia, Bijan and Sirin, Evren},
      title = {Combining OWL ontologies using epsilon-Connections},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2006},
      volume = {4},
      number = {1},
      pages = {40-59},
      doi = {{10.1016/j.websem.2005.09.010}}
    }
    
    Greaves, M. Semantic Web 2.0 {2007} IEEE INTELLIGENT SYSTEMS
    Vol. {22}({2}), pp. {94-96} 
    article  
    BibTeX:
    @article{Greaves2007,
      author = {Greaves, Mark},
      title = {Semantic Web 2.0},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2007},
      volume = {22},
      number = {2},
      pages = {94-96}
    }
    
    Green, J., Hastings, A., Arzberger, P., Ayala, F., Cottingham, K., Cuddington, K., Davis, F., Dunne, J., Fortin, M., Gerber, L. & Neubert, M. Complexity in ecology and conservation: Mathematical, statistical, and computational challenges {2005} BIOSCIENCE
    Vol. {55}({6}), pp. {501-510} 
    article  
    Abstract: Creative approaches at the interface of ecology, statistics, mathematics, informatics, and computational science are essential for improving our understanding of complex ecological systems. For example, new information technologies, including powerful computers, spatially embedded sensor networks, and Semantic Web tools, are emerging as potentially revolutionary tools for studying ecological phenomena. These technologies can play an important role in developing and testing detailed models that describe real-world systems at multiple scales. Key challenges include choosing the appropriate level of model complexity necessary for understanding biological patterns across space and time, and applying this understanding to solve problems in conservation biology and resource management. Meeting these challenges requires novel statistical and mathematical techniques for distinguishing among alternative ecological theories and hypotheses. Examples from a wide array of research areas in population biology and community ecology highlight the importance of fostering synergistic ties across disciplines for current and future research and application.
    BibTeX:
    @article{Green2005,
      author = {Green, JL and Hastings, A and Arzberger, P and Ayala, FJ and Cottingham, KL and Cuddington, K and Davis, F and Dunne, JA and Fortin, MJ and Gerber, L and Neubert, M},
      title = {Complexity in ecology and conservation: Mathematical, statistical, and computational challenges},
      journal = {BIOSCIENCE},
      year = {2005},
      volume = {55},
      number = {6},
      pages = {501-510}
    }
    
    Gruber, T. Collective knowledge systems: Where the Social Web meets the Semantic Web {2008} JOURNAL OF WEB SEMANTICS
    Vol. {6}({1}), pp. {4-13} 
    article DOI  
    Abstract: What can happen if we combine the best ideas from the Social Web and Semantic Web? The Social Web is an ecosystem of participation, where value is created by the aggregation of many individual user contributions. The Semantic Web is an ecosystem of data, where value is created by the integration of structured data from many sources. What applications can best synthesize the strengths of these two approaches, to create a new level of value that is both rich with human participation and powered by well-structured information? This paper proposes a class of applications called collective knowledge systems, which unlock the ``collective intelligence'' of the Social Web with knowledge representation and reasoning techniques of the Semantic Web. (c) 2007 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Gruber2008,
      author = {Gruber, Tom},
      title = {Collective knowledge systems: Where the Social Web meets the Semantic Web},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2008},
      volume = {6},
      number = {1},
      pages = {4-13},
      doi = {{10.1016/j.websem.2007.11.011}}
    }
    
    Gruber, T. Ontology of folksonomy: A mash-up of apples and oranges {2007} INTERNATIONAL JOURNAL ON SEMANTIC WEB AND INFORMATION SYSTEMS
    Vol. {3}({1}), pp. {1-11} 
    article  
    Abstract: Ontologies are enabling technology for the Semantic Web. They are a means for people to state what they mean by the terms used in data that they might generate, share, or consume. Folksonomies are an emergent phenomenon of the social Web. They arise from data about how people associate terms with content that they generate, share, or consume. Recently the two ideas have been put into opposition, as if they were right and left poles of a political spectrum. This is a false dichotomy; they are more like apples and oranges. In fact, as the Semantic Web matures and the social Web grows, there is increasing value in applying Semantic Web technologies to the data of the social Web. This article is an attempt to clarify the distinct roles for ontologies and folksonomies, and to preview some new work that applies the two ideas together - an ontology of folksonomy.
    BibTeX:
    @article{Gruber2007,
      author = {Gruber, Thomas},
      title = {Ontology of folksonomy: A mash-up of apples and oranges},
      journal = {INTERNATIONAL JOURNAL ON SEMANTIC WEB AND INFORMATION SYSTEMS},
      year = {2007},
      volume = {3},
      number = {1},
      pages = {1-11}
    }
    
    Gu, T., Pung, H. & Zhang, D. A service-oriented middleware for building context-aware services {2005} JOURNAL OF NETWORK AND COMPUTER APPLICATIONS
    Vol. {28}({1}), pp. {1-18} 
    article DOI  
    Abstract: The advancement of wireless networks and mobile computing calls for more advanced applications and services that are context-aware and adapt to their changing contexts. Today, building context-aware services is a complex task due to the lack of an adequate infrastructure support in pervasive computing environments. In this article, we propose a Service-Oriented Context-Aware Middleware (SOCAM) architecture for the building and rapid prototyping of context-aware services. It provides efficient support for acquiring, discovering, interpreting and accessing various contexts to build context-aware services. We also propose a formal context model based on ontology using the Web Ontology Language to address issues including semantic representation, context reasoning, context classification and dependency. We describe our context model and the middleware architecture, and present a performance study for our prototype in a smart home environment. (C) 2004 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Gu2005,
      author = {Gu, T and Pung, HK and Zhang, DQ},
      title = {A service-oriented middleware for building context-aware services},
      journal = {JOURNAL OF NETWORK AND COMPUTER APPLICATIONS},
      year = {2005},
      volume = {28},
      number = {1},
      pages = {1-18},
      doi = {{10.1016/j.jnca.2004.06.002}}
    }
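
    A toy illustration of deriving a higher-level context from low-level sensor contexts, in the spirit of the context reasoning described above (the vocabulary is invented here; SOCAM itself represents contexts in OWL and uses a rule engine):

        # Low-level contexts, e.g. as delivered by context providers.
        context = {
            ("Alice", "locatedIn"): "Bedroom",
            ("Bedroom", "lightLevel"): "low",
            ("Alice", "posture"): "lying",
        }

        # Rule: locatedIn(u, r) & lightLevel(r, low) & posture(u, lying)
        #       -> status(u, SLEEPING).
        def derive_status(user, ctx):
            room = ctx.get((user, "locatedIn"))
            if room and ctx.get((room, "lightLevel")) == "low" \
                    and ctx.get((user, "posture")) == "lying":
                return "SLEEPING"
            return "UNKNOWN"

        print(derive_status("Alice", context))   # SLEEPING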
    
    Guha, R. & McCool, R. TAP: a Semantic web platform {2003} COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING
    Vol. {42}({5}), pp. {557-577} 
    article DOI  
    Abstract: Activities such as Web Services and the Semantic Web are working to create a distributed web of machine understandable data. We address three important problems that need to be solved to realize this vision. We discuss the problem of scalable and deployable query systems and present a simple, but general query interface called GetData. We address the issue of creating global agreements on vocabularies and introduce the concept of Semantic Negotiation, a process by which two programs can bootstrap from small shared vocabularies to larger shared vocabularies. We discuss the problem of programs determining which data sources to trust and present a solution that uses a Web of Trust between Semantic Web registries. We briefly describe TAP, a system that implements the GetData interface, Semantic Negotiation and Web of Trust enabled registries. We then introduce an application of the Semantic Web called Semantic Search and describe an implemented system which uses the data from the Semantic Web to improve traditional search results. (C) 2003 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Guha2003,
      author = {Guha, R and McCool, R},
      title = {TAP: a Semantic web platform},
      journal = {COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING},
      year = {2003},
      volume = {42},
      number = {5},
      pages = {557-577},
      doi = {{10.1016/S1389-1286(03)00225-1}}
    }
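
    The GetData interface is deliberately minimal: given a resource and a property, return the values, with no joins or variables. A local sketch over an in-memory store conveys the contract (the data is illustrative; TAP exposed GetData over HTTP):

        triples = {
            ("YoYoMa", "type"): ["Cellist"],
            ("YoYoMa", "name"): ["Yo-Yo Ma"],
            ("YoYoMa", "album"): ["Obrigado Brazil", "Silk Road Journeys"],
        }

        def get_data(resource, prop):
            # GetData-style lookup: just the property values of one resource.
            return triples.get((resource, prop), [])

        print(get_data("YoYoMa", "album"))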
    
    Guha, R., McCool, R. & Fikes, R. Contexts for the semantic web {2004}
    Vol. {3298}SEMANTIC WEB - ISWC 2004, PROCEEDINGS, pp. {32-46} 
    inproceedings  
    Abstract: A central theme of the Semantic Web is that programs should be able to easily aggregate data from different sources. Unfortunately, even if two sites provide their data using the same data model and vocabulary, subtle differences in their use of terms and in the assumptions they make pose challenges for aggregation. Experiences with the TAP project reveal some of the phenomena that pose obstacles to a simplistic model of aggregation. Similar experiences have been reported by Al projects such as Cyc, which has led to the development and use of various context mechanisms. In this paper we report on some of the problems with aggregating independently published data and propose a context mechanism to handle some of these problems. We briefly survey the context mechanisms developed in Al and contrast them with the requirements of a context mechanism for the Semantic Web. Finally, we present a context mechanism for the Semantic Web that is adequate to handle the aggregation tasks, yet simple from both computational and model theoretic perspectives.
    BibTeX:
    @inproceedings{Guha2004,
      author = {Guha, R and McCool, R and Fikes, R},
      title = {Contexts for the semantic web},
      booktitle = {SEMANTIC WEB - ISWC 2004, PROCEEDINGS},
      year = {2004},
      volume = {3298},
      pages = {32-46},
      note = {3rd International Semantic Web Conference, Hiroshima, JAPAN, NOV 07-11, 2004}
    }
    
    Guo, Y., Pan, Z. & Heflin, J. LUBM: A benchmark for OWL knowledge base systems {2005} JOURNAL OF WEB SEMANTICS
    Vol. {3}({2-3}), pp. {158-182} 
    article DOI  
    Abstract: We describe our method for benchmarking Semantic Web knowledge base systems with respect to use in large OWL applications. We present the Lehigh University Benchmark (LUBM) as an example of how to design such benchmarks. The LUBM features an ontology for the university domain, synthetic OWL data scalable to an arbitrary size, 14 extensional queries representing a variety of properties, and several performance metrics. The LUBM can be used to evaluate systems with different reasoning capabilities and storage mechanisms. We demonstrate this with an evaluation of two memory-based systems and two systems with persistent storage. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Guo2005,
      author = {Guo, YB and Pan, ZX and Heflin, J},
      title = {LUBM: A benchmark for OWL knowledge base systems},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2005},
      volume = {3},
      number = {2-3},
      pages = {158-182},
      note = {3rd International Semantic Web Conference, Hiroshima, JAPAN, NOV 07-11, 2004},
      doi = {{10.1016/j.websem.2005.06.005}}
    }
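
    A benchmark run of this kind might look as follows with rdflib (the file name and query are illustrative, written in the spirit of LUBM's extensional queries rather than copied from the benchmark):

        import time
        from rdflib import Graph

        g = Graph()
        g.parse("university_data.owl", format="xml")   # hypothetical LUBM-style data

        query = """
        PREFIX ub: <http://www.lehigh.edu/~zhp2/2004/0401/univ-bench.owl#>
        SELECT ?s ?c WHERE { ?s a ub:GraduateStudent . ?s ub:takesCourse ?c . }
        """

        t0 = time.perf_counter()
        rows = list(g.query(query))
        print(len(rows), "answers in", round(time.perf_counter() - t0, 3), "s")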
    
    Ha, Y., Sohn, J., Cho, Y. & Yoon, H. Towards a ubiquitous robotic companion: Design and implementation of ubiquitous robotic service framework {2005} ETRI JOURNAL
    Vol. {27}({6}), pp. {666-676} 
    article  
    Abstract: In recent years, motivated by the emergence of ubiquitous computing technologies, a new class of networked robots, ubiquitous robots, has been introduced. The Ubiquitous Robotic Companion (URC) is our conceptual vision of ubiquitous service robots that provide users with the services they need, anytime and anywhere in ubiquitous computing environments. To realize the vision of URC, one of the essential requirements for robotic systems is to support ubiquity of services: that is, a robot service must always be available even when the service environment changes. Specifically, robotic systems need to be automatically interoperable with sensors and devices in the current service environment, rather than statically preprogrammed for them. In this paper, the design and implementation of a semantic-based ubiquitous robotic space (SemanticURS) is presented. SemanticURS enables automated integration of networked robots into ubiquitous computing environments by exploiting Semantic Web Services and AI-based planning technologies.
    BibTeX:
    @article{Ha2005,
      author = {Ha, YG and Sohn, JC and Cho, YJ and Yoon, H},
      title = {Towards a ubiquitous robotic companion: Design and implementation of ubiquitous robotic service framework},
      journal = {ETRI JOURNAL},
      year = {2005},
      volume = {27},
      number = {6},
      pages = {666-676}
    }
    
    Halevy, A., Ives, Z., Madhavan, J., Mork, P., Suciu, D. & Tatarinov, I. The Piazza Peer Data Management System {2004} IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
    Vol. {16}({7}), pp. {787-798} 
    article  
    Abstract: Intuitively, data management and data integration tools should be well-suited for exchanging information in a semantically meaningful way. Unfortunately, they suffer from two significant problems: They typically require a comprehensive schema design before they can be used to store or share information and they are difficult to extend because schema evolution is heavyweight and may break backward compatibility. As a result, many small-scale data sharing tasks are more easily facilitated by non-database-oriented tools that have little support for semantics. The goal of the peer data management system (PDMS) is to address this need: We propose the use of a decentralized, easily extensible data management architecture in which any user can contribute new data, schema information, or even mappings between other peers' schemas. PDMSs represent a natural step beyond data integration systems, replacing their single logical schema with an interlinked collection of semantic mappings between peers' individual schemas. This paper describes several aspects of the Piazza PDMS, including the schema mediation formalism, query answering and optimization algorithms, and the relevance of PDMSs to the Semantic Web.
    BibTeX:
    @article{Halevy2004,
      author = {Halevy, AY and Ives, ZG and Madhavan, J and Mork, P and Suciu, D and Tatarinov, I},
      title = {The Piazza Peer Data Management System},
      journal = {IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING},
      year = {2004},
      volume = {16},
      number = {7},
      pages = {787-798}
    }
    
    Halevy, A., Ives, Z., Suciu, D. & Tatarinov, I. Schema mediation for large-scale semantic data sharing {2005} VLDB JOURNAL
    Vol. {14}({1}), pp. {68-83} 
    article DOI  
    Abstract: Intuitively, data management and data integration tools should be well suited for exchanging information in a semantically meaningful way. Unfortunately, they suffer from two significant problems: they typically require a common and comprehensive schema design before they can be used to store or share information, and they are difficult to extend because schema evolution is heavyweight and may break backward compatibility. As a result, many large-scale data sharing tasks are more easily facilitated by non-database-oriented tools that have little support for semantics. The goal of the peer data management system (PDMS) is to address this need: we propose the use of a decentralized, easily extensible data management architecture in which any user can contribute new data, schema information, or even mappings between other peers' schemas. PDMSs represent a natural step beyond data integration systems, replacing their single logical schema with an interlinked collection of semantic mappings between peers' individual schemas. This paper considers the problem of schema mediation in a PDMS. Our first contribution is a flexible language for mediating between peer schemas that extends known data integration formalisms to our more complex architecture. We precisely characterize the complexity of query answering for our language. Next, we describe a reformulation algorithm for our language that generalizes both global-as-view and local-as-view query answering algorithms. Then we describe several methods for optimizing the reformulation algorithm and an initial set of experiments studying its performance. Finally, we define and consider several global problems in managing semantic mappings in a PDMS.
    BibTeX:
    @article{Halevy2005,
      author = {Halevy, AY and Ives, ZG and Suciu, D and Tatarinov, I},
      title = {Schema mediation for large-scale semantic data sharing},
      journal = {VLDB JOURNAL},
      year = {2005},
      volume = {14},
      number = {1},
      pages = {68-83},
      doi = {{10.1007/s00778-003-0116-y}}
    }
    
    Halkidi, M., Nguyen, B., Varlamis, I. & Vazirgiannis, M. THESUS: Organizing Web document collections based on link semantics {2003} VLDB JOURNAL
    Vol. {12}({4}), pp. {320-332} 
    article DOI  
    Abstract: The requirements for effective search and management of the WWW are stronger than ever. Currently, Web documents are classified based on their content, not taking into account the fact that these documents are connected to each other by links. We claim that a page's classification is enriched by the detection of its incoming links' semantics. This would enable effective browsing and enhance the validity of search results in the WWW context. Another aspect that is underaddressed and strictly related to the tasks of browsing and searching is the similarity of documents at the semantic level. The above observations lead us to the adoption of a hierarchy of concepts (ontology) and a thesaurus to exploit links and provide a better characterization of Web documents. The enhancement of document characterization makes operations such as clustering and labeling very interesting. To this end, we devised a system called THESUS. The system deals with an initial set of Web documents, extracts keywords from all pages' incoming links, and converts them to semantics by mapping them to a domain's ontology. Then a clustering algorithm is applied to discover groups of Web documents. The effectiveness of the clustering process is based on the use of a novel similarity measure between documents characterized by sets of terms. Web documents are organized into thematic subsets based on their semantics. The subsets are then labeled, thereby enabling easier management (browsing, searching, querying) of the Web. In this article, we detail the process of this system and give an experimental analysis of its results.
    BibTeX:
    @article{Halkidi2003,
      author = {Halkidi, M and Nguyen, B and Varlamis, I and Vazirgiannis, M},
      title = {THESUS: Organizing Web document collections based on link semantics},
      journal = {VLDB JOURNAL},
      year = {2003},
      volume = {12},
      number = {4},
      pages = {320-332},
      doi = {{10.1007/s00778-003-0100-6}}
    }
    
    Han, J. & Chang, K. Data mining for Web intelligence {2002} COMPUTER
    Vol. {35}({11}), pp. {64+} 
    article  
    Abstract: Data mining tools hold the key to uncovering and cataloging the authoritative links, traversal patterns, and semantic structures that will bring intelligence and direction to our Web interactions.
    BibTeX:
    @article{Han2002,
      author = {Han, JW and Chang, KCC},
      title = {Data mining for Web intelligence},
      journal = {COMPUTER},
      year = {2002},
      volume = {35},
      number = {11},
      pages = {64+}
    }
    
    Handschuh, S. & Staab, S. CREAM: CREAting Metadata for the Semantic Web {2003} COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING
    Vol. {42}({5}), pp. {579-598} 
    article DOI  
    Abstract: Richly interlinked, machine-understandable data constitute the basis for the Semantic Web. We provide a framework, CREAM, that allows for creation of metadata. While the annotation mode of CREAM allows creation of metadata for existing Web pages, the authoring mode lets authors create metadata-almost for free-while putting together the content of a page. As a feature of our framework, CREAM allows creating relational metadata, i.e., metadata that instantiate interrelated definitions of classes in a domain ontology rather than a comparatively rigid template-like schema such as Dublin Core. We discuss some of the requirements one has to meet when developing such an ontology-based framework, e.g., the integration of a metadata crawler, inference services, document management and a meta-ontology, and describe its implementation, viz. OntoMat, a component-based, ontology-driven Web-page authoring and annotation tool. (C) 2003 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Handschuh2003,
      author = {Handschuh, S and Staab, S},
      title = {CREAM: CREAting Metadata for the Semantic Web},
      journal = {COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING},
      year = {2003},
      volume = {42},
      number = {5},
      pages = {579-598},
      doi = {{10.1016/S1389-1286(03)00226-3}}
    }
    
    Handschuh, S., Staab, S. & Ciravegna, F. S-CREAM - Semi-automatic CREAtion of metadata {2002}
    Vol. {2473}KNOWLEDGE ENGINEERING AND KNOWLEDGE MANAGEMENT, PROCEEDINGS - ONTOLOGIES AND THE SEMANTIC WEB , pp. {358-372} 
    inproceedings  
    Abstract: Richly interlinked, machine-understandable data constitute the basis for the Semantic Web. We provide a framework, S-CREAM, that allows for creation of metadata, and is trainable for a specific domain. Annotating web documents is one of the major techniques for creating metadata on the web. The implementation of S-CREAM, OntoMat-Annotizer, now supports the semi-automatic annotation of web pages. This semi-automatic annotation is based on the information extraction component Amilcare. With the help of Amilcare, OntoMat-Annotizer extracts knowledge structures from web pages through the use of knowledge extraction rules. These rules are the result of a learning cycle based on already-annotated pages.
    BibTeX:
    @inproceedings{Handschuh2002,
      author = {Handschuh, S and Staab, S and Ciravegna, F},
      title = {S-CREAM - Semi-automatic CREAtion of metadata},
      booktitle = {KNOWLEDGE ENGINEERING AND KNOWLEDGE MANAGEMENT, PROCEEDINGS - ONTOLOGIES AND THE SEMANTIC WEB },
      year = {2002},
      volume = {2473},
      pages = {358-372},
      note = {13th International Conference on Knowledge Engineering and Knowledge Management (EKAW 2002), Siguenza, SPAIN, OCT 01-04, 2002}
    }
    
    Hardoon, D., Szedmak, S. & Shawe-Taylor, J. Canonical correlation analysis: An overview with application to learning methods {2004} NEURAL COMPUTATION
    Vol. {16}({12}), pp. {2639-2664} 
    article  
    Abstract: We present a general method using kernel canonical correlation analysis to learn a semantic representation of web images and their associated text. The semantic space provides a common representation and enables a comparison between the text and images. In the experiments, we look at two approaches of retrieving images based on only their content from a text query. We compare orthogonalization approaches against a standard cross-representation retrieval technique known as the generalized vector space model.
    BibTeX:
    @article{Hardoon2004,
      author = {Hardoon, DR and Szedmak, S and Shawe-Taylor, J},
      title = {Canonical correlation analysis: An overview with application to learning methods},
      journal = {NEURAL COMPUTATION},
      year = {2004},
      volume = {16},
      number = {12},
      pages = {2639-2664}
    }
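
    The core computation reduces to a generalized eigenproblem. A compact linear (non-kernel) numpy version, with a small regularizer added for numerical stability, might look like this:

        import numpy as np

        def cca(X, Y, reg=1e-3):
            # Linear CCA: directions wx, wy maximising corr(X wx, Y wy).
            X = X - X.mean(0)
            Y = Y - Y.mean(0)
            n = len(X)
            Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
            Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
            Cxy = X.T @ Y / n
            # Solve Cxx^-1 Cxy Cyy^-1 Cyx wx = rho^2 wx.
            M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
            vals, vecs = np.linalg.eig(M)
            order = np.argsort(vals.real)[::-1]
            rho = np.sqrt(np.clip(vals.real[order], 0, 1))
            Wx = vecs.real[:, order]
            Wy = np.linalg.solve(Cyy, Cxy.T) @ Wx   # matched Y-side directions
            return rho, Wx, Wy

        rng = np.random.default_rng(0)
        Z = rng.normal(size=(500, 1))                  # shared latent signal
        X = np.hstack([Z, rng.normal(size=(500, 2))])  # "image" features
        Y = np.hstack([-Z, rng.normal(size=(500, 2))]) # "text" features
        rho, _, _ = cca(X, Y)
        print(rho[0])                                  # close to 1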
    
    Hatala, M. & Wakkary, R. Ontology-based user modeling in an augmented audio reality system for museums {2005} USER MODELING AND USER-ADAPTED INTERACTION
    Vol. {15}({3-4}), pp. {339-380} 
    article DOI  
    Abstract: Ubiquitous computing is a challenging area that allows us to further our understanding and techniques of context-aware and adaptive systems. Among the challenges is the general problem of capturing the larger context in interaction from the perspective of user modeling and human-computer interaction (HCI). The imperative to address this issue is great considering the emergence of ubiquitous and mobile computing environments. This paper provides an account of our addressing the specific problem of supporting functionality as well as the experience design issues related to museum visits through user modeling in combination with an audio augmented reality and tangible user interface system. This paper details our deployment and evaluation of ec(h)o - an augmented audio reality system for museums. We explore the possibility of supporting a context-aware adaptive system by linking environment, interaction objects and users at an abstract semantic level instead of at the content level. From the user modeling perspective ec(h)o is a knowledge-based recommender system. In this paper we present our findings from user testing and how our approach works well with an audio and tangible user interface within a ubiquitous computing system. We conclude by showing where further research is needed.
    BibTeX:
    @article{Hatala2005,
      author = {Hatala, M and Wakkary, R},
      title = {Ontology-based user modeling in an augmented audio reality system for museums},
      journal = {USER MODELING AND USER-ADAPTED INTERACTION},
      year = {2005},
      volume = {15},
      number = {3-4},
      pages = {339-380},
      doi = {{10.1007/s11257-005-2304-5}}
    }
    
    He, B. & Chang, K. Automatic complex schema matching across Web query interfaces: A correlation mining approach {2006} ACM TRANSACTIONS ON DATABASE SYSTEMS
    Vol. {31}({1}), pp. {346-395} 
    article  
    Abstract: To enable information integration, schema matching is a critical step for discovering semantic correspondences of attributes across heterogeneous sources. While complex matchings are common, because of their far more complex search space, most existing techniques focus on simple 1:1 matchings. To tackle this challenge, this article takes a conceptually novel approach by viewing schema matching as correlation mining, for our task of matching Web query interfaces to integrate the myriad databases on the Internet. On this ``deep Web,'' query interfaces generally form complex matchings between attribute groups (e.g., author corresponds to first name, last name in the Books domain). We observe that co-occurrence patterns across query interfaces often reveal such complex semantic relationships: grouping attributes (e.g., first name, last name) tend to be co-present in query interfaces and thus positively correlated. In contrast, synonym attributes are negatively correlated because they rarely co-occur. This insight enables us to discover complex matchings by a correlation mining approach. In particular, we develop the DCM framework, which consists of data preprocessing, dual mining of positive and negative correlations, and finally matching construction. We evaluate the DCM framework on manually extracted interfaces and the results show good accuracy for discovering complex matchings. Further, to automate the entire matching process, we incorporate automatic techniques for interface extraction. Executing the DCM framework on automatically extracted interfaces, we find that the inevitable errors in automatic interface extraction may significantly affect the matching result. To make the DCM framework robust against such ``noisy'' schemas, we integrate it with a novel ``ensemble'' approach, which creates an ensemble of DCM matchers by randomizing the schema data into many trials and aggregating their ranked results by majority voting. As a principled basis, we provide analytic justification of the robustness of the ensemble approach. Empirically, our experiments show that the ``ensemblization'' indeed significantly boosts the matching accuracy over automatically extracted and thus noisy schema data. By employing the DCM framework with the ensemble approach, we thus complete an automatic process of matching Web query interfaces.
    BibTeX:
    @article{He2006,
      author = {He, B and Chang, KCC},
      title = {Automatic complex schema matching across Web query interfaces: A correlation mining approach},
      journal = {ACM TRANSACTIONS ON DATABASE SYSTEMS},
      year = {2006},
      volume = {31},
      number = {1},
      pages = {346-395}
    }
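
    The co-occurrence intuition is easy to reproduce with a toy measure (hypothetical Books-domain interfaces; the paper defines its own positive and negative correlation measures rather than the simple score used here):

        from itertools import combinations

        # Toy "Books" query interfaces, each a set of attribute names.
        interfaces = [
            {"title", "author", "isbn"},
            {"title", "first name", "last name", "subject"},
            {"first name", "last name", "isbn"},
            {"title", "author", "subject"},
        ]

        def correlation(a, b):
            # Co-presence score in [-1, 1]: positive for grouping attributes,
            # negative for synonyms (a stand-in for DCM's measures).
            both = sum(a in s and b in s for s in interfaces)
            either = sum(a in s or b in s for s in interfaces)
            return 2 * both / either - 1 if either else 0.0

        attrs = sorted(set().union(*interfaces))
        for a, b in combinations(attrs, 2):
            c = correlation(a, b)
            if abs(c) == 1:   # e.g. first/last name: +1; author vs first name: -1
                print(f"{a!r} ~ {b!r}: {c:+.0f}")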
    
    Heckmann, D., Schwartz, T., Brandherm, B., Schmitz, M. & von Wilamowitz-Moellendorff, M. Gumo - The General User Model Ontology {2005}
    Vol. {3538}USER MODELING 2005, PROCEEDINGS, pp. {428-432} 
    inproceedings  
    Abstract: We introduce the general user model ontology GUMO for the uniform interpretation of distributed user models in intelligent semantic web enriched environments. We discuss design decisions, show the relation to the user model markup language USERML and present the integration of ubiquitous applications with the u2m.org user model service.
    BibTeX:
    @inproceedings{Heckmann2005,
      author = {Heckmann, D and Schwartz, T and Brandherm, B and Schmitz, M and von Wilamowitz-Moellendorff, M},
      title = {Gumo - The General User Model Ontology},
      booktitle = {USER MODELING 2005, PROCEEDINGS},
      year = {2005},
      volume = {3538},
      pages = {428-432},
      note = {10th International Conference on User Modeling, Edinburgh, SCOTLAND, JUL 24-29, 2005}
    }
    
    Heflin, J. & Hendler, J. A portrait of the Semantic Web in action {2001} IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS
    Vol. {16}({2}), pp. {54-59} 
    article  
    BibTeX:
    @article{Heflin2001,
      author = {Heflin, J and Hendler, J},
      title = {A portrait of the Semantic Web in action},
      journal = {IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS},
      year = {2001},
      volume = {16},
      number = {2},
      pages = {54-59}
    }
    
    Hendler, J. Communication - Science and the Semantic Web {2003} SCIENCE
    Vol. {299}({5606}), pp. {520-521} 
    article  
    BibTeX:
    @article{Hendler2003,
      author = {Hendler, J},
      title = {Communication - Science and the Semantic Web},
      journal = {SCIENCE},
      year = {2003},
      volume = {299},
      number = {5606},
      pages = {520-521}
    }
    
    Hendler, J. Agents and the Semantic Web {2001} IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS
    Vol. {16}({2}), pp. {30-37} 
    article  
    BibTeX:
    @article{Hendler2001,
      author = {Hendler, J},
      title = {Agents and the Semantic Web},
      journal = {IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS},
      year = {2001},
      volume = {16},
      number = {2},
      pages = {30-37}
    }
    
    Henrickson, L. & McKelvey, B. Foundations of ``new'' social science: Institutional legitimacy from philosophy, complexity science, postmodernism, and agent-based modeling {2002} PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
    Vol. {99}({Suppl. 3}), pp. {7288-7295} 
    article DOI  
    Abstract: Since the death of positivism in the 1970s, philosophers have turned their attention to scientific realism, evolutionary epistemology, and the Semantic Conception of Theories. Building on these trends, Campbellian Realism allows social scientists to accept real-world phenomena as criterion variables against which theories may be tested without denying the reality of individual interpretation and social construction. The Semantic Conception reduces the importance of axioms, but reaffirms the role of models and experiments. Philosophers now see models as ``autonomous agents'' that exert independent influence on the development of a science, in addition to theory and data. The inappropriate molding effects of math models on social behavior modeling are noted. Complexity science offers a ``new'' normal science epistemology focusing on order creation by self-organizing heterogeneous agents and agent-based models. The more responsible core of postmodernism builds on the idea that agents operate in a constantly changing web of interconnections among other agents. The connectionist agent-based models of complexity science draw on the same conception of social ontology as do postmodernists. These recent developments combine to provide foundations for a ``new'' social science centered on formal modeling not requiring the mathematical assumptions of agent homogeneity and equilibrium conditions. They give this ``new'' social science legitimacy in scientific circles that current social science approaches lack.
    BibTeX:
    @article{Henrickson2002,
      author = {Henrickson, L and McKelvey, B},
      title = {Foundations of ``new'' social science: Institutional legitimacy from philosophy, complexity science, postmodernism, and agent-based modeling},
      journal = {PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA},
      year = {2002},
      volume = {99},
      number = {Suppl. 3},
      pages = {7288-7295},
      note = {Sackler Colloquium on Adaptive Agents, Intelligence, and Emergent Human Organization - Capturing Complexity through Agent-Based Modeling, IRVINE, CALIFORNIA, OCT 04-06, 2001},
      doi = {{10.1073/pnas.092079799}}
    }
    
    Henze, N., Dolog, P. & Nejdl, W. Reasoning and ontologies for personalized e-Learning in the semantic web {2004} EDUCATIONAL TECHNOLOGY & SOCIETY
    Vol. {7}({4}), pp. {82-97} 
    article  
    Abstract: The challenge of the semantic web is the provision of distributed information with well-defined meaning, understandable for different parties. In particular, applications should be able to provide individually optimized access to information by taking the individual needs and requirements of the users into account. In this paper we propose a framework for personalized e-Learning in the semantic web and show how the semantic web resource description formats can be utilized for automatic generation of hypertext structures from distributed metadata. Ontologies and metadata for three types of resources (domain, user, and observation) are investigated. We investigate a logic-based approach to educational hypermedia using TRIPLE, a rule and query language for the semantic web.
    BibTeX:
    @article{Henze2004,
      author = {Henze, N and Dolog, P and Nejdl, W},
      title = {Reasoning and ontologies for personalized e-Learning in the semantic web},
      journal = {EDUCATIONAL TECHNOLOGY & SOCIETY},
      year = {2004},
      volume = {7},
      number = {4},
      pages = {82-97}
    }
    
    Hepp, M. Products and services ontologies: A methodology for deriving OWL ontologies from industrial categorization standards {2006} INTERNATIONAL JOURNAL ON SEMANTIC WEB AND INFORMATION SYSTEMS
    Vol. {2}({1}), pp. {72-99} 
    article  
    Abstract: Using Semantic Web technologies for e-business tasks, like product search or content integration, requires ontologies for products and services. Their manual creation is problematic due to (1) the high specificity, resulting in a large number of concepts, and (2) the need for timely ontology maintenance due to product innovation; and due to cost, since building such ontologies from scratch requires significant resources. At the same time, industrial categorization standards, like UNSPSC, eCl@ss, eOTD, or the RosettaNet Technical Dictionary, reflect some degree of consensus and contain a wealth of concept definitions plus a hierarchy. They can thus be valuable input for creating domain ontologies. However, the transformation of existing standards, originally developed for some purpose other than ontology engineering, into useful ontologies is not as straightforward as it appears. In this paper (1) we argue that deriving products and services ontologies from industrial taxonomies is more feasible than manual ontology engineering; (2) show that the representation of the original semantics of the input standard, especially the taxonomic relationship, is an important modeling decision that determines the usefulness of the resulting ontology; (3) illustrate the problem by analyzing existing ontologies derived from UNSPSC and eCl@ss; (4) present a methodology for creating ontologies in OWL based on the reuse of existing standards; and (5) demonstrate this approach by transforming eCl@ss 5.1 into a practically useful products and services ontology.
    BibTeX:
    @article{Hepp2006,
      author = {Hepp, Martin},
      title = {Products and services ontologies: A methodology for deriving OWL ontologies from industrial categorization standards},
      journal = {INTERNATIONAL JOURNAL ON SEMANTIC WEB AND INFORMATION SYSTEMS},
      year = {2006},
      volume = {2},
      number = {1},
      pages = {72-99}
    }
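
    Illustrative sketch: the taxonomic-relationship decision that the abstract identifies as central can be made concrete in a few lines. The Python/rdflib sketch below (category codes, labels, and the namespace are invented, and this is not Hepp's actual tooling) derives OWL classes from taxonomy rows and switches between a strict subclass reading and a weaker broader-term reading of the standard's hierarchy.

      # A minimal sketch, not Hepp's method: deriving ontology classes from
      # rows of an industrial categorization standard. Codes/labels invented.
      from rdflib import Graph, Namespace, Literal, RDF, RDFS
      from rdflib.namespace import OWL, SKOS

      EX = Namespace("http://example.org/psontology#")
      g = Graph()

      rows = [  # (code, label, parent_code) as extracted from a standard
          ("19-01", "Computer", None),
          ("19-01-01", "Notebook", "19-01"),
      ]

      STRICT_TAXONOMY = False  # the key modeling decision the paper discusses

      for code, label, parent in rows:
          cls = EX["C" + code.replace("-", "_")]
          g.add((cls, RDF.type, OWL.Class))
          g.add((cls, RDFS.label, Literal(label)))
          if parent:
              parent_cls = EX["C" + parent.replace("-", "_")]
              if STRICT_TAXONOMY:
                  # Read the hierarchy as formal subsumption.
                  g.add((cls, RDFS.subClassOf, parent_cls))
              else:
                  # Weaker reading: hierarchy as navigation, not subsumption.
                  g.add((cls, SKOS.broader, parent_cls))

      print(g.serialize(format="turtle"))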
    
    Hepp, M. Semantic Web and Semantic Web services - Father and son or indivisible twins? {2006} IEEE INTERNET COMPUTING
    Vol. {10}({2}), pp. {85-88} 
    article  
    BibTeX:
    @article{Hepp2006a,
      author = {Hepp, M},
      title = {Semantic Web and Semantic Web services - Father and son or indivisible twins?},
      journal = {IEEE INTERNET COMPUTING},
      year = {2006},
      volume = {10},
      number = {2},
      pages = {85-88}
    }
    
    Hepp, M., Leymann, F., Domingue, J., Wahler, A. & Fensel, D. Semantic Business Process Management: A vision towards using semantic Web services for business process management {2005} ICEBE 2005: IEEE INTERNATIONAL CONFERENCE ON E-BUSINESS ENGINEERING, PROCEEDINGS, pp. {535-540}  inproceedings  
    Abstract: Business Process Management (BPM) is the approach to manage the execution of IT-supported business operations from a business expert's view rather than from a technical perspective. However, the degree of mechanization in BPM is still very limited, creating inertia in the necessary evolution and dynamics of business processes, and BPM does not provide a truly unified view on the process space of an organization. We trace back the problem of mechanization of BPM to an ontological one, i.e. the lack of machine-accessible semantics, and argue that the modeling constructs of Semantic Web services frameworks, especially WSMO [13, 14], are a natural fit for creating such a representation. As a consequence, we propose to combine SWS and BPM and create one consolidated technology, which we call Semantic Business Process Management (SBPM).
    BibTeX:
    @inproceedings{Hepp2005,
      author = {Hepp, M and Leymann, F and Domingue, J and Wahler, A and Fensel, D},
      title = {Semantic Business Process Management: A vision towards using semantic Web services for business process management},
      booktitle = {ICEBE 2005: IEEE INTERNATIONAL CONFERENCE ON E-BUSINESS ENGINEERING, PROCEEDINGS},
      year = {2005},
      pages = {535-540},
      note = {IEEE International Conference on e-Business Engineering, Beijing, PEOPLES R CHINA, OCT 18-21, 2005}
    }
    
    Herskovic, J.R., Tanaka, L.Y., Hersh, W. & Bernstam, E.V. A day in the life of PubMed: Analysis of a typical day's query log {2007} JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION
    Vol. {14}({2}), pp. {212-220} 
    article DOI  
    Abstract: Objective: To characterize PubMed usage over a typical day and compare it to previous studies of user behavior on Web search engines. Design: We performed a lexical and semantic analysis of 2,689,166 queries issued on PubMed over 24 consecutive hours on a typical day. Measurements: We measured the number of queries, number of distinct users, queries per user, terms per query, common terms, Boolean operator use, common phrases, result set size, MeSH categories, used semantic measurements to group queries into sessions, and studied the addition and removal of terms from consecutive queries to gauge search strategies. Results: The size of the result sets from a sample of queries showed a bimodal distribution, with peaks at approximately 3 and 100 results, suggesting that a large group of queries was tightly focused and another was broad. Like Web search engine sessions, most PubMed sessions consisted of a single query. However, PubMed queries contained more terms. Conclusion: PubMed's usage profile should be considered when educating users, building user interfaces, and developing future biomedical information retrieval systems.
    BibTeX:
    @article{Herskovic2007,
      author = {Herskovic, Jorge R. and Tanaka, Len Y. and Hersh, William and Bernstam, Elmer V.},
      title = {A day in the life of PubMed: Analysis of a typical day's query log},
      journal = {JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION},
      year = {2007},
      volume = {14},
      number = {2},
      pages = {212-220},
      doi = {{10.1197/jamia.M2191}}
    }
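
    Illustrative sketch: two of the measurements reported above, terms per query and session grouping, reduce to a single pass over the log. A minimal Python sketch, assuming an invented (user, timestamp, query) log format and a simple 30-minute inactivity cutoff in place of the paper's semantic session grouping.

      from collections import defaultdict

      SESSION_GAP = 30 * 60  # assumed inactivity cutoff, in seconds

      log = [  # invented toy log: (user_id, timestamp_seconds, query)
          ("u1", 0, "breast cancer treatment"),
          ("u1", 120, "breast cancer tamoxifen"),
          ("u2", 60, "p53"),
      ]

      terms = [len(q.split()) for _, _, q in log]
      print("mean terms per query:", sum(terms) / len(terms))

      by_user = defaultdict(list)
      for user, ts, query in sorted(log):
          by_user[user].append((ts, query))

      sessions = []
      for user, events in by_user.items():
          session = [events[0]]
          for prev, cur in zip(events, events[1:]):
              if cur[0] - prev[0] > SESSION_GAP:
                  sessions.append(session)
                  session = []
              session.append(cur)
          sessions.append(session)

      single = sum(1 for s in sessions if len(s) == 1)
      print(single, "of", len(sessions), "sessions contain a single query")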
    
    Heymans, S., Van Nieuwenborgh, D. & Vermeir, D. Nonmonotonic ontological and rule-based reasoning with extended conceptual logic programs {2005}
    Vol. {3532} SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS, pp. {392-407}
    inproceedings  
    Abstract: We present extended conceptual logic programs (ECLPs), for which reasoning is decidable and, moreover, can be reduced to finite answer set programming. ECLPs are useful to reason with both ontological and rule-based knowledge, which is illustrated by simulating reasoning in an expressive description logic (DL) equipped with DL-safe rules. Furthermore, ECLPs are more expressive in the sense that they enable nonmonotonic reasoning, a desirable feature in locally closed subareas of the Semantic Web.
    BibTeX:
    @inproceedings{Heymans2005,
      author = {Heymans, S and Van Nieuwenborgh, D and Vermeir, D},
      title = {Nonmonotonic ontological and rule-based reasoning with extended conceptual logic programs},
      booktitle = {SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS},
      year = {2005},
      volume = {3532},
      pages = {392-407},
      note = {2nd European Semantic Web Conference, Iraklion, GREECE, MAY 29-JUN 01, 2005}
    }
    
    Hildebrand, M., van Ossenbruggen, J. & Hardman, L. /facet: A browser for heterogeneous Semantic Web repositories {2006}
    Vol. {4273} SEMANTIC WEB - ISWC 2006, PROCEEDINGS, pp. {272-285}
    inproceedings  
    Abstract: Facet browsing has become popular as a user friendly interface to data repositories. The Semantic Web raises new challenges due to the heterogeneous character of the data. First, users should be able to select and navigate through facets of resources of any type and to make selections based on properties of other, semantically related, types. Second, where traditional facet browsers require manual configuration of the software, a semantic web browser should be able to handle any RDFS dataset without any additional configuration. Third, hierarchical data on the semantic web is not designed for browsing: complementary techniques, such as search, should be available to overcome this problem. We address these requirements in our browser, /facet. Additionally, the interface allows the inclusion of facet-specific display options that go beyond the hierarchical navigation that characterizes current facet browsing. /facet is a tool for Semantic Web developers as an instant interface to their complete dataset. The automatic facet configuration generated by the system can then be further refined to configure it as a tool for end users. The implementation is based on current Web standards and open source software. The new functionality is motivated using a scenario from the cultural heritage domain.
    BibTeX:
    @inproceedings{Hildebrand2006,
      author = {Hildebrand, Michiel and van Ossenbruggen, Jacco and Hardman, Lynda},
      title = {/facet: A browser for heterogeneous Semantic Web repositories},
      booktitle = {SEMANTIC WEB - ISWC 2006, PROCEEDINGS},
      year = {2006},
      volume = {4273},
      pages = {272-285},
      note = {5th International Semantic Web Conference (ISWC 2006), Athens, GA, NOV 05-09, 2006}
    }
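
    Illustrative sketch: the zero-configuration requirement suggests deriving facets from the data itself. In the Python/rdflib sketch below (triples and namespace invented; this is not the /facet code), every predicate becomes a candidate facet and its distinct values are counted for drill-down.

      from collections import Counter, defaultdict
      from rdflib import Graph, Namespace, Literal, RDF

      EX = Namespace("http://example.org/")
      g = Graph()
      g.add((EX.item1, RDF.type, EX.Painting))
      g.add((EX.item1, EX.creator, Literal("Vermeer")))
      g.add((EX.item2, RDF.type, EX.Sculpture))
      g.add((EX.item2, EX.creator, Literal("Rodin")))

      facets = defaultdict(Counter)  # predicate -> value -> count
      for s, p, o in g:
          facets[p][o] += 1

      # Rank candidate facets by how many distinct values they offer.
      for pred, values in sorted(facets.items(), key=lambda kv: -len(kv[1])):
          print(pred, "->", len(values), "distinct values")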
    
    Horrocks, I. DAML+OIL: A reason-able Web ontology language {2002}
    Vol. {2287} ADVANCES IN DATABASE TECHNOLOGY - EDBT 2002, pp. {2-13}
    inproceedings  
    Abstract: Ontologies are set to play a key role in the ``Semantic Web'', extending syntactic interoperability to semantic interoperability by providing a source of shared and precisely defined terms. DAML+OIL is an ontology language specifically designed for use on the Web; it exploits existing Web standards (XML and RDF), adding the familiar ontological primitives of object-oriented and frame-based systems, and the formal rigor of a very expressive description logic. The logical basis of the language means that reasoning services can be provided, both to support ontology design and to make DAML+OIL described Web resources more accessible to automated processes.
    BibTeX:
    @inproceedings{Horrocks2002a,
      author = {Horrocks, I},
      title = {DAML+OIL: A reason-able Web ontology language},
      booktitle = {ADVANCES IN DATABASE TECHNOLOGY - EDBT 2002},
      year = {2002},
      volume = {2287},
      pages = {2-13},
      note = {8th International Conference on Extending Database Technology, PRAGUE, CZECH REPUBLIC, MAR 25-27, 2002}
    }
    
    Horrocks, I. Ontologies and the Semantic Web {2008} COMMUNICATIONS OF THE ACM
    Vol. {51}({12}), pp. {58-67} 
    article DOI  
    BibTeX:
    @article{Horrocks2008,
      author = {Horrocks, Ian},
      title = {Ontologies and the Semantic Web},
      journal = {COMMUNICATIONS OF THE ACM},
      year = {2008},
      volume = {51},
      number = {12},
      pages = {58-67},
      doi = {{10.1145/1409360.1409377}}
    }
    
    Horrocks, I., Parsia, B., Patel-Schneider, P. & Hendler, J. Semantic Web architecture: Stack or two towers? {2005}
    Vol. {3703} PRINCIPLES AND PRACTICE OF SEMANTIC WEB REASONING, PROCEEDINGS, pp. {37-41}
    inproceedings  
    Abstract: We discuss language architecture for the Semantic Web, and in particular different proposals for extending this architecture with a rules component. We argue that an architecture that maximises compatibility with existing languages, in particular RDF and OWL, will benefit the development of the Semantic Web, and still allow for forms of closed world assumption and negation as failure.
    BibTeX:
    @inproceedings{Horrocks2005a,
      author = {Horrocks, I and Parsia, B and Patel-Schneider, P and Hendler, J},
      title = {Semantic Web architecture: Stack or two towers?},
      booktitle = {PRINCIPLES AND PRACTICE OF SEMANTIC WEB REASONING, PROCEEDINGS},
      year = {2005},
      volume = {3703},
      pages = {37-41},
      note = {3rd International Workshop on Principles and Practice of Semantic Web Reasoning, Dagstuhl Castle, GERMANY, SEP 11-16, 2005}
    }
    
    Horrocks, I., Patel-Schneider, P., Bechhofer, S. & Tsarkov, D. OWL rules: A proposal and prototype implementation {2005} JOURNAL OF WEB SEMANTICS
    Vol. {3}({1}), pp. {23-40} 
    article DOI  
    Abstract: Although the OWL Web Ontology Language adds considerable expressive power to the Semantic Web, it does have expressive limitations, particularly with respect to what can be said about properties. We present the Semantic Web Rule Language (SWRL), a Horn clause rules extension to OWL that overcomes many of these limitations. SWRL extends OWL in a syntactically and semantically coherent manner: the basic syntax for SWRL rules is an extension of the abstract syntax for OWL DL and OWL Lite; SWRL rules are given formal meaning via an extension of the OWL DL model-theoretic semantics; SWRL rules are given an XML syntax based on the OWL XML presentation syntax; and a mapping from SWRL rules to RDF graphs is given based on the OWL RDF/XML exchange syntax. We discuss the expressive power of SWRL, showing that the ontology consistency problem is undecidable, provide several examples of SWRL usage, and discuss a prototype implementation of reasoning support for SWRL.
    BibTeX:
    @article{Horrocks2005,
      author = {Horrocks, I and Patel-Schneider, PF and Bechhofer, S and Tsarkov, D},
      title = {OWL rules: A proposal and prototype implementation},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2005},
      volume = {3},
      number = {1},
      pages = {23-40},
      doi = {{10.1016/j.websem.2005.05.003}}
    }
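
    Illustrative sketch: the kind of Horn rule SWRL adds on top of OWL can be shown with the uncle example from the SWRL proposal. The plain-Python forward-chaining step below is a toy illustration of rule application, not a DL reasoner.

      # Rule: hasParent(x, y) AND hasBrother(y, z) -> hasUncle(x, z)
      facts = {
          ("hasParent", "alice", "bob"),
          ("hasBrother", "bob", "carl"),
      }

      def apply_uncle_rule(facts):
          derived = set()
          for p1, x, y in facts:
              if p1 != "hasParent":
                  continue
              for p2, y2, z in facts:
                  if p2 == "hasBrother" and y2 == y:
                      derived.add(("hasUncle", x, z))
          return derived

      print(apply_uncle_rule(facts) - facts)  # {('hasUncle', 'alice', 'carl')}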
    
    Horrocks, I. & Tessaris, S. Querying the Semantic Web: A formal approach {2002}
    Vol. {2342} SEMANTIC WEB - ISWC 2002, pp. {177-191}
    inproceedings  
    Abstract: Ontologies are set to play a key role in the Semantic Web, and several web ontology languages, like DAML+OIL, are based on DLs. These not only provide a clear semantics to the ontology languages, but also allow them to exploit DL systems in order to provide correct and complete reasoning services. Recent results have shown that DL systems can be enriched by a conjunctive query language, providing a solution to one of the weaknesses of traditional DL systems. These results can be transferred to the Semantic Web community, where the need for expressive query languages is witnessed by different proposals (like DQL for DAML+OIL). In this paper we present a logical framework for conjunctive query answering in DAML+OIL. Moreover, we provide a sound and complete algorithm based on recent Description Logic research.
    BibTeX:
    @inproceedings{Horrocks2002,
      author = {Horrocks, I and Tessaris, S},
      title = {Querying the Semantic Web: A formal approach},
      booktitle = {SEMANTIC WEB - ISWC 2002},
      year = {2002},
      volume = {2342},
      pages = {177-191},
      note = {1st International Semantic Web Conference (ISWC), SARDINIA, ITALY, JUN 09-12, 2002}
    }
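
    Illustrative sketch: a conjunctive query is a set of joined triple atoms with distinguished (answer) and non-distinguished variables. A SPARQL basic graph pattern, today's closest widely deployed analogue, has the same shape; the rdflib sketch below uses invented data and is not the paper's DAML+OIL algorithm.

      from rdflib import Graph, Namespace, RDF

      EX = Namespace("http://example.org/")
      g = Graph()
      g.add((EX.alice, RDF.type, EX.Student))
      g.add((EX.alice, EX.attends, EX.course42))
      g.add((EX.course42, EX.taughtBy, EX.bob))

      # Q(x, t) <- Student(x), attends(x, c), taughtBy(c, t); c is non-distinguished.
      q = """
      PREFIX ex: <http://example.org/>
      SELECT ?x ?t
      WHERE { ?x a ex:Student . ?x ex:attends ?c . ?c ex:taughtBy ?t . }
      """
      for row in g.query(q):
          print(row.x, row.t)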
    
    Horvath, T. & Vojtas, P. Ordinal classification with monotonicity constraints {2006}
    Vol. {4065} ADVANCES IN DATA MINING - APPLICATIONS IN MEDICINE, WEB MINING, MARKETING, IMAGE AND SIGNAL MINING, pp. {217-225}
    inproceedings  
    Abstract: Classification methods commonly assume unordered class values. In many practical applications - for example grading - there is a natural ordering between class values. Furthermore, some attribute values of classified objects can be ordered, too. The standard approach in this case is to convert the ordered values into a numeric quantity and apply a regression learner to the transformed data. This approach can be used only in the case of a linear ordering. The proposed method for such a classification lies on the boundary between ordinal classification trees, classification trees with monotonicity constraints and multi-relational classification trees. The advantage of the proposed method is that it is able to handle non-linear ordering on the class and attribute values. For better understanding, we use a toy example from the semantic web environment - prediction of rules for the user's evaluation of hotels.
    BibTeX:
    @inproceedings{Horvath2006,
      author = {Horvath, Tomas and Vojtas, Peter},
      title = {Ordinal classification with monotonicity constraints},
      booktitle = {ADVANCES IN DATA MINING - APPLICATIONS IN MEDICINE, WEB MINING, MARKETING, IMAGE AND SIGNAL MINING},
      year = {2006},
      volume = {4065},
      pages = {217-225},
      note = {6th Industrial Conference on Data Mining (ICDM 2006), Leipzig, GERMANY, JUL 14-15, 2006}
    }
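
    Illustrative sketch: the ``standard approach'' the abstract contrasts with, encoding a linearly ordered class as integers, fitting a regressor, and rounding predictions back to grades, fits in a few lines of NumPy (the feature values and labels below are invented).

      import numpy as np

      grades = ["poor", "fair", "good", "excellent"]  # a linear ordering
      to_num = {g: i for i, g in enumerate(grades)}

      X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])  # one invented feature
      y = np.array([to_num[g] for g in
                    ["poor", "fair", "fair", "good", "excellent"]])

      A = np.hstack([X, np.ones_like(X)])        # add an intercept column
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)

      pred = np.clip(np.rint(A @ coef), 0, len(grades) - 1).astype(int)
      print([grades[c] for c in pred])           # map numbers back to grades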
    
    Hotho, A., Maedche, A., Staab, S. & Studer, R. SEAL-II - The soft spot between richly structured and unstructured knowledge {2001} JOURNAL OF UNIVERSAL COMPUTER SCIENCE
    Vol. {7}({7}), pp. {566-590} 
    article  
    Abstract: Recently, the idea of semantic portals on the Web or on the intranet has gained popularity. Their key concern is to allow a community of users to present and share knowledge in a particular (set of) domain(s) via semantic methods. Thus, semantic portals aim at creating high-quality access - in contrast to methods like information retrieval or document clustering that do not exploit any semantic background knowledge at all. However, by way of this construction semantic portals may easily suffer from a typical knowledge management problem. Their initial value is low, because only little richly structured knowledge is available. Hence the motivation of their potential users to extend the knowledge pool is small, too. We here present SEAL-II, a methodology for semantic portals that extends its previous version by providing a range of ontology-based means for hitting the soft spot between unstructured knowledge, which virtually comes for free, but which is of little use, and richly structured knowledge, which is expensive to gain, but of tremendous possible value. Thus, we give the portal builder tools and techniques in an overall framework to start the knowledge process at a semantic portal. SEAL-II takes advantage of the ontology in order to initiate the portal with knowledge, which is more usable than unstructured knowledge, but cheaper than richly structured knowledge.
    BibTeX:
    @article{Hotho2001,
      author = {Hotho, A and Maedche, A and Staab, S and Studer, R},
      title = {SEAL-II - The soft spot between richly structured and unstructured knowledge},
      journal = {JOURNAL OF UNIVERSAL COMPUTER SCIENCE},
      year = {2001},
      volume = {7},
      number = {7},
      pages = {566-590},
      note = {International Conference on Knowledge Management (I-KNOW 01), GRAZ, AUSTRIA, JAN 01, 2001}
    }
    
    Houben, G., Barna, P., Frasincar, F. & Vdovjak, R. Hera: Development of semantic Web information systems {2003}
    Vol. {2722} WEB ENGINEERING, PROCEEDINGS, pp. {529-538}
    inproceedings  
    Abstract: As a consequence of the success of the Web, methodologies for information system development need to consider systems that use the Web paradigm. These Web Information Systems (WIS) use Web technologies to retrieve information from the Web and to deliver information in a Web presentation to the users. Hera is a model-driven methodology supporting WIS design, focusing on the processes of integration, data retrieval, and presentation generation. Integration and data retrieval gather from Web sources the data that composes the result of a user query. Presentation generation produces the Web or hypermedia presentation format for the query result, such that the presentation and specifically its navigation suit the user's browser. We show how in Hera all these processes lead to data transformations based on RDF(S) models. Proving the value of RDF(S) for WIS design, we pave the way for the development of Semantic Web Information Systems.
    BibTeX:
    @inproceedings{Houben2003,
      author = {Houben, GJ and Barna, P and Frasincar, F and Vdovjak, R},
      title = {Hera: Development of semantic Web information systems},
      booktitle = {WEB ENGINEERING, PROCEEDINGS},
      year = {2003},
      volume = {2722},
      pages = {529-538},
      note = {3rd International Conference on Web Engineering (ICWE 2003), OVIEDO, SPAIN, JUL 14-18, 2003}
    }
    
    Hu, W., Wu, O., Chen, Z., Fu, Z. & Maybank, S. Recognition of pornographic web pages by classifying texts and images {2007} IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
    Vol. {29}({6}), pp. {1019-1034} 
    article DOI  
    Abstract: With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.
    BibTeX:
    @article{Hu2007,
      author = {Hu, Weiming and Wu, Ou and Chen, Zhouyao and Fu, Zhouyu and Maybank, Steve},
      title = {Recognition of pornographic web pages by classifying texts and images},
      journal = {IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE},
      year = {2007},
      volume = {29},
      number = {6},
      pages = {1019-1034},
      doi = {{10.1109/TPAMI.2007.1133}}
    }
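
    Illustrative sketch: the discrete text classifier rests on the naive Bayes rule, with P(class | words) proportional to P(class) times the product of P(word | class). A generic Python sketch with Laplace smoothing; the training texts and labels are placeholders, not the paper's data or features.

      import math
      from collections import Counter

      train = [  # placeholder labeled texts
          ("benign", "conference schedule and travel info"),
          ("benign", "weather forecast for the weekend"),
          ("flagged", "explicit adult content warning"),
      ]

      prior = Counter(label for label, _ in train)
      words = {label: Counter() for label in prior}
      for label, text in train:
          words[label].update(text.split())
      vocab = {w for c in words.values() for w in c}

      def log_posterior(label, text):
          lp = math.log(prior[label] / len(train))
          total = sum(words[label].values())
          for w in text.split():  # Laplace (+1) smoothing per word
              lp += math.log((words[label][w] + 1) / (total + len(vocab)))
          return lp

      text = "adult content in the schedule"
      print(max(prior, key=lambda lbl: log_posterior(lbl, text)))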
    
    Huffaker, D. & Calvert, S. Gender, identity, and language use in teenage blogs {2005} JOURNAL OF COMPUTER-MEDIATED COMMUNICATION
    Vol. {10}({2}) 
    article  
    Abstract: This study examines issues of online identity and language use among male and female teenagers who created and maintained weblogs, personal journals made publicly accessible on the World Wide Web. Online identity and language use were examined in terms of the disclosure of personal information, sexual identity, emotive features, and semantic themes. Male and female teenagers presented themselves similarly in their blogs, often revealing personal information such as their real names, ages, and locations. Males more so than females used emoticons, employed an active and resolute style of language, and were more likely to present themselves as gay. The results suggest that teenagers stay closer to reality in their online expressions of self than has previously been suggested, and that these explorations involve issues, such as learning about their sexuality, that commonly occur during the adolescent years.
    BibTeX:
    @article{Huffaker2005,
      author = {Huffaker, DA and Calvert, SL},
      title = {Gender, identity, and language use in teenage blogs},
      journal = {JOURNAL OF COMPUTER-MEDIATED COMMUNICATION},
      year = {2005},
      volume = {10},
      number = {2}
    }
    
    Hughes, G., Mills, H., De Roure, D., Frey, J., Moreau, L., Schraefel, M., Smith, G. & Zaluska, E. The semantic smart laboratory: a system for supporting the chemical eScientist {2004} ORGANIC & BIOMOLECULAR CHEMISTRY
    Vol. {2}({22}), pp. {3284-3293} 
    article DOI  
    Abstract: One goal of eScience is to enable the end-to-end publication of experiments and results. In the CombeChem project we have developed an innovative human-centred system which captures the process of a chemistry experiment from plan to execution. The system comprises an electronic lab book replacement, which has been successfully trialled in a synthetic organic chemistry laboratory, and a flexible back-end storage system. Working closely with the users, we found that a light touch and a high degree of flexibility was required in the user interface. In this paper, we concentrate on the representation and storage of human-scale experiment metadata, introducing an ontology to describe the record of an experiment, and a storage system for the data from our lab book software. Just as the interfaces need to be flexible to cope with whatever a chemist wishes to record, so the back end solutions need to be similarly flexible to store any metadata that may be created. The storage system is based on Semantic Web technologies, such as RDF, and Web Services. It gives a much higher degree of flexibility to the type of metadata it can store, compared to the use of rigid relational databases.
    BibTeX:
    @article{Hughes2004,
      author = {Hughes, G and Mills, H and De Roure, D and Frey, JG and Moreau, L and Schraefel, MC and Smith, G and Zaluska, E},
      title = {The semantic smart laboratory: a system for supporting the chemical eScientist},
      journal = {ORGANIC & BIOMOLECULAR CHEMISTRY},
      year = {2004},
      volume = {2},
      number = {22},
      pages = {3284-3293},
      doi = {{10.1039/b410075a}}
    }
    
    Huhns, M. Agents as Web services {2002} IEEE INTERNET COMPUTING
    Vol. {6}({4}), pp. {93-95} 
    article  
    BibTeX:
    @article{Huhns2002,
      author = {Huhns, MN},
      title = {Agents as Web services},
      journal = {IEEE INTERNET COMPUTING},
      year = {2002},
      volume = {6},
      number = {4},
      pages = {93-95}
    }
    
    Hull, R. & Su, J. Tools for composite web services: A short overview {2005} SIGMOD RECORD
    Vol. {34}({2}), pp. {86-95} 
    article  
    Abstract: Web services technologies enable flexible and dynamic interoperation of autonomous software and information systems. A central challenge is the development of modeling techniques and tools for enabling the (semi-)automatic composition and analysis of these services, taking into account their semantic and behavioral properties. This paper presents an overview of the fundamental assumptions and concepts underlying current work on service composition, and provides a sampling of key results in the area. It also provides a brief tour of several composition models including semantic web services, the ``Roman'' model, and the Mealy/conversation model.
    BibTeX:
    @article{Hull2005,
      author = {Hull, R and Su, JW},
      title = {Tools for composite web services: A short overview},
      journal = {SIGMOD RECORD},
      year = {2005},
      volume = {34},
      number = {2},
      pages = {86-95}
    }
    
    Humphreys, B., McCray, A. & Cheh, M. Evaluating the coverage of controlled health data terminologies: Report on the results of the NLM/AHCPR large scale vocabulary test {1997} JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION
    Vol. {4}({6}), pp. {484-500} 
    article  
    Abstract: Objective: To determine the extent to which a combination of existing machine-readable health terminologies covers the concepts and terms needed for a comprehensive controlled vocabulary for health information systems by carrying out a distributed national experiment using the Internet and the UMLS Knowledge Sources, lexical programs, and server. Methods: Using a specially designed Web-based interface to the UMLS Knowledge Source Server, participants searched the more than 30 vocabularies in the 1996 UMLS Metathesaurus and three planned additions to determine if concepts for which they desired controlled terminology were present or absent. For each term submitted, the interface presented a candidate exact match or a set of potential approximate matches from which the participant selected the most closely related concept. The interface captured a profile of the terms submitted by the participant and, for each term searched, information about the concept (if any) selected by the participant. The term information was loaded into a database at NLM for review and analysis and was also available to be downloaded by the participant. A team of subject experts reviewed records to identify matches missed by participants and to correct any obvious errors in relationships. The editors of SNOMED International and the Read Codes were given a random sample of reviewed terms for which exact meaning matches were not found to identify exact matches that were missed or any valid combinations of concepts that were synonymous to input terms. The 1997 UMLS Metathesaurus was used in the semantic type and vocabulary source analysis because it included most of the three planned additions. Results: Sixty-three participants submitted a total of 41,127 terms, which represented 32,679 normalized strings. More than 80% of the terms submitted were wanted for parts of the patient record related to the patient's condition. Following review, 58% of all submitted terms had exact meaning matches in the controlled vocabularies in the test, 41% had related concepts, and 1% were not found. Of the 28% of the terms which were narrower in meaning than a concept in the controlled vocabularies, 86% shared lexical items with the broader concept, but had additional modification. The percentage of exact meaning matches varied by specialty from 45% to 71%. Twenty-nine different vocabularies contained meanings for some of the 23,837 terms (a maximum of 12,707 discrete concepts) with exact meaning matches. Based on preliminary data and analysis, individual vocabularies contained <1% to 63% of the terms and <1% to 54% of the concepts. Only SNOMED International and the Read Codes had more than 60% of the terms and more than 50% of the concepts. Conclusions: The combination of existing controlled vocabularies included in the test represents the meanings of the majority of the terminology needed to record patient conditions, providing substantially more exact matches than any individual vocabulary in the set. From a technical and organizational perspective, the test was successful and should serve as a useful model, both for distributed input to the enhancement of controlled vocabularies and for other kinds of collaborative informatics research.
    BibTeX:
    @article{Humphreys1997,
      author = {Humphreys, BL and McCray, AT and Cheh, ML},
      title = {Evaluating the coverage of controlled health data terminologies: Report on the results of the NLM/AHCPR large scale vocabulary test},
      journal = {JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION},
      year = {1997},
      volume = {4},
      number = {6},
      pages = {484-500}
    }
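
    Illustrative sketch: the exact-versus-approximate matching step described above amounts to normalization plus lookup. A Python sketch with an invented three-term mini-terminology (the UMLS itself is licensed and far larger); simple token overlap stands in for the UMLS lexical programs.

      def normalize(term):
          # lowercase, strip basic punctuation, and sort tokens
          return " ".join(sorted(
              term.lower().replace(",", " ").replace("-", " ").split()))

      terminology = ["myocardial infarction", "diabetes mellitus", "asthma"]
      index = {normalize(t): t for t in terminology}

      def lookup(term):
          key = normalize(term)
          if key in index:
              return ("exact", index[key])
          words = set(key.split())
          score, best = max((len(words & set(normalize(t).split())), t)
                            for t in terminology)
          return ("approximate", best) if score else ("not found", None)

      print(lookup("Infarction, Myocardial"))  # exact after normalization
      print(lookup("type 2 diabetes"))         # approximate match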
    
    Hunter, J. Enhancing the semantic interoperability of multimedia through a core ontology {2003} IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
    Vol. {13}({1}), pp. {49-58} 
    article DOI  
    Abstract: A core ontology is one of the key building blocks necessary to enable the scalable assimilation of information from diverse multimedia sources. A complete and extensible ontology that expresses the basic concepts that are common across a variety of domains and media types and that can provide the basis for specialization into domain-specific concepts and vocabularies, is essential for well-defined mappings between domain-specific knowledge representations (i.e., metadata vocabularies) and the subsequent building of a variety of services such as cross-domain searching, tracking, browsing, data mining and knowledge acquisition. As more and more communities develop metadata application profiles which combine terms from multiple vocabularies (e.g., Dublin Core, MPEG-7, MPEG-21, CIDOC/CRM, FGDC, IMS), a core ontology will provide a common understanding of the basic entities and relationships, which is essential for semantic interoperability and the development of additional services based on deductive inferencing. In this paper, we first propose such a core ontology (the ABC model) which was developed in response to a need to integrate information from multiple genres of multimedia content within digital libraries and archives. Although the MPEG-21 RDD was influenced by the ABC model and is based on a model extremely similar to ABC, we believe that it is important to define a separate and domain-independent top-level extensible ontology for scenarios in which either MPEG-21 is irrelevant or to enable the attachment of ontologies from communities external to MPEG, for example, the museum domain (CIDOC/CRM) or the biomedical domain (ON9.3). We evaluate the ABC model's ability to mediate and integrate between multimedia metadata vocabularies by illustrating how it can provide the foundation to facilitate semantic interoperability between MPEG-7, MPEG-21 and other domain-specific metadata vocabularies. By expressing the semantics of both MPEG-7 and MPEG-21 metadata terms in RDF Schema/DAML+OIL [and eventually the Web Ontology Language (OWL)] and attaching the MPEG-7 and MPEG-21 class and property hierarchies to the appropriate top-level classes and properties of the ABC model, we have defined a single distributed machine-understandable ontology. The resulting ontology provides semantic knowledge which is nonexistent within declarative XML schemas or XML-encoded metadata descriptions. Finally, in order to illustrate how such an ontology will contribute to the interoperability of data and services across the entire multimedia content delivery chain, we describe a number of valuable services which have been developed or could potentially be developed using the resulting merged ontologies.
    BibTeX:
    @article{Hunter2003,
      author = {Hunter, J},
      title = {Enhancing the semantic interoperability of multimedia through a core ontology},
      journal = {IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY},
      year = {2003},
      volume = {13},
      number = {1},
      pages = {49-58},
      doi = {{10.1109/TCSVT.2002.808088}}
    }
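
    Illustrative sketch: the interoperability mechanism, attaching each vocabulary's class hierarchy under a shared top-level class so that one query reaches them all, in miniature. The rdflib sketch below uses invented namespaces and class names standing in for ABC, MPEG-7, and CIDOC/CRM, and a one-hop subClassOf join stands in for full RDFS inference.

      from rdflib import Graph, Namespace, RDF, RDFS

      ABC = Namespace("http://example.org/abc#")    # stand-in for the core model
      MP7 = Namespace("http://example.org/mpeg7#")
      CRM = Namespace("http://example.org/crm#")

      g = Graph()
      g.add((MP7.Video, RDFS.subClassOf, ABC.Manifestation))
      g.add((CRM.ManMadeObject, RDFS.subClassOf, ABC.Manifestation))
      g.add((MP7.clip1, RDF.type, MP7.Video))
      g.add((CRM.vase7, RDF.type, CRM.ManMadeObject))

      q = """
      PREFIX abc: <http://example.org/abc#>
      PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
      SELECT ?x WHERE { ?cls rdfs:subClassOf abc:Manifestation . ?x a ?cls . }
      """
      for row in g.query(q):  # finds instances from both vocabularies
          print(row.x)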
    
    Hunter, J., Drennan, J. & Little, S. Realizing the hydrogen economy through semantic Web technologies {2004} IEEE INTELLIGENT SYSTEMS
    Vol. {19}({1}), pp. {40-47} 
    article  
    BibTeX:
    @article{Hunter2004,
      author = {Hunter, J and Drennan, J and Little, S},
      title = {Realizing the hydrogen economy through semantic Web technologies},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2004},
      volume = {19},
      number = {1},
      pages = {40-47}
    }
    
    Huynh, D., Mazzocchi, S. & Karger, D. Piggy Bank: Experience the Semantic Web inside your Web browser {2005}
    Vol. {3729} SEMANTIC WEB - ISWC 2005, PROCEEDINGS, pp. {413-430}
    inproceedings  
    Abstract: The Semantic Web Initiative envisions a Web wherein information is offered free of presentation, allowing more effective exchange and mixing across web sites and across web pages. But without substantial Semantic Web content, few tools will be written to consume it; without many such tools, there is little appeal to publish Semantic Web content. To break this chicken-and-egg problem, thus enabling more flexible information access, we have created a web browser extension called Piggy Bank that lets users make use of Semantic Web content within Web content as users browse the Web. Wherever Semantic Web content is not available, Piggy Bank can invoke screenscrapers to re-structure information within web pages into Semantic Web format. Through the use of Semantic Web technologies, Piggy Bank provides direct, immediate benefits to users in their use of the existing Web. Thus, the existence of even just a few Semantic Web-enabled sites or a few scrapers already benefits users. Piggy Bank thereby offers an easy, incremental upgrade path to users without requiring a wholesale adoption of the Semantic Web's vision. To further improve this Semantic Web experience, we have created Semantic Bank, a web server application that lets Piggy Bank users share the Semantic Web information they have collected, enabling collaborative efforts to build sophisticated Semantic Web information repositories through simple, everyday use of Piggy Bank.
    BibTeX:
    @inproceedings{Huynh2005,
      author = {Huynh, D and Mazzocchi, S and Karger, D},
      title = {Piggy Bank: Experience the Semantic Web inside your Web browser},
      booktitle = {SEMANTIC WEB - ISWC 2005, PROCEEDINGS},
      year = {2005},
      volume = {3729},
      pages = {413-430},
      note = {4th International Semantic Web Conference (ISWC 2005), Galway, IRELAND, NOV 06-10, 2005}
    }
    
    Hyvonen, E., Makela, E., Salminen, M., Valo, A., Viljanen, K., Saarela, S., Junnila, M. & Kettula, S. MUSEUM FINLAND - Finnish museums on the semantic web {2005} JOURNAL OF WEB SEMANTICS
    Vol. {3}({2-3}), pp. {224-241} 
    article DOI  
    Abstract: This article presents the semantic portal MuseumFinland for publishing heterogeneous museum collections on the Semantic Web. It is shown how museums with their semantically rich and interrelated collection content can create a large, consolidated semantic collection portal together on the web. By sharing a set of ontologies, it is possible to make collections semantically interoperable, and provide the museum visitors with intelligent content-based search and browsing services to the global collection base. The architecture underlying MuseumFinland separates generic search and browsing services from the underlying application dependent schemas and metadata by a layer of logical rules. As a result, the portal creation framework and software developed has been applied successfully to other domains as well. MuseumFinland got the Semantic Web Challenge Award (second prize) in 2004.
    BibTeX:
    @article{Hyvonen2005,
      author = {Hyvonen, E and Makela, E and Salminen, M and Valo, A and Viljanen, K and Saarela, S and Junnila, M and Kettula, S},
      title = {MUSEUM FINLAND - Finnish museums on the semantic web},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2005},
      volume = {3},
      number = {2-3},
      pages = {224-241},
      doi = {{10.1016/j.websem.2005.05.008}}
    }
    
    Iannone, L., Palmisano, I. & Fanizzi, N. An algorithm based on counterfactuals for concept learning in the Semantic Web {2007} APPLIED INTELLIGENCE
    Vol. {26}({2}), pp. {139-159} 
    article DOI  
    Abstract: In line with realizing the Semantic Web by means of mechanized practices, we tackle the problem of building ontologies, assisting the knowledge engineers' job by means of Machine Learning techniques. In particular, we investigate solutions for the induction of concept descriptions in a semi-automatic fashion. Specifically, we present an algorithm that is able to infer definitions in the ALC description logic (a sub-language of OWL-DL) from instances made available by domain experts. The effectiveness of the method with respect to past algorithms is also empirically evaluated with an experimentation in the document image understanding domain.
    BibTeX:
    @article{Iannone2007,
      author = {Iannone, Luigi and Palmisano, Ignazio and Fanizzi, Nicola},
      title = {An algorithm based on counterfactuals for concept learning in the Semantic Web},
      journal = {APPLIED INTELLIGENCE},
      year = {2007},
      volume = {26},
      number = {2},
      pages = {139-159},
      note = {IEA/AIE 2005 Conference, ITALY, 2005},
      doi = {{10.1007/s10489-006-0011-5}}
    }
    
    Ishida, T. Language grid: An infrastructure for intercultural collaboration {2006} International Symposium on Applications and the Internet, Proceedings, pp. {96-100}  inproceedings  
    Abstract: To increase the accessibility and usability of online language services, this paper proposes the language grid to create composite language services for various communities. The language grid is called ``horizontal'' when the grid connects the standard languages of nations, or ``vertical'' when the grid combines the language services generated by communities. Semantic Web service technologies are applied in a human-centered fashion, to create composite language services through the collaboration of users and agents. Three example scenarios are given to illustrate how the language grid will organize standard and community language services for intercultural collaboration activities.
    BibTeX:
    @inproceedings{Ishida2006,
      author = {Ishida, T},
      title = {Language grid: An infrastructure for intercultural collaboration},
      booktitle = {International Symposium on Applications and the Internet, Proceedings},
      year = {2006},
      pages = {96-100},
      note = {International Symposium on Applications and the Internet, Phoenix, AZ, JAN 23-27, 2006}
    }
    
    Jang, M. & Sohn, J. Bossam: An extended rule engine for OWL inferencing {2004}
    Vol. {3323} RULES AND RULE MARKUP LANGUAGES FOR THE SEMANTIC WEB, PROCEEDINGS, pp. {128-138}
    inproceedings  
    Abstract: In this paper, we describe our effort to build an inference engine for OWL reasoning based on the rule engine paradigm. Rule engines are very practical and effective for their representational simplicity and optimized performance, but their limited expressiveness and web unfriendliness restrict their usability for OWL reasoning. We enumerate and succinctly describe extended features implemented in our rule engine, Bossam, and show that these features are necessary to promote the effectiveness of any ordinary rule engine's OWL reasoning capability. URI referencing and URI-based procedural attachment enhance web-friendliness. OWL importing, support for classical negation and relieved range restrictedness help correctly capture the semantics of OWL. Remote binding enables collaborated reasoning among multiple Bossam engines, which enhances the engine's usability on the distributed semantic web environment. By applying our engine to the W3C's OWL test cases, we got a plausible 70% average success rate for the three OWL species. Our contribution with this paper is to suggest a set of extended features that can enhance the reasoning capabilities of ordinary rule engines on the semantic web.
    BibTeX:
    @inproceedings{Jang2004,
      author = {Jang, M and Sohn, JC},
      title = {Bossam: An extended rule engine for OWL inferencing},
      booktitle = {RULES AND RULE MARKUP LANGUAGES FOR THE SEMANTIC WEB, PROCEEDINGS},
      year = {2004},
      volume = {3323},
      pages = {128-138},
      note = {3rd International Workshop on Rules and Rule Markup Languages for the Semantic Web, Hiroshima, JAPAN, NOV 08, 2004}
    }
    
    Janowicz, K. Sim-DL: Towards a semantic similarity measurement theory for the description logic ALCNR in geographic information retrieval {2006}
    Vol. {4278} On the Move to Meaningful Internet Systems 2006: OTM 2006 Workshops, Pt 2, Proceedings, pp. {1681-1692}
    inproceedings  
    Abstract: Similarity measurement theories play an increasing role in GIScience and especially in information retrieval and integration. Existing feature and geometric models have proven useful in detecting close but not identical concepts and entities. However, until now none of these theories are able to handle the expressivity of description logics for various reasons and therefore are not applicable to the kind of ontologies usually developed for geographic information systems or the upcoming geospatial semantic web. To close the resulting gap between available similarity theories on the one side and existing ontologies on the other, this paper presents ongoing work to develop a context-aware similarity theory for concepts specified in expressive description logics such as ALCNR.
    BibTeX:
    @inproceedings{Janowicz2006,
      author = {Janowicz, Krzysztof},
      title = {Sim-DL: Towards a semantic similarity measurement theory for the description logic ALCNR in geographic information retrieval},
      booktitle = {On the Move to Meaningful Internet Systems 2006: OTM 2006 Workshops, Pt 2, Proceedings},
      year = {2006},
      volume = {4278},
      pages = {1681-1692},
      note = {On the Move Federated Workshops, Montpellier, FRANCE, OCT 29-NOV 03, 2006}
    }
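
    Illustrative sketch: the feature models mentioned above are typified by Tversky's ratio model, which similarity theories of this kind generalize to description logics. A minimal Python sketch with invented feature sets; alpha and beta weight the two asymmetric difference terms.

      def tversky(a, b, alpha=0.5, beta=0.5):
          # similarity of feature sets a and b under the ratio model
          common = len(a & b)
          return common / (common + alpha * len(a - b) + beta * len(b - a))

      canal = {"waterbody", "navigable", "artificial"}
      river = {"waterbody", "navigable", "natural", "flowing"}
      print(round(tversky(canal, river), 3))  # 0.571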
    
    Jardim-Goncalves, R., Figay, N. & Steiger-Garcao, A. Enabling interoperability of STEP Application Protocols at meta-data and knowledge level {2006} INTERNATIONAL JOURNAL OF TECHNOLOGY MANAGEMENT
    Vol. {36}({4}), pp. {402-421} 
    article  
    Abstract: Numerous proposals exist worldwide for the representation of data models and services for the main manufacturing activities. The ISO 10303 STEP has developed more than 40 standard Application Protocols for product data representation, and they reflect the consolidated expertise of major industrial worldwide specialists working together for more than 20 years, covering the principal product data management areas for the main industries. However, these standards are focused on product data representation. A framework to enable them to interoperate at meta-model and knowledge levels permits the reuse of this existing expertise, extending its capabilities in complementary application domains, like advanced modelling tools, knowledge management and the emergent semantic web technologies. This paper proposes a framework for the development, usage and extension of integrated data and knowledge models, using as a reference existent standard-based protocols. The work results from the research and development completed by the authors under the umbrella of international projects.
    BibTeX:
    @article{Jardim-Goncalves2006,
      author = {Jardim-Goncalves, Ricardo and Figay, Nicolas and Steiger-Garcao, Adolfo},
      title = {Enabling interoperability of STEP Application Protocols at meta-data and knowledge level},
      journal = {INTERNATIONAL JOURNAL OF TECHNOLOGY MANAGEMENT},
      year = {2006},
      volume = {36},
      number = {4},
      pages = {402-421}
    }
    
    Jiang, H. & Elmagarmid, A. WVTDB - A semantic content-based video database system on the World Wide Web {1998} IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
    Vol. {10}({6}), pp. {947-966} 
    article  
    Abstract: This paper describes the design and implementation of a web-based video database system (WVTDB) that demonstrates our research on video data modeling, semantic content-based video query, and video database system architecture. The video data model of WVTDB is based on multilevel video data abstractions and annotation layering, thus allowing dynamic and incremental video annotation and indexing, multiuser view sharing, and video data reuse. Users can query, retrieve, and browse video data based on their semantic content descriptions and temporal constraints on the video segments. WVTDB employs a modular system architecture that supports distributed video query processing and subquery caching. Several techniques, such as video wrappers and lazy delivery, are also proposed specifically to address the network bandwidth limitations for this kind of web-based system. We also address adaptivity, data access control, and user profile issues.
    BibTeX:
    @article{Jiang1998,
      author = {Jiang, HT and Elmagarmid, AK},
      title = {WVTDB - A semantic content-based video database system on the World Wide Web},
      journal = {IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING},
      year = {1998},
      volume = {10},
      number = {6},
      pages = {947-966}
    }
    
    Jiang, H. & Elmagarmid, A. Spatial and temporal content-based access to hypervideo databases {1998} VLDB JOURNAL
    Vol. {7}({4}), pp. {226-238} 
    article  
    Abstract: Providing content-based video query, retrieval and browsing is the most important goal of a video database management system (VDBMS). Video data is unique not only in terms of its spatial and temporal characteristics, but also in the semantic associations manifested by the entities present in the video. This paper introduces a novel video data model called the Logical Hypervideo Data Model. In addition to multilevel video abstractions, the model is capable of representing video entities that users are interested in (defined as hot objects) and their semantic associations with other logical video abstractions, including hot objects themselves. The semantic associations are modeled as video hyperlinks and video data with such property are called hypervideo. Video hyperlinks provide a flexible and effective way of browsing video data. Based on the proposed model, video queries can be specified with both temporal and spatial constraints, as well as with semantic descriptions of the video data. The characteristics of hot objects' spatial and temporal relations and efficient evaluation of them are also discussed. Some query examples are given to demonstrate the expressiveness of the video data model and query language. Finally, we describe a modular video database system architecture that our web-based prototype is based on.
    BibTeX:
    @article{Jiang1998a,
      author = {Jiang, HT and Elmagarmid, AK},
      title = {Spatial and temporal content-based access to hypervideo databases},
      journal = {VLDB JOURNAL},
      year = {1998},
      volume = {7},
      number = {4},
      pages = {226-238}
    }
    
    Jimeno, A., Jimenez-Ruiz, E., Lee, V., Gaudan, S., Berlanga, R. & Rebholz-Schuhmann, D. Assessment of disease named entity recognition on a corpus of annotated sentences {2008} BMC BIOINFORMATICS
    Vol. {9}({Suppl. 3}) 
    article DOI  
    Abstract: Background: In recent years, the recognition of semantic types from the biomedical scientific literature has been focused on named entities like protein and gene names (PGNs) and gene ontology terms (GO terms). Other semantic types like diseases have not received the same level of attention. Different solutions have been proposed to identify disease named entities in the scientific literature. While matching the terminology with language patterns suffers from low recall (e.g., Whatizit), other solutions make use of morpho-syntactic features to better cover the full scope of terminological variability (e.g., MetaMap). Currently, MetaMap, which is provided by the National Library of Medicine (NLM), is the state-of-the-art solution for the annotation of concepts from UMLS (Unified Medical Language System) in the literature. Nonetheless, its performance has not yet been assessed on an annotated corpus. In addition, little effort has been invested so far to generate an annotated dataset that links disease entities in text to disease entries in a database, thesaurus or ontology and that could serve as a gold standard to benchmark text mining solutions. Results: As part of our research work, we have taken a corpus that has been delivered in the past for the identification of associations of genes to diseases based on the UMLS Metathesaurus and we have reprocessed and re-annotated the corpus. We have gathered annotations for disease entities from two curators, analyzed their disagreement (0.51 in the kappa-statistic) and composed a single annotated corpus for public use. Thereafter, three solutions for disease named entity recognition including MetaMap have been applied to the corpus to automatically annotate it with UMLS Metathesaurus concepts. The resulting annotations have been benchmarked to compare their performance. Conclusions: The annotated corpus is publicly available at ftp://ftp.ebi.ac.uk/pub/software/textmining/corpora/diseases and can serve as a benchmark to other systems. In addition, we found that dictionary look-up already provides competitive results, indicating that the use of disease terminology is highly standardized throughout the terminologies and the literature. MetaMap generates precise results at the expense of insufficient recall, while our statistical method obtains better recall at a lower precision rate. Even better results in terms of precision are achieved by combining at least two of the three methods, but this approach again lowers recall. Altogether, our analysis gives a better understanding of the complexity of disease annotations in the literature. MetaMap and the dictionary based approach are available through the Whatizit web service infrastructure (Rebholz-Schuhmann D, Arregui M, Gaudan S, Kirsch H, Jimeno A: Text processing through Web services: Calling Whatizit.)
    BibTeX:
    @article{Jimeno2008,
      author = {Jimeno, Antonio and Jimenez-Ruiz, Ernesto and Lee, Vivian and Gaudan, Sylvain and Berlanga, Rafael and Rebholz-Schuhmann, Dietrich},
      title = {Assessment of disease named entity recognition on a corpus of annotated sentences},
      journal = {BMC BIOINFORMATICS},
      year = {2008},
      volume = {9},
      number = {Suppl. 3},
      note = {2nd International Symposium on Languages in Biology and Medicine, Singapore, SINGAPORE, DEC 06-07, 2007},
      doi = {{10.1186/1471-2105-9-S3-S3}}
    }
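
    Illustrative sketch: the competitive dictionary look-up baseline can be approximated by greedy longest match against a lexicon. A Python sketch with a placeholder three-entry lexicon (not a UMLS subset).

      lexicon = {"insulin-dependent diabetes mellitus", "diabetes mellitus", "asthma"}
      max_len = max(len(e.split()) for e in lexicon)

      def tag(sentence):
          tokens, hits, i = sentence.lower().split(), [], 0
          while i < len(tokens):
              for n in range(min(max_len, len(tokens) - i), 0, -1):
                  span = " ".join(tokens[i:i + n])
                  if span in lexicon:  # prefer the longest match
                      hits.append(span)
                      i += n
                      break
              else:
                  i += 1
          return hits

      print(tag("Patients with insulin-dependent diabetes mellitus and asthma"))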
    
    Jorgensen, C., Jaimes, A., Benitez, A. & Chang, S. A conceptual framework and empirical research for classifying visual descriptors {2001} JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY
    Vol. {52}({11}), pp. {938-947} 
    article  
    Abstract: This article presents exploratory research evaluating a conceptual structure for the description of visual content of images. The structure, which was developed from empirical research in several fields (e.g., Computer Science, Psychology, Information Studies, etc.), classifies visual attributes into a ``Pyramid'' containing four syntactic levels (type/technique, global distribution, local structure, composition), and six semantic levels (generic, specific, and abstract levels of both object and scene, respectively). Various experiments are presented, which address the Pyramid's ability to achieve several tasks: (1) classification of terms describing image attributes generated in a formal and an informal description task, (2) classification of terms that result from a structured approach to indexing, and (3) guidance in the indexing process. Several descriptions, generated by naive users and indexers, are used in experiments that include two image collections: a random Web sample, and a set of news images. To test descriptions generated in a structured setting, an Image Indexing Template (developed independently over several years of this project by one of the authors) was also used. The experiments performed suggest that the Pyramid is conceptually robust (i.e., can accommodate a full range of attributes), and that it can be used to organize visual content for retrieval, to guide the indexing process, and to classify descriptions obtained manually and automatically.
    BibTeX:
    @article{Jorgensen2001,
      author = {Jorgensen, C and Jaimes, A and Benitez, AB and Chang, SF},
      title = {A conceptual framework and empirical research for classifying visual descriptors},
      journal = {JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY},
      year = {2001},
      volume = {52},
      number = {11},
      pages = {938-947}
    }
    
    Josephs, K.A. Frontotemporal dementia and related disorders: Deciphering the enigma {2008} ANNALS OF NEUROLOGY
    Vol. {64}({1}), pp. {4-14} 
    article DOI  
    Abstract: In the past century, particularly the last decade, there has been enormous progress in our understanding of frontotemporal dementia, a non-Alzheimer's type dementia. Large clinicopathological series have been published that have clearly demonstrated an overlap between the clinical syndromes subsumed under the term frontotemporal dementia and the progressive supranuclear palsy syndrome, corticobasal syndrome, and motor neuron disease. There have also been significant advancements in brain imaging, neuropathology, and molecular genetics that have led to different approaches to classification. Unfortunately, the field is complicated by a barrage of overlapping clinical syndromes and histopathological diagnoses that does not allow one to easily identify relations between individual clinical syndromic presentations and underlying neuropathology. This review deciphers this web of terminology and highlights consistent, and hence important, associations between individual clinical syndromes and neuropathology. These associations could ultimately allow the identification of appropriate patient phenotypes for future targeted treatments.
    BibTeX:
    @article{Josephs2008,
      author = {Josephs, Keith A.},
      title = {Frontotemporal dementia and related disorders: Deciphering the enigma},
      journal = {ANNALS OF NEUROLOGY},
      year = {2008},
      volume = {64},
      number = {1},
      pages = {4-14},
      doi = {{10.1002/ana.21426}}
    }
    
    Jovanovic, J., Devedzic, V., Gasevic, D., Hatala, M., Eap, T., Richards, G. & Brooks, C. Using semantic web technologies to analyze learning content {2007} IEEE INTERNET COMPUTING
    Vol. {11}({5}), pp. {45-53} 
    article  
    Abstract: The authors demonstrate how to use Semantic Web technologies to improve the state-of-the-art in online learning environments and bridge the gap between students on the one hand, and authors or teachers on the other. The ontological framework presented here helps formalize learning object context as a complex interplay of different learning-related elements and shows how we can use semantic annotation to interrelate diverse learning artifacts. On top of this framework, the authors implemented several feedback channels for educators to improve the delivery of future Web-based courses.
    BibTeX:
    @article{Jovanovic2007,
      author = {Jovanovic, Jelena and Devedzic, Vladan and Gasevic, Dragan and Hatala, Marek and Eap, Timmy and Richards, Griff and Brooks, Christopher},
      title = {Using semantic web technologies to analyze learning content},
      journal = {IEEE INTERNET COMPUTING},
      year = {2007},
      volume = {11},
      number = {5},
      pages = {45-53}
    }
    
    Jovanovic, J. & Gasevic, D. Achieving knowledge interoperability: An XML/XSLT approach {2005} EXPERT SYSTEMS WITH APPLICATIONS
    Vol. {29}({3}), pp. {535-553} 
    article DOI  
    Abstract: Development of an intelligent system requires not only profound understanding of the problem under study, but also employment of different knowledge representation techniques and tools often based on a variety of paradigms and technological platforms. In this context automation of knowledge sharing between different systems becomes increasingly important. One solution might be to extend a knowledge modeling tool by implementing a set of new classes or functions for importing other knowledge formats (using, e.g., Java, C++, etc.). But this can be a rather difficult and time-consuming task. Since XML is now widely accepted as a knowledge representation syntax, we believe that a more suitable solution would be to use eXtensible Stylesheet Language Transformations (XSLT), a W3C standard for transforming XML documents. A special advantage of this approach is that even though an XSLT is written independently of any programming language, it can be executed by a program written in almost any up-to-date programming language. We experiment on an XSLT-based infrastructure for sharing knowledge between three knowledge modeling and acquisition tools that use different conceptual models for knowledge representation in order to evaluate the pros and cons of the proposed XSLT approach. Two of these tools, JessGUI and JavaDON, are ongoing efforts of the GOOD OLD AI research group to develop interoperable development tools for building intelligent systems, while the third one is Protege-2000, a broadly accepted ontology development tool. (c) 2005 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Jovanovic2005,
      author = {Jovanovic, J and Gasevic, D},
      title = {Achieving knowledge interoperability: An XML/XSLT approach},
      journal = {EXPERT SYSTEMS WITH APPLICATIONS},
      year = {2005},
      volume = {29},
      number = {3},
      pages = {535-553},
      doi = {{10.1016/j.eswa.2005.04.024}}
    }
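
    To make the XSLT-based knowledge-sharing idea above concrete, the following minimal Python sketch applies a stylesheet to an XML knowledge fragment with lxml. The element and attribute names are invented for illustration and are not taken from JessGUI, JavaDON or Protege-2000:

      from lxml import etree

      # A toy XML knowledge fragment (hypothetical source format).
      doc = etree.XML("<kb><fact name='bird-can-fly'/></kb>")

      # A stylesheet translating it into a hypothetical target format.
      xslt = etree.XML("""
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="/kb">
          <rules><xsl:apply-templates select="fact"/></rules>
        </xsl:template>
        <xsl:template match="fact">
          <rule id="{@name}"/>
        </xsl:template>
      </xsl:stylesheet>""")

      transform = etree.XSLT(xslt)
      print(str(transform(doc)))  # e.g. <rules><rule id="bird-can-fly"/></rules>

    The same transformation could be executed from Java, C++ or any other language with an XSLT processor, which is exactly the interoperability argument the paper makes.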
    
    Jung, J. Collaborative Web browsing based on semantic extraction of user interests with bookmarks {2005} JOURNAL OF UNIVERSAL COMPUTER SCIENCE
    Vol. {11}({2}), pp. {213-228} 
    article  
    Abstract: With the exponentially increasing amount of information available on the World Wide Web, users have found it increasingly difficult to seek out relevant information. Several studies have been conducted on adaptive approaches, in which the user's personal interests are taken into account. In this paper, we propose a user-support mechanism based on the sharing of knowledge with other users through collaborative Web browsing, focusing specifically on the user's interests extracted from his or her own bookmarks. Simple URL-based bookmarks are endowed with semantic and structural information through ontology-based conceptualization. In order to deal with the dynamic usage of bookmarks, ontology learning based on a hierarchical clustering method can be exploited. This system is composed of a facilitator agent and multiple personal agents. In experiments conducted with this system, it was found that approximately 53.1% of the total time was saved during collaborative browsing when seeking an equivalent set of information, as compared with normal personal Web browsing.
    BibTeX:
    @article{Jung2005,
      author = {Jung, JJ},
      title = {Collaborative Web browsing based on semantic extraction of user interests with bookmarks},
      journal = {JOURNAL OF UNIVERSAL COMPUTER SCIENCE},
      year = {2005},
      volume = {11},
      number = {2},
      pages = {213-228},
      note = {1st Workshop on Modern Technologies for Web-Based Adaptive System, Cracow, POLAND, JUN, 2004}
    }
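
    The ontology-learning step mentioned above relies on hierarchical clustering of bookmarks. As a rough, self-contained sketch (the bookmark URLs and term vectors are invented; the paper's actual features are not reproduced here), bookmarks can be grouped bottom-up from term-vector similarity:

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage

      # Hypothetical bag-of-words vectors for four bookmarked pages.
      bookmarks = ["news/soccer", "news/tennis", "dev/python", "dev/rdf"]
      vectors = np.array([[1, 1, 0, 0],
                          [1, 0, 1, 0],
                          [0, 0, 0, 1],
                          [0, 0, 1, 1]], dtype=float)

      # Bottom-up (agglomerative) clustering, then cut into two groups.
      tree = linkage(vectors, method="average", metric="cosine")
      groups = fcluster(tree, t=2, criterion="maxclust")
      for url, group in zip(bookmarks, groups):
          print(group, url)  # news bookmarks vs. development bookmarks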
    
    Jung, J. Semantic preprocessing of Web request streams for Web usage mining {2005} JOURNAL OF UNIVERSAL COMPUTER SCIENCE
    Vol. {11}({8}), pp. {1383-1396} 
    article  
    Abstract: Efficient data preparation is needed to discover the underlying knowledge in complicated Web usage data. In this paper, we have focused on two main tasks: semantic outlier detection from online Web request streams, and their segmentation (or sessionization). We thereby exploit semantic technologies to infer the relationships among Web requests. Web ontologies such as taxonomies and directories can label each Web request with all the corresponding hierarchical topic paths. Our algorithm consists of two steps. The first step is the nested repetition of top-down partitioning for establishing a set of candidate session boundaries, and the next step is an evaluation process of bottom-up merging for reconstructing segmented sequences. In addition, we propose a hybrid approach that combines this method with existing heuristics. Using a synthesized dataset and a real-world dataset of access log files from IRCache, we conducted experiments and showed that the semantic preprocessing method improves the performance of rule discovery algorithms. This means that we can conceptually track the behavior of users who tend to change their intentions and interests easily, or who simultaneously search for various kinds of information on the Web.
    BibTeX:
    @article{Jung2005a,
      author = {Jung, JJ},
      title = {Semantic preprocessing of Web request streams for Web usage mining},
      journal = {JOURNAL OF UNIVERSAL COMPUTER SCIENCE},
      year = {2005},
      volume = {11},
      number = {8},
      pages = {1383-1396}
    }
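
    As a loose illustration of the sessionization idea above (not the paper's actual two-step partition-and-merge algorithm), a request stream labelled with hierarchical topic paths can be cut wherever consecutive requests share no topic prefix; the topic paths below are invented:

      # Each request is labelled with a hierarchical topic path, e.g. from
      # a web directory; this stream is hypothetical.
      stream = ["sports/soccer/news", "sports/soccer/teams",
                "computers/software/rdf", "computers/software/owl"]

      def shared_prefix_len(a, b):
          """Length of the common topic prefix of two '/'-separated paths."""
          n = 0
          for x, y in zip(a.split("/"), b.split("/")):
              if x != y:
                  break
              n += 1
          return n

      def sessionize(requests, min_overlap=1):
          """Cut the stream where consecutive topic paths share no prefix."""
          sessions, current = [], [requests[0]]
          for prev, cur in zip(requests, requests[1:]):
              if shared_prefix_len(prev, cur) < min_overlap:
                  sessions.append(current)
                  current = []
              current.append(cur)
          sessions.append(current)
          return sessions

      print(sessionize(stream))  # two sessions: sports vs. computers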
    
    Jung, J.J. Ontology-based context synchronization for ad hoc social collaborations {2008} KNOWLEDGE-BASED SYSTEMS
    Vol. {21}({7}), pp. {573-580} 
    article DOI  
    Abstract: To efficiently support collaborations between people (agents) in real time, we propose an ontology-based platform for acquainting the most relevant users (e.g., colleagues and classmates), according to their context. Thereby, we modeled two kinds of contexts with semantic information derived from ontologies: (i) personal context, and (ii) consensual context, integrated from several personal contexts. More importantly, we formulate measurement criteria to compare them. Consequently, groups can be dynamically organized with respect to the similarities among several aspects of personal context. In particular, users can engage in complex collaborations related to multiple semantics. For experimentation, we implemented a social browsing system based on context synchronization. (c) 2008 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Jung2008,
      author = {Jung, Jason J.},
      title = {Ontology-based context synchronization for ad hoc social collaborations},
      journal = {KNOWLEDGE-BASED SYSTEMS},
      year = {2008},
      volume = {21},
      number = {7},
      pages = {573-580},
      doi = {{10.1016/j.knosys.2008.03.015}}
    }
    
    Jung, J.J. Ontological framework based on contextual mediation for collaborative information retrieval {2007} INFORMATION RETRIEVAL
    Vol. {10}({1}), pp. {85-109} 
    article DOI  
    Abstract: In heterogeneous web information spaces, users struggle to search efficiently for relevant information. This paper proposes a mediator agent system to estimate the semantics of unknown web spaces by learning the fragments gathered during the users' focused crawling. This process is organized as the following three tasks: (i) gathering semantic information about web spaces from personal agents during focused crawling in unknown spaces, (ii) reorganizing the information by using an ontology alignment algorithm, and (iii) providing relevant semantic information to personal agents right before focused crawling. This makes it possible for the personal agent to recognize the corresponding user's behaviors in semantically heterogeneous spaces and predict his searching contexts. For the experiments, we implemented a comparison-shopping system with heterogeneous web spaces. As a result, our proposed method efficiently supported the users and also reduced network traffic.
    BibTeX:
    @article{Jung2007,
      author = {Jung, Jason J.},
      title = {Ontological framework based on contextual mediation for collaborative information retrieval},
      journal = {INFORMATION RETRIEVAL},
      year = {2007},
      volume = {10},
      number = {1},
      pages = {85-109},
      doi = {{10.1007/s10791-006-9013-5}}
    }
    
    Jung, J.J. Exploiting semantic annotation to supporting user browsing on the web {2007} KNOWLEDGE-BASED SYSTEMS
    Vol. {20}({4}), pp. {373-381} 
    article DOI  
    Abstract: The aim of this paper is to support user browsing on semantically heterogeneous information spaces. In advance of a user's explicit actions, his search context should be predicted from the locally annotated resources in his access histories. We thus exploit a semantic transcoding method and measure the relevance between the estimated model of user intention and the candidate resources in web spaces. For these experiments, we simulated the scenario of comparison-shopping systems on a test bed of twelve online stores in which images are annotated with semantically heterogeneous metadata. (c) 2006 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Jung2007a,
      author = {Jung, Jason J.},
      title = {Exploiting semantic annotation to supporting user browsing on the web},
      journal = {KNOWLEDGE-BASED SYSTEMS},
      year = {2007},
      volume = {20},
      number = {4},
      pages = {373-381},
      doi = {{10.1016/j.knosys.2006.08.003}}
    }
    
    Kagal, L., Finin, T. & Joshi, A. A policy based approach to security for the Semantic Web {2003}
    Vol. {2870}SEMANTIC WEB - ISWC 2003, pp. {402-418} 
    inproceedings  
    Abstract: Along with developing specifications for the description of meta-data and the extraction of information for the Semantic Web, it is important to maximize security in this environment, which is fundamentally dynamic, open and devoid of many of the clues human societies have relied on for security assessment. Our research investigates the marking up of web entities with a semantic policy language and the use of distributed policy management as an alternative to traditional authentication and access control schemes. The policy language allows policies to be described in terms of deontic concepts and models speech acts, which allows the dynamic modification of existing policies, decentralized security control and less exhaustive policies. We present a security framework, based on this policy language, which addresses security issues for web resources, agents and services in the Semantic Web.
    BibTeX:
    @inproceedings{Kagal2003,
      author = {Kagal, L and Finin, T and Joshi, A},
      title = {A policy based approach to security for the Semantic Web},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {402-418},
      note = {2nd International Semantic Web Conference, SANIBEL, FLORIDA, OCT 20-23, 2003}
    }
    
    Kagal, L., Finin, T., Paolucci, M., Srinivasan, N., Sycara, K. & Denker, G. Authorization and privacy for semantic Web services {2004} IEEE INTELLIGENT SYSTEMS
    Vol. {19}({4}), pp. {50-56} 
    article  
    BibTeX:
    @article{Kagal2004,
      author = {Kagal, L and Finin, T and Paolucci, M and Srinivasan, N and Sycara, K and Denker, G},
      title = {Authorization and privacy for semantic Web services},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2004},
      volume = {19},
      number = {4},
      pages = {50-56}
    }
    
    Kalfoglou, Y. & Schorlemmer, M. IF-Map: An ontology-mapping method based on information-flow theory {2003}
    Vol. {2800}JOURNAL ON DATA SEMANTICS I, pp. {98-127} 
    inproceedings  
    Abstract: In order to tackle the need of sharing knowledge within and across organisational boundaries, the last decade has seen researchers both in academia and industry advocating for the use of ontologies as a means for providing a shared understanding of common domains. But with the generalised use of large distributed environments such as the World Wide Web came the proliferation of many different ontologies, even for the same or similar domain, hence setting forth a new need of sharing: that of sharing ontologies. In addition, if visions such as the Semantic Web are ever going to become a reality, it will be necessary to provide as much automated support as possible to the task of mapping different ontologies. Although many efforts in ontology mapping have already been carried out, we have noticed that few of them are based on strong theoretical grounds and on principled methodologies. Furthermore, many of them are based only on syntactical criteria. In this paper we present a theory and method for automated ontology mapping based on channel theory, a mathematical theory of semantic information flow. We successfully applied our method to a large-scale scenario involving the mapping of several different ontologies of computer-science departments from various UK universities.
    BibTeX:
    @inproceedings{Kalfoglou2003a,
      author = {Kalfoglou, Y and Schorlemmer, M},
      title = {IF-Map: An ontology-mapping method based on information-flow theory},
      booktitle = {JOURNAL ON DATA SEMANTICS I},
      year = {2003},
      volume = {2800},
      pages = {98-127},
      note = {21st International Conference on Conceptual Modeling, TAMPERE, FINLAND, OCT 07-11, 2002}
    }
    
    Kalfoglou, Y. & Schorlemmer, M. Ontology mapping: the state of the art {2003} KNOWLEDGE ENGINEERING REVIEW
    Vol. {18}({1}), pp. {1-31} 
    article DOI  
    Abstract: Ontology mapping is seen as a solution provider in today's landscape of ontology research. As the number of ontologies that are made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer from which several ontologies could be accessed and hence could exchange information in semantically sound manners. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.
    BibTeX:
    @article{Kalfoglou2003,
      author = {Kalfoglou, Y and Schorlemmer, M},
      title = {Ontology mapping: the state of the art},
      journal = {KNOWLEDGE ENGINEERING REVIEW},
      year = {2003},
      volume = {18},
      number = {1},
      pages = {1-31},
      doi = {{10.1017/S0269888903000651}}
    }
    
    Kalyanpur, A., Parsia, B., Sirin, E. & Hendler, J. Debugging unsatisfiable classes in OWL ontologies {2005} JOURNAL OF WEB SEMANTICS
    Vol. {3}({4}), pp. {268-293} 
    article DOI  
    Abstract: As an increasingly large number of OWL ontologies become available on the Semantic Web and the descriptions in the ontologies become more complicated, finding the cause of errors becomes an extremely hard task even for experts. Existing ontology development environments provide some limited support, in conjunction with a reasoner, for reporting errors in OWL ontologies. Typically, these are restricted to the mere detection of, for example, unsatisfiable concepts. However, the diagnosis and resolution of the bug is not supported at all. For example, no explanation is given as to why the error occurs (e.g., by pinpointing the root clash, or the axioms in the ontology responsible for the clash) or how dependencies between classes cause the error to propagate (i.e., by distinguishing root from derived unsatisfiable classes). In the former case, information from the internals of a description logic tableaux reasoner can be extracted and presented to the user (glass box approach); while in the latter case, the reasoner can be used as an oracle for a certain set of questions and the asserted structure of the ontology can be used to help isolate the source of the problems (black box approach). Based on the two approaches, we have integrated a number of debugging cues generated from our reasoner, Pellet, in our hypertextual ontology development environment, Swoop. A usability evaluation demonstrates that these debugging cues significantly improve the OWL debugging experience, and point the way to more general improvements in the presentation of an ontology to users. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Kalyanpur2005,
      author = {Kalyanpur, A and Parsia, B and Sirin, E and Hendler, J},
      title = {Debugging unsatisfiable classes in OWL ontologies},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2005},
      volume = {3},
      number = {4},
      pages = {268-293},
      note = {Semantic Web Track held at the World Wide Web Conference, Chiba, JAPAN, MAY, 2005},
      doi = {{10.1016/j.websem.2005.09.005}}
    }
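
    The black-box style of check described above can nowadays be reproduced with off-the-shelf tools rather than Swoop and Pellet. A minimal sketch using the owlready2 Python library and its bundled HermiT reasoner, assuming a hypothetical local ontology file that contains a contradictory class definition (owlready2 reports unsatisfiable classes as classes inferred equivalent to owl:Nothing):

      from owlready2 import default_world, get_ontology, sync_reasoner

      # Hypothetical local file; any OWL ontology with a contradictory
      # class definition will do.
      onto = get_ontology("file:///tmp/unsat.owl").load()

      with onto:
          sync_reasoner()  # runs the bundled HermiT reasoner (needs Java)

      # Classes inferred equivalent to owl:Nothing are unsatisfiable.
      for cls in default_world.inconsistent_classes():
          print("Unsatisfiable:", cls)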
    
    Karampiperis, P. & Sampson, D. Adaptive learning resources sequencing in educational hypermedia systems {2005} EDUCATIONAL TECHNOLOGY & SOCIETY
    Vol. {8}({4}), pp. {128-147} 
    article  
    Abstract: Adaptive learning resource selection and sequencing is recognized as among the most interesting research questions in adaptive educational hypermedia systems (AEHS). In order to adaptively select and sequence learning resources in AEHS, the definition of adaptation rules contained in the Adaptation Model is required. Although some efforts have been reported in the literature aiming to support Adaptation Model design by providing AEHS designers with direct guidance or semi-automatic mechanisms that make the design process less demanding, significant effort is still required to overcome the problems of inconsistency, confluence and insufficiency introduced by the use of rules. Due to the problems of inconsistency and insufficiency of the defined rule sets in the Adaptation Model, conceptual ``holes'' can be generated in the produced learning resource sequences (or learning paths). In this paper, we address the design problem of the Adaptation Model in AEHS by proposing an alternative sequencing method that, instead of generating the learning path by populating a concept sequence with available learning resources based on pre-defined adaptation rules, first generates all possible learning paths that match the learning goal at hand, and then adaptively selects the desired one, based on the use of a decision model that estimates the suitability of learning resources for a targeted learner. In our simulations we compare the learning paths generated by the proposed methodology with ideal ones produced by a simulated perfect rule-based AEHS. The simulation results provide evidence that the proposed methodology can generate nearly accurate learning paths, avoiding the need for defining complex rule sets in the Adaptation Model of AEHS.
    BibTeX:
    @article{Karampiperis2005,
      author = {Karampiperis, P and Sampson, D},
      title = {Adaptive learning resources sequencing in educational hypermedia systems},
      journal = {EDUCATIONAL TECHNOLOGY & SOCIETY},
      year = {2005},
      volume = {8},
      number = {4},
      pages = {128-147},
      note = {4th IEEE International Conference on Advanced Learning Technologies, Joensuu, FINLAND, AUG 30-SEP 01, 2004}
    }
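
    The alternative sequencing method described above first enumerates all learning paths that reach the goal, then picks one with a decision model. A toy Python sketch of that two-phase idea, in which the prerequisite graph and suitability scores are invented and the decision model is reduced to a sum of per-resource scores:

      # Hypothetical prerequisite graph: concept -> concepts it unlocks.
      graph = {"intro": ["html", "css"], "html": ["goal"], "css": ["goal"]}
      # Hypothetical suitability of each resource for the targeted learner.
      suitability = {"intro": 0.9, "html": 0.7, "css": 0.4, "goal": 1.0}

      def all_paths(node, goal, path=()):
          """Phase 1: enumerate every path from node to the learning goal."""
          path = path + (node,)
          if node == goal:
              return [path]
          paths = []
          for successor in graph.get(node, []):
              paths.extend(all_paths(successor, goal, path))
          return paths

      # Phase 2: adaptively select the path the decision model scores best.
      candidates = all_paths("intro", "goal")
      best = max(candidates, key=lambda p: sum(suitability[c] for c in p))
      print(best)  # ('intro', 'html', 'goal') with these made-up scores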
    
    Karvounarakis, G., Magganaraki, A., Alexaki, S., Christophides, V., Plexousakis, D., Scholl, M. & Tolle, K. Querying the Semantic Web with RQL {2003} COMPUTER NETWORKS
    Vol. {42}({5}), pp. {617-640} 
    article DOI  
    Abstract: Real-scale Semantic Web applications, such as Knowledge Portals and E-Marketplaces, require the management of voluminous repositories of resource metadata. The Resource Description Framework (RDF) enables the creation and exchange of metadata as any other Web data. Although large volumes of RDF descriptions are already appearing, sufficiently expressive declarative query languages for RDF are still missing. We propose RQL, a new query language adapting the functionality of semistructured or XML query languages to the peculiarities of RDF but also extending this functionality in order to uniformly query both RDF descriptions and schemas. RQL is a typed language, following a functional approach a la OQL and relies on a formal graph model that permits the interpretation of superimposed resource descriptions created using one or more RDF schemas. We illustrate the syntax, semantics and type system of RQL and report on the performance of RSSDB, our persistent RDF Store, for storing and querying voluminous RDF metadata. (C) 2003 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Karvounarakis2003,
      author = {Karvounarakis, G and Magganaraki, A and Alexaki, S and Christophides, V and Plexousakis, D and Scholl, M and Tolle, K},
      title = {Querying the Semantic Web with RQL},
      journal = {COMPUTER NETWORKS},
      year = {2003},
      volume = {42},
      number = {5},
      pages = {617-640},
      note = {11th International World Wide Web Conference, HONOLULU, HI, MAY 07-11, 2002},
      doi = {{10.1016/S1389-1286(03)00227-5}}
    }
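
    RQL itself has no widely maintained implementation today, but the kind of combined schema- and data-level RDF query it pioneered can be approximated with SPARQL, the W3C query language that was standardized later. A minimal sketch with the rdflib Python library; the museum-flavored triples are invented:

      from rdflib import Graph

      g = Graph()
      g.parse(data="""
          @prefix ex: <http://example.org/> .
          @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
          ex:Painting rdfs:subClassOf ex:Artifact .
          ex:monaLisa a ex:Painting ; ex:creator ex:daVinci .
      """, format="turtle")

      # Query instances against the schema-level class, in the spirit of
      # RQL's uniform querying of RDF descriptions and schemas.
      for row in g.query("""
          PREFIX ex: <http://example.org/>
          SELECT ?work ?creator WHERE {
              ?work a ex:Painting ; ex:creator ?creator .
          }"""):
          print(row.work, row.creator)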
    
    Katifori, A., Halatsis, C., Lepouras, G., Vassilakis, C. & Giannopoulou, E. Ontology visualization methods - A survey {2007} ACM COMPUTING SURVEYS
    Vol. {39}({4}) 
    article DOI  
    Abstract: Ontologies, as sets of concepts and their interrelations in a specific domain, have proven to be a useful tool in the areas of digital libraries, the semantic web, and personalized information management. As a result, there is a growing need for effective ontology visualization for design, management and browsing. There exist several ontology visualization methods and also a number of techniques used in other contexts that could be adapted for ontology representation. The purpose of this article is to present these techniques and categorize their characteristics and features in order to assist method selection and promote future research in the area of ontology visualization.
    BibTeX:
    @article{Katifori2007,
      author = {Katifori, Akrivi and Halatsis, Constantin and Lepouras, George and Vassilakis, Costas and Giannopoulou, Eugenia},
      title = {Ontology visualization methods - A survey},
      journal = {ACM COMPUTING SURVEYS},
      year = {2007},
      volume = {39},
      number = {4},
      doi = {{10.1145/1287620.1287621}}
    }
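
    Many of the methods surveyed above reduce an ontology to a node-link diagram of its class hierarchy. A minimal sketch of that simplest visualization category in Python, using networkx and matplotlib; the tiny is-a hierarchy is invented:

      import matplotlib.pyplot as plt
      import networkx as nx

      # A toy is-a hierarchy; edges point from subclass to superclass.
      g = nx.DiGraph()
      g.add_edges_from([("Dog", "Mammal"), ("Cat", "Mammal"),
                        ("Mammal", "Animal"), ("Bird", "Animal")])

      # A 2D force-directed node-link layout, one of the basic techniques
      # in the survey's catalogue.
      pos = nx.spring_layout(g, seed=42)
      nx.draw(g, pos, with_labels=True, node_color="lightgray")
      plt.savefig("ontology.png")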
    
    Kell, D.B. Metabolomic biomarkers: search, discovery and validation {2007} EXPERT REVIEW OF MOLECULAR DIAGNOSTICS
    Vol. {7}({4}), pp. {329-333} 
    article DOI  
    BibTeX:
    @article{Kell2007,
      author = {Kell, Douglas B.},
      title = {Metabolomic biomarkers: search, discovery and validation},
      journal = {EXPERT REVIEW OF MOLECULAR DIAGNOSTICS},
      year = {2007},
      volume = {7},
      number = {4},
      pages = {329-333},
      doi = {{10.1586/14737159.7.4.329}}
    }
    
    Khare, R. Microformats - The next (small) thing on the semantic Web? {2006} IEEE INTERNET COMPUTING
    Vol. {10}({1}), pp. {68-75} 
    article  
    BibTeX:
    @article{Khare2006,
      author = {Khare, R},
      title = {Microformats - The next (small) thing on the semantic Web?},
      journal = {IEEE INTERNET COMPUTING},
      year = {2006},
      volume = {10},
      number = {1},
      pages = {68-75}
    }
    
    Kim, H. Predicting how ontologies for the semantic Web will evolve {2002} COMMUNICATIONS OF THE ACM
    Vol. {45}({2}), pp. {48-54} 
    article  
    BibTeX:
    @article{Kim2002,
      author = {Kim, H},
      title = {Predicting how ontologies for the semantic Web will evolve},
      journal = {COMMUNICATIONS OF THE ACM},
      year = {2002},
      volume = {45},
      number = {2},
      pages = {48-54}
    }
    
    Kim, K.-Y., Manley, D.G. & Yang, H. Ontology-based assembly design and information sharing for collaborative product development {2006} COMPUTER-AIDED DESIGN
    Vol. {38}({12}), pp. {1233-1250} 
    article DOI  
    Abstract: To realize a truly collaborative product design and development process, effective communication among design collaborators is a must. In other words, the design intent that is imposed in a product design should be seized and interpreted properly; heterogeneous modeling terms should be semantically processed both by design collaborators and intelligent systems. Ontologies in the Semantic Web can explicitly represent semantics and promote integrated and consistent access to data and services. Thus, if an ontology is used in a heterogeneous and distributed design collaboration, it will explicitly and persistently represent engineering relations that are imposed in an assembly design. Design intent can be captured by reasoning, and, in turn, as reasoned facts, it can be propagated and shared with design collaborators. This paper presents a new paradigm of ontology-based assembly design. In the framework, an assembly design (AsD) ontology serves as a formal, explicit specification of assembly design so that it makes assembly knowledge both machine-interpretable and to be shared. An Assembly Relation Model (ARM) is enhanced using ontologies that represent engineering, spatial, assembly, and joining relations of assembly in a way that promotes collaborative assembly information-sharing environments. In the developed AsD ontology, implicit AsD constraints are explicitly represented using OWL (Web Ontology Language) and SWRL (Semantic Web Rule Language). This paper shows that the ability of the AsD ontology to be reasoned can capture both assembly and joining intents by a demonstration with a realistic mechanical assembly. Finally, this paper presents a new assembly design information-sharing framework and an assembly design browser for a collaborative product development. (C) 2006 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Kim2006,
      author = {Kim, Kyoung-Yun and Manley, David G. and Yang, Hyungjeong},
      title = {Ontology-based assembly design and information sharing for collaborative product development},
      journal = {COMPUTER-AIDED DESIGN},
      year = {2006},
      volume = {38},
      number = {12},
      pages = {1233-1250},
      doi = {{10.1016/j.cad.2006.08.004}}
    }
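
    The abstract above represents implicit assembly constraints in OWL and SWRL. As a rough illustration of what such a rule looks like in practice, here is a sketch with the owlready2 Python library; the ontology IRI, classes and properties are invented stand-ins, not the paper's AsD ontology:

      from owlready2 import Imp, ObjectProperty, Thing, get_ontology

      onto = get_ontology("http://example.org/asd.owl")

      with onto:
          class Part(Thing): pass
          class WeldedJoint(Thing): pass
          class joins(ObjectProperty):
              domain = [WeldedJoint]
              range = [Part]
          class permanentlyFixedTo(ObjectProperty):
              domain = [Part]
              range = [Part]

          # SWRL: parts joined by the same welded joint are permanently
          # fixed to each other (a made-up joining-intent constraint).
          rule = Imp()
          rule.set_as_rule("""WeldedJoint(?j), joins(?j, ?a), joins(?j, ?b)
                              -> permanentlyFixedTo(?a, ?b)""")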
    
    Kiryakov, A., Popov, B., Ognyanoff, D., Manov, D., Kirilov, A. & Goranov, M. Semantic annotation, indexing, and retrieval {2003}
    Vol. {2870}SEMANTIC WEB - ISWC 2003, pp. {484-499} 
    inproceedings  
    Abstract: The Semantic Web realization depends on the availability of a critical mass of metadata for the web content, linked to formal knowledge about the world. This paper presents our vision of a holistic system allowing annotation, indexing, and retrieval of documents with respect to real-world entities. A system (called KIM) that partially implements this concept is briefly presented and used for evaluation and demonstration. Our understanding is that a system for semantic annotation should be based upon specific knowledge about the world, rather than indifferent to any ontological commitments and general knowledge. To assure efficiency and reusability of the metadata, we introduce a simplistic upper-level ontology which starts with some basic philosophic distinctions and goes down to the most popular entity types (people, companies, cities, etc.), thus providing many of the inter-domain common sense concepts and allowing easy domain-specific extensions. Based on the ontology, an extensive knowledge base of entity descriptions is maintained. A semantically enhanced information extraction system, providing automatic annotation with references to classes in the ontology and instances in the knowledge base, is presented. Based on these annotations, we perform IR-like indexing and retrieval, further extended using the ontology and knowledge about the specific entities.
    BibTeX:
    @inproceedings{Kiryakov2003,
      author = {Kiryakov, A and Popov, B and Ognyanoff, D and Manov, D and Kirilov, A and Goranov, M},
      title = {Semantic annotation, indexing, and retrieval},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {484-499},
      note = {2nd International Semantic Web Conference, SANIBEL, FLORIDA, OCT 20-23, 2003}
    }
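
    The IR-like indexing step described above amounts to indexing documents by the ontology entities annotated in them rather than by raw strings. A toy Python sketch; the documents and entity URIs are invented:

      from collections import defaultdict

      # Hypothetical output of semantic annotation: document -> entity URIs.
      annotations = {
          "doc1": ["ex:Person/AlanTuring", "ex:City/London"],
          "doc2": ["ex:City/London"],
          "doc3": ["ex:Person/AlanTuring"],
      }

      # An inverted index keyed by entity rather than by surface string,
      # so "London", "london, UK", etc. all resolve to ex:City/London.
      index = defaultdict(set)
      for doc, entities in annotations.items():
          for entity in entities:
              index[entity].add(doc)

      print(sorted(index["ex:City/London"]))  # ['doc1', 'doc2']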
    
    Kitchin, D., Cook, W.R. & Misra, J. A language for task orchestration and its semantic properties {2006}
    Vol. {4137}CONCUR 2006 - CONCURRENCY THEORY, PROCEEDINGS, pp. {477-491} 
    inproceedings  
    Abstract: Orc is a new language for task orchestration, a form of concurrent programming with applications in workflow, business process management, and web service orchestration. Orc provides constructs to orchestrate the concurrent invocation of services while managing timeouts, priorities, and failure of services or communication. In this paper, we show a trace-based semantic model for Orc, which induces a congruence on Orc programs and facilitates reasoning about them. Despite the simplicity of the language and its semantic model, Orc is able to express a variety of useful orchestration tasks.
    BibTeX:
    @inproceedings{Kitchin2006,
      author = {Kitchin, David and Cook, William R. and Misra, Jayadev},
      title = {A language for task orchestration and its semantic properties},
      booktitle = {CONCUR 2006 - CONCURRENCY THEORY, PROCEEDINGS},
      year = {2006},
      volume = {4137},
      pages = {477-491},
      note = {17th International Conference on Concurrency Theory, Bonn, GERMANY, AUG 27-30, 2006}
    }
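
    Orc's core combinators orchestrate concurrent service calls under timeouts and failure. Python's asyncio is not Orc, but a hedged analogue of "call two services in parallel, take whichever answers within a deadline" conveys the flavor; the services here are simulated delays, not real sites:

      import asyncio

      async def service(name, delay):
          """Simulated remote site: responds after `delay` seconds."""
          await asyncio.sleep(delay)
          return name

      async def orchestrate():
          # Invoke two services concurrently; keep the first response and
          # cancel the loser -- roughly Orc's parallel composition plus
          # pruning, with a timeout guarding against total failure.
          tasks = [asyncio.create_task(service("fast", 0.1)),
                   asyncio.create_task(service("slow", 2.0))]
          done, pending = await asyncio.wait(
              tasks, timeout=1.0, return_when=asyncio.FIRST_COMPLETED)
          for task in pending:
              task.cancel()
          return [task.result() for task in done]

      print(asyncio.run(orchestrate()))  # ['fast']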
    
    Klein, M. & Visser, U. Semantic Web challenge 2003 {2004} IEEE INTELLIGENT SYSTEMS
    Vol. {19}({3}), pp. {31-33} 
    article  
    BibTeX:
    @article{Klein2004,
      author = {Klein, M and Visser, U},
      title = {Semantic Web challenge 2003},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2004},
      volume = {19},
      number = {3},
      pages = {31-33}
    }
    
    Klien, E., Lutz, M. & Kuhn, W. Ontology-based discovery of geographic information services - An application in disaster management {2006} COMPUTERS ENVIRONMENT AND URBAN SYSTEMS
    Vol. {30}({1}), pp. {102-123} 
    article DOI  
    Abstract: Finding suitable information in the open and distributed environment of current geographic information web services is a crucial task. Service brokers (or catalogue services) provide searchable repositories of service descriptions, but the mechanisms to support the task of service discovery are still insufficient. One of the main challenges is to overcome the semantic heterogeneity caused by synonyms and homonyms during keyword-based search in catalogues. This paper presents a practical case study of the extent to which ontology-based service discovery can solve these semantic heterogeneity problems. To this end, we apply the Bremen University Semantic Translator for Enhanced Retrieval as a service broker. The approach combines ontology-based metadata with an ontology-based search. Based on a scenario of finding geographic information services for estimating potential storm damage in forests, it is shown that through terminological reasoning the request finds an appropriate match in a service on storm hazard classes. However, the approach reveals some limitations in the context of geographic web service discovery, which are discussed at the end. (c) 2005 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Klien2006,
      author = {Klien, E. and Lutz, M. and Kuhn, W.},
      title = {Ontology-based discovery of geographic information services - An application in disaster management},
      journal = {COMPUTERS ENVIRONMENT AND URBAN SYSTEMS},
      year = {2006},
      volume = {30},
      number = {1},
      pages = {102-123},
      doi = {{10.1016/j.compenvurbsys.2005.04.002}}
    }
    
    Knight, C., Gasevic, D. & Richards, G. An ontology-based framework for bridging learning design and learning content {2006} EDUCATIONAL TECHNOLOGY & SOCIETY
    Vol. {9}({1}), pp. {23-37} 
    article  
    Abstract: The paper describes an ontology-based framework for bridging learning design and learning object content. In present solutions, researchers have proposed conceptual models and developed tools for both of those subjects, but without detailed discussions of how they can be used together. In this paper we advocate the use of ontologies to explicitly specify all learning designs, learning objects, and the relations between them, and show how this use of ontologies can result in more effective (semi-)automatic tools and services that increase the level of reusability. We first define a three-part conceptual model that introduces an intermediate level between learning design and learning objects called the learning object context. We then use ontologies to facilitate the representation of these concepts: LOCO is a new ontology based on IMS-LD, ALOCoM is an existing ontology for learning objects, and LOCO-Cite is a new ontology for the learning object contextual model. We conclude by showing the applicability of the proposed framework in a use case study.
    BibTeX:
    @article{Knight2006,
      author = {Knight, C and Gasevic, D and Richards, G},
      title = {An ontology-based framework for bridging learning design and learning content},
      journal = {EDUCATIONAL TECHNOLOGY & SOCIETY},
      year = {2006},
      volume = {9},
      number = {1},
      pages = {23-37}
    }
    
    Knublauch, H., Fergerson, R., Noy, N. & Musen, M. The Protege OWL Plugin: An open development environment for Semantic Web applications {2004}
    Vol. {3298}SEMANTIC WEB - ISWC 2004, PROCEEDINGS, pp. {229-243} 
    inproceedings  
    Abstract: We introduce the OWL Plugin, a Semantic Web extension of the Protege ontology development platform. The OWL Plugin can be used to edit ontologies in the Web Ontology Language (OWL), to access description logic reasoners, and to acquire instances for semantic markup. In many of these features, the OWL Plugin has created and facilitated new practices for building Semantic Web contents, often driven by the needs of and feedback from our users. Furthermore, Protege's flexible open-source platform means that it is easy to integrate custom-tailored components to build real-world applications. This document describes the architecture of the OWL Plugin, walks through its most important features, and discusses some of our design decisions.
    BibTeX:
    @inproceedings{Knublauch2004,
      author = {Knublauch, H and Fergerson, RW and Noy, NF and Musen, MA},
      title = {The Protege OWL Plugin: An open development environment for Semantic Web applications},
      booktitle = {SEMANTIC WEB - ISWC 2004, PROCEEDINGS},
      year = {2004},
      volume = {3298},
      pages = {229-243},
      note = {3rd International Semantic Web Conference, Hiroshima, JAPAN, NOV 07-11, 2004}
    }
    
    Kochut, K.J. & Janik, M. SPARQLeR: Extended sparql for Semantic association discovery {2007}
    Vol. {4519}Semantic Web: Research and Applications, Proceedings, pp. {145-159} 
    inproceedings  
    Abstract: Complex relationships, frequently referred to as semantic associations, are the essence of the Semantic Web. Query and retrieval of semantic associations has been an important task in many analytical and scientific activities, such as detecting money laundering and querying for metabolic pathways in biochemistry. We believe that support for semantic path queries should be an integral component of RDF query languages. In this paper, we present SPARQLeR, a novel extension of the SPARQL query language which adds the support for semantic path queries. The proposed extension fits seamlessly within the overall syntax and semantics of SPARQL and allows easy and natural formulation of queries involving a wide variety of regular path patterns in RDF graphs. SPARQLeR's path patterns can capture many low-level details of the queried associations. We also present an implementation of SPARQLeR and its initial performance results. Our implementation is built over BRAHMS, our own RDF storage system.
    BibTeX:
    @inproceedings{Kochut2007,
      author = {Kochut, Krys J. and Janik, Maciej},
      title = {SPARQLeR: Extended sparql for Semantic association discovery},
      booktitle = {Semantic Web: Research and Applications, Proceedings},
      year = {2007},
      volume = {4519},
      pages = {145-159},
      note = {4th European Semantic Web Conference, Innsbruck, AUSTRIA, JUN 03-07, 2007}
    }
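
    SPARQLeR predates SPARQL 1.1, which later standardized a related (if less expressive) feature: property paths. A minimal path query over an invented social graph, using the rdflib Python library:

      from rdflib import Graph

      g = Graph()
      g.parse(data="""
          @prefix ex: <http://example.org/> .
          ex:alice ex:knows ex:bob .
          ex:bob ex:knows ex:carol .
      """, format="turtle")

      # 'ex:knows+' matches paths of one or more ex:knows edges, a simple
      # case of the regular path patterns SPARQLeR argued for.
      for row in g.query("""
          PREFIX ex: <http://example.org/>
          SELECT ?reachable WHERE { ex:alice ex:knows+ ?reachable . }"""):
          print(row.reachable)  # ex:bob and ex:carol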
    
    Koehler, J., Philippi, S., Specht, M. & Rueegg, A. Ontology based text indexing and querying for the semantic web {2006} KNOWLEDGE-BASED SYSTEMS
    Vol. {19}({8}), pp. {744-754} 
    article DOI  
    Abstract: This publication shows how the gap between the HTML based internet and the RDF based vision of the semantic web might be bridged, by linking words in texts to concepts of ontologies. Most current search engines use indexes that are built at the syntactical level and return hits based on simple string comparisons. However, the indexes do not contain synonyms, cannot differentiate between homonyms ('mouse' as a pointing device vs. 'mouse' as an animal) and users receive different search results when they use different conjugation forms of the same word. In this publication, we present a system that uses ontologies and Natural Language Processing techniques to index texts, and thus supports word sense disambiguation and the retrieval of texts that contain equivalent words, by indexing them to concepts of ontologies. For this purpose, we developed fully automated methods for mapping equivalent concepts of imported RDF ontologies (for this prototype WordNet, SUMO and OpenCyc). These methods will thus allow the seamless integration of domain specific ontologies for concept based information retrieval in different domains. To demonstrate the practical workability of this approach, a set of web pages that contain synonyms and homonyms were indexed and can be queried via a search engine like query frontend. However, the ontology based indexing approach can also be used for other data mining applications such as text clustering, relation mining, and searching free text fields in biological databases. The ontology alignment methods and some of the text mining principles described in this publication are now incorporated into the ONDEX system http://ondex.sourceforge.net/. (c) 2006 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Koehler2006,
      author = {Koehler, Jacob and Philippi, Stephan and Specht, Michael and Rueegg, Alexander},
      title = {Ontology based text indexing and querying for the semantic web},
      journal = {KNOWLEDGE-BASED SYSTEMS},
      year = {2006},
      volume = {19},
      number = {8},
      pages = {744-754},
      doi = {{10.1016/j.knosys.2006.04.015}}
    }
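
    The mouse example above (pointing device vs. animal) corresponds to distinct WordNet synsets, which is exactly the hook that concept-based indexing needs. A small sketch with NLTK, assuming the WordNet corpus has been fetched with nltk.download('wordnet'); the sense number in the second part is an assumption about WordNet's numbering:

      from nltk.corpus import wordnet as wn

      # One synset per word sense: indexing documents by synset instead of
      # by the string "mouse" keeps the rodent and the device apart.
      for synset in wn.synsets("mouse", pos=wn.NOUN):
          print(synset.name(), "-", synset.definition())

      # Synonyms of a chosen sense, usable for query expansion.
      device = wn.synset("mouse.n.04")  # sense number is an assumption
      print([lemma.name() for lemma in device.lemmas()])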
    
    Kokare, M., Chatterji, B. & Biswas, P. A survey on current content based image retrieval methods {2002} IETE JOURNAL OF RESEARCH
    Vol. {48}({3-4}), pp. {261-271} 
    article  
    Abstract: Retrieving information from the Web is becoming a common practice for Internet users. However, the size and heterogeneity of the Web challenge the effectiveness of classical information retrieval techniques. Content-based retrieval of images and video has become a hot research area. The reason for this is the fact that we need effective and efficient techniques that meet user requirements to access large volumes of digital images and video data. This paper gives a brief survey of current CBIR (Content Based Image Retrieval) methods and technical achievements in this area. The survey includes a large number of papers covering the research aspects of system design and applications of CBIR, image feature representation and extraction, and multidimensional indexing. Furthermore, future research directions are suggested.
    BibTeX:
    @article{Kokare2002,
      author = {Kokare, M and Chatterji, BN and Biswas, PK},
      title = {A survey on current content based image retrieval methods},
      journal = {IETE JOURNAL OF RESEARCH},
      year = {2002},
      volume = {48},
      number = {3-4},
      pages = {261-271}
    }
    
    Kopecky, J., Vitvar, T., Bournez, C. & Farrell, J. SAWSDL: Semantic annotations for WSDL and XML schema {2007} IEEE INTERNET COMPUTING
    Vol. {11}({6}), pp. {60-67} 
    article  
    Abstract: Web services are important for creating distributed applications on the Web. In fact, they're a key enabler for service-oriented architectures that focus on service reuse and interoperability. The World Wide Web Consortium (W3C) has recently finished work on two important standards for describing Web services - the Web Services Description Language (WSDL) 2.0 and Semantic Annotations for WSDL and XML Schema (SAWSDL). Here, the authors discuss the latter, which is the first standard for adding semantics to Web service descriptions.
    BibTeX:
    @article{Kopecky2007,
      author = {Kopecky, Jacek and Vitvar, Tomas and Bournez, Carine and Farrell, Joel},
      title = {SAWSDL: Semantic annotations for WSDL and XML schema},
      journal = {IEEE INTERNET COMPUTING},
      year = {2007},
      volume = {11},
      number = {6},
      pages = {60-67}
    }
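
    SAWSDL works by attaching annotation attributes, most importantly sawsdl:modelReference, to WSDL and XML Schema components. A minimal sketch that stamps such an annotation onto a schema element with lxml; the schema fragment and ontology IRI are invented:

      from lxml import etree

      XS = "http://www.w3.org/2001/XMLSchema"
      SAWSDL = "http://www.w3.org/ns/sawsdl"

      schema = etree.Element("{%s}schema" % XS,
                             nsmap={"xs": XS, "sawsdl": SAWSDL})
      element = etree.SubElement(schema, "{%s}element" % XS,
                                 name="orderTotal", type="xs:double")

      # Point the schema element at a concept in some pricing ontology
      # (hypothetical IRI).
      element.set("{%s}modelReference" % SAWSDL,
                  "http://example.org/purchase#TotalPrice")

      print(etree.tostring(schema, pretty_print=True).decode())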
    
    Kopena, J. & Regli, W. DAMLJessKB: A tool for reasoning with the Semantic Web {2003} IEEE INTELLIGENT SYSTEMS
    Vol. {18}({3}), pp. {74-77} 
    article  
    BibTeX:
    @article{Kopena2003,
      author = {Kopena, J and Regli, WC},
      title = {DAMLJessKB: A tool for reasoning with the Semantic Web},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2003},
      volume = {18},
      number = {3},
      pages = {74-77}
    }
    
    Koper, R. Current research in learning design {2006} EDUCATIONAL TECHNOLOGY & SOCIETY
    Vol. {9}({1}), pp. {13-22} 
    article  
    Abstract: A `learning design' is defined as the description of the teaching-learning process that takes place in a unit of learning (e.g., a course, a lesson or any other designed learning event). The key principle in learning design is that it represents the learning activities and the support activities that are performed by different persons (learners, teachers) in the context of a unit of learning. The IMS Learning Design specification aims to represent the learning design of units of learning in a semantic, formal and machine-interpretable way. Since its release in 2003, various parties have been active in developing tools, experimenting with Learning Design in practice, and researching the further advancement of the specification. The aim of this special issue is to provide an overview of current work in the area. This paper introduces Learning Design, analyses the different papers and provides an overview of current research in Learning Design. The major research issues at the moment are: a) the use of ontologies and semantic web principles & tools related to Learning Design; b) the use of learning design patterns; c) the development of learning design authoring and content management systems; and d) the development of learning design players, including the issue of how to use the integrated set of learning design tools in a variety of settings.
    BibTeX:
    @article{Koper2006,
      author = {Koper, R},
      title = {Current research in learning design},
      journal = {EDUCATIONAL TECHNOLOGY & SOCIETY},
      year = {2006},
      volume = {9},
      number = {1},
      pages = {13-22}
    }
    
    Kotis, K. & Vouros, G.A. Human-centered ontology engineering: The HCOME methodology {2006} KNOWLEDGE AND INFORMATION SYSTEMS
    Vol. {10}({1}), pp. {109-131} 
    article DOI  
    Abstract: The fast-emerging and continuously evolving areas of the Semantic Web and Knowledge Management make the incorporation of ontology engineering tasks in knowledge-empowered organizations and in the World Wide Web more than necessary. In such environments, the development and evolution of ontologies must be seen as a dynamic process that has to be supported through the entire ontology life cycle, resulting in living ontologies. The aim of this paper is to present the Human-Centered Ontology Engineering Methodology (HCOME) for the development and evaluation of living ontologies in the context of communities of knowledge workers. The methodology aims to empower knowledge workers to continuously manage their formal conceptualizations in their day-to-day activities and shape their information space by being actively involved in the ontology life cycle. The paper also demonstrates the Human-Centered ONtology Engineering Environment, HCONE, which can effectively support this methodology.
    BibTeX:
    @article{Kotis2006,
      author = {Kotis, Konstantinos and Vouros, George A.},
      title = {Human-centered ontology engineering: The HCOME methodology},
      journal = {KNOWLEDGE AND INFORMATION SYSTEMS},
      year = {2006},
      volume = {10},
      number = {1},
      pages = {109-131},
      doi = {{10.1007/s10115-005-0227-4}}
    }
    
    Kroetzsch, M., Vrandecic, D. & Voelkel, M. Semantic MediaWiki {2006}
    Vol. {4273}Semantic Web - ISWC 2006, Proceedings, pp. {935-942} 
    inproceedings  
    Abstract: Semantic MediaWiki is an extension of MediaWiki - a widely used wiki-engine that also powers Wikipedia. Its aim is to make semantic technologies available to a broad community by smoothly integrating them with the established usage of MediaWiki. The software is already used on a number of productive installations world-wide, but the main target remains to establish ``Semantic Wikipedia'' as an early adopter of semantic technologies on the web. Thus usability and scalability are as important as powerful semantic features.
    BibTeX:
    @inproceedings{Kroetzsch2006,
      author = {Kroetzsch, Markus and Vrandecic, Denny and Voelkel, Max},
      title = {Semantic MediaWiki},
      booktitle = {Semantic Web - ISWC 2006, Proceedings},
      year = {2006},
      volume = {4273},
      pages = {935-942},
      note = {5th International Semantic Web Conference (ISWC 2006), Athens, GA, NOV 05-09, 2006}
    }
    
    Kroetzsch, M., Vrandecic, D., Voelkel, M., Haller, H. & Studer, R. Semantic Wikipedia {2007} JOURNAL OF WEB SEMANTICS
    Vol. {5}({4}), pp. {251-261} 
    article DOI  
    Abstract: Wikipedia is the world's largest collaboratively edited source of encyclopaedic knowledge. But in spite of its utility, its content is barely machine-interpretable and only weakly structured. With Semantic MediaWiki we provide an extension that enables wiki-users to semantically annotate wiki pages, based on which the wiki contents can be browsed, searched, and reused in novel ways. In this paper, we give an extended overview of Semantic MediaWiki and discuss experiences regarding performance and current applications. (C) 2007 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Kroetzsch2007,
      author = {Kroetzsch, Markus and Vrandecic, Denny and Voelkel, Max and Haller, Heiko and Studer, Rudi},
      title = {Semantic Wikipedia},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2007},
      volume = {5},
      number = {4},
      pages = {251-261},
      note = {15th International World Wide Web Conference, Edinburgh, SCOTLAND, MAY 23-26, 2006},
      doi = {{10.1016/j.websem.2007.09.001}}
    }
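
    Semantic MediaWiki exposes the annotations it collects through MediaWiki's web API via an `ask' action that takes SMW's query syntax. A hedged sketch with the requests library; the wiki URL, category and property names are placeholders for whatever a given installation defines, and the response shape assumed in the last lines follows SMW's documented ask format:

      import requests

      # Placeholder endpoint; any wiki running Semantic MediaWiki will do.
      API = "https://wiki.example.org/w/api.php"

      # SMW ask query: pages in Category:City, returning their Population.
      params = {
          "action": "ask",
          "query": "[[Category:City]]|?Population",
          "format": "json",
      }
      results = requests.get(API, params=params, timeout=10).json()
      for page, data in results["query"]["results"].items():
          print(page, data["printouts"]["Population"])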
    
    Kulikowski, C. The micro-macro spectrum of medical informatics challenges: From molecular medicine to transforming health care in a globalizing society {2002} METHODS OF INFORMATION IN MEDICINE
    Vol. {41}({1}), pp. {20-24} 
    article  
    Abstract: Background: Medical informatics has always encompassed a very broad spectrum of techniques for clinical and biomedical research, education and practice. There has been a concomitant variety of depth of specialization, ranging from the routine application of information processing methods to cutting-edge research on fundamental problems of computer-based systems and their relations to cognition and perception in biomedicine. Objectives: Challenges for the field can be placed in perspective by considering the scale of each, from the highly detailed scientific problems in bioinformatics and emerging molecular medicine to the broad and complex social problems of introducing medical informatics into web-related global settings. Methods: The scale of an informatics problem is not only determined by the inherent physical space in which it exists, but also by the conceptual complexity that it involves, reinforcing the need to investigate the semantic web within which medical informatics is defined. Results and Conclusion: Bioinformatics, biomedical imaging and language understanding provide examples that anchor research and practice in biomedical informatics at the detailed, scientific end of the spectrum. Traditional concerns of medical informatics in the clinical arena make up the broad mid-range of the spectrum, while novel social interaction models of competition and cooperation will be needed to understand the implications of distributed health information technology for individual and societal change in an increasingly interconnected world.
    BibTeX:
    @article{Kulikowski2002,
      author = {Kulikowski, CA},
      title = {The micro-macro spectrum of medical informatics challenges: From molecular medicine to transforming health care in a globalizing society},
      journal = {METHODS OF INFORMATION IN MEDICINE},
      year = {2002},
      volume = {41},
      number = {1},
      pages = {20-24}
    }
    
    Kwon, O. Meta web service: building web-based open decision support system based on web services {2003} EXPERT SYSTEMS WITH APPLICATIONS
    Vol. {24}({4}), pp. {375-389} 
    article DOI  
    Abstract: Web services are currently one of the trends in network-based business services, and they will naturally be applied to building semantic web-based decision support systems (DSS). Since web services are self-contained, modular business process applications based on open standards, they enable integration models that facilitate program-to-program interactions. Decision modules in a semantic web-based DSS can be viewed as web services. However, according to their current features, web services know only about themselves; they are neither autonomous, nor are they designed to use ontologies; they are passive until invoked, and they do not provide for composing functionalities. This motivates the building of a sophisticated web service that contains these features and utilizes web services on behalf of the user. This paper proposes a new concept of Meta Web Service, a web service-based DSS. The meta web service understands the user's problem statement with ontology, performs web service discovery and web service composition, and automatically generates code for composite web service execution. Case-based reasoning is applied to quickly find past histories of successful service compositions. A prototype of the research web service has been developed to show the feasibility of the proposed idea. (C) 2003 Elsevier Science Ltd. All rights reserved.
    BibTeX:
    @article{Kwon2003,
      author = {Kwon, OB},
      title = {Meta web service: building web-based open decision support system based on web services},
      journal = {EXPERT SYSTEMS WITH APPLICATIONS},
      year = {2003},
      volume = {24},
      number = {4},
      pages = {375-389},
      doi = {{10.1016/S0957-4174(02)00187-2}}
    }
    
    Kwon, O., Choi, S. & Park, G. NAMA: a context-aware multi-agent based web service approach to proactive need identification for personalized reminder systems {2005} EXPERT SYSTEMS WITH APPLICATIONS
    Vol. {29}({1}), pp. {17-32} 
    article DOI  
    Abstract: Developing a personalized, user-centric system is one of today's challenging issues in ubiquitous network-based systems, especially personalized reminder systems. Such a personalized reminder system has to identify the user's current needs dynamically and proactively based on the user's current context, such as location and current activity. However, need identification methodologies and their feasible architectures for personalized reminder systems have so far been rare. Hence, this paper aims to propose a proactive need identification mechanism by applying agent and semantic web technologies for a personalized reminder system, which is one of the supporting systems for a robust ubiquitous service support environment. We revisit associationism in order to understand a buyer's need identification process, and we adopt the process as `purchase based on association' to implement a personalized reminder system. Based on this approach, we have shown how an agent-based semantic web service system can be used to realize a personalized reminder system which identifies a buyer's need autonomously. We have created a prototype system, NAMA (Need Aware Multi-Agent), to demonstrate the feasibility of the methodology and of the mobile settings framework that we propose in this paper. NAMA embeds a Bluetooth-based location-tracking module and identifies what users are currently looking at through their mobile devices. Based on these capabilities, NAMA considers the context, user profile with preferences, and information about currently available services to discover the user's current needs and then link the user to a set of services, which are implemented as web services. (c) 2005 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Kwon2005,
      author = {Kwon, O and Choi, SC and Park, G},
      title = {NAMA: a context-aware multi-agent based web service approach to proactive need identification for personalized reminder systems},
      journal = {EXPERT SYSTEMS WITH APPLICATIONS},
      year = {2005},
      volume = {29},
      number = {1},
      pages = {17-32},
      doi = {{10.1016/j.eswa.2005.01.001}}
    }
    
    Lam, H.Y.K., Marenco, L., Clark, T., Gao, Y., Kinoshita, J., Shepherd, G., Miller, P., Wu, E., Wong, G.T., Liu, N., Crasto, C., Morse, T., Stephens, S. & Cheung, K.-H. Research - AlzPharm: integration of neurodegeneration data using RDF {2007} BMC BIOINFORMATICS
    Vol. {8}({Suppl. 3}) 
    article DOI  
    Abstract: Background: Neuroscientists often need to access a wide range of data sets distributed over the Internet. These data sets, however, are typically neither integrated nor interoperable, resulting in a barrier to answering complex neuroscience research questions. Domain ontologies can enable the querying of heterogeneous data sets, but they are not sufficient for neuroscience since the data of interest commonly span multiple research domains. To this end, e-Neuroscience seeks to provide an integrated platform for neuroscientists to discover new knowledge through seamless integration of the very diverse types of neuroscience data. Here we present a Semantic Web approach to building this e-Neuroscience framework by using the Resource Description Framework (RDF) and its vocabulary description language, RDF Schema (RDFS), as a standard data model to facilitate both representation and integration of the data. Results: We have constructed a pilot ontology for BrainPharm (a subset of SenseLab) using RDFS and then converted a subset of the BrainPharm data into RDF according to the ontological structure. We have also integrated the converted BrainPharm data with existing RDF hypothesis and publication data from a pilot version of SWAN (Semantic Web Applications in Neuromedicine). Our implementation uses the RDF Data Model in Oracle Database 10g release 2 for data integration, query, and inference, while our Web interface allows users to query the data and retrieve the results in a convenient fashion (see the sketch after this entry). Conclusion: Accessing and integrating biomedical data that cut across multiple disciplines will be increasingly indispensable and beneficial to neuroscience researchers. The Semantic Web approach we undertook has demonstrated a promising way to semantically integrate data sets created independently. It also shows how advanced queries and inferences can be performed over the integrated data, which are hard to achieve using traditional data integration approaches. Our pilot results suggest that our Semantic Web approach is suitable for realizing e-Neuroscience and generic enough to be applied in other biomedical fields.
    BibTeX:
    @article{Lam2007,
      author = {Lam, Hugo Y. K. and Marenco, Luis and Clark, Tim and Gao, Yong and Kinoshita, June and Shepherd, Gordon and Miller, Perry and Wu, Elizabeth and Wong, Gwendolyn T. and Liu, Nian and Crasto, Chiquito and Morse, Thomas and Stephens, Susie and Cheung, Kei-Hoi},
      title = {Research - AlzPharm: integration of neurodegeneration data using RDF},
      journal = {BMC BIOINFORMATICS},
      year = {2007},
      volume = {8},
      number = {Suppl. 3},
      doi = {{10.1186/1471-2105-8-S3-S4}}
    }
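
    A minimal sketch (Python) of the RDF-integration pattern this entry describes: independently created RDF data sets are loaded into one graph and queried together. It uses the open-source rdflib library as a stand-in for the paper's Oracle Database 10g RDF store; the file names, namespace, and properties are hypothetical.

      from rdflib import Graph

      g = Graph()
      g.parse("brainpharm_subset.rdf")  # hypothetical converted BrainPharm data
      g.parse("swan_pilot.rdf")         # hypothetical SWAN hypothesis/publication data

      # Once both data sets sit in one graph, a single SPARQL query spans them.
      query = """
      PREFIX ex: <http://example.org/alzpharm#>
      SELECT ?drug ?hypothesis
      WHERE {
          ?drug ex:targets ?receptor .
          ?hypothesis ex:mentions ?receptor .
      }
      """
      for drug, hypothesis in g.query(query):
          print(drug, hypothesis)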
    
    Lambrix, P. & Tan, H. SAMBO - A system for aligning and merging biomedical ontologies {2006} JOURNAL OF WEB SEMANTICS
    Vol. {4}({3}), pp. {196-206} 
    article DOI  
    Abstract: Due to the recent explosion of the amount of online accessible biomedical data and tools, finding and retrieving the relevant information is not an easy task. The vision of a Semantic Web for life sciences alleviates these difficulties. A key technology for the Semantic Web is ontologies. In recent years many biomedical ontologies have been developed and many of these ontologies contain overlapping information. To be able to use multiple ontologies they have to be aligned or merged. In this paper we propose a framework for aligning and merging ontologies. Further, we developed a system for aligning and merging biomedical ontologies (SAMBO) based on this framework. The framework is also a first step towards a general framework that can be used for comparative evaluations of alignment strategies and their combinations. In this paper we evaluated different strategies and their combinations in terms of quality and processing time and compared SAMBO with two other systems. (C) 2006 Elsevier B. V. All rights reserved.
    BibTeX:
    @article{Lambrix2006,
      author = {Lambrix, Patrick and Tan, He},
      title = {SAMBO - A system for aligning and merging biomedical ontologies},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2006},
      volume = {4},
      number = {3},
      pages = {196-206},
      doi = {{10.1016/j.websem.2006.05.003}}
    }
    
    Lara, R., Roman, D., Polleres, A. & Fensel, D. A conceptual comparison of WSMO and OWL-S {2004}
    Vol. {3250}WEB SERVICES, PROCEEDINGS, pp. {254-269} 
    inproceedings  
    Abstract: Web Services have added a new level of functionality on top of the current Web, enabling the use and combination of distributed functional components within and across company boundaries. The addition of semantic information to describe Web Services, in order to enable the automatic location, combination and use of distributed functionalities, is nowadays one of the most relevant research topics due to its potential to achieve dynamic, scalable and cost-effective Enterprise Application Integration and eCommerce. In this context, two major initiatives aim to realize Semantic Web Services by providing appropriate description means that enable the effective exploitation of semantic annotations, namely: WSMO and OWL-S. In this paper, we conduct a conceptual comparison that identifies the overlaps and differences of both initiatives in order to evaluate their applicability in a real setting and their potential to become widely accepted standards.
    BibTeX:
    @inproceedings{Lara2004,
      author = {Lara, R and Roman, D and Polleres, A and Fensel, D},
      title = {A conceptual comparison of WSMO and OWL-S},
      booktitle = {WEB SERVICES, PROCEEDINGS},
      year = {2004},
      volume = {3250},
      pages = {254-269},
      note = {European Conference on Web Services (ECOWS 2004), Erfurt, GERMANY, SEP 27-30, 2004}
    }
    
    Lassila, O. & Adler, M. Semantic gadgets: Ubiquitous computing meets the semantic web {2003} SPINNING THE SEMANTIC WEB - BRINGING THE WORLD WIDE WEB TO ITS FULL POTENTIAL, pp. {363-376}  inproceedings  
    BibTeX:
    @inproceedings{Lassila2003,
      author = {Lassila, O and Adler, M},
      title = {Semantic gadgets: Ubiquitous computing meets the semantic web},
      booktitle = {SPINNING THE SEMANTIC WEB - BRINGING THE WORLD WIDE WEB TO ITS FULL POTENTIAL},
      year = {2003},
      pages = {363-376},
      note = {Seminar on Semantics for the Web, Wadern, GERMANY, MAR 19-24, 2000}
    }
    
    Lee, C.-S., Kao, Y.-F., Kuo, Y.-H. & Wang, M.-H. Automated ontology construction for unstructured text documents {2007} DATA & KNOWLEDGE ENGINEERING
    Vol. {60}({3}), pp. {547-566} 
    article DOI  
    Abstract: Ontology is playing an increasingly important role in knowledge management and the Semantic Web. This study presents a novel episode-based ontology construction mechanism to extract domain ontology from unstructured text documents. Additionally, fuzzy numbers for conceptual similarity computing are presented for concept clustering and taxonomic relation definitions. Moreover, concept attributes and operations can be extracted from episodes to construct a domain ontology, while non-taxonomic relations can be generated from episodes. The fuzzy inference mechanism is also applied to obtain new instances for ontology learning. Experimental results show that the proposed approach can effectively construct a Chinese domain ontology from unstructured text documents. (c) 2006 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Lee2007,
      author = {Lee, Chang-Shing and Kao, Yuan-Fang and Kuo, Yau-Hwang and Wang, Mei-Hui},
      title = {Automated ontology construction for unstructured text documents},
      journal = {DATA & KNOWLEDGE ENGINEERING},
      year = {2007},
      volume = {60},
      number = {3},
      pages = {547-566},
      doi = {{10.1016/j.datak.2006.04.001}}
    }
    
    Lee, C.-S., Wang, M.-H. & Chen, J.-J. Ontology-based intelligent decision support agent for CMMI project monitoring and control {2008} INTERNATIONAL JOURNAL OF APPROXIMATE REASONING
    Vol. {48}({1}), pp. {62-76} 
    article DOI  
    Abstract: This paper presents an ontology-based intelligent decision support agent (OIDSA) to apply to project monitoring and control of capability maturity model integration (CMMI). The OIDSA is composed of a natural language processing agent, a fuzzy inference agent, and a performance decision support agent. All the needed information of the OIDSA, including the CMMI ontology and the project personal ontology, is stored in an ontology repository. In addition, the natural language processing agent, based on the Chinese Dictionary, periodically collects the information of the project progress from project members to analyze the features of the Chinese terms for semantic concept clustering. Next, the fuzzy inference agent computes the similarity of the planned progress report and actual progress report, based on the CMMI ontology, the project personal ontology, and natural language processing results. Finally, the performance decision support agent measures the completed percentage of the progress for each project member. The results provided by the OIDSA are sent to the project manager for evaluating the performance of each project member. The experimental results show that the OIDSA can work effectively for project monitoring and control of CMMI. (C) 2007 Elsevier Inc. All rights reserved.
    BibTeX:
    @article{Lee2008,
      author = {Lee, Chang-Shing and Wang, Mei-Hui and Chen, Jui-Jen},
      title = {Ontology-based intelligent decision support agent for CMMI project monitoring and control},
      journal = {INTERNATIONAL JOURNAL OF APPROXIMATE REASONING},
      year = {2008},
      volume = {48},
      number = {1},
      pages = {62-76},
      doi = {{10.1016/j.ijar.2007.06.007}}
    }
    
    Lee, J. & Park, M. Integration and composition of web service-based business processes {2003} JOURNAL OF COMPUTER INFORMATION SYSTEMS
    Vol. {44}({1}), pp. {82-92} 
    article  
    Abstract: Technologies for Web services facilitate the creation of business process solutions in an efficient, standard way. However, the automation of process integration with Web service technologies requires the automation of discovery and composition of Web services. In this paper, we focus on two problems of Web service-based business process integration: (1) the discovery of Web services based on the capabilities and properties of published services, and (2) the composition of business processes based on the business requirements of submitted requests. We propose a solution to these problems, which comprises multiple matching algorithms: a micro-level matching algorithm, which matches the capabilities of services with activities in a process request, and macro-level matching algorithms, which are used to compose a business process by identifying services that satisfy the business requirements and constraints of the request. The solution from the macro-level matching algorithms is optimal in terms of meeting a certain business objective, e.g., minimizing the cost or execution time, or maximizing the total utility value of business properties of interest (see the sketch after this entry). Numerical examples are illustrated to show how to select the best Web service candidate for a chosen business process through the use of the proposed macro-level matching algorithms. Furthermore, we show how existing Web service standards, UDDI and BPEL4WS, can be used and extended to specify the capabilities of services and the business requirements of requests, respectively.
    BibTeX:
    @article{Lee2003a,
      author = {Lee, J and Park, MS},
      title = {Integration and composition of web service-based business processes},
      journal = {JOURNAL OF COMPUTER INFORMATION SYSTEMS},
      year = {2003},
      volume = {44},
      number = {1},
      pages = {82-92}
    }
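
    A small sketch (Python) of the cost-minimizing flavor of the macro-level matching described above. When no cross-activity constraint links the choices, the optimum decomposes into per-activity minima; the paper's algorithms additionally handle business constraints, which this sketch omits. All activity names, service names, and costs are hypothetical.

      # Hypothetical candidate services per process activity, with their costs.
      candidates = {
          "check_credit": [("CreditSvcA", 5.0), ("CreditSvcB", 3.5)],
          "ship_order":   [("ShipFast", 9.0), ("ShipCheap", 4.0)],
      }

      def compose_min_cost(candidates):
          # Without cross-activity constraints, choosing the cheapest candidate
          # per activity yields the cheapest overall composition.
          plan, total = {}, 0.0
          for activity, options in candidates.items():
              name, cost = min(options, key=lambda sc: sc[1])
              plan[activity] = name
              total += cost
          return plan, total

      print(compose_min_cost(candidates))
      # ({'check_credit': 'CreditSvcB', 'ship_order': 'ShipCheap'}, 7.5)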
    
    Lee, S., Wang, T.D., Hashmi, N. & Cummings, M.P. Bio-STEER: A Semantic Web workflow tool for Grid computing in the life sciences {2007} FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF GRID COMPUTING-THEORY METHODS AND APPLICATIONS
    Vol. {23}({3}), pp. {497-509} 
    article DOI  
    Abstract: Life science research is becoming evermore computationally intensive. Hence, from a computational resource perspective, Grid computing provides a logical approach to meeting many of the computational needs of life science research. However, there are several barriers to the widespread use of Grid computing in life sciences. In this paper, we attempt to address one particular barrier: the difficulty of using Grid computing by life scientists. Life science research often involves connecting multiple applications together to form a workflow. This process of constructing a workflow is complex. When combined with the difficulty of using Grid services, composing a meaningful workflow using Grid services can present a challenge to life scientists. Our proposed solution is a Semantic Web-enabled computing environment, called Bio-STEER. In Bio-STEER, bioinformatics Grid services are mapped to Semantic Web services, described in OWL-S. We also defined an ontology in OWL to model bioinformatics applications. A graphical user interface helps to construct a scientific workflow by showing a list of services that are semantically sound: that is, the output of one service is semantically compatible with the input of the connecting service. Bio-STEER can help users take full advantage of Grid services through a user-friendly graphical user interface (GUI), which allows them to easily construct the workflows they need. (c) 2006 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Lee2007a,
      author = {Lee, Sung and Wang, Taowei David and Hashmi, Nada and Cummings, Michael P.},
      title = {Bio-STEER: A Semantic Web workflow tool for Grid computing in the life sciences},
      journal = {FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF GRID COMPUTING-THEORY METHODS AND APPLICATIONS},
      year = {2007},
      volume = {23},
      number = {3},
      pages = {497-509},
      doi = {{10.1016/j.future.2006.07.011}}
    }
    
    Lee, W. & Tsai, T. An interactive agent-based system for concept-based web search {2003} EXPERT SYSTEMS WITH APPLICATIONS
    Vol. {24}({4}), pp. {365-373} 
    article DOI  
    Abstract: Search engines are useful tools in looking for information from the Internet. However, due to the difficulties of specifying appropriate queries and the problems of keyword-based similarity ranking presently encountered by search engines, general users are still not satisfied with the results retrieved. To remedy the above difficulties and problems, in this paper we present a multi-agent framework in which an interactive approach is proposed to iteratively collect a user's feedback from the pages he has identified. By analyzing the pages gathered, the system can then gradually formulate queries to efficiently describe the content a user is looking for. In our framework, evolution strategies are employed to evolve critical feature words for concept modeling in query formulation. The experimental results show that the framework developed is efficient and useful to enhance the quality of web search, and concept-based semantic search can thus be achieved. (C) 2003 Elsevier Science Ltd. All rights reserved.
    BibTeX:
    @article{Lee2003,
      author = {Lee, WP and Tsai, TC},
      title = {An interactive agent-based system for concept-based web search},
      journal = {EXPERT SYSTEMS WITH APPLICATIONS},
      year = {2003},
      volume = {24},
      number = {4},
      pages = {365-373},
      doi = {{10.1016/S0957-4174(02)00186-0}}
    }
    
    Lehmann, T., Guld, M., Thies, O., Fisher, B., Spitzer, K., Keysers, D., Ney, H., Kohnen, M., Schubert, H. & Wein, B. Content-based image retrieval in medical applications {2004} METHODS OF INFORMATION IN MEDICINE
    Vol. {43}({4}), pp. {354-361} 
    article  
    Abstract: Objectives: To develop a general structure for semantic image analysis that is suitable for content-based image retrieval in medical applications and an architecture for its efficient implementation. Methods: Stepwise content analysis of medical images results in six layers of information modeling incorporating medical expert knowledge (raw data layer, registered data layer, feature layer, scheme layer, object layer, knowledge layer). A reference database with 10,000 images categorized according to the image modality, orientation, body region, and biological system is used. By means of prototypes in each category, identification of objects and their geometrical or temporal relationships are handled in the object and the knowledge layer, respectively. A distributed system designed with only three core elements is implemented: (i) the central database holds program sources, processing scheme descriptions, images, features, and administrative information about the workstation cluster; (ii) the scheduler balances distributed computing; and (iii) the web server provides graphical user interfaces for data entry and retrieval, which can be easily adapted to a variety of applications for content-based image retrieval in medicine. Results: Leaving-one-out experiments were distributed by the scheduler and controlled via corresponding job lists, offering transparency from the viewpoints of both the distributed system and the user. The proposed architecture is suitable for content-based image retrieval in medical applications. It improves current picture archiving and communication systems that still rely on alphanumerical descriptions, which are insufficient for image retrieval of high recall and precision.
    BibTeX:
    @article{Lehmann2004,
      author = {Lehmann, TM and Guld, MO and Thies, O and Fisher, B and Spitzer, K and Keysers, D and Ney, H and Kohnen, M and Schubert, H and Wein, BB},
      title = {Content-based image retrieval in medical applications},
      journal = {METHODS OF INFORMATION IN MEDICINE},
      year = {2004},
      volume = {43},
      number = {4},
      pages = {354-361}
    }
    
    Leifman, G., Meir, R. & Tal, A. Semantic-oriented 3d shape retrieval using relevance feedback {2005} VISUAL COMPUTER
    Vol. {21}({8-10, Sp. Iss. SI}), pp. {865-875} 
    article DOI  
    Abstract: Shape-based retrieval of 3D models has become an important challenge in computer graphics. Object similarity, however, is a subjective matter, dependent on the human viewer, since objects have semantics and are not mere geometric entities. Relevance feedback aims at addressing the subjectivity of similarity. This paper presents a novel relevance feedback algorithm that is based on supervised as well as unsupervised feature extraction techniques. It also proposes a novel signature for 3D models, the sphere projection. A Web search engine that realizes the signature and the relevance feedback algorithm is presented. We show that the proposed approach produces good results and outperforms previous techniques.
    BibTeX:
    @article{Leifman2005,
      author = {Leifman, G and Meir, R and Tal, A},
      title = {Semantic-oriented 3d shape retrieval using relevance feedback},
      journal = {VISUAL COMPUTER},
      year = {2005},
      volume = {21},
      number = {8-10, Sp. Iss. SI},
      pages = {865-875},
      note = {13th Pacific Conference on Computer Graphics and Applications, Macao, PEOPLES R CHINA, OCT 12-14, 2005},
      doi = {{10.1007/s00371-005-0341-z}}
    }
    
    Leontis, N., Altman, R., Berman, H., Brenner, S., Brown, J., Engelke, D., Harvey, S., Holbrook, S., Jossinet, F., Lewis, S., Major, F., Mathews, D., Richardson, J., Williamson, J. & Westhof, E. The RNA Ontology Consortium: An open invitation to the RNA community {2006} RNA-A PUBLICATION OF THE RNA SOCIETY
    Vol. {12}({4}), pp. {533-541} 
    article DOI  
    Abstract: The aim of the RNA Ontology Consortium (ROC) is to create an integrated conceptual framework - an RNA Ontology (RO) - with a common, dynamic, controlled, and structured vocabulary to describe and characterize RNA sequences, secondary structures, three-dimensional structures, and dynamics pertaining to RNA function. The RO should produce tools for clear communication about RNA structure and function for multiple uses, including the integration of RNA electronic resources into the Semantic Web. These tools should allow the accurate description in computer-interpretable form of the coupling between RNA architecture, function, and evolution. The purposes for creating the RO are, therefore, (1) to integrate sequence and structural databases; (2) to allow different computational tools to interoperate; (3) to create powerful software tools that bring advanced computational methods to the bench scientist; and (4) to facilitate precise searches for all relevant information pertaining to RNA. For example, one initial objective of the ROC is to define, identify, and classify RNA structural motifs described in the literature or appearing in databases and to agree on a computer-interpretable definition for each of these motifs. To achieve these aims, the ROC will foster communication and promote collaboration among RNA scientists by coordinating frequent face-to-face workshops to discuss, debate, and resolve difficult conceptual issues. These meeting opportunities will create new directions at various levels of RNA research. The ROC will work closely with the PDB/NDB structural databases and the Gene, Sequence, and Open Biomedical Ontology Consortia to integrate the RO with existing biological ontologies to extend existing content while maintaining interoperability.
    BibTeX:
    @article{Leontis2006,
      author = {Leontis, NB and Altman, RB and Berman, HM and Brenner, SE and Brown, JW and Engelke, DR and Harvey, SC and Holbrook, SR and Jossinet, F and Lewis, SE and Major, F and Mathews, DH and Richardson, JS and Williamson, JR and Westhof, E},
      title = {The RNA Ontology Consortium: An open invitation to the RNA community},
      journal = {RNA-A PUBLICATION OF THE RNA SOCIETY},
      year = {2006},
      volume = {12},
      number = {4},
      pages = {533-541},
      doi = {{10.1261/rna.2343206}}
    }
    
    Letondal, C. A Web interface generator for molecular biology programs in Unix {2001} BIOINFORMATICS
    Vol. {17}({1}), pp. {73-82} 
    article  
    Abstract: Motivation: Almost all users encounter problems using sequence analysis programs. Not only are they difficult to learn because of their parameters, syntax and semantics, but many are also different from one another. That is why we have developed a Web interface generator for more than 150 molecular biology command-line driven programs, including phylogeny, gene prediction, alignment, RNA, DNA and protein analysis, motif discovery, structure analysis and database searching programs. The generator uses XML as a high-level description language of the legacy software parameters. Its aim is to provide users with the equivalent of a basic Unix environment, with program combination, customization and basic scripting through macro registration. Results: The program has been used for three years by about 15 000 users throughout the world; it has recently been installed on other sites and evaluated as a standard user interface for EMBOSS programs.
    BibTeX:
    @article{Letondal2001,
      author = {Letondal, C},
      title = {A Web interface generator for molecular biology programs in Unix},
      journal = {BIOINFORMATICS},
      year = {2001},
      volume = {17},
      number = {1},
      pages = {73-82}
    }
    
    Letsche, T. & Berry, M. Large-scale information retrieval with latent semantic indexing {1997} INFORMATION SCIENCES
    Vol. {100}({1-4}), pp. {105-137} 
    article  
    Abstract: As the amount of electronic information increases, traditional lexical (or Boolean) information retrieval techniques will become less useful. Large, heterogeneous collections will be difficult to search since the sheer volume of unranked documents returned in response to a query will overwhelm the user. Vector-space approaches to information retrieval, on the other hand, allow the user to search for concepts rather than specific words, and rank the results of the search according to their relative similarity to the query. One vector-space approach, Latent Semantic Indexing (LSI), has achieved up to 30% better retrieval performance than lexical searching techniques by employing a reduced-rank model of the term-document space (see the formulas after this entry). However, the original implementation of LSI lacked the execution efficiency required to make LSI useful for large data sets. A new implementation of LSI, LSI++, seeks to make LSI efficient, extensible, portable, and maintainable. The LSI++ Application Programming Interface (API) allows applications to immediately use LSI without knowing the implementation details of the underlying system. LSI++ supports both serial and distributed searching of large data sets, providing the same programming interface regardless of the implementation actually executing. In addition, a World Wide Web interface was created to allow simple, intuitive searching of document collections using LSI++. Timing results indicate that the serial implementation of LSI++ searches up to six times faster than the original implementation of LSI, while the parallel implementation searches nearly 180 times faster on large document collections. (C) Elsevier Science Inc. 1997.
    BibTeX:
    @article{Letsche1997,
      author = {Letsche, TA and Berry, MW},
      title = {Large-scale information retrieval with latent semantic indexing},
      journal = {INFORMATION SCIENCES},
      year = {1997},
      volume = {100},
      number = {1-4},
      pages = {105-137}
    }
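
    The reduced-rank model behind LSI, stated in standard textbook form (LaTeX); this is the general formulation, not anything specific to the LSI++ implementation. A is the t-by-d term-document matrix, k << min(t, d) is the reduced rank, and a query vector q is folded into the k-dimensional space before cosine ranking against the document vectors.

      \[
        A \approx A_k = U_k \Sigma_k V_k^{T}
      \]
      \[
        \hat{q} = \Sigma_k^{-1} U_k^{T} q, \qquad
        \mathrm{sim}(\hat{q}, \hat{d}_j)
          = \frac{\hat{q} \cdot \hat{d}_j}{\lVert \hat{q} \rVert \, \lVert \hat{d}_j \rVert}
      \]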
    
    Leung, H., Chung, F. & Chan, S. On the use of hierarchical information in sequential mining-based XML document similarity computation {2005} KNOWLEDGE AND INFORMATION SYSTEMS
    Vol. {7}({4}), pp. {476-498} 
    article DOI  
    Abstract: Measuring the structural similarity among XML documents is the task of finding their semantic correspondence and is fundamental to many web-based applications. While there exist several methods to address the problem, the data mining approach seems to be a novel, interesting and promising one. It explores the idea of extracting paths from XML documents, encoding them as sequences and finding the maximal frequent sequences using sequential pattern mining algorithms. In view of the deficiencies encountered by ignoring the hierarchical information in encoding the paths for mining, a new sequential pattern mining scheme for XML document similarity computation is proposed in this paper. It makes use of a preorder tree representation (PTR) to encode the XML tree's paths so that both the semantics of the elements and the hierarchical structure of the document can be taken into account when computing the structural similarity among documents (see the sketch after this entry). In addition, it proposes a postprocessing step to reuse the mined patterns to estimate the similarity of unmatched elements so that another metric to qualify the similarity between XML documents can be introduced. Encouraging experimental results were obtained and reported.
    BibTeX:
    @article{Leung2005,
      author = {Leung, HP and Chung, FL and Chan, SCF},
      title = {On the use of hierarchical information in sequential mining-based XML document similarity computation},
      journal = {KNOWLEDGE AND INFORMATION SYSTEMS},
      year = {2005},
      volume = {7},
      number = {4},
      pages = {476-498},
      doi = {{10.1007/s10115-004-0156-7}}
    }
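
    A hedged sketch (Python) of the path-encoding step: every root-to-leaf path of an XML tree is emitted as a sequence of (preorder position, tag) pairs, so paths sharing a tag name remain distinguishable by their place in the hierarchy. This is one plausible reading of the PTR idea for illustration only, not the paper's algorithm.

      import xml.etree.ElementTree as ET

      def encode_paths(xml_text: str):
          root = ET.fromstring(xml_text)
          counter = {"n": 0}
          sequences = []

          def walk(node, prefix):
              counter["n"] += 1
              step = (counter["n"], node.tag)  # preorder number keeps hierarchy
              children = list(node)
              if not children:
                  sequences.append(prefix + [step])
              for child in children:
                  walk(child, prefix + [step])

          walk(root, [])
          return sequences

      print(encode_paths("<a><b><c/></b><b><d/></b></a>"))
      # [[(1, 'a'), (2, 'b'), (3, 'c')], [(1, 'a'), (4, 'b'), (5, 'd')]]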
    
    Li, L. & Horrocks, I. A software framework for matchmaking based on semantic Web technology {2004} INTERNATIONAL JOURNAL OF ELECTRONIC COMMERCE
    Vol. {8}({4}), pp. {39-60} 
    article  
    Abstract: The semantic Web can make e-commerce interactions more flexible and automated by standardizing ontologies, message content, and message protocols. This paper investigates how semantic and Web Services technologies can be used to support service advertisement and discovery in e-commerce. In particular, it describes the design and implementation of a service matchmaking prototype that uses a DAML-S based ontology and a description logic reasoner to compare ontology-based service descriptions. By representing the semantics of service descriptions, the matchmaker enables the behavior of an intelligent agent to approach more closely that of a human user trying to locate suitable Web services. The performance of this prototype implementation was tested in a realistic agent-based e-commerce scenario.
    BibTeX:
    @article{Li2004,
      author = {Li, L and Horrocks, I},
      title = {A software framework for matchmaking based on semantic Web technology},
      journal = {INTERNATIONAL JOURNAL OF ELECTRONIC COMMERCE},
      year = {2004},
      volume = {8},
      number = {4},
      pages = {39-60},
      note = {12th International World Wide Web Conference, Budapest, HUNGARY, MAY 20-24, 2003}
    }
    
    Li, M., van Santen, P., Walker, D., Rana, O. & Baker, M. SGrid: a service-oriented model for the Semantic Grid {2004} FUTURE GENERATION COMPUTER SYSTEMS
    Vol. {20}({1}), pp. {7-18} 
    article DOI  
    Abstract: This paper presents SGrid, a service-oriented model for the Semantic Grid. Each Grid service in SGrid is a Web service with certain domain knowledge. A Web services oriented wrapper generator has been implemented to automatically wrap legacy codes as Grid services exposed as Web services. Each wrapped Grid service is supplemented with domain ontology and registered with a Semantic Grid Service Ontology Repository using a Semantic Services Register. Using the wrapper generator, a finite element based computational fluid dynamics (CFD) code has been wrapped as a Grid service, which can be published, discovered and reused in SGrid. (C) 2003 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Li2004b,
      author = {Li, M and van Santen, P and Walker, DW and Rana, OF and Baker, MA},
      title = {SGrid: a service-oriented model for the Semantic Grid},
      journal = {FUTURE GENERATION COMPUTER SYSTEMS},
      year = {2004},
      volume = {20},
      number = {1},
      pages = {7-18},
      doi = {{10.1016/S0167-739X(03)00160-2}}
    }
    
    Li, Y., McLean, D., Bandar, Z., O'Shea, J. & Crockett, K. Sentence similarity based on semantic nets and corpus statistics {2006} IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
    Vol. {18}({8}), pp. {1138-1150} 
    article  
    Abstract: Sentence similarity measures play an increasingly important role in text-related research and applications in areas such as text mining, Web page retrieval, and dialogue systems. Existing methods for computing sentence similarity have been adopted from approaches used for long text documents. These methods process sentences in a very high-dimensional space and are consequently inefficient, require human input, and are not adaptable to some application domains. This paper focuses directly on computing the similarity between very short texts of sentence length. It presents an algorithm that takes account of semantic information and word order information implied in the sentences (see the sketch after this entry). The semantic similarity of two sentences is calculated using information from a structured lexical database and from corpus statistics. The use of a lexical database enables our method to model human common sense knowledge and the incorporation of corpus statistics allows our method to be adaptable to different domains. The proposed method can be used in a variety of applications that involve text knowledge representation and discovery. Experiments on two sets of selected sentence pairs demonstrate that the proposed method provides a similarity measure that shows a significant correlation to human intuition.
    BibTeX:
    @article{Li2006,
      author = {Li, YH and McLean, D and Bandar, ZA and O'Shea, JD and Crockett, K},
      title = {Sentence similarity based on semantic nets and corpus statistics},
      journal = {IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING},
      year = {2006},
      volume = {18},
      number = {8},
      pages = {1138-1150}
    }
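
    A sketch (Python) of the combination step in the measure described above: an overall score blends a semantic component with a word-order component, weighted by delta. The paper derives word similarity from a lexical database plus corpus statistics; the toy word_sim below uses exact match only, so just the combination logic is illustrated.

      def word_sim(w1: str, w2: str) -> float:
          # Placeholder for a lexical-database similarity measure.
          return 1.0 if w1 == w2 else 0.0

      def semantic_vector(sentence, joint_words):
          return [max(word_sim(w, sw) for sw in sentence) for w in joint_words]

      def order_vector(sentence, joint_words):
          # 1-based position of the best-matching word, 0 if no match at all.
          out = []
          for w in joint_words:
              sims = [word_sim(w, sw) for sw in sentence]
              best = max(sims)
              out.append(sims.index(best) + 1 if best > 0 else 0)
          return out

      def cosine(a, b):
          num = sum(x * y for x, y in zip(a, b))
          den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
          return num / den if den else 0.0

      def sentence_similarity(s1, s2, delta=0.85):
          joint = sorted(set(s1) | set(s2))
          ss = cosine(semantic_vector(s1, joint), semantic_vector(s2, joint))
          r1, r2 = order_vector(s1, joint), order_vector(s2, joint)
          sr = 1 - (sum((x - y) ** 2 for x, y in zip(r1, r2)) ** 0.5 /
                    (sum((x + y) ** 2 for x, y in zip(r1, r2)) ** 0.5 or 1))
          return delta * ss + (1 - delta) * sr

      print(sentence_similarity("a quick test".split(), "a fast test".split()))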
    
    Li, Y. & Zhong, N. Web mining model and its applications for information gathering {2004} KNOWLEDGE-BASED SYSTEMS
    Vol. {17}({5-6}), pp. {207-217} 
    article DOI  
    Abstract: Web mining is used to automatically discover and extract information from Web-related data sources such as documents, logs, services, and user profiles. Although standard data mining methods may be applied for mining on the Web, many specific algorithms need to be developed and applied to process information from multiple Web resources effectively and efficiently. In this paper, we propose an abstract Web mining model for extracting approximate concepts hidden in user profiles on the semantic Web. The abstract Web mining model represents knowledge on user profiles by using an ontology which consists of both 'part-of' and 'is-a' relations. We also describe the details of using the abstract Web mining model for information gathering. In this application, classes of the ontology are represented as subsets of a list of keywords. An efficient filtering algorithm is also developed to filter out most non-relevant inputs. (C) 2004 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Li2004a,
      author = {Li, YF and Zhong, N},
      title = {Web mining model and its applications for information gathering},
      journal = {KNOWLEDGE-BASED SYSTEMS},
      year = {2004},
      volume = {17},
      number = {5-6},
      pages = {207-217},
      doi = {{10.1016/j.knosys.2004.05.002}}
    }
    
    Lim, S. & Ng, Y. An automated change-detection algorithm for HTML documents based on semantic hierarchies {2001} 17TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING, PROCEEDINGS, pp. {303-312}  inproceedings  
    Abstract: Data at many Web sites are changing rapidly, and a significant amount of these data are presented in HTML documents that consist of markups and data contents. Although XML is getting more popular in data exchange, the presentation of data contained in XML documents is given by and large in the HTML format using XSL(T). Since HTML was designed to "display" data from the human perspective, it is not trivial for a machine to detect (hierarchical) changes of data in an HTML document. In this paper we propose a heuristic algorithm, called SCD, to detect semantic changes of hierarchical data contents in any two HTML documents automatically (see the sketch after this entry). Semantic changes differ from syntactic changes, since the latter refer to changes of data contents with respect to markup structures according to the HTML grammar. SCD requires neither preprocessing nor any knowledge of the internal structure of the source documents beforehand. The time complexity of SCD is O((|X| x |Y|)log(|X| x |Y|)), where |X| and |Y| are the numbers of unique branches in the syntactic hierarchies of the two given HTML documents, respectively.
    BibTeX:
    @inproceedings{Lim2001,
      author = {Lim, SJ and Ng, YK},
      title = {An automated change-detection algorithm for HTML documents based on semantic hierarchies},
      booktitle = {17TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING, PROCEEDINGS},
      year = {2001},
      pages = {303-312},
      note = {17th International Conference on Data Engineering, HEIDELBERG, GERMANY, APR 02-06, 2001}
    }
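
    A toy illustration (Python) of the kind of input a change-detection pass like SCD consumes: each HTML document is flattened into its set of root-to-leaf branches, extracted without prior knowledge of the document structure, and the two branch sets are compared. The real SCD algorithm detects semantic (hierarchical) changes rather than this plain set difference.

      from html.parser import HTMLParser

      class BranchCollector(HTMLParser):
          def __init__(self):
              super().__init__()
              self.stack, self.branches = [], set()

          def handle_starttag(self, tag, attrs):
              self.stack.append(tag)

          def handle_endtag(self, tag):
              if self.stack:
                  self.stack.pop()

          def handle_data(self, data):
              text = data.strip()
              if text:  # record the markup path leading to each data content
                  self.branches.add("/".join(self.stack) + " :: " + text)

      def branches(html: str) -> set:
          p = BranchCollector()
          p.feed(html)
          return p.branches

      old = branches("<html><body><ul><li>alpha</li><li>beta</li></ul></body></html>")
      new = branches("<html><body><ul><li>alpha</li><li>gamma</li></ul></body></html>")
      print("added:", new - old, "removed:", old - new)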
    
    Lin, H., Harding, J. & Shahbaz, M. Manufacturing system engineering ontology for semantic interoperability across extended project teams {2004} INTERNATIONAL JOURNAL OF PRODUCTION RESEARCH
    Vol. {42}({24}), pp. {5099-5118} 
    article DOI  
    Abstract: Communication, knowledge sharing and awareness of available expertise are complex issues for any multidiscipline team. Complexity increases substantially in extended enterprise environments. The concepts of an MSE Moderator have previously been considered in environments with shared information models and vocabularies. These concepts are now translated to the realm of extended enterprises, where inevitably, individual partners will have their own terminology and information sources. An MSE Ontology is proposed to enable the operation of an extended enterprise MSE Moderator to provide common understanding of manufacturing-related terms, and therefore to enhance the semantic inter-operability and reuse of knowledge resources within globally extended manufacturing teams.
    BibTeX:
    @article{Lin2004,
      author = {Lin, HK and Harding, JA and Shahbaz, M},
      title = {Manufacturing system engineering ontology for semantic interoperability across extended project teams},
      journal = {INTERNATIONAL JOURNAL OF PRODUCTION RESEARCH},
      year = {2004},
      volume = {42},
      number = {24},
      pages = {5099-5118},
      doi = {{10.1080/00207540412331281999}}
    }
    
    Lin, H.K. & Harding, J.A. A manufacturing system engineering ontology model on the semantic web for inter-enterprise collaboration {2007} COMPUTERS IN INDUSTRY
    Vol. {58}({5}), pp. {428-437} 
    article DOI  
    Abstract: This paper investigates ontology-based approaches for representing information semantics, in particular on the World Wide Web. A general manufacturing system engineering (MSE) knowledge representation scheme, called an MSE ontology model, to facilitate communication and information exchange in inter-enterprise, multi-disciplinary engineering design teams has been developed and encoded in the standard semantic web language. The proposed approach focuses on how to support information autonomy that allows the individual team members to keep their own preferred languages or information models rather than requiring them all to adopt standardized terminology. The MSE ontology model provides efficient access by common mediated meta-models across all engineering design teams through semantic matching. This paper also shows how the primitives of the Web Ontology Language (OWL) can be used for expressing simple mappings between the mediated MSE ontology model and individual ontologies (see the sketch after this entry). (C) 2007 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Lin2007,
      author = {Lin, H. K. and Harding, J. A.},
      title = {A manufacturing system engineering ontology model on the semantic web for inter-enterprise collaboration},
      journal = {COMPUTERS IN INDUSTRY},
      year = {2007},
      volume = {58},
      number = {5},
      pages = {428-437},
      doi = {{10.1016/j.compind.2006.09.015}}
    }
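
    A minimal sketch (Python with rdflib, hypothetical URIs) of the kind of simple mapping the paper expresses with OWL primitives: declaring a class from one partner's ontology equivalent to a class in the mediated MSE ontology.

      from rdflib import Graph, URIRef
      from rdflib.namespace import OWL, RDF

      g = Graph()
      mse_resource    = URIRef("http://example.org/mse#Resource")
      partner_machine = URIRef("http://example.org/partnerA#Machine")

      g.add((mse_resource, RDF.type, OWL.Class))
      g.add((partner_machine, RDF.type, OWL.Class))
      # The mapping itself: one triple relating the two vocabularies.
      g.add((partner_machine, OWL.equivalentClass, mse_resource))

      print(g.serialize(format="turtle"))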
    
    Lin, T. & Chiang, I. A simplicial complex, a hypergraph, structure in the latent semantic space of document clustering {2005} INTERNATIONAL JOURNAL OF APPROXIMATE REASONING
    Vol. {40}({1-2}), pp. {55-80} 
    article DOI  
    Abstract: This paper presents a novel approach to document clustering based on some geometric structure in Combinatorial Topology. Given a set of documents, the set of associations among frequently co-occurring terms in documents naturally forms a simplicial complex. Our general thesis is that each connected component of this simplicial complex represents a concept in the collection. Based on these concepts, documents can be clustered into meaningful classes. However, in this paper we attack a softer notion: instead of connected components, we use maximal simplexes of highest dimension as representatives of connected components; the concept so defined is called a maximal primitive concept (see the sketch after this entry). Experiments with three different data sets from Web pages and medical literature have shown that the proposed unsupervised clustering approach performs significantly better than traditional clustering algorithms, such as k-means, AutoClass and Hierarchical Clustering (HAG). This abstract geometric model seems to have captured the latent semantic structure of documents. (c) 2005 Published by Elsevier Inc.
    BibTeX:
    @article{Lin2005,
      author = {Lin, TY and Chiang, IJ},
      title = {A simplicial complex, a hypergraph, structure in the latent semantic space of document clustering},
      journal = {INTERNATIONAL JOURNAL OF APPROXIMATE REASONING},
      year = {2005},
      volume = {40},
      number = {1-2},
      pages = {55-80},
      doi = {{10.1016/j.ijar.2004.11.005}}
    }
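
    A sketch (Python) of the geometric idea above: frequently co-occurring term pairs form a graph whose maximal cliques play the role of maximal simplexes, and the largest ones are read as maximal primitive concepts. The documents and the co-occurrence threshold are hypothetical; the paper builds the complex from document-term statistics.

      from itertools import combinations
      import networkx as nx

      docs = [
          {"semantic", "web", "ontology"},
          {"semantic", "web", "rdf"},
          {"gene", "protein", "pathway"},
          {"gene", "protein"},
      ]

      # Count pairwise co-occurrences; keep only pairs above a small threshold.
      G = nx.Graph()
      for d in docs:
          for u, v in combinations(sorted(d), 2):
              w = G.get_edge_data(u, v, {"w": 0})["w"] + 1
              G.add_edge(u, v, w=w)
      G.remove_edges_from([(u, v) for u, v, d in G.edges(data=True) if d["w"] < 2])

      cliques = list(nx.find_cliques(G))
      top = max(len(c) for c in cliques)
      print([c for c in cliques if len(c) == top])  # maximal primitive concepts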
    
    Liu, H., Bao, H., Yu, J. & Xu, D. An ontology-based architecture for distributed digital museums {2005} Proceedings of 2005 International Conference on Machine Learning and Cybernetics, Vols 1-9, pp. {19-26}  inproceedings  
    Abstract: This paper describes the design and implementation through prototyping of the architecture of a system for browsing and retrieving museum information based on concepts, keywords and contents. The work is part of a project funded by the Chinese Education Ministry and the aim is to integrate information from a large number of local museum resources and to provide versatile access to digitalized information on the antique collections of the museums. An ontology for the museum domain, based on the CIDOC Conceptual Reference Model, is being developed as the semantic layer of the architecture. The challenges of developing such a global ontology model and a mapping mechanism between the global schema and the data sources of local museums are discussed. Web Services technology is employed to integrate heterogeneous and distributed data sources of local museums. An ontology- and view-based approach is adopted for the design and implementation of a user interface. This allows navigation through the semantic layer, display of multimedia representations and textual information of antiques in appropriate viewers, and the invocation of conventional keyword and content based searching. Experiments with our prototype have demonstrated that this architecture is stable and efficient.
    BibTeX:
    @inproceedings{Liu2005,
      author = {Liu, HZ and Bao, H and Yu, JH and Xu, D},
      title = {An ontology-based architecture for distributed digital museums},
      booktitle = {Proceedings of 2005 International Conference on Machine Learning and Cybernetics, Vols 1-9},
      year = {2005},
      pages = {19-26},
      note = {4th International Conference on Machine Learning and Cybernetics, Canton, PEOPLES R CHINA, AUG 18-21, 2005}
    }
    
    Liu, H. & Singh, P. ConceptNet - a practical commonsense reasoning tool-kit {2004} BT TECHNOLOGY JOURNAL
    Vol. {22}({4}), pp. {211-226} 
    article  
    Abstract: ConceptNet is a freely available commonsense knowledge base and natural-language-processing tool-kit which supports many practical textual-reasoning tasks over real-world documents, including topic-gisting, analogy-making, and other context-oriented inferences. The knowledge base is a semantic network presently consisting of over 1.6 million assertions of commonsense knowledge encompassing the spatial, physical, social, temporal, and psychological aspects of everyday life. ConceptNet is generated automatically from the 700 000 sentences of the Open Mind Common Sense Project, a World Wide Web-based collaboration with over 14 000 authors.
    BibTeX:
    @article{Liu2004,
      author = {Liu, H and Singh, P},
      title = {ConceptNet - a practical commonsense reasoning tool-kit},
      journal = {BT TECHNOLOGY JOURNAL},
      year = {2004},
      volume = {22},
      number = {4},
      pages = {211-226}
    }
    
    Liu, J., Cui, J. & Gu, N. Composing Web services Dynamically and Semantically {2004} PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON E-COMMERCE TECHNOLOGY FOR DYNAMIC E-BUSINESS, pp. {234-241}  inproceedings  
    Abstract: Web services technology is emerging as a promising approach of integration and interaction for applications within and across organizational boundaries. However, it is difficult to meet practical requirements with individual web services alone. As a result, federating existing single web services into composite web services is not only necessary but indispensable. This paper proposes a dynamic and semantic composition approach for web services. In this approach, web services are modeled as rules whose heads and bodies are related to a semantic ontology used for eliminating semantic conflicts in composition. Moreover, a non-backtrace backward chaining algorithm is presented to compose the existing web services in a more efficient and automatic way (see the sketch after this entry). Given the inputs and expected outputs, the approach automatically and dynamically generates a composition plan and converts it into BPEL4WS that can be executed and returns the results. The whole composition process can be done automatically and dynamically.
    BibTeX:
    @inproceedings{Liu2004a,
      author = {Liu, JM and Cui, JT and Gu, N},
      title = {Composing Web services Dynamically and Semantically},
      booktitle = {PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON E-COMMERCE TECHNOLOGY FOR DYNAMIC E-BUSINESS},
      year = {2004},
      pages = {234-241},
      note = {IEEE International Conference on E-Commerce Technology for Dynamic E-Business, Beijing, PEOPLES R CHINA, SEP 13-15, 2004}
    }
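
    A hedged sketch (Python) of rule-style composition: each service is treated as a rule "outputs <- inputs", and a plan is grown backwards from the requested outputs until every subgoal is grounded in the user-supplied inputs. This is a plain backward chainer that takes the first matching service without backtracking over alternatives; it is not the paper's non-backtrace algorithm, the service names are hypothetical, and cyclic rule sets would need a visited-set guard that is omitted here.

      # Each service maps a set of input concepts to a set of output concepts.
      services = {
          "GeocodeSvc": ({"address"}, {"coordinates"}),
          "WeatherSvc": ({"coordinates"}, {"forecast"}),
      }

      def compose(available: set, goals: set, plan: list) -> bool:
          # Grow `plan` until every goal is derivable from `available`.
          for g in goals:
              if g in available:
                  continue
              producers = [n for n, (_, outs) in services.items() if g in outs]
              if not producers:
                  return False          # no service yields this output
              svc = producers[0]        # first match; no backtracking
              inputs, outputs = services[svc]
              if not compose(available, inputs, plan):
                  return False          # a required input cannot be produced
              if svc not in plan:
                  plan.append(svc)
              available |= outputs
          return True

      plan = []
      if compose({"address"}, {"forecast"}, plan):
          print(plan)                   # ['GeocodeSvc', 'WeatherSvc']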
    
    Liu, L., Halper, M., Geller, J. & Perl, Y. Controlled vocabularies in OODBs: Modeling issues and implementation {1999} DISTRIBUTED AND PARALLEL DATABASES
    Vol. {7}({1}), pp. {37-65} 
    article  
    Abstract: A major problem that arises in many large application domains is the discrepancy among terminologies of different information systems. The terms used by the information systems of one organization may not agree with the terms used by another organization even when they are in the same domain. Such a situation clearly impedes communication and the sharing of information, and decreases the efficiency of doing business. Problems of this nature can be overcome using a controlled vocabulary (CV), a system of concepts that consolidates and unifies the terminologies of a domain. However, CVs are large and complex and difficult to comprehend. This paper presents a methodology for representing a semantic network-based CV as an object-oriented database (OODB). We call such a representation an Object-Oriented Vocabulary Repository (OOVR). The methodology is based on a structural analysis and partitioning of the source CV. The representation of a CV as an OOVR offers both the level of support typical of database management systems and an abstract view which promotes comprehension of the CV's structure and content. After discussing the theoretical aspects of the methodology, we apply it to the MED and InterMED, two existing CVs from the medical field. A program, called the OOVR Generator, for automatically carrying out our methodology is described. Both the MED-OOVR and the InterMED-OOVR have been created using the OOVR Generator, and each exists on top of ONTOS, a commercial OODBMS. The OOVR derived from the InterMED is presently available on the Web.
    BibTeX:
    @article{Liu1999,
      author = {Liu, LM and Halper, M and Geller, J and Perl, Y},
      title = {Controlled vocabularies in OODBs: Modeling issues and implementation},
      journal = {DISTRIBUTED AND PARALLEL DATABASES},
      year = {1999},
      volume = {7},
      number = {1},
      pages = {37-65}
    }
    
    Lloyd, C., Halstead, M. & Nielsen, P. CellML: its future, present and past {2004} PROGRESS IN BIOPHYSICS & MOLECULAR BIOLOGY
    Vol. {85}({2-3}), pp. {433-450} 
    article DOI  
    Abstract: Advances in biotechnology and experimental techniques have led to the elucidation of vast amounts of biological data. Mathematical models provide a method of analysing these data; however, there are two issues that need to be addressed: (1) the need for standards for defining cell models so they can, for example, be exchanged across the World Wide Web, and also read into simulation software in a consistent format and (2) eliminating the errors which arise with the current method of model publication. CellML has evolved to meet these needs of the modelling community. CellML is a free, open-source, eXtensible markup language based standard for defining mathematical models of cellular function. In this paper we summarise the structure of CellML, its current applications (including biological pathway and electrophysiological models), and its future development, in particular the development of toolsets and the integration of ontologies. (C) 2004 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Lloyd2004,
      author = {Lloyd, CM and Halstead, MDB and Nielsen, PF},
      title = {CellML: its future, present and past},
      journal = {PROGRESS IN BIOPHYSICS & MOLECULAR BIOLOGY},
      year = {2004},
      volume = {85},
      number = {2-3},
      pages = {433-450},
      note = {Conference on Modelling Cellular and Tissue Function, Auckland, NEW ZEALAND, JUL, 2003},
      doi = {{10.1016/j.pbiomolbio.2004.01.004}}
    }
    
    Lord, P., Alper, P., Wroe, C. & Goble, C. Feta: A light-weight architecture for user oriented semantic service discovery {2005}
    Vol. {3532}SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS, pp. {17-31} 
    inproceedings  
    Abstract: Semantic Web Services offer the possibility of highly flexible web service architectures, where new services can be quickly discovered, orchestrated and composed into workflows. Most existing work has, however, focused on complex service descriptions for automated composition. In this paper, we describe the requirements from the bioinformatics domain which demand technically simpler descriptions, involving the user community at all levels. We describe our data model and lightweight semantic discovery architecture. We explain how this fits in the larger architecture of the (my)Grid project, which overall enables interoperability and composition across disparate, autonomous, third-party services. Our contention is that such light-weight service discovery provides a good fit for the user requirements of bioinformatics and possibly other domains.
    BibTeX:
    @inproceedings{Lord2005,
      author = {Lord, P and Alper, P and Wroe, C and Goble, C},
      title = {Feta: A light-weight architecture for user oriented semantic service discovery},
      booktitle = {SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS},
      year = {2005},
      volume = {3532},
      pages = {17-31},
      note = {2nd European Semantic Web Conference, Iraklion, GREECE, MAY 29-JUN 01, 2005}
    }
    
    Lord, P., Bechhofer, S., Wilkinson, M., Schiltz, G., Gessler, D., Hull, D., Goble, C. & Stein, L. Applying Semantic Web services to bioinformatics: Experiences gained, lessons learnt {2004}
    Vol. {3298}SEMANTIC WEB - ISWC 2004, PROCEEDINGS, pp. {350-364} 
    inproceedings  
    Abstract: We have seen an increasing amount of interest in the application of Semantic Web technologies to Web services. The aim is to support automated discovery and composition of the services allowing seamless and transparent interoperability. In this paper we discuss three projects that are applying such technologies to bioinformatics: (my)Grid, MOBY-Services and Semantic-MOBY. Through an examination of the differences and similarities between the solutions produced, we highlight some of the practical difficulties in developing Semantic Web services and suggest that the experiences with these projects have implications for the development of Semantic Web services as a whole.
    BibTeX:
    @inproceedings{Lord2004,
      author = {Lord, P and Bechhofer, S and Wilkinson, MD and Schiltz, G and Gessler, D and Hull, D and Goble, C and Stein, L},
      title = {Applying Semantic Web services to bioinformatics: Experiences gained, lessons learnt},
      booktitle = {SEMANTIC WEB - ISWC 2004, PROCEEDINGS},
      year = {2004},
      volume = {3298},
      pages = {350-364},
      note = {3rd International Semantic Web Conference, Hiroshima, JAPAN, NOV 07-11, 2004}
    }
    
    Lozano-Tello, A. & Gomez-Perez, A. ONTOMETRIC: A method to choose the appropriate ontology {2004} JOURNAL OF DATABASE MANAGEMENT
    Vol. {15}({2}), pp. {1-18} 
    article  
    Abstract: In recent years, the development of ontology-based applications has increased considerably, mainly in relation to the semantic web. Users currently looking for ontologies to incorporate into their systems just use their experience and intuition. This makes it difficult for them to justify their choices, mainly because of the lack of methods that help the user to determine which are the most appropriate ontologies for the new system. To address this deficiency, the present work proposes a method, ONTOMETRIC, which allows users to measure the suitability of existing ontologies with respect to the requirements of their systems.
    BibTeX:
    @article{Lozano-Tello2004,
      author = {Lozano-Tello, A and Gomez-Perez, A},
      title = {ONTOMETRIC: A method to choose the appropriate ontology},
      journal = {JOURNAL OF DATABASE MANAGEMENT},
      year = {2004},
      volume = {15},
      number = {2},
      pages = {1-18}
    }
    
    Luciano, J.S. & Stevens, R.D. Research - e-Science and biological pathway semantics {2007} BMC BIOINFORMATICS
    Vol. {8}({Suppl. 3}) 
    article DOI  
    Abstract: Background: The development of e-Science presents a major set of opportunities and challenges for the future progress of biological and life scientific research. Major new tools are required and corresponding demands are placed on the high-throughput data generated and used in these processes. Nowhere is the demand greater than in the semantic integration of these data. Semantic Web tools and technologies afford the chance to achieve this semantic integration. Since pathway knowledge is central to much of the scientific research today, it is a good test-bed for semantic integration. Within the context of biological pathways, the BioPAX initiative, part of a broader movement towards the standardization and integration of life science databases, forms a necessary prerequisite for the successful application of e-Science in health care and life science research. This paper examines whether BioPAX, an effort to overcome the barrier of disparate and heterogeneous pathway data sources, addresses the needs of e-Science. Results: We demonstrate how BioPAX pathway data can be used to ask and answer some useful biological questions. We find that BioPAX comes close to meeting a broad range of e-Science needs, but certain semantic weaknesses mean that these goals are missed. We make a series of recommendations for re-modeling some aspects of BioPAX to better meet these needs. Conclusion: Once these semantic weaknesses are addressed, it will be possible to integrate pathway information in a manner that would be useful in e-Science.
    BibTeX:
    @article{Luciano2007,
      author = {Luciano, Joanne S. and Stevens, Robert D.},
      title = {Research - e-Science and biological pathway semantics},
      journal = {BMC BIOINFORMATICS},
      year = {2007},
      volume = {8},
      number = {Suppl. 3},
      doi = {{10.1186/1471-2105-8-S3-S3}}
    }
    
    Lukasiewicz, T. A novel combination of answer set programming with description logics for the Semantic Web {2007}
    Vol. {4519}Semantic Web: Research and Applications, Proceedings, pp. {384-398} 
    inproceedings  
    Abstract: We present a novel combination of disjunctive logic programs under the answer set semantics with description logics for the Semantic Web. The combination is based on a well-balanced interface between disjunctive logic programs and description logics, which guarantees the decidability of the resulting formalism without assuming syntactic restrictions. We show that the new formalism has very nice semantic properties. In particular, it faithfully extends both disjunctive programs and description logics. Furthermore, we describe algorithms for reasoning in the new formalism, and we give a precise picture of its computational complexity. We also provide a special case with polynomial data complexity.
    BibTeX:
    @inproceedings{Lukasiewicz2007a,
      author = {Lukasiewicz, Thomas},
      title = {A novel combination of answer set programming with description logics for the Semantic Web},
      booktitle = {Semantic Web: Research and Applications, Proceedings},
      year = {2007},
      volume = {4519},
      pages = {384-398},
      note = {4th European Semantic Web Conference, Innsbruck, AUSTRIA, JUN 03-07, 2007}
    }
    
    Lukasiewicz, T. Expressive probabilistic description logics {2008} ARTIFICIAL INTELLIGENCE
    Vol. {172}({6-7}), pp. {852-883} 
    article DOI  
    Abstract: The work in this paper is directed towards sophisticated formalisms for reasoning under probabilistic uncertainty in ontologies in the Semantic Web. Ontologies play a central role in the development of the Semantic Web, since they provide a precise definition of shared terms in web resources. They are expressed in the standardized web ontology language OWL, which consists of the three increasingly expressive sublanguages OWL Lite, OWL DL, and OWL Full. The sublanguages OWL Lite and OWL DL have a formal semantics and a reasoning support through a mapping to the expressive description logics SHIF(D) and SHOIN(D), respectively. In this paper, we present the expressive probabilistic description logics P-SHIF(D) and P-SHOIN(D), which are probabilistic extensions of these description logics. They allow for expressing rich terminological probabilistic knowledge about concepts and roles as well as assertional probabilistic knowledge about instances of concepts and roles. They are semantically based on the notion of probabilistic lexicographic entailment from probabilistic default reasoning, which naturally interprets this terminological and assertional probabilistic knowledge as knowledge about random and concrete instances, respectively. As an important additional feature, they also allow for expressing terminological default knowledge, which is semantically interpreted as in Lehmann's lexicographic entailment in default reasoning from conditional knowledge bases. Another important feature of this extension of SHIF(D) and SHOIN(D) by probabilistic uncertainty is that it can be applied to other classical description logics as well. We then present sound and complete algorithms for the main reasoning problems in the new probabilistic description logics, which are based on reductions to reasoning in their classical counterparts, and to solving linear optimization problems. In particular, this shows the important result that reasoning in the new probabilistic description logics is decidable/computable. Furthermore, we also analyze the computational complexity of the main reasoning problems in the new probabilistic description logics in the general as well as restricted cases. (c) 2007 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Lukasiewicz2008,
      author = {Lukasiewicz, Thomas},
      title = {Expressive probabilistic description logics},
      journal = {ARTIFICIAL INTELLIGENCE},
      year = {2008},
      volume = {172},
      number = {6-7},
      pages = {852-883},
      doi = {{10.1016/j.artint.2007.10.017}}
    }
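
    The paper's reduction of probabilistic reasoning to classical reasoning plus linear optimization can be made concrete in miniature: entailed probability bounds come from minimizing and maximizing the query probability over distributions on possible worlds, subject to linear constraints. A toy Python/SciPy sketch follows, assuming a single conditional constraint P(flies | bird) >= 0.9 and an individual asserted to be a bird; it is far simpler than the paper's logics.

      # Tight bounds on P(flies) via linear programming over four possible
      # worlds (bird, flies). Constraints and numbers are illustrative.
      from itertools import product
      from scipy.optimize import linprog

      worlds = list(product([0, 1], repeat=2))           # (bird, flies)
      A_eq = [[1.0] * 4, [float(b) for b, f in worlds]]  # sum p = 1, P(bird) = 1
      b_eq = [1.0, 1.0]
      # P(flies & bird) >= 0.9 * P(bird), rewritten as A_ub @ p <= 0
      A_ub = [[0.9 * b - b * f for b, f in worlds]]
      b_ub = [0.0]
      c = [float(f) for b, f in worlds]                  # objective: P(flies)

      lo = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
      hi = linprog([-x for x in c], A_ub=A_ub, b_ub=b_ub,
                   A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
      print(f"P(flies) in [{lo.fun:.2f}, {-hi.fun:.2f}]")  # -> [0.90, 1.00]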
    
    Lukasiewicz, T. Probabilistic description logic programs {2007} INTERNATIONAL JOURNAL OF APPROXIMATE REASONING
    Vol. {45}({2}), pp. {288-307} 
    article DOI  
    Abstract: Towards sophisticated representation and reasoning techniques that allow for probabilistic uncertainty in the Rules, Logic, and Proof layers of the Semantic Web, we present probabilistic description logic programs (or pdl-programs), which are a combination of description logic programs (or dl-programs) under the answer set semantics and the well-founded semantics with Poole's independent choice logic. We show that query processing in such pdl-programs can be reduced to computing all answer sets of dl-programs and solving linear optimization problems, and to computing the well-founded model of dl-programs, respectively. Moreover, we show that the answer set semantics of pdl-programs is a refinement of the well-founded semantics of pdl-programs. Furthermore, we also present an algorithm for query processing in the special case of stratified pdl-programs, which is based on a reduction to computing the canonical model of stratified dl-programs. (C) 2006 Elsevier Inc. All rights reserved.
    BibTeX:
    @article{Lukasiewicz2007,
      author = {Lukasiewicz, Thomas},
      title = {Probabilistic description logic programs},
      journal = {INTERNATIONAL JOURNAL OF APPROXIMATE REASONING},
      year = {2007},
      volume = {45},
      number = {2},
      pages = {288-307},
      note = {8th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, Barcelona, SPAIN, JUL 06-08, 2005},
      doi = {{10.1016/j.ijar.2006.06.012}}
    }
    
    Lukasiewicz, T. & Straccia, U. Managing uncertainty and vagueness in description logics for the Semantic Web {2008} JOURNAL OF WEB SEMANTICS
    Vol. {6}({4}), pp. {291-308} 
    article DOI  
    Abstract: Ontologies play a crucial role in the development of the Semantic Web as a means for defining shared terms in web resources. They are formulated in web ontology languages, which are based on expressive description logics. Significant research efforts in the semantic web community have recently been directed towards representing and reasoning with uncertainty and vagueness in ontologies for the Semantic Web. In this paper, we give an overview of approaches in this context to managing probabilistic uncertainty, possibilistic uncertainty, and vagueness in expressive description logics for the Semantic Web. (C) 2008 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Lukasiewicz2008a,
      author = {Lukasiewicz, Thomas and Straccia, Umberto},
      title = {Managing uncertainty and vagueness in description logics for the Semantic Web},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2008},
      volume = {6},
      number = {4},
      pages = {291-308},
      doi = {{10.1016/j.websem.2008.04.001}}
    }
    
    Maechling, P., Chalupsky, H., Dougherty, M., Deelman, E., Gil, Y., Gullapalli, S., Gupta, V., Kesselman, C., Kim, J., Mehta, G., Mendenhall, B., Russ, T., Singh, G., Spraragen, M., Staples, G. & Vahi, K. Simplifying construction of complex workflows for non-expert users of the Southern California Earthquake Center Community Modeling Environment {2005} SIGMOD RECORD
    Vol. {34}({3}), pp. {24-30} 
    article  
    Abstract: Workflow systems often present the user with rich interfaces that express all the capabilities and complexities of the application programs and the computing environments that they support. However, non-expert users are better served with simple interfaces that abstract away system complexities and still enable them to construct and execute complex workflows. To explore this idea, we have created a set of tools and interfaces that simplify the construction of workflows. Implemented as part of the Community Modeling Environment developed by the Southern California Earthquake Center, these tools are integrated into a comprehensive workflow system that supports both domain experts and non-expert users.
    BibTeX:
    @article{Maechling2005,
      author = {Maechling, P and Chalupsky, H and Dougherty, M and Deelman, E and Gil, Y and Gullapalli, S and Gupta, V and Kesselman, C and Kim, J and Mehta, G and Mendenhall, B and Russ, T and Singh, G and Spraragen, M and Staples, G and Vahi, K},
      title = {Simplifying construction of complex workflows for non-expert users of the Southern California Earthquake Center Community Modeling Environment},
      journal = {SIGMOD RECORD},
      year = {2005},
      volume = {34},
      number = {3},
      pages = {24-30}
    }
    
    Maedche, A., Motik, B., Silva, N. & Volz, R. MAFRA - A MApping FRAmework for distributed ontologies {2002}
    Vol. {2473}KNOWLEDGE ENGINEERING AND KNOWLEDGE MANAGEMENT, PROCEEDINGS - ONTOLOGIES AND THE SEMANTIC WEB , pp. {235-250} 
    inproceedings  
    Abstract: Ontologies as means for conceptualizing and structuring domain knowledge within a community of interest are seen as a key to realize the Semantic Web vision. However, the decentralized nature of the Web makes achieving this consensus across communities difficult, thus hampering efficient knowledge sharing between them. In order to balance the autonomy of each community with the need for interoperability, mapping mechanisms between distributed ontologies in the Semantic Web are required. In this paper we present MAFRA, an interactive, incremental and dynamic framework for mapping distributed ontologies.
    BibTeX:
    @inproceedings{Maedche2002,
      author = {Maedche, A and Motik, B and Silva, N and Volz, R},
      title = {MAFRA - A MApping FRAmework for distributed ontologies},
      booktitle = {KNOWLEDGE ENGINEERING AND KNOWLEDGE MANAGEMENT, PROCEEDINGS - ONTOLOGIES AND THE SEMANTIC WEB },
      year = {2002},
      volume = {2473},
      pages = {235-250},
      note = {13th International Conference on Knowledge Engineering and Knowledge Management (EKAW 2002), Siguenza, SPAIN, OCT 01-04, 2002}
    }
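
    A mapping framework like MAFRA typically begins with a similarity phase over the two ontologies. The minimal Python sketch below shows only a lexical-similarity starting point (normalized string similarity between concept labels); the labels and the 0.7 cutoff are invented, and MAFRA itself combines lexical with structural evidence.

      # Lexical label matching across two ontologies; illustrative only.
      from difflib import SequenceMatcher

      def label_sim(a, b):
          return SequenceMatcher(None, a.lower(), b.lower()).ratio()

      source = ["Person", "Publication", "Organisation"]
      target = ["Human", "Paper", "Organization", "Publications"]

      for s in source:
          best = max(target, key=lambda t: label_sim(s, t))
          if label_sim(s, best) >= 0.7:   # hypothetical acceptance cutoff
              print(f"{s} -> {best}  ({label_sim(s, best):.2f})")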
    
    Maedche, A., Motik, B. & Stojanovic, L. Managing multiple and distributed ontologies on the Semantic Web {2003} VLDB JOURNAL
    Vol. {12}({4}), pp. {286-302} 
    article DOI  
    Abstract: In traditional software systems, significant attention is devoted to keeping modules well separated and coherent with respect to functionality, thus ensuring that changes in the system are localized to a handful of modules. Reuse is seen as the key method in reaching that goal. Ontology-based systems on the Semantic Web are just a special class of software systems, so the same principles apply. In this article, we present an integrated framework for managing multiple and distributed ontologies on the Semantic Web. It is based on a representation model for ontologies that trades off between expressivity and tractability. In our framework, we provide features for reusing existing ontologies and for evolving them while retaining consistency. The approach is implemented within KAON, the Karlsruhe Ontology and Semantic Web tool suite.
    BibTeX:
    @article{Maedche2003,
      author = {Maedche, A and Motik, B and Stojanovic, L},
      title = {Managing multiple and distributed ontologies on the Semantic Web},
      journal = {VLDB JOURNAL},
      year = {2003},
      volume = {12},
      number = {4},
      pages = {286-302},
      doi = {{10.1007/s00778-003-0102-4}}
    }
    
    Maedche, A. & Staab, S. Ontology learning for the Semantic Web {2001} IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS
    Vol. {16}({2}), pp. {72-79} 
    article  
    BibTeX:
    @article{Maedche2001,
      author = {Maedche, A and Staab, S},
      title = {Ontology learning for the Semantic Web},
      journal = {IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS},
      year = {2001},
      volume = {16},
      number = {2},
      pages = {72-79}
    }
    
    Magliano, J., Wiemer-Hastings, K., Millis, K., Munoz, B. & McNamara, D. Using latent semantic analysis to assess reader strategies {2002} BEHAVIOR RESEARCH METHODS INSTRUMENTS & COMPUTERS
    Vol. {34}({2}), pp. {181-188} 
    article  
    Abstract: We tested a computer-based procedure for assessing reader strategies that was based on verbal protocols that utilized latent semantic analysis (LSA). Students were given self-explanation-reading training (SERT), which teaches strategies that facilitate self-explanation during reading, such as elaboration based on world knowledge and bridging between text sentences. During a computerized version of SERT practice, students read texts and typed self-explanations into a computer after each sentence. The use of SERT strategies during this practice was assessed by determining the extent to which students used the information in the current sentence versus the prior text or world knowledge in their self-explanations. This assessment was made on the basis of human judgments and LSA. Both human judgments and LSA were remarkably similar and indicated that students who were not complying with SERT tended to paraphrase the text sentences, whereas students who were compliant with SERT tended to explain the sentences in terms of what they knew about the world and of information provided in the prior text context. The similarity between human judgments and LSA indicates that LSA will be useful in accounting for reading strategies in a Web-based version of SERT.
    BibTeX:
    @article{Magliano2002,
      author = {Magliano, JP and Wiemer-Hastings, K and Millis, KK and Munoz, BD and McNamara, D},
      title = {Using latent semantic analysis to assess reader strategies},
      journal = {BEHAVIOR RESEARCH METHODS INSTRUMENTS & COMPUTERS},
      year = {2002},
      volume = {34},
      number = {2},
      pages = {181-188},
      note = {31st Annual Meeting of the Society-for-Computers-in-Psychology (SCiP), ORLANDO, FL, NOV 14, 2001}
    }
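
    The core LSA comparison in such a study scores a typed self-explanation by its similarity to the current sentence versus the prior text. A small Python sketch with scikit-learn follows; real LSA spaces are trained on large corpora, so the three tiny texts here are only illustrative.

      # Project texts into a low-rank (LSA-style) space and compare a
      # student's explanation with the current sentence and the prior text.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.metrics.pairwise import cosine_similarity

      current = "heat flows from the warm body to the cold body"
      prior = "temperature measures the average kinetic energy of molecules"
      explanation = "so the warm object gives energy to the colder object"

      X = TfidfVectorizer().fit_transform([current, prior, explanation])
      Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
      sim = cosine_similarity(Z)
      print(f"explanation vs current sentence: {sim[2, 0]:.2f}")
      print(f"explanation vs prior text:       {sim[2, 1]:.2f}")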
    
    Magnini, B. & Strapparava, C. User modelling for news web sites with word sense based techniques {2004} USER MODELING AND USER-ADAPTED INTERACTION
    Vol. {14}({2-3}), pp. {239-257} 
    article  
    Abstract: SiteIF is a personal agent for a bilingual news web site that learns the user's interests from the requested pages. In this paper we propose to use a word sense based document representation as a starting point to build a model of the user's interests. Documents passed over are processed and relevant senses (disambiguated over WordNet) are extracted and then combined to form a semantic network. A filtering procedure dynamically predicts new documents on the basis of the semantic network. There are two main advantages of a sense-based approach: first, the model predictions, being based on senses rather than words, are more accurate; second, the model is language independent, allowing navigation in multilingual sites. We report the results of a comparative experiment that has been carried out to give a quantitative estimation of these improvements.
    BibTeX:
    @article{Magnini2004,
      author = {Magnini, B and Strapparava, C},
      title = {User modelling for news web sites with word sense based techniques},
      journal = {USER MODELING AND USER-ADAPTED INTERACTION},
      year = {2004},
      volume = {14},
      number = {2-3},
      pages = {239-257}
    }
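
    A sense-based representation replaces word counts with WordNet sense counts. The Python sketch below uses NLTK's WordNet interface with the crude most-frequent-sense heuristic as a stand-in for SiteIF's actual disambiguation step; it assumes the nltk package and its wordnet corpus are installed.

      # Bag-of-senses document representation over WordNet (illustrative).
      # Requires: pip install nltk, then nltk.download('wordnet').
      from collections import Counter
      from nltk.corpus import wordnet as wn

      def sense_bag(text):
          senses = Counter()
          for word in text.lower().split():
              synsets = wn.synsets(word)
              if synsets:                  # crude WSD: most frequent sense
                  senses[synsets[0].name()] += 1
          return senses

      doc = "the bank raised interest rates and the market reacted"
      print(sense_bag(doc).most_common(5))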
    
    Maguitman, A.G., Menczer, F., Erdinc, F., Roinestad, H. & Vespignani, A. Algorithmic computation and approximation of semantic similarity {2006} WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS
    Vol. {9}({4}), pp. {431-456} 
    article DOI  
    Abstract: Automatic extraction of semantic information from text and links in Web pages is key to improving the quality of search results. However, the assessment of automatic semantic measures is limited by the coverage of user studies, which do not scale with the size, heterogeneity, and growth of the Web. Here we propose to leverage human-generated metadata, namely topical directories, to measure semantic relationships among massive numbers of pairs of Web pages or topics. The Open Directory Project classifies millions of URLs in a topical ontology, providing a rich source from which semantic relationships between Web pages can be derived. While semantic similarity measures based on taxonomies (trees) are well studied, the design of well-founded similarity measures for objects stored in the nodes of arbitrary ontologies (graphs) is an open problem. This paper defines an information-theoretic measure of semantic similarity that exploits both the hierarchical and non-hierarchical structure of an ontology. An experimental study shows that this measure improves significantly on the traditional taxonomy-based approach. This novel measure allows us to address the general question of how text and link analyses can be combined to derive measures of relevance that are in good agreement with semantic similarity. Surprisingly, the traditional use of text similarity turns out to be ineffective for relevance ranking.
    BibTeX:
    @article{Maguitman2006,
      author = {Maguitman, Ana G. and Menczer, Filippo and Erdinc, Fulya and Roinestad, Heather and Vespignani, Alessandro},
      title = {Algorithmic computation and approximation of semantic similarity},
      journal = {WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS},
      year = {2006},
      volume = {9},
      number = {4},
      pages = {431-456},
      note = {14th International World Wide Web Conference (WWW2005), Chiba, JAPAN, MAY 10-14, 2005},
      doi = {{10.1007/s11280-006-8562-2}}
    }
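
    The tree-structured baseline the paper generalizes is the information-theoretic similarity of Lin: sim(a, b) = 2 log p(lca) / (log p(a) + log p(b)), where p(x) is the fraction of items stored at or below topic x. A small Python sketch over an invented toy taxonomy:

      # Lin's tree-based similarity on a toy topic taxonomy (illustrative).
      import math

      parent = {"science": None, "biology": "science", "physics": "science",
                "genetics": "biology", "optics": "physics"}
      items = {"science": 2, "biology": 3, "physics": 3, "genetics": 4, "optics": 4}
      total = sum(items.values())

      def ancestors(t):
          while t is not None:
              yield t
              t = parent[t]

      def p(t):  # probability mass of topic t and everything below it
          return sum(items[x] for x in items if t in ancestors(x)) / total

      def lin(a, b):
          anc_b = set(ancestors(b))
          lca = next(x for x in ancestors(a) if x in anc_b)
          return 2 * math.log(p(lca)) / (math.log(p(a)) + math.log(p(b)))

      print(f"genetics ~ biology: {lin('genetics', 'biology'):.2f}")
      print(f"genetics ~ optics:  {lin('genetics', 'optics'):.2f}")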
    
    Mandell, D. & McIlraith, S. Adapting BPEL4WS for the semantic web: The bottom-up approach to web service interoperation {2003}
    Vol. {2870}SEMANTIC WEB - ISWC 2003, pp. {227-241} 
    inproceedings  
    Abstract: Towards the ultimate goal of seamless interaction among networked programs and devices, industry has developed orchestration and process modeling languages such as XLANG, WSFL, and recently BPEL4WS. Unfortunately, these efforts leave us a long way from seamless interoperation. Researchers in the Semantic Web community have taken up this challenge proposing top-down approaches to achieve aspects of Web Service interoperation. Unfortunately, many of these efforts have been disconnected from emerging industry standards, particularly in process modeling. In this paper we take a bottom-up approach to integrating Semantic Web technology into Web services. Building on BPEL4WS, we present integrated Semantic Web technology for automating customized, dynamic binding of Web services together with interoperation through semantic translation. We discuss the value of semantically enriched service interoperation and demonstrate how our framework accounts for user-defined constraints while gaining potentially successful execution pathways in a practically motivated example. Finally, we provide an analysis of the forward-looking limitations of frameworks like BPEL4WS, and suggest how such specifications might embrace semantic technology at a fundamental level to work towards fully automated Web service interoperation.
    BibTeX:
    @inproceedings{Mandell2003,
      author = {Mandell, DJ and McIlraith, SA},
      title = {Adapting BPEL4WS for the semantic web: The bottom-up approach to web service interoperation},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {227-241},
      note = {2nd International Semantic Web Conference, SANIBEL, FLORIDA, OCT 20-23, 2003}
    }
    
    Maojo, V., Garcia-Remesal, M., Billhardt, H., Alonso-Calvo, R., Perez-Rey, D. & Martin-Sanchez, F. Designing new methodologies for integrating biomedical information in clinical trials {2006} METHODS OF INFORMATION IN MEDICINE
    Vol. {45}({2}), pp. {180-185} 
    article  
    Abstract: Objectives: To propose a modification to current methodologies for clinical trials, improving data collection and cost-efficiency. To describe a system to integrate distributed and heterogeneous medical and genetic databases for improving information access, retrieval and analysis of biomedical information. Methods: Data for clinical trials can be collected from remote, distributed and heterogeneous data sources. In this distributed scenario, we propose an ontology based approach, with two basic operations: mapping and unification. Mapping relates the information model of a specific database to the semantic model of a virtual repository. Unification provides a single schema for two or more previously available virtual repositories. In both processes, domain ontologies can improve other traditional approaches. Results: Private clinical databases and public genomic and disease databases (e.g., OMIM, Prosite and others) were integrated. We successfully tested the system using thirteen databases containing clinical and biological information and biomedical vocabularies. Conclusions: We present a domain-independent approach to biomedical database integration, used in this paper as a reference for the design of future models of clinico-genomic trials where information will be integrated, retrieved and analyzed. Such an approach to biomedical data integration has been one of the goals of the IST INFOBIOMED Network of Excellence in Biomedical Informatics, funded by the European Commission, and the new ACGT (Advanced Clinico-Genomic Trials on Cancer) project, where the authors will apply these methods to research experiments.
    BibTeX:
    @article{Maojo2006,
      author = {Maojo, V and Garcia-Remesal, M and Billhardt, H and Alonso-Calvo, R and Perez-Rey, D and Martin-Sanchez, F},
      title = {Designing new methodologies for integrating biomedical information in clinical trials},
      journal = {METHODS OF INFORMATION IN MEDICINE},
      year = {2006},
      volume = {45},
      number = {2},
      pages = {180-185},
      note = {Conference on Statistical Methodology in Bioinformatics and Clinical Trials, Prague, CZECH REPUBLIC, APR, 2004}
    }
    
    Marenco, L., Tosches, N., Crasto, C., Shepherd, G., Miller, P. & Nadkarni, P. Achieving evolvable Web-database bioscience applications using the EAV/CR framework: Recent advances {2003} JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION
    Vol. {10}({5}), pp. {444-453} 
    article DOI  
    Abstract: The EAV/CR framework, designed for database support of rapidly evolving scientific domains, utilizes metadata to facilitate schema maintenance and automatic generation of Web-enabled browsing interfaces to the data. EAV/CR is used in SenseLab, a neuroscience database that is part of the national Human Brain Project. This report describes various enhancements to the framework. These include (1) the ability to create ``portals'' that present different subsets of the schema to users with a particular research focus, (2) a generic XML-based protocol to assist data extraction and population of the database by external agents, (3) a limited form of ad hoc data query, and (4) semantic descriptors for interclass relationships and links to controlled vocabularies such as the UMLS.
    BibTeX:
    @article{Marenco2003,
      author = {Marenco, L and Tosches, N and Crasto, C and Shepherd, G and Miller, PL and Nadkarni, PM},
      title = {Achieving evolvable Web-database bioscience applications using the EAV/CR framework: Recent advances},
      journal = {JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION},
      year = {2003},
      volume = {10},
      number = {5},
      pages = {444-453},
      doi = {{10.1197/jamia.M1303}}
    }
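
    The EAV core is easy to see in miniature: each fact is one (entity, attribute, value) row, so new attributes require new rows rather than schema changes. The Python/sqlite3 sketch below shows only that core; EAV/CR adds classes, relationships, and the metadata-driven interfaces the article describes, and the table and attribute names here are invented.

      # Entity-attribute-value storage and pivot with sqlite3 (illustrative).
      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE eav (entity TEXT, attribute TEXT, value TEXT)")
      db.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
          ("neuron:1", "class", "MitralCell"),
          ("neuron:1", "transmitter", "glutamate"),
          ("neuron:1", "region", "olfactory bulb"),  # added later: no ALTER TABLE
      ])
      # Pivot one entity back into an attribute -> value record.
      record = dict(db.execute(
          "SELECT attribute, value FROM eav WHERE entity = ?", ("neuron:1",)))
      print(record)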
    
    Martin, D., Burstein, M., McDermott, D., McIlraith, S., Paolucci, M., Sycara, K., McGuinness, D.L., Sirin, E. & Srinivasan, N. Bringing semantics to web services with OWL-S {2007} WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS
    Vol. {10}({3}), pp. {243-277} 
    article DOI  
    Abstract: Current industry standards for describing Web Services focus on ensuring interoperability across diverse platforms, but do not provide a good foundation for automating the use of Web Services. Representational techniques being developed for the Semantic Web can be used to augment these standards. The resulting Web Service specifications enable the development of software programs that can interpret descriptions of unfamiliar Web Services and then employ those services to satisfy user goals. OWL-S (``OWL for Services'') is a set of notations for expressing such specifications, based on the Semantic Web ontology language OWL. It consists of three interrelated parts: a profile ontology, used to describe what the service does; a process ontology and corresponding presentation syntax, used to describe how the service is used; and a grounding ontology, used to describe how to interact with the service. OWL-S can be used to automate a variety of service-related activities involving service discovery, interoperation, and composition. A large body of research on OWL-S has led to the creation of many open-source tools for developing, reasoning about, and dynamically utilizing Web Services.
    BibTeX:
    @article{Martin2007,
      author = {Martin, David and Burstein, Mark and McDermott, Drew and McIlraith, Sheila and Paolucci, Massimo and Sycara, Katia and McGuinness, Deborah L. and Sirin, Evren and Srinivasan, Naveen},
      title = {Bringing semantics to web services with OWL-S},
      journal = {WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS},
      year = {2007},
      volume = {10},
      number = {3},
      pages = {243-277},
      doi = {{10.1007/s11280-007-0033-x}}
    }
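
    An OWL-S profile is ultimately a set of RDF statements about a service. The Python/rdflib sketch below states a few profile triples for an invented service; the namespace URI follows the OWL-S 1.1 release but should be checked against the specification, and the property selection is illustrative rather than complete.

      # Stating a minimal OWL-S-style service profile with rdflib.
      # Service, parameters, and description are invented for illustration.
      from rdflib import Graph, Literal, Namespace

      PROFILE = Namespace("http://www.daml.org/services/owl-s/1.1/Profile.owl#")
      EX = Namespace("http://example.org/services#")

      g = Graph()
      svc = EX.BookPriceService
      g.add((svc, PROFILE.serviceName, Literal("BookPriceService")))
      g.add((svc, PROFILE.textDescription,
             Literal("Returns the price of a book given its ISBN.")))
      g.add((svc, PROFILE.hasInput, EX.ISBNInput))
      g.add((svc, PROFILE.hasOutput, EX.PriceOutput))
      print(g.serialize(format="turtle"))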
    
    Martin, D., Paolucci, M., McIlraith, S., Burstein, M., McDermott, D., McGuinness, D., Parsia, B., Payne, T., Sabou, M., Solanki, M., Srinivasan, N. & Sycara, K. Bringing semantics to web services: The OWL-S approach {2005}
    Vol. {3387}SEMANTIC WEB SERVICES AND WEB PROCESS COMPOSITION, pp. {26-42} 
    inproceedings  
    Abstract: Service interface description languages such as WSDL, and related standards, are evolving rapidly to provide a foundation for interoperation between Web services. At the same time, Semantic Web service technologies, such as the Ontology Web Language for Services (OWL-S), are developing the means by which services can be given richer semantic specifications. Richer semantics can enable fuller, more flexible automation of service provision and use, and support the construction of more powerful tools and methodologies. Both sets of technologies can benefit from complementary uses and cross-fertilization of ideas. This paper shows how to use OWL-S in conjunction with Web service standards, and explains and illustrates the value added by the semantics expressed in OWL-S.
    BibTeX:
    @inproceedings{Martin2005,
      author = {Martin, D and Paolucci, M and McIlraith, S and Burstein, M and McDermott, D and McGuinness, D and Parsia, B and Payne, T and Sabou, M and Solanki, M and Srinivasan, N and Sycara, K},
      title = {Bringing semantics to web services: The OWL-S approach},
      booktitle = {SEMANTIC WEB SERVICES AND WEB PROCESS COMPOSITION},
      year = {2005},
      volume = {3387},
      pages = {26-42},
      note = {1st International Workshop on Semantic Web Services and Web Process Composition, San Diego, CA, JUL 06, 2004}
    }
    
    Martin, T. & Azvine, B. Acquisition of soft taxonomies for intelligent personal hierarchies and the soft semantic Web {2003} BT TECHNOLOGY JOURNAL
    Vol. {21}({4}), pp. {113-122} 
    article  
    Abstract: Information overload is a problem at an individual and a corporate level. Many solutions have been proposed, including knowledge management, data warehouses, service directories and digital libraries. The semantic Web aims to unify many of these approaches by appropriate markup and agreement on the meaning of the markup. At the individual's level, these techniques partially solve the problem by classifying documents within hierarchical structures and enabling searching and browsing of the documents. However, they also contribute to the problem as there is no unique categorisation and access structure that suits every individual. Finding the right document becomes a two-stage process: first find the right place in the categorisation scheme, then find the document within that class. In addition to enterprise-wide sources, individual information sources include e-mails, electronic documents in many formats, personal and group filespaces, notes, diary entries, etc. These are unlikely to conform to the enterprise categorisation but form useful resources nevertheless. The idea of an intelligent personal hierarchy for information (iPHI) is to auto-configure access to multiple sources of information based on personal categories. This entails fuzzy matching of meta-data structure as well as content. Metadata is a powerful tool in intelligent information management; however, it is not necessarily uniform, either in label or in content. One document's `author' is another's `creator'; `John Smith', `Smith, John' and `J. Smith' all refer to the same individual but are syntactically different. Fusion (or intelligent integration) of information takes place in an environment where the data may be of varying quality, and some may be incomplete or inconsistent. Combining metadata (and the associated data) is not possible without knowing (or learning) the mappings between their ontologies. Such mappings are likely to be soft, i.e. approximate: different sources arise from different designers with different world views. Soft computing is vital to tackle these problems. Frequently, data sources are organised implicitly, according to an internal ontology or taxonomy. Knowing this ontology or taxonomy is a necessary first step to using it in the fusion process. The work described in this paper extracts the implicit taxonomy and enables a user's interaction with the data (e.g. searching) to be expressed in their preferred terms rather than those used by the system.
    BibTeX:
    @article{Martin2003,
      author = {Martin, TP and Azvine, B},
      title = {Acquisition of soft taxonomies for intelligent personal hierarchies and the soft semantic Web},
      journal = {BT TECHNOLOGY JOURNAL},
      year = {2003},
      volume = {21},
      number = {4},
      pages = {113-122}
    }
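
    The article's own examples (`author' versus `creator', name-order variants) suggest what the soft matching has to do. A minimal Python sketch follows; the synonym table and the 0.8 threshold are invented, and real systems would use richer fuzzy measures than this crude string ratio.

      # Soft matching of metadata labels and person names (illustrative).
      from difflib import SequenceMatcher

      FIELD_SYNONYMS = {"creator": "author", "author": "author"}

      def norm_name(name):
          # "Smith, John" -> "john smith"; "J. Smith" -> "j smith"
          if "," in name:
              last, first = [p.strip() for p in name.split(",", 1)]
              name = f"{first} {last}"
          return name.lower().replace(".", "")

      def same_person(a, b, threshold=0.8):
          return SequenceMatcher(None, norm_name(a), norm_name(b)).ratio() >= threshold

      print(FIELD_SYNONYMS["creator"])                 # -> author
      print(same_person("John Smith", "Smith, John"))  # -> True
      print(same_person("J. Smith", "John Smith"))     # -> True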
    
    Martinez Lastra, J.L. & Delamer, I.M. Semantic Web Services in factory automation: Fundamental insights and research roadmap {2006} IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS
    Vol. {2}({1}), pp. {1-11} 
    article DOI  
    Abstract: One of the significant challenges for current and future manufacturing systems is that of providing rapid reconfigurability in order to evolve and adapt to mass customization. This challenge is aggravated if new types of processes and components are introduced, as existing components are expected to interact with the novel entities but have no previous knowledge on how to collaborate. This statement not only applies to innovative processes and devices, but also reflects the impossibility of incorporating knowledge in a single device about all types of available system components. This paper proposes the use of Semantic Web Services in order to overcome this challenge. The use of ontologies and explicit semantics enables performing logical reasoning to infer sufficient knowledge on the classification of processes that machines offer, and on how to execute and compose those processes to carry out manufacturing orchestration autonomously. A series of motivating utilization scenarios are illustrated, and a research roadmap is presented.
    BibTeX:
    @article{MartinezLastra2006,
      author = {Martinez Lastra, Jose L. and Delamer, Ivan M.},
      title = {Semantic Web Services in factory automation: Fundamental insights and research roadmap},
      journal = {IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS},
      year = {2006},
      volume = {2},
      number = {1},
      pages = {1-11},
      doi = {{10.1109/TII.2005.862144}}
    }
    
    Masuoka, R., Parsia, B. & Labrou, Y. Task computing - The semantic web meets pervasive computing {2003}
    Vol. {2870}SEMANTIC WEB - ISWC 2003, pp. {866-881} 
    inproceedings  
    Abstract: Non-expert users have to accomplish non-trivial tasks in application and device-rich computing environments. The increasing complexity of such environments is detrimental to user productivity (and occasionally, sanity). We propose to reduce these difficulties by shifting focus to what users want to do (i.e., on the tasks at hand) rather than on the specific means for doing those tasks. We call this shift in focus ``task computing''; we argue that ``task computing'' offers an incentive to device manufacturers to incorporate semantic web technologies into their devices in order to get the benefits of easier and more flexible use of their devices' features by end-users. To support task computing, we developed an environment called a ``Task Computing Environment'' (TCE), which we have implemented using standard Semantic Web (RDF, OWL, DAML-S), Web Services (SOAP, WSDL) and pervasive computing (UPnP) technologies. We describe and evaluate our TCE implementation, and we discuss how it has been used to realize various device-usage scenarios.
    BibTeX:
    @inproceedings{Masuoka2003,
      author = {Masuoka, R and Parsia, B and Labrou, Y},
      title = {Task computing - The semantic web meets pervasive computing},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {866-881},
      note = {2nd International Semantic Web Conference, SANIBEL, FLORIDA, OCT 20-23, 2003}
    }
    
    May, W., Alferes, J. & Amador, R. Active rules in the Semantic Web: Dealing with language heterogeneity {2005}
    Vol. {3791}RULES AND RULE MARKUP LANGUAGES FOR THE SEMANTIC WEB, PROCEEDINGS, pp. {30-44} 
    inproceedings  
    Abstract: In the same way as the ``static'' Semantic Web deals with data model and language heterogeneity and semantics that lead to RDF and OWL, there is language heterogeneity and the need for a semantical account concerning Web dynamics. Thus, generic rule markup has to bridge these discrepancies, i.e., allow for composition of component languages, retaining their distinguished semantics and making them accessible e.g. for reasoning about rules. In this paper we analyze the basic concepts for a general language for evolution and reactivity in the Semantic Web. We propose an ontology based on the paradigm of Event-Condition-Action (ECA) rules including an XML markup. In this framework, different languages for events (including languages for composite events), conditions (queries and tests) and actions (including complex actions) can be composed to define high-level rules for describing behavior in the Semantic Web.
    BibTeX:
    @inproceedings{May2005,
      author = {May, W and Alferes, JJ and Amador, R},
      title = {Active rules in the Semantic Web: Dealing with language heterogeneity},
      booktitle = {RULES AND RULE MARKUP LANGUAGES FOR THE SEMANTIC WEB, PROCEEDINGS},
      year = {2005},
      volume = {3791},
      pages = {30-44},
      note = {1st International Conference on Rules and Rule Markup Languages for the Semantic Web, Galway, IRELAND, NOV 10-12, 2005}
    }
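
    The ECA shape itself is small: on an event, evaluate a condition over some state, then run an action. The Python sketch below shows only that shape with an invented event and state; the paper's contribution is letting each of the three parts come from a different sublanguage with its own semantics, which a toy dispatcher cannot capture.

      # Minimal Event-Condition-Action rule and dispatcher (illustrative).
      from dataclasses import dataclass
      from typing import Callable

      @dataclass
      class ECARule:
          event: str
          condition: Callable[[dict], bool]
          action: Callable[[dict], None]

      rules = [
          ECARule(event="resource_updated",
                  condition=lambda s: s["mirror_stale"],
                  action=lambda s: print("re-sync mirror")),
      ]

      def dispatch(event, state):
          for r in rules:
              if r.event == event and r.condition(state):
                  r.action(state)

      dispatch("resource_updated", {"mirror_stale": True})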
    
    McBride, B. Jena: A semantic Web toolkit {2002} IEEE INTERNET COMPUTING
    Vol. {6}({6}), pp. {55-59} 
    article  
    BibTeX:
    @article{McBride2002,
      author = {McBride, B},
      title = {Jena: A semantic Web toolkit},
      journal = {IEEE INTERNET COMPUTING},
      year = {2002},
      volume = {6},
      number = {6},
      pages = {55-59}
    }
    
    McGuinness, D. Question answering on the semantic web {2004} IEEE INTELLIGENT SYSTEMS
    Vol. {19}({1}), pp. {82-85} 
    article  
    BibTeX:
    @article{McGuinness2004,
      author = {McGuinness, DL},
      title = {Question answering on the semantic web},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2004},
      volume = {19},
      number = {1},
      pages = {82-85}
    }
    
    McGuinness, D., Fikes, R., Hendler, J. & Stein, L. DAML+OIL: An ontology language for the semantic Web {2002} IEEE INTELLIGENT SYSTEMS
    Vol. {17}({5}), pp. {72-80} 
    article  
    BibTeX:
    @article{McGuinness2002,
      author = {McGuinness, DL and Fikes, R and Hendler, J and Stein, LA},
      title = {DAML+OIL: An ontology language for the semantic Web},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2002},
      volume = {17},
      number = {5},
      pages = {72-80}
    }
    
    McIlraith, S., Son, T. & Zeng, H. Semantic Web services {2001} IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS
    Vol. {16}({2}), pp. {46-53} 
    article  
    BibTeX:
    @article{McIlraith2001,
      author = {McIlraith, SA and Son, TC and Zeng, HL},
      title = {Semantic Web services},
      journal = {IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS},
      year = {2001},
      volume = {16},
      number = {2},
      pages = {46-53}
    }
    
    McMahon, C., Lowe, A. & Culley, S. Knowledge management in engineering design: personalization and codification {2004} JOURNAL OF ENGINEERING DESIGN
    Vol. {15}({4}), pp. {307-325} 
    article DOI  
    Abstract: Knowledge management is one of the key enabling technologies of distributed engineering enterprises. It encompasses a wide range of organizational, management and technologically orientated approaches that promote the exploitation of an organization's intellectual assets. Knowledge management approaches may be divided into personalization approaches that emphasize human resources and communication, and codification approaches that emphasize the collection and organization of knowledge. This distinction is used to explore the application of knowledge management in engineering design, after first outlining the engineering circumstances that have led to the current emphasis on the application. The paper then gives an overview of approaches to knowledge management through personalization, including human and organizational approaches, concentrating on the establishment of communities of practice. The role of information technology is explained both in terms of personalization (communication and team support through computer-supported cooperative work) and of codification through information management, knowledge structuring and knowledge-based engineering. The paper concludes with a discussion of the match of knowledge management approach to engineering circumstance, and of the current challenges of knowledge management.
    BibTeX:
    @article{McMahon2004,
      author = {McMahon, C and Lowe, A and Culley, S},
      title = {Knowledge management in engineering design: personalization and codification},
      journal = {JOURNAL OF ENGINEERING DESIGN},
      year = {2004},
      volume = {15},
      number = {4},
      pages = {307-325},
      note = {Symposium on Tools and Methods of Competitive Engineering, WUHAN, PEOPLES R CHINA, APR 22-26, 2002},
      doi = {{10.1080/09544820410001697154}}
    }
    
    McMahon, C., Lowe, A., Culley, S., Corderoy, M., Crossland, R., Shah, T. & Stewart, D. Waypoint: An integrated search and retrieval system for engineering documents {2004} JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING
    Vol. {4}({4}), pp. {329-338} 
    article DOI  
    Abstract: This paper describes the architecture and technical capabilities of an integrated engineering information search and retrieval system. The system is designed with a flexible architecture that allows it to be incorporated in other software systems and also to itself incorporate a variety of different software components. It provides uniform access to multiple heterogeneous information collections and an integrated access mechanism allowing both keyword searching and browsing of classification schemes interchangeably in a single information access session. Browsing using the system is based on a faceted classification approach in which continual feedback is given to the user on how the results of a search task may be refined, by updating browsable classifications to reflect previous user selections. The classification scheme is populated using an automatic constraint-based classifier. The article describes the rationale behind the choice of the system architecture and the incorporated technologies, and also describes three examples developed using the system.
    BibTeX:
    @article{McMahon2004a,
      author = {McMahon, C and Lowe, A and Culley, S and Corderoy, M and Crossland, R and Shah, T and Stewart, D},
      title = {Waypoint: An integrated search and retrieval system for engineering documents},
      journal = {JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING},
      year = {2004},
      volume = {4},
      number = {4},
      pages = {329-338},
      doi = {{10.1115/1.1812557}}
    }
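
    Faceted refinement of the kind described reduces to two operations: filter the document set by the selected facet values, then recount the remaining values of the other facets to update the browsable classification. A minimal Python sketch with an invented document set:

      # Facet filtering and value recounting (illustrative).
      from collections import Counter

      docs = [
          {"id": 1, "type": "report", "material": "steel", "year": 2003},
          {"id": 2, "type": "drawing", "material": "steel", "year": 2004},
          {"id": 3, "type": "report", "material": "aluminium", "year": 2004},
      ]

      def refine(docs, **selected):
          return [d for d in docs if all(d[k] == v for k, v in selected.items())]

      def facet_counts(docs, facet):
          return Counter(d[facet] for d in docs)

      remaining = refine(docs, material="steel")
      print([d["id"] for d in remaining])       # -> [1, 2]
      print(facet_counts(remaining, "type"))    # -> report: 1, drawing: 1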
    
    McNamara, D., Levinstein, I. & Boonthum, C. iSTART: Interactive strategy training for active reading and thinking {2004} BEHAVIOR RESEARCH METHODS INSTRUMENTS & COMPUTERS
    Vol. {36}({2}), pp. {222-233} 
    article  
    Abstract: Interactive Strategy Training for Active Reading and Thinking (iSTART) is a Web-based application that provides young adolescent to college-age students with high-level reading strategy training to improve comprehension of science texts. iSTART is modeled after an effective, human-delivered intervention called self-explanation reading training (SERT), which trains readers to use active reading strategies to self-explain difficult texts more effectively. To make the training more widely available, the Web-based trainer has been developed. Transforming the training from a human-delivered application to a computer-based one has resulted in a highly interactive trainer that adapts its methods to the performance of the students. The iSTART trainer introduces the strategies in a simulated classroom setting with interaction between three animated characters (an instructor character and two student characters) and the human trainee. Thereafter, the trainee identifies the strategies in the explanations of a student character who is guided by an instructor character. Finally, the trainee practices self-explanation under the guidance of an instructor character. We describe this system and discuss how appropriate feedback is generated.
    BibTeX:
    @article{McNamara2004,
      author = {McNamara, DS and Levinstein, IB and Boonthum, C},
      title = {iSTART: Interactive strategy training for active reading and thinking},
      journal = {BEHAVIOR RESEARCH METHODS INSTRUMENTS & COMPUTERS},
      year = {2004},
      volume = {36},
      number = {2},
      pages = {222-233},
      note = {33rd Annual Meeting of the Society-for-Computers-in-Psychology, Vancouver, CANADA, NOV 06, 2003}
    }
    
    Medjahed, B. & Bouguettaya, A. A multilevel composability model for semantic Web services {2005} IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
    Vol. {17}({7}), pp. {954-968} 
    article  
    Abstract: We propose a composability model to ascertain that Web services can safely be combined, hence avoiding unexpected failures at runtime. Composability is checked through a set of rules organized into four levels: syntactic, static semantic, dynamic semantic, and qualitative. We introduce the concepts of composability degree and τ-composability to cater for partial and total composability. We also propose a set of algorithms for checking composability. Finally, we conduct a performance study (analytical and experimental) of the proposed algorithms.
    BibTeX:
    @article{Medjahed2005,
      author = {Medjahed, B and Bouguettaya, A},
      title = {A multilevel composability model for semantic Web services},
      journal = {IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING},
      year = {2005},
      volume = {17},
      number = {7},
      pages = {954-968}
    }
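
    A toy version of a two-level check conveys the flavor of such composability rules: a syntactic test (one service's output type can feed the other's input type) plus a crude semantic test (matching service categories). The Python sketch below uses an invented type hierarchy and categories, not the paper's model, and the real rule set is much richer.

      # Toy syntactic + semantic composability check (illustrative).
      SUBTYPES = {"VisaCard": "CreditCard", "CreditCard": "PaymentMethod"}

      def is_subtype(t, expected):
          while t is not None:
              if t == expected:
                  return True
              t = SUBTYPES.get(t)
          return False

      def composable(s1, s2):
          syntactic = any(is_subtype(out, inp)
                          for out in s1["outputs"] for inp in s2["inputs"])
          semantic = s1["category"] == s2["category"]
          return syntactic and semantic

      checkout = {"outputs": ["VisaCard"], "category": "e-commerce"}
      payment = {"inputs": ["PaymentMethod"], "category": "e-commerce"}
      print(composable(checkout, payment))  # -> True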
    
    Medjahed, B., Bouguettaya, A. & Elmagarmid, A. Composing Web services on the Semantic Web {2003} VLDB JOURNAL
    Vol. {12}({4}), pp. {333-351} 
    article DOI  
    Abstract: Service composition is gaining momentum as the potential silver bullet for the envisioned Semantic Web. It purports to take the Web to unexplored efficiencies and provide a flexible approach for promoting all types of activities in tomorrow's Web. Applications expected to heavily take advantage of Web service composition include B2B E-commerce and E-government. To date, enabling composite services has largely been an ad hoc, time-consuming, and error-prone process involving repetitive low-level programming. In this paper, we propose an ontology-based framework for the automatic composition of Web services. We present a technique to generate composite services from high-level declarative descriptions. We define formal safeguards for meaningful composition through the use of composability rules. These rules compare the syntactic and semantic features of Web services to determine whether two services are composable. We provide an implementation using an E-government application offering customized services to indigent citizens. Finally, we present an exhaustive performance experiment to assess the scalability of our approach.
    BibTeX:
    @article{Medjahed2003,
      author = {Medjahed, B and Bouguettaya, A and Elmagarmid, AK},
      title = {Composing Web services on the Semantic Web},
      journal = {VLDB JOURNAL},
      year = {2003},
      volume = {12},
      number = {4},
      pages = {333-351},
      doi = {{10.1007/s00778-003-0101-5}}
    }
    
    Melis, E., Goguadze, G., Homik, M., Libbrecht, P., Ullrich, C. & Winterstein, S. Semantic-aware components and services of ActiveMath {2006} BRITISH JOURNAL OF EDUCATIONAL TECHNOLOGY
    Vol. {37}({3}), pp. {405-423} 
    article  
    Abstract: ActiveMath is a complex web-based adaptive learning environment with a number of components and interactive learning tools. The basis for handling semantics of learning content is provided by its semantic (mathematics) content markup, which is additionally annotated with educational metadata. Several components, tools and external services can make use of that content markup, eg, a course generator, a semantic search engine and user input evaluation services. The components and services have to communicate, pass content and state changes, actions, etc including mathematical semantics and educational markup. The novel event infrastructure supports this communication. This paper focuses on the usage of the content's semantics by selected novel components and sketches the communication.
    BibTeX:
    @article{Melis2006,
      author = {Melis, E and Goguadze, G and Homik, M and Libbrecht, P and Ullrich, C and Winterstein, S},
      title = {Semantic-aware components and services of ActiveMath},
      journal = {BRITISH JOURNAL OF EDUCATIONAL TECHNOLOGY},
      year = {2006},
      volume = {37},
      number = {3},
      pages = {405-423}
    }
    
    Menczer, F. Lexical and semantic clustering by web links {2004} JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY
    Vol. {55}({14}), pp. {1261-1269} 
    article DOI  
    Abstract: Recent Web-searching and -mining tools are combining text and link analysis to improve ranking and crawling algorithms. The central assumption behind such approaches is that there is a correlation between the graph structure of the Web and the text and meaning of pages. Here I formalize and empirically evaluate two general conjectures drawing connections from link information to lexical and semantic Web content. The link-content conjecture states that a page is similar to the pages that link to it, and the link-cluster conjecture that pages about the same topic are clustered together. These conjectures are often simply assumed to hold, and Web search tools are built on such assumptions. The present quantitative confirmation sheds light on the connection between the success of the latest Web-mining techniques and the small world topology of the Web, with encouraging implications for the design of better crawling algorithms.
    BibTeX:
    @article{Menczer2004,
      author = {Menczer, F},
      title = {Lexical and semantic clustering by web links},
      journal = {JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY},
      year = {2004},
      volume = {55},
      number = {14},
      pages = {1261-1269},
      doi = {{10.1002/asi.20081}}
    }
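
    The link-content conjecture can be probed with nothing more than TF-IDF cosine similarity: a page should score higher against the pages that link to it than against unrelated pages. A minimal Python sketch with three invented "pages":

      # Text similarity of a page to an in-neighbor vs a random page.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      page = "semantic web ontologies and reasoning over linked data"
      in_neighbor = "a survey of ontologies and reasoning for the semantic web"
      random_page = "recipes for sourdough bread and slow fermentation"

      X = TfidfVectorizer().fit_transform([page, in_neighbor, random_page])
      sim = cosine_similarity(X)
      print(f"page vs in-neighbor: {sim[0, 1]:.2f}")
      print(f"page vs random page: {sim[0, 2]:.2f}")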
    
    Menczer, F. Correlated topologies in citation networks and the Web {2004} EUROPEAN PHYSICAL JOURNAL B
    Vol. {38}({2}), pp. {211-221} 
    article DOI  
    Abstract: Information networks such as the scientific literature and the Web have been studied extensively by different communities focusing on alternative topological properties induced by citation links, textual content, and semantic relationships. This paper reviews work that brings such different perspectives together in order to build better search tools and to understand how the Web's scale free topology emerges from author behavior. I describe three topologies induced by different classes of similarity measures, and outline empirical data that allows us to quantify and map their correlations. The data is also used to study a power law relationship between the content similarity between two documents and the probability that they are connected by citations or hyperlinks. This finding has led to a remarkably powerful growth model for information networks, which simultaneously predicts the distribution of degree and the distribution of content similarity across pairs of documents -- Web pages connected by links and scientific articles connected by citations.
    BibTeX:
    @article{Menczer2004a,
      author = {Menczer, F},
      title = {Correlated topologies in citation networks and the Web},
      journal = {EUROPEAN PHYSICAL JOURNAL B},
      year = {2004},
      volume = {38},
      number = {2},
      pages = {211-221},
      doi = {{10.1140/epjb/e2004-00114-1}}
    }
    
    Menczer, F. Growing and navigating the small world Web by local content {2002} PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
    Vol. {99}({22}), pp. {14014-14019} 
    article DOI  
    Abstract: Can we model the scale-free distribution of Web hypertext degree under realistic assumptions about the behavior of page authors? Can a Web crawler efficiently locate an unknown relevant page? These questions are receiving much attention due to their potential impact for understanding the structure of the Web and for building better search engines. Here I investigate the connection between the linkage and content topology of Web pages. The relationship between a text-induced distance metric and a link-based neighborhood probability distribution displays a phase transition between a region where linkage is not determined by content and one where linkage decays according to a power law. This relationship is used to propose a Web growth model that is shown to accurately predict the distribution of Web page degree, based on textual content and assuming only local knowledge of degree for existing pages. A qualitatively similar phase transition is found between linkage and semantic distance, with an exponential decay tail. Both relationships suggest that efficient paths can be discovered by decentralized Web navigation algorithms based on textual and/or categorical cues.
    BibTeX:
    @article{Menczer2002,
      author = {Menczer, F},
      title = {Growing and navigating the small world Web by local content},
      journal = {PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA},
      year = {2002},
      volume = {99},
      number = {22},
      pages = {14014-14019},
      doi = {{10.1073/pnas.212348399}}
    }
    
    Merelli, E., Armano, G., Cannata, N., Corradini, F., d'Inverno, M., Doms, A., Lord, P., Martin, A., Milanesi, L., Moeller, S., Schroeder, M. & Luck, M. Agents in bioinformatics, computational and systems biology {2007} BRIEFINGS IN BIOINFORMATICS
    Vol. {8}({1}), pp. {45-59} 
    article DOI  
    Abstract: The adoption of agent technologies and multi-agent systems constitutes an emerging area in bioinformatics. In this article, we report on the activity of the Working Group on Agents in Bioinformatics (BIOAGENTS) founded during the first AgentLink III Technical Forum meeting on the 2nd of July, 2004, in Rome. The meeting provided an opportunity for seeding collaborations between the agent and bioinformatics communities to develop a different (agent-based) approach to computational frameworks both for data analysis and management in bioinformatics and for systems modelling and simulation in computational and systems biology. The collaborations gave rise to applications and integrated tools that we summarize and discuss in the context of the state of the art in this area. We investigate future challenges and argue that the field should still be explored from many perspectives, ranging from bio-conceptual languages for agent-based simulation, to the definition of bio-ontology-based declarative languages to be used by information agents, and to the adoption of agents for computational grids.
    BibTeX:
    @article{Merelli2007,
      author = {Merelli, Emanuela and Armano, Giuliano and Cannata, Nicola and Corradini, Flavio and d'Inverno, Mark and Doms, Andreas and Lord, Phillip and Martin, Andrew and Milanesi, Luciano and Moeller, Steffen and Schroeder, Michael and Luck, Michael},
      title = {Agents in bioinformatics, computational and systems biology},
      journal = {BRIEFINGS IN BIOINFORMATICS},
      year = {2007},
      volume = {8},
      number = {1},
      pages = {45-59},
      doi = {{10.1093/bib/bbl014}}
    }
    
    Micarelli, A. & Sciarrone, F. Anatomy and empirical evaluation of an adaptive Web-based information filtering system {2004} USER MODELING AND USER-ADAPTED INTERACTION
    Vol. {14}({2-3}), pp. {159-200} 
    article  
    Abstract: A case study in adaptive information filtering systems for the Web is presented. The described system comprises two main modules, named HUMOS and WIFS. HUMOS is a user modeling system based on stereotypes. It builds and maintains long term models of individual Internet users, representing their information needs. The user model is structured as a frame containing informative words, enhanced with semantic networks. The proposed machine learning approach for the user modeling process is based on the use of an artificial neural network for stereotype assignments. WIFS is a content-based information filtering module, capable of selecting html/text documents on computer science collected from the Web according to the interests of the user. It has been designed expressly around the structure of the user model utilized by HUMOS. Currently, this system acts as an adaptive interface to the Web search engine ALTA VISTA(TM). An empirical evaluation of the system has been made in experimental settings. The experiments focused on the evaluation, by means of a non-parametric statistics approach, of the added value in terms of system performance given by the user modeling component; it also focused on the evaluation of the usability and user acceptance of the system. The results of the experiments are satisfactory and support the choice of a user model-based approach to information filtering on the Web.
    BibTeX:
    @article{Micarelli2004,
      author = {Micarelli, A and Sciarrone, F},
      title = {Anatomy and empirical evaluation of an adaptive Web-based information filtering system},
      journal = {USER MODELING AND USER-ADAPTED INTERACTION},
      year = {2004},
      volume = {14},
      number = {2-3},
      pages = {159-200}
    }
    
    Mika, P. Flink: Semantic Web technology for the extraction and analysis of social networks {2005} JOURNAL OF WEB SEMANTICS
    Vol. {3}({2-3}), pp. {211-223} 
    article DOI  
    Abstract: We present the Flink system for the extraction, aggregation and visualization of online social networks. Flink employs semantic technology for reasoning with personal information extracted from a number of electronic information sources including web pages, emails, publication archives and FOAF profiles. The acquired knowledge is used for the purposes of social network analysis and for generating a web-based presentation of the community. We demonstrate this novel, electronic-data-driven approach to social science using the example of the Semantic Web research community. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Mika2005,
      author = {Mika, P},
      title = {Flink: Semantic Web technology for the extraction and analysis of social networks},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2005},
      volume = {3},
      number = {2-3},
      pages = {211-223},
      doi = {{10.1016/j.websem.2005.05.006}}
    }
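
    One extraction step behind such a system is building a weighted co-occurrence network, for example from co-authorship. The Python/networkx sketch below uses invented publication records; Flink itself aggregates many more sources (web pages, emails, FOAF profiles) and applies semantic identity resolution before the network analysis.

      # Co-authorship graph from publication records (illustrative).
      from itertools import combinations
      import networkx as nx

      papers = [["Ann", "Bob"], ["Ann", "Carol"], ["Ann", "Bob", "Carol"]]

      G = nx.Graph()
      for authors in papers:
          for a, b in combinations(authors, 2):
              w = G.get_edge_data(a, b, {"weight": 0})["weight"]
              G.add_edge(a, b, weight=w + 1)

      print(sorted(G.degree, key=lambda kv: -kv[1]))  # most-connected first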
    
    Miller, J., Baramidze, G., Sheth, A. & Fishwick, P. Investigating ontologies for simulation modeling {2004} 37TH ANNUAL SIMULATION SYMPOSIUM, PROCEEDINGS, pp. {55-63}  inproceedings  
    Abstract: Many fields have developed or are developing ontologies for their subdomains. The Gene Ontology (GO) is now considered to be a great success in biology, a field that has already developed several extensive ontologies. Similar advantages could accrue to the simulation and modeling community. Ontologies provide a way to establish common vocabularies and capture domain knowledge for organizing the domain, either with community-wide agreement or within the context of agreement between leading domain experts. They can be used to deliver significantly improved (semantic) search and browsing, integration of heterogeneous information sources, and improved analytics and knowledge discovery capabilities. Such knowledge can be used to establish common vocabularies, nomenclatures and taxonomies with links to detailed information sources. This paper investigates the use, the benefits and the development requirements of Web-accessible ontologies for discrete-event simulation and modeling. As a case study, the development of a prototype OWL-based ontology for modeling and simulation called the Discrete-event Modeling Ontology (DeMO) is also discussed. Prototype ontologies such as DeMO can serve as a basis for achieving broader community agreement and adoption of ontologies for this field.
    BibTeX:
    @inproceedings{Miller2004,
      author = {Miller, JA and Baramidze, GT and Sheth, AP and Fishwick, PA},
      title = {Investigating ontologies for simulation modeling},
      booktitle = {37TH ANNUAL SIMULATION SYMPOSIUM, PROCEEDINGS},
      year = {2004},
      pages = {55-63},
      note = {37th Annual Simulation Symposium, Arlington, VA, APR 18-22, 2004}
    }
    
    Miller, L., Seaborne, A. & Reggiori, A. Three implementations of SquishQL, a simple RDF query language {2002}
    Vol. {2342}SEMANTIC WEB - ISWC 2002, pp. {423-435} 
    inproceedings  
    Abstract: RDF provides a basic way to represent data for the Semantic Web. We have been experimenting with the query paradigm for working with RDF data in semantic web applications. Query of RDF data provides a declarative access mechanism that is suitable for application usage and remote access. We describe work on a conceptual model for querying RDF data that refines ideas first presented at the W3C workshop on Query Languages [14] and the design of one possible syntax, derived from [7], that is suitable for application programmers. Further, we present experience gained in three implementations of the query language.
    BibTeX:
    @inproceedings{Miller2002,
      author = {Miller, L and Seaborne, A and Reggiori, A},
      title = {Three implementations of SquishQL, a simple RDF query language},
      booktitle = {SEMANTIC WEB - ISWC 2002},
      year = {2002},
      volume = {2342},
      pages = {423-435},
      note = {1st International Semantic Web Conference (ISWC), SARDINIA, ITALY, JUN 09-12, 2002}
    }
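
    The query paradigm behind SquishQL, conjunctive triple patterns with shared variables, is easy to demonstrate. The sketch below evaluates such patterns over an in-memory triple list; the patterns are written as Python tuples rather than in SquishQL's own grammar, and the data is invented.

    # Conjunctive triple-pattern matching over an in-memory RDF-like store:
    # terms starting with '?' are variables that must bind consistently.
    triples = [("doc1", "dc:title", "Semantic Web"),
               ("doc1", "dc:creator", "Miller"),
               ("doc2", "dc:creator", "Miller")]

    def match(patterns, bindings=None):
        bindings = bindings or {}
        if not patterns:
            yield bindings
            return
        head, rest = patterns[0], patterns[1:]
        for triple in triples:
            b = dict(bindings)
            if all((b.setdefault(p, v) == v) if p.startswith("?") else (p == v)
                   for p, v in zip(head, triple)):
                yield from match(rest, b)

    # "Find creators of resources titled 'Semantic Web'":
    query = [("?d", "dc:title", "Semantic Web"), ("?d", "dc:creator", "?who")]
    for b in match(query):
        print(b["?who"])  # -> Miller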
    
    Mobasher, B., Dai, H., Luo, T., Sun, Y. & Zhu, J. Integrating web usage and content mining for more effective personalization {2000}
    Vol. {1875}ELECTRONIC COMMERCE AND WEB TECHNOLOGIES, PROCEEDINGS, pp. {165-176} 
    inproceedings  
    Abstract: Recent proposals have suggested Web usage mining as an enabling mechanism to overcome the problems associated with more traditional Web personalization techniques such as collaborative or content-based filtering. These problems include lack of scalability, reliance on subjective user ratings or static profiles, and the inability to capture a richer set of semantic relationships among objects (in content-based systems). Yet, usage-based personalization can be problematic when little usage data is available pertaining to some objects or when the site content changes regularly. For more effective personalization, both usage and content attributes of a site must be integrated into a Web mining framework and used by the recommendation engine in a uniform manner. In this paper we present such a framework, distinguishing between the offline tasks of data preparation and mining, and the online process of customizing Web pages based on a user's active session. We describe effective techniques based on clustering to obtain a uniform representation for both site usage and site content profiles, and we show how these profiles can be used to perform real-time personalization.
    BibTeX:
    @inproceedings{Mobasher2000,
      author = {Mobasher, B and Dai, HH and Luo, T and Sun, YQ and Zhu, J},
      title = {Integrating web usage and content mining for more effective personalization},
      booktitle = {ELECTRONIC COMMERCE AND WEB TECHNOLOGIES, PROCEEDINGS},
      year = {2000},
      volume = {1875},
      pages = {165-176},
      note = {1st International Conference on Electronic Commerce and Web Technologies, LONDON, ENGLAND, SEP 04-06, 2000}
    }
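
    The online half of the framework can be illustrated compactly: offline clustering (elided here) yields aggregate usage profiles as weighted page vectors; online, the active session is matched to the closest profile by cosine similarity and unvisited pages from that profile are recommended. The profile contents below are invented.

    # Online recommendation from aggregate usage profiles (illustrative).
    import math

    profiles = {  # profile name -> {page: weight}, produced offline
        "courses": {"/syllabus": 0.9, "/grades": 0.7, "/lectures": 0.6},
        "research": {"/papers": 0.9, "/projects": 0.8, "/people": 0.4},
    }

    def cosine(u, v):
        dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in set(u) | set(v))
        norm = math.sqrt(sum(x * x for x in u.values())) * \
               math.sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    session = {"/syllabus": 1.0, "/lectures": 1.0}
    best = max(profiles.values(), key=lambda p: cosine(session, p))
    print(sorted((p for p in best if p not in session), key=best.get,
                 reverse=True))  # -> ['/grades']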
    
    Mobasher, B., Jin, X. & Zhou, Y. Semantically enhanced Collaborative Filtering on the web {2004}
    Vol. {3209}WEB MINING: FROM WEB TO SEMANTIC WEB, pp. {57-76} 
    inproceedings  
    Abstract: Item-based Collaborative Filtering (CF) algorithms have been designed to deal with the scalability problems associated with traditional user-based CF approaches without sacrificing recommendation or prediction accuracy. Item-based algorithms avoid the bottleneck in computing user-user correlations by first considering the relationships among items and performing similarity computations in a reduced space. Because the computation of item similarities is independent of the methods used for generating predictions, multiple knowledge sources, including structured semantic information about items, can be brought to bear in determining similarities among items. The integration of semantic similarities for items with rating- or usage-based similarities allows the system to make inferences based on the underlying reasons for which a user may or may not be interested in a particular item. Furthermore, in cases where little or no rating (or usage) information is available (such as in the case of newly added items, or in very sparse data sets), the system can still use the semantic similarities to provide reasonable recommendations for users. In this paper, we introduce an approach for semantically enhanced collaborative filtering in which structured semantic knowledge about items, extracted automatically from the Web based on domain-specific reference ontologies, is used in conjunction with user-item mappings to create a combined similarity measure and generate predictions. Our experimental results demonstrate that the integrated approach yields significant advantages, both in improving accuracy and in dealing with very sparse data sets or new items.
    BibTeX:
    @inproceedings{Mobasher2004,
      author = {Mobasher, B and Jin, X and Zhou, Y},
      title = {Semantically enhanced Collaborative Filtering on the web},
      booktitle = {WEB MINING: FROM WEB TO SEMANTIC WEB},
      year = {2004},
      volume = {3209},
      pages = {57-76},
      note = {1st European Web Mining Forum, Cavtat Dubrovnik, CROATIA, SEP 22, 2003}
    }
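
    The central device, a combined item-item similarity, can be sketched as a linear blend of a rating-based measure and a semantic one; the blend weight alpha and the toy measures below are assumptions for illustration, not the paper's exact formulation.

    # Hybrid item-item similarity: blend rating agreement with semantic
    # (attribute-overlap) similarity; all data and weights are invented.
    def rating_sim(ra, rb):
        common = set(ra) & set(rb)
        if not common:
            return 0.0
        return sum(1 - abs(ra[u] - rb[u]) / 4 for u in common) / len(common)

    def semantic_sim(aa, ab):  # Jaccard overlap of item attributes
        union = aa | ab
        return len(aa & ab) / len(union) if union else 0.0

    def combined_sim(a, b, ratings, attrs, alpha=0.5):
        return alpha * rating_sim(ratings[a], ratings[b]) \
             + (1 - alpha) * semantic_sim(attrs[a], attrs[b])

    ratings = {"i1": {"u1": 5, "u2": 4}, "i2": {"u1": 4}}  # 1..5 scale
    attrs = {"i1": {"sci-fi", "space"}, "i2": {"sci-fi"}}
    print(combined_sim("i1", "i2", ratings, attrs))  # semantics help sparse i2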
    
    Motik, B., Horrocks, I., Rosati, R. & Sattler, U. Can OWL and logic programming live together happily ever after? {2006}
    Vol. {4273}Semantic Web - ISWC 2006, Proceedings, pp. {501-514} 
    inproceedings  
    Abstract: Logic programming (LP) is often seen as a way to overcome several shortcomings of the Web Ontology Language (OWL), such as the inability to model integrity constraints or perform closed-world querying. However, the open-world semantics of OWL seems to be fundamentally incompatible with the closed-world semantics of LP. This has sparked a heated debate in the Semantic Web community, resulting in proposals for alternative ontology languages based entirely on logic programming. To help resolve this debate, we investigate the practical use cases which seem to be addressed by logic programming. In fact, many of these requirements have already been addressed outside the Semantic Web. By drawing inspiration from these existing formalisms, we present a novel logic of hybrid MKNF knowledge bases, which seamlessly integrates OWL with LP. We are thus capable of addressing the identified use cases without a radical change in the architecture of the Semantic Web.
    BibTeX:
    @inproceedings{Motik2006,
      author = {Motik, Boris and Horrocks, Ian and Rosati, Riccardo and Sattler, Ulrike},
      title = {Can OWL and logic programming live together happily ever after?},
      booktitle = {Semantic Web - ISWC 2006, Proceedings},
      year = {2006},
      volume = {4273},
      pages = {501-514},
      note = {5th International Semantic Web Conference (ISWC 2006), Athens, GA, NOV 05-09, 2006}
    }
    
    Motik, B., Maedche, A. & Volz, R. A conceptual modeling approach for semantics-driven enterprise applications {2002}
    Vol. {2519}ON THE MOVE TO MEANINGFUL INTERNET SYSTEMS 2002: COOPLS, DOA, AND ODBASE, pp. {1082-1099} 
    inproceedings  
    Abstract: In recent years ontologies - shared conceptualizations of some domain - are increasingly seen as the key to further automation of information processing. Although many approaches for representing and applying ontologies have already been devised, they haven't found their way into enterprise applications. In this paper we argue that ontology-based systems lack critical technical features, such as scalability, reliability, concurrency and integration with existing data sources, as well as the support for modularization and meta-concept modeling from the conceptual modeling perspective. We present a conceptual modeling approach that balances some of the trade-offs to more easily integrate into existing enterprise information infrastructure. Our approach is implemented within KAON, the Karlsruhe Ontology and Semantic Web tool suite.
    BibTeX:
    @inproceedings{Motik2002,
      author = {Motik, B and Maedche, A and Volz, R},
      title = {A conceptual modeling approach for semantics-driven enterprise applications},
      booktitle = {ON THE MOVE TO MEANINGFUL INTERNET SYSTEMS 2002: COOPLS, DOA, AND ODBASE},
      year = {2002},
      volume = {2519},
      pages = {1082-1099},
      note = {Confederated Conferences CoopIS, DOA and ODBASE, IRVINE, CA, OCT 28-NOV 01, 2002}
    }
    
    Motik, B., Sattler, U. & Studer, R. Query answering for OWL-DL with rules {2004}
    Vol. {3298}SEMANTIC WEB - ISWC 2004, PROCEEDINGS, pp. {549-563} 
    inproceedings  
    Abstract: Both OWL-DL and function-free Horn rules are decidable logics with interesting, yet orthogonal expressive power: from the rules perspective, OWL-DL is restricted to tree-like rules, but provides both existentially and universally quantified variables and full, monotonic negation. From the description logic perspective, rules are restricted to universal quantification, but allow for the interaction of variables in arbitrary ways. Clearly, a combination of OWL-DL and rules is desirable for building Semantic Web ontologies, and several such combinations have already been discussed. However, such a combination might easily lead to the undecidability of interesting reasoning problems. Here, we present a decidable combination which is, to the best of our knowledge, more general than similar decidable combinations proposed so far. Decidability is obtained by restricting rules to so-called DL-safe ones, requiring each variable in a rule to occur in a non-DL-atom in the rule body. We show that query answering in such a combined logic is decidable, and we discuss its expressive power by means of a non-trivial example. Finally, we present an algorithm for query answering in SHIQ(D) extended with DL-safe rules based on the reduction to disjunctive datalog.
    BibTeX:
    @inproceedings{Motik2004,
      author = {Motik, B and Sattler, U and Studer, R},
      title = {Query answering for OWL-DL with rules},
      booktitle = {SEMANTIC WEB - ISWC 2004, PROCEEDINGS},
      year = {2004},
      volume = {3298},
      pages = {549-563},
      note = {3rd International Semantic Web Conference, Hiroshima, JAPAN, NOV 07-11, 2004}
    }
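
    DL-safety is a syntactic condition, so it can be checked mechanically: every variable of the rule must occur in at least one non-DL-atom of the rule body. A minimal checker, with atoms encoded as dictionaries and a flag marking DL-atoms (the rule shown is invented for illustration):

    # Check DL-safety: each variable must occur in a non-DL-atom of the body.
    def variables(atom):
        return {t for t in atom["args"] if t.startswith("?")}

    def is_dl_safe(head, body):
        all_vars = variables(head).union(*(variables(a) for a in body))
        safe_vars = set().union(*(variables(a) for a in body if not a["dl"]))
        return all_vars <= safe_vars

    head = {"pred": "Danger", "args": ["?x"], "dl": True}
    body = [{"pred": "Lion", "args": ["?x"], "dl": True},
            {"pred": "near", "args": ["?x", "?y"], "dl": True}]
    print(is_dl_safe(head, body))  # False: add non-DL atoms O(?x), O(?y)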
    
    Motik, B., Shearer, R. & Horrocks, I. Optimized reasoning in description logics using hypertableaux {2007}
    Vol. {4603}Automated Deduction - CADE-21, Proceedings, pp. {67-83} 
    inproceedings  
    Abstract: We present a novel reasoning calculus for Description Logics (DLs)-knowledge representation formalisms with applications in areas such as the Semantic Web. In order to reduce the nondeterminism due to general inclusion axioms, we base our calculus on hypertableau and hyperresolution calculi, which we extend with a blocking condition to ensure termination. To prevent the calculus from generating large models, we introduce ``anywhere'' pairwise blocking. Our preliminary implementation shows significant performance improvements on several well-known ontologies. To the best of our knowledge, our reasoner is currently the only one that can classify the original version of the GALEN terminology.
    BibTeX:
    @inproceedings{Motik2007,
      author = {Motik, Boris and Shearer, Rob and Horrocks, Ian},
      title = {Optimized reasoning in description logics using hypertableaux},
      booktitle = {Automated Deduction - CADE-21, Proceedings},
      year = {2007},
      volume = {4603},
      pages = {67-83},
      note = {21st International Conference on Automated Deduction (CADE-21), Bremen, GERMANY, JUL 17-20, 2007}
    }
    
    Motik, B., Studer, R. & Sattler, U. Query answering for OWL-DL with rules {2005} JOURNAL OF WEB SEMANTICS
    Vol. {3}({1}), pp. {41-60} 
    article DOI  
    Abstract: Both OWL-DL and function-free Horn rules are decidable fragments of first-order logic with interesting, yet orthogonal expressive power. A combination of OWL-DL and rules is desirable for the Semantic Web; however, it might easily lead to the undecidability of interesting reasoning problems. Here, we present such a decidable combination, in which rules are required to be DL-safe: each variable in the rule must occur in a non-DL-atom in the rule body. We discuss the expressive power of such a combination and present an algorithm for query answering in the related logic extended with DL-safe rules, based on a reduction to disjunctive programs. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Motik2005,
      author = {Motik, B and Studer, R and Sattler, U},
      title = {Query answering for OWL-DL with rules},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2005},
      volume = {3},
      number = {1},
      pages = {41-60},
      doi = {{10.1016/j.websem.2005.05.001}}
    }
    
    Motta, E., Domingue, J., Cabral, L. & Gaspari, M. IRS-II: A framework and infrastructure for semantic web services {2003}
    Vol. {2870}SEMANTIC WEB - ISWC 2003, pp. {306-318} 
    inproceedings  
    Abstract: In this paper we describe IRS-II (Internet Reasoning Service) a framework and implemented infrastructure, whose main goal is to support the publication, location, composition and execution of heterogeneous web services, augmented with semantic descriptions of their functionalities. IRS-II has three main classes of features which distinguish it from other work on semantic web services. Firstly, it supports one-click publishing of standalone software: IRS-II automatically creates the appropriate wrappers, given pointers to the standalone code. Secondly, it explicitly distinguishes between tasks (what to do) and methods (how to achieve tasks) and as a result supports capability-driven service invocation; flexible mappings between services and problem specifications; and dynamic, knowledge-based service selection. Finally, IRS-II services are web service compatible - standard web services can be trivially published through the IRS-II and any IRS-II service automatically appears as a standard web service to other web service infrastructures. In the paper we illustrate the main functionalities of IRS-II through a scenario involving a distributed application in the healthcare domain.
    BibTeX:
    @inproceedings{Motta2003,
      author = {Motta, E and Domingue, J and Cabral, L and Gaspari, M},
      title = {IRS-II: A framework and infrastructure for semantic web services},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {306-318},
      note = {2nd International Semantic Web Conference, SANIBEL, FL, OCT 20-23, 2003}
    }
    
    Motta, E., Shum, S. & Domingue, J. Ontology-driven document enrichment: principles, tools and applications {2000} INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES
    Vol. {52}({6}), pp. {1071-1109} 
    article  
    Abstract: In this paper, we present an approach to document enrichment, which consists of developing and integrating formal knowledge models with archives of documents, to provide intelligent knowledge retrieval and (possibly) additional knowledge-intensive services, beyond what is currently available using ``standard'' information retrieval and search facilities. Our approach is ontology-driven, in the sense that the construction of the knowledge model is carried out in a top-down fashion, by populating a given ontology, rather than in a bottom-up fashion, by annotating a particular document. In this paper, we give an overview of the approach and we examine the various types of issues (e.g. modelling, organizational and user interface issues) which need to be tackled to effectively deploy our approach in the workplace. In addition, we also discuss a number of technologies we have developed to support ontology-driven document enrichment and we illustrate our ideas in the domains of electronic news publishing, scholarly discourse and medical guidelines. (C) 2000 Academic Press.
    BibTeX:
    @article{Motta2000,
      author = {Motta, E and Shum, SB and Domingue, J},
      title = {Ontology-driven document enrichment: principles, tools and applications},
      journal = {INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES},
      year = {2000},
      volume = {52},
      number = {6},
      pages = {1071-1109}
    }
    
    Murray-Rust, P., Rzepa, H., Tyrrell, S. & Zhang, Y. Representation and use of chemistry in the global electronic age {2004} ORGANIC & BIOMOLECULAR CHEMISTRY
    Vol. {2}({22}), pp. {3192-3203} 
    article DOI  
    Abstract: We present an overview of the current state of public semantic chemistry and propose new approaches at a strategic and a detailed level. We show by example how a model for a Chemical Semantic Web can be constructed using machine-processed data and information from journal articles.
    BibTeX:
    @article{Murray-Rust2004a,
      author = {Murray-Rust, P and Rzepa, HS and Tyrrell, SM and Zhang, Y},
      title = {Representation and use of chemistry in the global electronic age},
      journal = {ORGANIC & BIOMOLECULAR CHEMISTRY},
      year = {2004},
      volume = {2},
      number = {22},
      pages = {3192-3203},
      doi = {{10.1039/b410732b}}
    }
    
    Murray-Rust, P., Rzepa, H., Williamson, M. & Willighagen, E. Chemical Markup, XML, and the World Wide Web. 5. Applications of chemical metadata in RSS aggregators {2004} JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES
    Vol. {44}({2}), pp. {462-469} 
    article DOI  
    Abstract: Examples of the use of the RSS 1.0 (RDF Site Summary) specification together with CML (Chemical Markup Language) to create a metadata based alerting service termed CMLRSS for molecular content are presented. CMLRSS can be viewed either using generic software or with modular open-source chemical viewers and editors enhanced with CMLRSS modules. We discuss the more automated use of CMLRSS as a component of a World Wide Molecular Matrix of semantically rich chemical information.
    BibTeX:
    @article{Murray-Rust2004,
      author = {Murray-Rust, P and Rzepa, HS and Williamson, MJ and Willighagen, EL},
      title = {Chemical Markup, XML, and the World Wide Web. 5. Applications of chemical metadata in RSS aggregators},
      journal = {JOURNAL OF CHEMICAL INFORMATION AND COMPUTER SCIENCES},
      year = {2004},
      volume = {44},
      number = {2},
      pages = {462-469},
      doi = {{10.1021/ci034244p}}
    }
    
    Nack, F., van Ossenbruggen, J. & Hardman, L. That obscure object of desire: Multimedia metadata on the Web, Part 2 {2005} IEEE MULTIMEDIA
    Vol. {12}({1}), pp. {54+} 
    article  
    Abstract: The World Wide Web Consortium (W3C) and the International Standards Organization (ISO) have developed technologies that define structures for describing media semantics. Although both approaches are based on XML, a number of syntactic and semantic problems hinder their interoperability. In Part 2 we discuss these problems as well as ontological issues for media semantics and the problems of applying theoretical concepts to real-world applications.
    BibTeX:
    @article{Nack2005,
      author = {Nack, F and van Ossenbruggen, J and Hardman, L},
      title = {That obscure object of desire: Multimedia metadata on the Web, Part 2},
      journal = {IEEE MULTIMEDIA},
      year = {2005},
      volume = {12},
      number = {1},
      pages = {54+}
    }
    
    Nagypal, G. & Motik, B. A fuzzy model for representing uncertain, subjective, and vague temporal knowledge in ontologies {2003}
    Vol. {2888}ON THE MOVE TO MEANINGFUL INTERNET SYSTEMS 2003: COOPIS, DOA, AND ODBASE, pp. {906-923} 
    inproceedings  
    Abstract: Time modeling is a crucial feature in many application domains. However, temporal information often is not crisp, but is uncertain, subjective and vague. This is particularly true when representing historical information, as historical accounts are inherently imprecise. Similarly, we conjecture that in the Semantic Web representing uncertain temporal information will be a common requirement. Hence, existing approaches for temporal modeling based on crisp representation of time cannot be applied to these advanced modeling tasks. To overcome these difficulties, in this paper we present a fuzzy interval-based temporal model capable of representing imprecise temporal knowledge. Our approach naturally subsumes existing crisp temporal models, i.e. crisp temporal relationships are intuitively represented in our system. Apart from presenting the fuzzy temporal model, we discuss how this model is integrated with the ontology model to allow annotating ontology definitions with time specifications.
    BibTeX:
    @inproceedings{Nagypal2003,
      author = {Nagypal, G and Motik, B},
      title = {A fuzzy model for representing uncertain, subjective, and vague temporal knowledge in ontologies},
      booktitle = {ON THE MOVE TO MEANINGFUL INTERNET SYSTEMS 2003: COOPIS, DOA, AND ODBASE},
      year = {2003},
      volume = {2888},
      pages = {906-923},
      note = {OTM Confederated International Conference CoopIS, DOA and ODBASE, CATANIA, ITALY, NOV 03-07, 2003}
    }
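
    A fuzzy temporal interval is commonly modelled by a trapezoidal membership function: degree 1 inside the certain core, ramping linearly to 0 across the vague boundaries. The sketch below is a generic illustration of this idea over a year axis, not the paper's exact model.

    # Trapezoidal fuzzy membership for a vague interval such as
    # "around the turn of the century": core [b, c], fuzzy edges [a, b], [c, d].
    def trapezoid(a, b, c, d):
        def mu(t):
            if t < a or t > d:
                return 0.0
            if b <= t <= c:
                return 1.0
            return (t - a) / (b - a) if t < b else (d - t) / (d - c)
        return mu

    turn_of_century = trapezoid(1890, 1895, 1905, 1910)
    for year in (1885, 1892, 1900, 1908):
        print(year, round(turn_of_century(year), 2))  # 0.0, 0.4, 1.0, 0.4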
    
    Nahum, E., Barzilai, T. & Kandlur, D. Performance issues in WWW servers {2002} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {10}({1}), pp. {2-11} 
    article  
    Abstract: This paper evaluates techniques for improving operating system and network protocol software support for high-performance World Wide Web servers. We study approaches in three categories: new socket functions, per-byte optimizations, and per-connection optimizations. We examine two proposed socket functions, acceptex() and sendfile(), comparing sendfile()'s effectiveness with a combination of mmap() and writev(). We show how sendfile() provides the necessary semantic support to eliminate copies and checksums in the kernel, and quantify the benefit of the function's header and close options. We also present mechanisms to reduce the number of packets exchanged in an HTTP transaction, both increasing server performance and reducing network utilization, without compromising interoperability. Results using WebStone show that our combination of mechanisms can improve server throughput by up to 64% and can eliminate up to 33% of the packets in an HTTP exchange. Results with SURGE show an aggregate increase in server throughput of 25%.
    BibTeX:
    @article{Nahum2002,
      author = {Nahum, E and Barzilai, T and Kandlur, DD},
      title = {Performance issues in WWW servers},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {2002},
      volume = {10},
      number = {1},
      pages = {2-11}
    }
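
    The per-byte optimization at the heart of the paper, removing user-space copies with sendfile(), survives in modern APIs. The sketch below uses Python's os.sendfile (a wrapper over the same system call) to push one file down a socket; HTTP framing and error handling are omitted.

    # Zero-copy file transmission with sendfile(2): the kernel moves file
    # data directly to the socket, avoiding read()/write() copy cycles.
    import os
    import socket

    def serve_file_once(path, port=8080):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with open(path, "rb") as f:
            size = os.fstat(f.fileno()).st_size
            sent = 0
            while sent < size:  # sendfile may transmit fewer bytes than asked
                sent += os.sendfile(conn.fileno(), f.fileno(), sent, size - sent)
        conn.close()
        srv.close()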
    
    Nanda, J., Simpson, T.W., Kumara, S.R.T. & Shooter, S.B. A methodology for product family ontology development using formal concept analysis and Web ontology language {2006} JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING
    Vol. {6}({2}), pp. {103-113} 
    article DOI  
    Abstract: The use of ontologies for information sharing is well documented in the literature, but the lack of a comprehensive and systematic methodology for constructing product ontologies has limited the process of developing ontologies for design artifacts. In this paper we introduce the Product Family Ontology Development Methodology (PFODM), a novel methodology to develop formal product ontologies using the Semantic Web paradigm. Within PFODM, Formal Concept Analysis (FCA) is used first to identify similarities among a finite set of design artifacts based on their properties and then to develop and refine a product family ontology using Web Ontology Language (OWL). A family of seven one-time-use cameras is used to demonstrate the steps of the PFODM to construct such an ontology. The benefit of PFODM lies in providing a systematic and consistent methodology for constructing ontologies to support product family design. The resulting ontologies provide a hierarchical conceptual clustering of related design artifacts, which is particularly advantageous for product family design where parts, processes, and most importantly, information is intentionally shared and reused to reduce complexity, lead-time, and development costs. Potential uses of the resulting ontologies and FCA representations within product family design are also discussed.
    BibTeX:
    @article{Nanda2006,
      author = {Nanda, Jyotirmaya and Simpson, Timothy W. and Kumara, Soundar R. T. and Shooter, Steven B.},
      title = {A methodology for product family ontology development using formal concept analysis and Web ontology language},
      journal = {JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING},
      year = {2006},
      volume = {6},
      number = {2},
      pages = {103-113},
      doi = {{10.1115/1.2190237}}
    }
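
    The FCA step groups artifacts by shared properties: a formal concept is a pair (extent, intent) where the extent is exactly the set of objects having all attributes of the intent, and vice versa. A brute-force enumeration over a tiny invented camera context:

    # Brute-force Formal Concept Analysis over an object-attribute context.
    from itertools import combinations

    context = {  # camera model -> properties (invented toy data)
        "cam1": {"flash", "zoom"},
        "cam2": {"flash"},
        "cam3": {"flash", "zoom", "waterproof"},
    }

    def common_attrs(objs):  # attributes shared by all objects in objs
        return set.intersection(*(context[o] for o in objs)) if objs else set()

    def objs_with(attrs):    # objects possessing every attribute in attrs
        return frozenset(o for o, a in context.items() if attrs <= a)

    concepts = set()
    for r in range(len(context) + 1):
        for objs in combinations(sorted(context), r):
            extent = objs_with(common_attrs(set(objs)))  # close the object set
            concepts.add((extent, frozenset(common_attrs(extent))))
    for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
        print(sorted(extent), sorted(intent))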
    
    Narayanan, S. & McIlraith, S. Analysis and simulation of Web services {2003} COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING
    Vol. {42}({5}), pp. {675-693} 
    article DOI  
    Abstract: Web services-Web-accessible programs and devices-are a key application area for the Semantic Web. With the proliferation of Web services and the evolution towards the Semantic Web comes the opportunity to automate various Web services tasks. Our objective is to enable markup and automated reasoning technology to describe, simulate, compose, test, and verify compositions of Web services. We take as our starting point the DAML-S DAML + OIL ontology for describing the capabilities of Web services. We define the semantics for a relevant subset of DAML-S in terms of a first-order logical language. With the semantics in hand, we encode our service descriptions in a Petri Net formalism and provide decision procedures for Web service simulation, verification and composition. We also provide an analysis of the complexity of these tasks under different restrictions to the DAML-S composite services we can describe. Finally, we present an implementation of our analysis techniques. This implementation takes as input a DAML-S description of a Web service, automatically generates a Petri Net and performs the desired analysis. Such a tool has broad applicability both as a back end to existing manual Web service composition tools, and as a stand-alone tool for Web service developers. (C) 2003 Published by Elsevier Science B.V.
    BibTeX:
    @article{Narayanan2003,
      author = {Narayanan, S and McIlraith, S},
      title = {Analysis and simulation of Web services},
      journal = {COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING},
      year = {2003},
      volume = {42},
      number = {5},
      pages = {675-693},
      note = {11th International World Wide Web Conference, HONOLULU, HAWAII, MAY 07-11, 2002},
      doi = {{10.1016/S1389-1286(03)00228-7}}
    }
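
    The executable core of the analysis is the Petri net semantics: a transition is enabled when each of its input places holds a token, and firing consumes input tokens and produces output tokens. A minimal simulator with an invented two-step booking flow (not the paper's DAML-S encoding):

    # Minimal Petri net simulator: fire enabled transitions to quiescence.
    from collections import Counter

    transitions = {  # name -> (input places, output places)
        "reserve": (["request"], ["reservation"]),
        "pay": (["reservation", "funds"], ["ticket"]),
    }
    marking = Counter({"request": 1, "funds": 1})

    def enabled(ins):
        return all(marking[p] >= n for p, n in Counter(ins).items())

    fired = True
    while fired:
        fired = False
        for ins, outs in transitions.values():
            if enabled(ins):
                marking.subtract(Counter(ins))
                marking.update(Counter(outs))
                fired = True
    print(+marking)  # Counter({'ticket': 1}) after both steps fire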
    
    Nasraoui, O., Soliman, M., Saka, E., Badia, A. & Germain, R. A web usage mining framework for mining evolving user profiles in dynamic Web sites {2008} IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
    Vol. {20}({2}), pp. {202-215} 
    article DOI  
    Abstract: In this paper, we present a complete framework and findings in mining Web usage patterns from Web log files of a real Web site that has all the challenging aspects of real-life Web usage mining, including evolving user profiles and external data describing an ontology of the Web content. Even though the Web site under study is part of a nonprofit organization that does not ``sell'' any products, it was crucial to understand ``who'' the users were, ``what'' they looked at, and ``how their interests changed with time,'' all of which are important questions in Customer Relationship Management (CRM). Hence, we present an approach for discovering and tracking evolving user profiles. We also describe how the discovered user profiles can be enriched with explicit information need that is inferred from search queries extracted from Web log data. Profiles are also enriched with other domain-specific information facets that give a panoramic view of the discovered mass usage modes. An objective validation strategy is also used to assess the quality of the mined profiles, in particular their adaptability in the face of evolving user behavior.
    BibTeX:
    @article{Nasraoui2008,
      author = {Nasraoui, Olfa and Soliman, Maha and Saka, Esin and Badia, Antonio and Germain, Richard},
      title = {A web usage mining framework for mining evolving user profiles in dynamic Web sites},
      journal = {IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING},
      year = {2008},
      volume = {20},
      number = {2},
      pages = {202-215},
      note = {International Workshop on Customer Relationship Management - Data Mining Meets Marketing, New York, NY, NOV 18-19, 2005},
      doi = {{10.1109/TKDE.2007.190667}}
    }
    
    Navigli, R. & Velardi, P. Structural semantic interconnections: A knowledge-based approach to word sense disambiguation {2005} IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
    Vol. {27}({7}), pp. {1075-1086} 
    article  
    Abstract: Word Sense Disambiguation (WSD) is traditionally considered an AI-hard problem. A breakthrough in this field would have a significant impact on many relevant Web-based applications, such as Web information retrieval, improved access to Web services, information extraction, etc. Early approaches to WSD, based on knowledge representation techniques, have been replaced in the past few years by more robust machine learning and statistical techniques. The results of recent comparative evaluations of WSD systems, however, show that these methods have inherent limitations. On the other hand, the increasing availability of large-scale, rich lexical knowledge resources seems to provide new challenges to knowledge-based approaches. In this paper, we present a method, called structural semantic interconnections (SSI), which creates structural specifications of the possible senses for each word in a context and selects the best hypothesis according to a grammar G, describing relations between sense specifications. Sense specifications are created from several available lexical resources that we integrated in part manually, in part with the help of automatic procedures. The SSI algorithm has been applied to different semantic disambiguation problems, like automatic ontology population, disambiguation of sentences in generic texts, disambiguation of words in glossary definitions. Evaluation experiments have been performed on specific knowledge domains (e.g., tourism, computer networks, enterprise interoperability), as well as on standard disambiguation test sets.
    BibTeX:
    @article{Navigli2005,
      author = {Navigli, R and Velardi, P},
      title = {Structural semantic interconnections: A knowledge-based approach to word sense disambiguation},
      journal = {IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE},
      year = {2005},
      volume = {27},
      number = {7},
      pages = {1075-1086}
    }
    
    Navigli, R. & Velardi, P. Learning domain ontologies from document warehouses and dedicated web sites {2004} COMPUTATIONAL LINGUISTICS
    Vol. {30}({2}), pp. {151-179} 
    article  
    Abstract: We present a method and a tool, OntoLearn, aimed at the extraction of domain ontologies from Web sites, and more generally from documents shared among the members of virtual organizations. OntoLearn first extracts a domain terminology from available documents. Then, complex domain terms are semantically interpreted and arranged in a hierarchical fashion. Finally, a general-purpose ontology, WordNet, is trimmed and enriched with the detected domain concepts. The major novel aspect of this approach is semantic interpretation, that is, the association of a complex concept with a complex term. This involves finding the appropriate WordNet concept for each word of a terminological string and the appropriate conceptual relations that hold among the concept components. Semantic interpretation is based on a new word sense disambiguation algorithm, called structural semantic interconnections.
    BibTeX:
    @article{Navigli2004,
      author = {Navigli, R and Velardi, P},
      title = {Learning domain ontologies from document warehouses and dedicated web sites},
      journal = {COMPUTATIONAL LINGUISTICS},
      year = {2004},
      volume = {30},
      number = {2},
      pages = {151-179}
    }
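
    The first OntoLearn stage, extracting a domain terminology, is often approximated by contrastive corpus statistics: keep terms that are frequent in domain documents but rare in a general reference corpus. The sketch below implements that generic baseline, not OntoLearn's actual measures; the corpora are toy word lists.

    # Contrastive domain-term extraction: score terms by how strongly their
    # frequency concentrates in the domain corpus versus a general corpus.
    from collections import Counter

    domain = "room service room rate online booking booking fee".split()
    general = "the service of the state and the rate of change".split()

    d, g = Counter(domain), Counter(general)

    def domain_relevance(term):
        return d[term] / (d[term] + g[term])

    keep = [t for t in sorted(d, key=d.get, reverse=True)
            if domain_relevance(t) > 0.5]
    print(keep)  # ['room', 'booking', 'online', 'fee']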
    
    Navigli, R., Velardi, P. & Gangemi, A. Ontology learning and its automated terminology translation {2003} IEEE INTELLIGENT SYSTEMS
    Vol. {18}({1}), pp. {22-31} 
    article  
    BibTeX:
    @article{Navigli2003,
      author = {Navigli, R and Velardi, P and Gangemi, A},
      title = {Ontology learning and its automated terminology translation},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2003},
      volume = {18},
      number = {1},
      pages = {22-31}
    }
    
    Nejdl, W., Olmedilla, D. & Winslett, M. PeerTrust: Automated trust negotiation for peers on the semantic web {2004}
    Vol. {3178}SECURE DATA MANAGEMENT, PROCEEDINGS, pp. {118-132} 
    inproceedings  
    Abstract: Researchers have recently begun to develop and investigate policy languages to describe trust and security requirements on the Semantic Web. Such policies will be one component of a run-time system that can negotiate to establish trust on the Semantic Web. In this paper, we show how to express different kinds of access control policies and control their use at run time using PeerTrust, a new approach to trust establishment. We show how to use distributed logic programs as the basis for PeerTrust's simple yet expressive policy and trust negotiation language, built upon the rule layer of the Semantic Web layer cake. We describe the PeerTrust language based upon distributed logic programs, and compare it to other approaches to implementing policies and trust negotiation. Through examples, we show how PeerTrust can be used to support delegation, policy protection and negotiation strategies in the ELENA distributed eLearning environment. Finally, we discuss related work and identify areas for further research.
    BibTeX:
    @inproceedings{Nejdl2004,
      author = {Nejdl, W and Olmedilla, D and Winslett, M},
      title = {PeerTrust: Automated trust negotiation for peers on the semantic web},
      booktitle = {SECURE DATA MANAGEMENT, PROCEEDINGS},
      year = {2004},
      volume = {3178},
      pages = {118-132},
      note = {Secure Data Management Workshop, Toronto, CANADA, AUG, 2004}
    }
    
    Noy, N., Sintek, M., Decker, S., Crubezy, M., Fergerson, R. & Musen, M. Creating Semantic Web contents with Protege-2000 {2001} IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS
    Vol. {16}({2}), pp. {60-71} 
    article  
    BibTeX:
    @article{Noy2001,
      author = {Noy, NF and Sintek, M and Decker, S and Crubezy, M and Fergerson, RW and Musen, MA},
      title = {Creating Semantic Web contents with Protege-2000},
      journal = {IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS},
      year = {2001},
      volume = {16},
      number = {2},
      pages = {60-71}
    }
    
    Noy, N.F., Chugh, A., Liu, W. & Musen, M.A. A framework for ontology evolution in collaborative environments {2006}
    Vol. {4273}Semantic Web - ISWC 2006, Proceedings, pp. {544-558} 
    inproceedings  
    Abstract: With the wider use of ontologies in the Semantic Web and as part of production systems, multiple scenarios for ontology maintenance and evolution are emerging. For example, successive ontology versions can be posted on the (Semantic) Web, with users discovering the new versions serendipitously; ontology development in a collaborative environment can be synchronous or asynchronous; managers of projects may exercise quality control, examining changes from previous baseline versions and accepting or rejecting them before a new baseline is published, and so on. In this paper, we present different scenarios for ontology maintenance and evolution that we have encountered in our own projects and in those of our collaborators. We define several features that categorize these scenarios. For each scenario, we discuss the high-level tasks that an editing environment must support. We then present a unified comprehensive set of tools to support different scenarios in a single framework, allowing users to switch between different modes easily.
    BibTeX:
    @inproceedings{Noy2006,
      author = {Noy, Natalya F. and Chugh, Abhita and Liu, William and Musen, Mark A.},
      title = {A framework for ontology evolution in collaborative environments},
      booktitle = {Semantic Web - ISWC 2006, Proceedings},
      year = {2006},
      volume = {4273},
      pages = {544-558},
      note = {5th International Semantic Web Conference (ISWC 2006), Athens, GA, NOV 05-09, 2006}
    }
    
    O'Connor, M., Knublauch, H., Tu, S., Grosof, B., Dean, M., Grosso, W. & Musen, M. Supporting rule system interoperability on the semantic web with SWRL {2005}
    Vol. {3729}SEMANTIC WEB - ISWC 2005, PROCEEDINGS, pp. {974-986} 
    inproceedings  
    Abstract: Rule languages and rule systems are widely used in business applications including computer-aided training, diagnostic fact finding, compliance monitoring, and process control. However, there is little interoperability between current rule-based systems. Interoperation is one of the main goals of the Semantic Web, and developing a language for sharing rules is often seen as a key step in reaching this goal. The Semantic Web Rule Language (SWRL) is an important first step in defining such a rule language. This paper describes the development of a configurable interoperation environment for SWRL built in Protege-OWL, the most widely-used OWL development platform. This environment supports both a highly-interactive, full-featured editor for SWRL and a plugin mechanism for integrating third party rule engines. We have integrated the popular Jess rule engine into this environment, thus providing one of the first steps on the path to rule integration on the Web.
    BibTeX:
    @inproceedings{O'Connor2005,
      author = {O'Connor, MT and Knublauch, H and Tu, S and Grosof, B and Dean, M and Grosso, W and Musen, M},
      title = {Supporting rule system interoperability on the semantic web with SWRL},
      booktitle = {SEMANTIC WEB - ISWC 2005, PROCEEDINGS},
      year = {2005},
      volume = {3729},
      pages = {974-986},
      note = {4th International Semantic Web Conference (ISWC 2005), Galway, IRELAND, NOV 06-10, 2005}
    }
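
    SWRL rules are Horn-like implications over OWL classes and properties; the canonical example from the SWRL specification derives hasUncle from hasParent and hasBrother. The self-contained sketch below applies that one rule by naive forward chaining over a small fact set, illustrating rule semantics only, not the Protege-OWL editor or the Jess bridge.

    # Naive forward chaining of one SWRL-style rule:
    # hasParent(?x, ?y) ^ hasBrother(?y, ?z) -> hasUncle(?x, ?z)
    facts = {("hasParent", "alice", "bob"), ("hasBrother", "bob", "carl")}

    def apply_uncle_rule(facts):
        derived = {("hasUncle", x, z)
                   for (p, x, y1) in facts if p == "hasParent"
                   for (b, y2, z) in facts if b == "hasBrother" and y2 == y1}
        return facts | derived

    print(("hasUncle", "alice", "carl") in apply_uncle_rule(facts))  # True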
    
    OBRACZKA, K., DANZIG, P. & LI, S. INTERNET RESOURCE DISCOVERY SERVICES {1993} COMPUTER
    Vol. {26}({9}), pp. {8-22} 
    article  
    Abstract: This article presents an overview of resource discovery services currently available on the Internet. The authors concentrate on the following discovery tools: the Wide Area Information Servers (WAIS) project, Archie, Prospero, Gopher, the World-Wide Web (WWW), Netfind, the X.500 directory, Indie, the Knowbot Information Service (KIS), Alex, Semantic File Systems, and Nomenclator. These resource discovery tools specialize in browsing, searching, and organizing information distributed throughout the Internet. Browsing tools let users navigate the information space and find the specific data they need. Indexing search tools automatically locate relevant data on the basis of user interest. Independent of the approach used, resource discovery services can also help users organize newfound information so that they can refer to it without having to repeat the entire discovery process. The authors summarize the surveyed tools by presenting a taxonomy of their characteristics and design decisions. They also describe where to find and how to access several of the surveyed discovery services. They conclude with a discussion of future directions in the area of resource discovery and retrieval.
    BibTeX:
    @article{OBRACZKA1993,
      author = {OBRACZKA, K and DANZIG, PB and LI, SH},
      title = {INTERNET RESOURCE DISCOVERY SERVICES},
      journal = {COMPUTER},
      year = {1993},
      volume = {26},
      number = {9},
      pages = {8-22}
    }
    
    Oinn, T., Greenwood, M., Addis, M., Alpdemir, M.N., Ferris, J., Glover, K., Goble, C., Goderis, A., Hull, D., Marvin, D., Li, P., Lord, P., Pocock, M.R., Senger, M., Stevens, R., Wipat, A. & Wroe, C. Taverna: lessons in creating a workflow environment for the life sciences {2006} CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE
    Vol. {18}({10}), pp. {1067-1100} 
    article DOI  
    Abstract: Life sciences research is based on individuals, often with diverse skills, assembled into research groups. These groups use their specialist expertise to address scientific problems. The in silico experiments undertaken by these research groups can be represented as workflows involving the co-ordinated use of analysis programs and information repositories that may be globally distributed. With regard to Grid computing, the requirements relate to the sharing of analysis and information resources rather than sharing computational power. The (my)Grid project has developed the Taverna Workbench for the composition and execution of workflows for the life sciences community. This experience paper describes lessons learnt during the development of Taverna. A common theme is the importance of understanding how workflows fit into the scientists' experimental context. The lessons reflect an evolving understanding of life scientists' requirements on a workflow environment, which is relevant to other areas of data intensive and exploratory science. Copyright (c) 2005 John Wiley & Sons, Ltd.
    BibTeX:
    @article{Oinn2006,
      author = {Oinn, Tom and Greenwood, Mark and Addis, Matthew and Alpdemir, M. Nedim and Ferris, Justin and Glover, Kevin and Goble, Carole and Goderis, Antoon and Hull, Duncan and Marvin, Darren and Li, Peter and Lord, Phillip and Pocock, Matthew R. and Senger, Martin and Stevens, Robert and Wipat, Anil and Wroe, Chris},
      title = {Taverna: lessons in creating a workflow environment for the life sciences},
      journal = {CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE},
      year = {2006},
      volume = {18},
      number = {10},
      pages = {1067-1100},
      note = {GGF Workshop on Workflow in Grid Systems, Berlin, GERMANY, MAR 09, 2004},
      doi = {{10.1002/cpe.993}}
    }
    
    Oren, E., Delbru, R. & Decker, S. Extending faceted navigation for RDF data {2006}
    Vol. {4273}Semantic Web - ISWC 2006, Proceedings, pp. {559-572} 
    inproceedings  
    Abstract: Data on the Semantic Web is semi-structured and does not follow one fixed schema. Faceted browsing [23] is a natural technique for navigating such data, partitioning the information space into orthogonal conceptual dimensions. Current faceted interfaces are manually constructed and have limited query expressiveness. First, we develop an expressive faceted interface for semi-structured data and formally show the improvement over existing interfaces. Second, we develop metrics for automatic ranking of facet quality, bypassing the need for manual construction of the interface. We develop a prototype for faceted navigation of arbitrary RDF data. Experimental evaluation shows improved usability over current interfaces.
    BibTeX:
    @inproceedings{Oren2006,
      author = {Oren, Eyal and Delbru, Renaud and Decker, Stefan},
      title = {Extending faceted navigation for RDF data},
      booktitle = {Semantic Web - ISWC 2006, Proceedings},
      year = {2006},
      volume = {4273},
      pages = {559-572},
      note = {5th International Semantic Web Conference (ISWC 2006), Athens, GA, NOV 05-09, 2006}
    }
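
    Facets over RDF fall out of the predicates: each predicate partitions resources by its values. The sketch below derives facets from triples and ranks them with a simple balance heuristic (partition entropy); the paper's actual quality metrics differ, and the data is invented.

    # Derive facets (predicate -> value -> subjects) from triples and rank
    # predicates by the entropy of their value partition (toy heuristic).
    import math
    from collections import defaultdict

    triples = [("p1", "type", "Article"), ("p2", "type", "Article"),
               ("p3", "type", "Book"), ("p1", "year", "2005"),
               ("p2", "year", "2006"), ("p3", "year", "2006")]

    facets = defaultdict(lambda: defaultdict(set))
    for s, p, o in triples:
        facets[p][o].add(s)

    def balance(values):  # higher entropy = more evenly partitioned facet
        total = sum(len(v) for v in values.values())
        return -sum((len(v) / total) * math.log(len(v) / total)
                    for v in values.values())

    for pred in sorted(facets, key=lambda p: balance(facets[p]), reverse=True):
        print(pred, {v: sorted(subs) for v, subs in facets[pred].items()})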
    
    van Ossenbruggen, J., Nack, F. & Hardman, L. That obscure object of desire: Multimedia metadata on the Web, part 1 {2004} IEEE MULTIMEDIA
    Vol. {11}({4}), pp. {38-48} 
    article  
    Abstract: The Semantic Web and the Multimedia Content Description Interface (MPEG-7) are the two most widely known approaches toward machine-processable and semantic-based content description. The concepts and technologies behind the approaches are essential for the next step in multimedia development-that, is, providing multimedia metadata on the Web. Unfortunately, as this article discusses, many practical obstacles block their widespread use.
    BibTeX:
    @article{Ossenbruggen2004,
      author = {van Ossenbruggen, J and Nack, F and Hardman, L},
      title = {That obscure object of desire: Multimedia metadata on the Web, part 1},
      journal = {IEEE MULTIMEDIA},
      year = {2004},
      volume = {11},
      number = {4},
      pages = {38-48}
    }
    
    Pahl, C. An ontology for software component matching {2003}
    Vol. {2621}FUNDAMENTAL APPROACHES TO SOFTWARE ENGINEERING, PROCEEDINGS, pp. {6-21} 
    inproceedings  
    Abstract: The Web is likely to be a central platform for software development in the future. We investigate how Semantic Web technologies, in particular ontologies, can be utilised to support software component development in a Web environment. We use description logics, which underlie Semantic Web ontology languages such as DAML+OIL, to develop an ontology for matching requested and provided components. A link between modal logic and description logics will prove invaluable for the provision of reasoning support for component and service behaviour.
    BibTeX:
    @inproceedings{Pahl2003,
      author = {Pahl, C},
      title = {An ontology for software component matching},
      booktitle = {FUNDAMENTAL APPROACHES TO SOFTWARE ENGINEERING, PROCEEDINGS},
      year = {2003},
      volume = {2621},
      pages = {6-21},
      note = {Joint European Conference on Theory and Practice of Software (ETAPS 2003), WARSAW, POLAND, APR 05-13, 2003}
    }
    
    Palopoli, L., Terracina, G. & Ursino, D. A graph-based approach for extracting terminological properties of elements of XML documents {2001} 17TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING, PROCEEDINGS, pp. {330-337}  inproceedings  
    Abstract: XML is rapidly becoming a standard for information exchange over the Web. Web providers and applications using XML for representing and exchanging their data make their information available in such a way that interoperability can be easily reached. However, in order to guarantee both the exchange of XML documents and the interoperability between information providers, it is often necessary to single out semantic similarity properties relating concepts of different XML documents. This paper gives a contribution in this framework by proposing a technique for extracting synonymies and homonymies. The derivation technique is based on a rich conceptual model (called SDR-Network) which is used to represent concepts expressed in XML documents as well as the semantic relationships holding among them.
    BibTeX:
    @inproceedings{Palopoli2001,
      author = {Palopoli, L and Terracina, G and Ursino, D},
      title = {A graph-based approach for extracting terminological properties of elements of XML documents},
      booktitle = {17TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING, PROCEEDINGS},
      year = {2001},
      pages = {330-337},
      note = {17th International Conference on Data Engineering, HEIDELBERG, GERMANY, APR 02-06, 2001}
    }
    
    Pan, J.Z. A flexible ontology reasoning architecture for the Semantic Web {2007} IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
    Vol. {19}({2}), pp. {246-260} 
    article  
    Abstract: Knowledge-based systems in the Semantic Web era can make use of the power of the Semantic Web languages and technologies, in particular those related to ontologies. Recent research has shown that user-defined data types are very useful for Semantic Web and ontology applications. The W3C Semantic Web Best Practices and Development Working Group has set up a task force to address this issue. Very recently, OWL-Eu and OWL-E, two decidable extensions of the W3C standard ontology language OWL DL, have been proposed to support customized data types and customized data type predicates, respectively. In this paper, we propose a flexible reasoning architecture for these two expressive Semantic Web ontology languages and describe our prototype implementation of the reasoning architecture, based on the well-known FaCT DL reasoner, which witnesses the two key flexibility features of our proposed architecture: 1) It allows users to define their own data types and data type predicates based on built-in ones and 2) new data type reasoners can be added into the architecture without having to change the concept reasoner.
    BibTeX:
    @article{Pan2007,
      author = {Pan, Jeff Z.},
      title = {A flexible ontology reasoning architecture for the Semantic Web},
      journal = {IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING},
      year = {2007},
      volume = {19},
      number = {2},
      pages = {246-260}
    }
    
    Pan, R., Ding, Z., Yu, Y. & Peng, Y. A Bayesian network approach to ontology mapping {2005}
    Vol. {3729}SEMANTIC WEB - ISWC 2005, PROCEEDINGS, pp. {563-577} 
    inproceedings  
    Abstract: This paper presents our ongoing effort on developing a principled methodology for automatic ontology mapping based on BayesOWL, a probabilistic framework we developed for modeling uncertainty on the Semantic Web. In this approach, the source and target ontologies are first translated into Bayesian networks (BN); the concept mapping between the two ontologies is treated as evidential reasoning between the two translated BNs. Probabilities needed for constructing conditional probability tables (CPT) during translation and for measuring semantic similarity during mapping are learned using text classification techniques, where each concept in an ontology is associated with a set of semantically relevant text documents obtained by ontology-guided web mining. The basic ideas of this approach are validated by positive results from computer experiments on two small real-world ontologies.
    BibTeX:
    @inproceedings{Pan2005,
      author = {Pan, R and Ding, ZL and Yu, Y and Peng, Y},
      title = {A Bayesian network approach to ontology mapping},
      booktitle = {SEMANTIC WEB - ISWC 2005, PROCEEDINGS},
      year = {2005},
      volume = {3729},
      pages = {563-577},
      note = {4th International Semantic Web Conference (ISWC 2005), Galway, IRELAND, NOV 06-10, 2005}
    }
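
    The probabilities that seed the mapping are learned from text: each concept is associated with a set of relevant documents, and conditional probabilities are estimated from document overlap. The sketch below shows only that estimation step, over invented document-ID sets; the BN translation and evidential reasoning are elided.

    # Estimate P(target | source) from documents associated with concepts,
    # the kind of statistic used to fill CPTs for ontology mapping (toy data).
    source_docs = {"Vehicle": {1, 2, 3}, "Boat": {3, 4}}
    target_docs = {"Watercraft": {3, 4, 5}, "Car": {1, 2}}

    def cond_prob(target, source):
        overlap = source_docs[source] & target_docs[target]
        return len(overlap) / len(source_docs[source])

    for s in source_docs:
        best = max(target_docs, key=lambda t: cond_prob(t, s))
        print(s, "->", best, round(cond_prob(best, s), 2))  # Vehicle -> Car 0.67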
    
    Paolucci, M., Kawamura, T., Payne, T. & Sycara, K. Importing the semantic web in UDDI {2002}
    Vol. {2512}WEB SERVICES, E-BUSINESS, AND THE SEMANTIC WEB, pp. {225-236} 
    inproceedings  
    Abstract: The web is moving from being a collection of pages toward a collection of services that interoperate through the Internet. A fundamental step toward this interoperation is the ability to automatically locate services on the basis of the functionalities that they provide. Such functionality would allow services to locate each other and automatically interoperate. Location of web services is inherently a semantic problem, because it has to abstract from the superficial differences between representations of the services provided and the services requested in order to recognize semantic similarities between the two. Current Web Services technology based on UDDI and WSDL does not make any use of semantic information; it therefore fails to match the capabilities of services against the functionalities sought, and so cannot support semantic location of web services. Nevertheless, previous work within DAML-S, a DAML-based language for service description, shows how ontological information collected through the semantic web can be used to match service capabilities. This work expands on that by showing how DAML-S Service Profiles, which describe service capabilities within DAML-S, can be mapped into UDDI records, thereby providing a way to record semantic information within UDDI records. Furthermore, we show how this encoded information can be used within the UDDI registry to perform semantic matching.
    BibTeX:
    @inproceedings{Paolucci2002a,
      author = {Paolucci, M and Kawamura, T and Payne, TR and Sycara, K},
      title = {Importing the semantic web in UDDI},
      booktitle = {WEB SERVICES, E-BUSINESS, AND THE SEMANTIC WEB},
      year = {2002},
      volume = {2512},
      pages = {225-236},
      note = {Workshop on Web Services, E-Business and the Semantic Web held in conjunction with the 14th International Conference on Advanced Information Systems Engineering (CAiSE02), TORONTO, CANADA, MAY 27-28, 2002}
    }
    
    Paolucci, M., Kawamura, T., Payne, T. & Sycara, K. Semantic matching of Web services capabilities {2002}
    Vol. {2342}SEMANTIC WEB - ISWC 2002, pp. {333-347} 
    inproceedings  
    Abstract: The Web is moving from being a collection of pages toward a collection of services that interoperate through the Internet. The first step toward this interoperation is the location of other services that can help toward the solution of a problem. In this paper we claim that location of web services should be based on the semantic match between a declarative description of the service being sought, and a description of the service being offered. Furthermore, we claim that this match is outside the representation capabilities of registries such as UDDI and languages such as WSDL. We propose a solution based on DAML-S, a DAML-based language for service description, and we show how service capabilities are presented in the Profile section of a DAML-S description and how a semantic match between advertisements and requests is performed.
    BibTeX:
    @inproceedings{Paolucci2002,
      author = {Paolucci, M and Kawamura, T and Payne, TR and Sycara, K},
      title = {Semantic matching of Web services capabilities},
      booktitle = {SEMANTIC WEB - ISWC 2002},
      year = {2002},
      volume = {2342},
      pages = {333-347},
      note = {1st International Semantic Web Conference (ISWC), SARDINIA, ITALY, JUN 09-12, 2002}
    }
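
    The matcher in this line of work grades candidate services by subsumption between requested and advertised concepts, using degrees such as exact, plug-in, subsumes, and fail. The sketch below reconstructs that grading over an invented subsumption hierarchy; the assignment of subsumption directions to degrees here is one plausible reading, not a verbatim transcription of the paper's definitions.

    # Degree-of-match between requested and advertised output concepts,
    # graded via a toy subsumption hierarchy (child -> parent).
    parents = {"SUV": "Car", "Sedan": "Car", "Car": "Vehicle"}

    def ancestors(c):
        seen = set()
        while c in parents:
            c = parents[c]
            seen.add(c)
        return seen

    def degree_of_match(requested, advertised):
        if requested == advertised:
            return "exact"
        if requested in ancestors(advertised):
            return "plug-in"   # advertised concept is more specific
        if advertised in ancestors(requested):
            return "subsumes"  # advertised concept is more general
        return "fail"

    print(degree_of_match("Car", "SUV"), degree_of_match("Car", "Vehicle"))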
    
    Paolucci, M. & Sycara, K. Autonomous semantic web services {2003} IEEE INTERNET COMPUTING
    Vol. {7}({5}), pp. {34-41} 
    article  
    BibTeX:
    @article{Paolucci2003,
      author = {Paolucci, M and Sycara, K},
      title = {Autonomous semantic web services},
      journal = {IEEE INTERNET COMPUTING},
      year = {2003},
      volume = {7},
      number = {5},
      pages = {34-41}
    }
    
    Parsons, T., Rogers, S., Braaten, A., Woods, S. & Troster, A. Cognitive sequelae of subthalamic nucleus deep brain stimulation in Parkinson's disease: a meta-analysis {2006} LANCET NEUROLOGY
    Vol. {5}({7}), pp. {578-588} 
    article DOI  
    Abstract: Background Deep brain stimulation of the subthalamic nucleus (STN DBS) is an increasingly common treatment for Parkinson's disease. Qualitative reviews have concluded that diminished verbal fluency is common after STN DBS, but that changes in global cognitive abilities, attention, executive functions, and memory are only inconsistently observed and, when present, often nominal or transient. We did a quantitative meta-analysis to improve understanding of the variability and clinical significance of cognitive dysfunction after STN DBS. Methods We searched MedLine, PsycLIT, and ISI Web of Science electronic databases for articles published between 1990 and 2006, and extracted information about number of patients, exclusion criteria, confirmation of target by microelectrode recording, verification of electrode placement via radiographic means, stimulation parameters, assessment time points, assessment measures, whether patients were on levodopa or dopaminomimetics, and summary statistics needed for computation of effect sizes. We used the random-effects meta-analytical model to assess continuous outcomes before and after STN DBS. Findings Of 40 neuropsychological studies identified, 28 cohort studies (including 612 patients) were eligible for inclusion in the meta-analysis. After adjusting for heterogeneity of variance in study effect sizes, the random effects meta-analysis revealed significant, albeit small, declines in executive functions and verbal learning and memory. Moderate declines were only reported in semantic (Cohen's d = 0.73) and phonemic verbal fluency (0.51). Changes in verbal fluency were not related to patient age, disease duration, stimulation parameters, or change in dopaminomimetic dose after surgery. Interpretation STN DBS, in selected patients, seems relatively safe from a cognitive standpoint. However, difficulty in identification of factors underlying changes in verbal fluency draws attention to the need for uniform and detailed reporting of patient selection, demographic, disease, treatment, surgical, stimulation, and clinical outcome parameters.
    BibTeX:
    @article{Parsons2006,
      author = {Parsons, TD and Rogers, SA and Braaten, AJ and Woods, SP and Troster, AI},
      title = {Cognitive sequelae of subthalamic nucleus deep brain stimulation in Parkinson's disease: a meta-analysis},
      journal = {LANCET NEUROLOGY},
      year = {2006},
      volume = {5},
      number = {7},
      pages = {578-588},
      doi = {{10.1016/S1474-4422(06)70475-6}}
    }
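
    The ``random-effects meta-analytical model'' named in the Methods can be made concrete with a short sketch. Below is a DerSimonian-Laird pooling in Python, a standard random-effects estimator; the effect sizes and variances are invented for illustration and are not the study's data.

    # DerSimonian-Laird random-effects pooling (illustrative numbers only).
    import numpy as np

    d = np.array([0.45, 0.60, 0.30, 0.75])  # per-study Cohen's d (hypothetical)
    v = np.array([0.04, 0.02, 0.05, 0.03])  # per-study sampling variances

    w = 1.0 / v                              # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fixed) ** 2)       # heterogeneity statistic
    k = len(d)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_star = 1.0 / (v + tau2)                # random-effects weights
    d_random = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    print(f"pooled d = {d_random:.2f} +/- {1.96 * se:.2f} (tau^2 = {tau2:.3f})")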
    
    Payne, T. & Lassila, O. Semantic Web services {2004} IEEE INTELLIGENT SYSTEMS
    Vol. {19}({4}), pp. {14-15} 
    article  
    BibTeX:
    @article{Payne2004,
      author = {Payne, T and Lassila, O},
      title = {Semantic Web services},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2004},
      volume = {19},
      number = {4},
      pages = {14-15}
    }
    
    Payne, T., Singh, R. & Sycara, K. Calendar agents on the Semantic Web {2002} IEEE INTELLIGENT SYSTEMS
    Vol. {17}({3}), pp. {84-86} 
    article  
    BibTeX:
    @article{Payne2002,
      author = {Payne, TR and Singh, R and Sycara, K},
      title = {Calendar agents on the Semantic Web},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2002},
      volume = {17},
      number = {3},
      pages = {84-86}
    }
    
    Petridis, K., Bloehdorn, S., Saathoff, C., Simou, N., Dasiopoulou, S., Tzouvaras, V., Handschuh, S., Avrithis, Y., Kompatsiaris, Y. & Staab, S. Knowledge representation and semantic annotation of multimedia content {2006} IEE PROCEEDINGS-VISION IMAGE AND SIGNAL PROCESSING
    Vol. {153}({3}), pp. {255-262} 
    article DOI  
    Abstract: Knowledge representation and semantic annotation of multimedia documents typically have been pursued in two different directions. Previous approaches have focused either on low-level descriptors, such as dominant colour, or on the semantic content dimension and corresponding manual annotations, such as person or vehicle. Here, a knowledge infrastructure and an experimentation platform for semantic annotation to bridge the two directions are presented. Ontologies are being extended and enriched to include low-level audiovisual features and descriptors. Additionally, a tool that allows for linking low-level MPEG-7 visual descriptions to ontologies and annotations is presented. Thus, ontologies that include prototypical instances of high-level domain concepts together with a formal specification of the corresponding visual descriptors are constructed. This infrastructure is exploited by a knowledge-assisted analysis framework that may handle problems such as segmentation, tracking, feature extraction and matching in order to classify scenes, identify and label objects and thus automatically create the associated semantic metadata.
    BibTeX:
    @article{Petridis2006,
      author = {Petridis, K. and Bloehdorn, S. and Saathoff, C. and Simou, N. and Dasiopoulou, S. and Tzouvaras, V. and Handschuh, S. and Avrithis, Y. and Kompatsiaris, Y. and Staab, S.},
      title = {Knowledge representation and semantic annotation of multimedia content},
      journal = {IEE PROCEEDINGS-VISION IMAGE AND SIGNAL PROCESSING},
      year = {2006},
      volume = {153},
      number = {3},
      pages = {255-262},
      doi = {{10.1049/ip-vis:20050059}}
    }
    
    Philippi, S. & Koehler, J. Addressing the problems with life-science databases for traditional uses and systems biology {2006} NATURE REVIEWS GENETICS
    Vol. {7}({6}), pp. {482-488} 
    article DOI  
    Abstract: A prerequisite to systems biology is the integration of heterogeneous experimental data, which are stored in numerous life-science databases. However, a wide range of obstacles that relate to access, handling and integration impede the efficient use of the contents of these databases. Addressing these issues will not only be essential for progress in systems biology, it will also be crucial for sustaining the more traditional uses of life-science databases.
    BibTeX:
    @article{Philippi2006,
      author = {Philippi, Stephan and Koehler, Jacob},
      title = {Addressing the problems with life-science databases for traditional uses and systems biology},
      journal = {NATURE REVIEWS GENETICS},
      year = {2006},
      volume = {7},
      number = {6},
      pages = {482-488},
      doi = {{10.1038/nrg1872}}
    }
    
    Pinto, H. & Martins, J. Ontologies: How can they be built? {2004} KNOWLEDGE AND INFORMATION SYSTEMS
    Vol. {6}({4}), pp. {441-464} 
    article DOI  
    Abstract: Ontologies are an important component in many areas, such as knowledge management and organization, electronic commerce and information retrieval and extraction. Several methodologies for ontology building have been proposed. In this article, we provide an overview of ontology building. We start by characterizing the ontology building process and its life cycle. We present the most representative methodologies for building ontologies from scratch, and the proposed techniques, guidelines and methods to help in the construction task. We analyze and compare these methodologies. We describe current research issues in ontology reuse. Finally, we discuss the current trends in ontology building and its future challenges, namely, the new issues for building ontologies for the Semantic Web.
    BibTeX:
    @article{Pinto2004,
      author = {Pinto, HS and Martins, JP},
      title = {Ontologies: How can they be built?},
      journal = {KNOWLEDGE AND INFORMATION SYSTEMS},
      year = {2004},
      volume = {6},
      number = {4},
      pages = {441-464},
      doi = {{10.1007/s10115-003-0138-1}}
    }
    
    Popov, B., Kiryakov, A., Kirilov, A., Manov, D., Ognyanoff, D. & Goranov, M. KIM - Semantic annotation platform {2003}
    Vol. {2870}SEMANTIC WEB - ISWC 2003, pp. {834-849} 
    inproceedings  
    Abstract: The KIM platform provides a novel Knowledge and Information Management infrastructure and services for automatic semantic annotation, indexing, and retrieval of documents. It provides a mature infrastructure for scalable and customizable information extraction (IE) as well as annotation and document management, based on GATE. In order to provide a basic level of performance and allow easy bootstrapping of applications, KIM is equipped with an upper-level ontology and a knowledge base providing extensive coverage of entities of general importance. The ontologies and knowledge bases involved are handled using cutting-edge Semantic Web technology and standards, including RDF(S) repositories, ontology middleware and reasoning. From a technical point of view, the platform allows KIM-based applications to use it for automatic semantic annotation, content retrieval based on semantic restrictions, and querying and modifying the underlying ontologies and knowledge bases. This paper presents the KIM platform, with emphasis on its architecture, interfaces, tools, and other technical issues.
    BibTeX:
    @inproceedings{Popov2003,
      author = {Popov, B and Kiryakov, A and Kirilov, A and Manov, D and Ognyanoff, D and Goranov, M},
      title = {KIM - Semantic annotation platform},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {834-849},
      note = {2nd International Semantic Web Conference, SANIBEL, FLORIDA, OCT 20-23, 2003}
    }
    
    Poshyvanyk, D., Gueheneuc, Y.-G., Marcus, A., Antoniol, G. & Rajlich, V. Feature location using probabilistic ranking of methods based on execution scenarios and information retrieval {2007} IEEE TRANSACTIONS ON SOFTWARE ENGINEERING
    Vol. {33}({6}), pp. {420-432} 
    article DOI  
    Abstract: This paper recasts the problem of feature location in source code as a decision-making problem in the presence of uncertainty. The solution to the problem is formulated as a combination of the opinions of different experts. The experts in this work are two existing techniques for feature location: a scenario-based probabilistic ranking of events and an information-retrieval-based technique that uses Latent Semantic Indexing. The combination of these two experts is empirically evaluated through several case studies, which use the source code of the Mozilla Web browser and the Eclipse integrated development environment. The results show that the combination of experts significantly improves the effectiveness of feature location as compared to each of the experts used independently.
    BibTeX:
    @article{Poshyvanyk2007,
      author = {Poshyvanyk, Denys and Gueheneuc, Yann-Gael and Marcus, Andrian and Antoniol, Giuliano and Rajlich, Vaclav},
      title = {Feature location using probabilistic ranking of methods based on execution scenarios and information retrieval},
      journal = {IEEE TRANSACTIONS ON SOFTWARE ENGINEERING},
      year = {2007},
      volume = {33},
      number = {6},
      pages = {420-432},
      doi = {{10.1109/TSE.2007.1016}}
    }
    
    Proctor, R., Vu, K., Salvendy, G., Degen, H., Fang, X., Flach, J., Gott, S., Herrmann, D., Kromker, H., Lightner, N., Lubin, K., Najjar, L., Reeves, L., Rudorfer, A., Stanney, K., Stephanidis, C., Strybel, T., Vaughan, M., Wang, H., Weber, H., Yang, Y. & Zhu, W. Content preparation and management for web design: Eliciting, structuring, searching, and displaying information {2002} INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION
    Vol. {14}({1}), pp. {25-92} 
    article  
    Abstract: The vast amount of information available through the Web has made it difficult to retrieve information relevant to a specific task. To help ensure that users' interactions with a system are successful, preparation of content and its presentation to users must take into account (a) what information needs to be extracted, (b) the way in which this information should be stored and organized, (c) the methods for retrieving the information, and (d) how the information should be displayed. The goal of this article is to discuss the generic problems facing content preparation and evaluate the current methods available to help remedy them, as well as identify areas in which more research is needed. The material presented in this article was a result of the collective efforts of the participants of a special ``white paper'' session that was part of the 9th International Conference on Human-Computer Interaction (HCI International 2001).
    BibTeX:
    @article{Proctor2002,
      author = {Proctor, RW and Vu, KPL and Salvendy, G and Degen, H and Fang, XW and Flach, JM and Gott, SP and Herrmann, D and Kromker, H and Lightner, NJ and Lubin, K and Najjar, L and Reeves, L and Rudorfer, A and Stanney, K and Stephanidis, C and Strybel, TZ and Vaughan, M and Wang, HF and Weber, H and Yang, YX and Zhu, WL},
      title = {Content preparation and management for web design: Eliciting, structuring, searching, and displaying information},
      journal = {INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION},
      year = {2002},
      volume = {14},
      number = {1},
      pages = {25-92}
    }
    
    Pundt, H. & Bishr, Y. Domain ontologies for data sharing-an example from environmental monitoring using field GIS {2002} COMPUTERS & GEOSCIENCES
    Vol. {28}({1}), pp. {95-102} 
    article  
    Abstract: Different geospatial information communities, public authorities as well as private institutions, increasingly recognize the World Wide Web as a medium to distribute their data. With the occurrence of national laws that push authorities to make environmental data accessible, Internet-based services have to be developed to enable the public to obtain information digitally. Dissemination of data is only one side of the coin. The other side is the use of such data. The use requires mechanisms to share data via networks. Lack of semantic interoperability has been identified as the main obstacle for data sharing. Research, however, must develop methods to overcome the problems of sharing data considering their semantics. Ontologies are considered to be one approach to support data sharing. This paper describes the use of ontologies via the Internet based on an example from field-GIS-supported environmental monitoring. The basic idea is that the members of different information communities get access to the meaning of data if they can approach the ontologies that have been developed by those who collected the data. This might be possible by applying the Resource Description Framework (RDF) and RDF Schema. RDF can be used to define and structure terms and vocabulary used in a specific information community. The goal of the paper is to examine the role of ontologies based on the study of a particular application domain, namely stream surveying. The use of RDF Schema is described in relation to the example. (C) 2002 Elsevier Science Ltd. All rights reserved.
    BibTeX:
    @article{Pundt2002,
      author = {Pundt, H and Bishr, Y},
      title = {Domain ontologies for data sharing-an example from environmental monitoring using field GIS},
      journal = {COMPUTERS & GEOSCIENCES},
      year = {2002},
      volume = {28},
      number = {1},
      pages = {95-102},
      note = {Workshop on Intelligent Methods for Processing Space and Time-Related Data, MAGDEBURG, GERMANY, 1999}
    }
    
    Quilitz, B. & Leser, U. Querying distributed RDF data sources with SPARQL {2008}
    Vol. {5021}SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS, pp. {524-538} 
    inproceedings  
    Abstract: Integrated access to multiple distributed and autonomous RDF data sources is a key challenge for many semantic web applications. As a reaction to this challenge, SPARQL, the W3C Recommendation for an RDF query language, supports querying of multiple RDF graphs. However, the current standard does not provide transparent query federation, which makes query formulation hard and lengthy. Furthermore, current implementations of SPARQL load all RDF graphs mentioned in a query to the local machine. This usually incurs a large overhead in network traffic, and sometimes is simply impossible for technical or legal reasons. To overcome these problems we present DARQ, an engine for federated SPARQL queries. DARQ provides transparent query access to multiple SPARQL services, i.e., it gives the user the impression of querying one single RDF graph even though the data is actually distributed across the web. A service description language enables the query engine to decompose a query into sub-queries, each of which can be answered by an individual service. DARQ also uses query rewriting and cost-based query optimization to speed up query execution. Experiments show that these optimizations significantly improve query performance even when only a very limited amount of statistical information is available. DARQ is available under the GPL License at http://darq.sf.net/.
    BibTeX:
    @inproceedings{Quilitz2008,
      author = {Quilitz, Bastian and Leser, Ulf},
      title = {Querying distributed RDF data sources with SPARQL},
      booktitle = {SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS},
      year = {2008},
      volume = {5021},
      pages = {524-538},
      note = {5th European Semantic Web Conference, Tenerife, SPAIN, JUN 01, 2008}
    }
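
    The decomposition step described above, in which service descriptions tell the engine which endpoint can answer which triple pattern, can be sketched as follows. The endpoints, predicates, and routing rule are illustrative assumptions, not DARQ's actual implementation.

    # Route each triple pattern to an endpoint whose (simplified) service
    # description covers its predicate; the partial results would then be
    # joined locally. Endpoints and predicates are invented for illustration.
    SERVICE_DESCRIPTIONS = {
        "http://example.org/sparql/people": {"foaf:name", "foaf:mbox"},
        "http://example.org/sparql/pubs": {"dc:creator", "dc:title"},
    }

    def route(triple_patterns):
        """Assign each (subject, predicate, object) pattern to an endpoint."""
        plan = {}
        for pattern in triple_patterns:
            _, predicate, _ = pattern
            for endpoint, predicates in SERVICE_DESCRIPTIONS.items():
                if predicate in predicates:
                    plan.setdefault(endpoint, []).append(pattern)
                    break
            else:
                raise ValueError(f"no endpoint advertises {predicate}")
        return plan

    query = [("?x", "foaf:name", '"Alice"'), ("?pub", "dc:creator", "?x")]
    for endpoint, patterns in route(query).items():
        print(endpoint, "<-", patterns)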
    
    Rahwan, I., Zablith, F. & Reed, C. Laying the foundations for a world wide argument web {2007} ARTIFICIAL INTELLIGENCE
    Vol. {171}({10-15}), pp. {897-921} 
    article DOI  
    Abstract: This paper lays theoretical and software foundations for a World Wide Argument Web (WWAW): a large-scale Web of interconnected arguments posted by individuals to express their opinions in a structured manner. First, we extend the recently proposed Argument Interchange Format (AIF) to express arguments with a structure based on Walton's theory of argumentation schemes. Then, we describe an implementation of this ontology using the RDF Schema Semantic Web-based ontology language, and demonstrate how our ontology enables the representation of networks of arguments on the Semantic Web. Finally, we present a pilot Semantic Web-based system, ArgDF, through which users can create arguments using different argumentation schemes and can query arguments using a Semantic Web query language. Manipulation of existing arguments is also handled in ArgDF: users can attack or support parts of existing arguments, or use existing parts of an argument in the creation of new arguments. ArgDF also enables users to create new argumentation schemes. As such, ArgDF is an open platform not only for representing arguments, but also for building interlinked and dynamic argument networks on the Semantic Web. This initial public-domain tool is intended to seed a variety of future applications for authoring, linking, navigating, searching, and evaluating arguments on the Web. (c) 2007 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Rahwan2007,
      author = {Rahwan, Iyad and Zablith, Fouad and Reed, Chris},
      title = {Laying the foundations for a world wide argument web},
      journal = {ARTIFICIAL INTELLIGENCE},
      year = {2007},
      volume = {171},
      number = {10-15},
      pages = {897-921},
      doi = {{10.1016/j.artint.2007.04.015}}
    }
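
    As a hedged illustration of the kind of RDF representation ArgDF builds on, the sketch below encodes one argument instantiating an argumentation scheme using the rdflib library. The namespace and property names are hypothetical stand-ins, not the actual AIF/ArgDF vocabulary.

    # One argument as RDF triples (illustrative vocabulary, not ArgDF's).
    from rdflib import Graph, Literal, Namespace, RDF

    AIF = Namespace("http://example.org/aif#")  # hypothetical namespace
    g = Graph()

    arg = AIF.arg1
    g.add((arg, RDF.type, AIF.RuleApplication))  # an inference step
    g.add((arg, AIF.usesScheme, AIF.ArgumentFromExpertOpinion))
    g.add((arg, AIF.hasPremise, Literal("Expert E asserts that p")))
    g.add((arg, AIF.hasConclusion, Literal("p")))

    print(g.serialize(format="turtle"))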
    
    Ralph, M.A.L. & Patterson, K. Generalization and differentiation in semantic memory - Insights from semantic dementia {2008}
    Vol. {1124}YEAR IN COGNITIVE NEUROSCIENCE 2008, pp. {61-76} 
    incollection DOI  
    Abstract: According to many theories, semantic representations reflect the parallel activation of information coded across a distributed set of modality-specific association brain cortices. This view is challenged by the neurodegenerative condition known as semantic dementia (SD), in which relatively circumscribed, bilateral atrophy of the anterior temporal lobes results in selective degradation of core semantic knowledge, affecting all types of concept, irrespective of the modality of testing. Research on SD suggests a major revision in our understanding of the neural basis of semantic memory. Specifically, it is proposed that the anterior temporal lobes form amodal semantic representations through the distillation of the multimodal information that is projected to this region from the modality-specific association cortices. Although cross-indexing of modality-specific information could be achieved by a web of direct connections between pairs of these regions, amodal semantic representations enable semantic generalization and inference on the basis of conceptual structure rather than modality-specific features. As expected from this hypothesis, SD is characterized by impaired semantic generalization, both clinically and in formal assessment. The article describes a comprehensive array of under- and overgeneralization errors by patients with SD when engaged in receptive and expressive verbal and nonverbal tasks and everyday behaviors.
    BibTeX:
    @incollection{Ralph2008,
      author = {Ralph, Matthew A. Lambon and Patterson, Karalyn},
      title = {Generalization and differentiation in semantic memory - Insights from semantic dementia},
      booktitle = {YEAR IN COGNITIVE NEUROSCIENCE 2008},
      year = {2008},
      volume = {1124},
      pages = {61-76},
      doi = {{10.1196/annals.1440.006}}
    }
    
    Rao, J., Kungas, P. & Matskin, M. Composition of semantic web services using linear logic theorem proving {2006} INFORMATION SYSTEMS
    Vol. {31}({4-5}), pp. {340-360} 
    article DOI  
    Abstract: This paper introduces a method for automatic composition of Semantic Web services using Linear Logic (LL) theorem proving. The method uses a Semantic Web service language (DAML-S) for external presentation of Web services, while, internally, the services are presented by extralogical axioms and proofs in LL. LL, as a resource conscious logic, enables us to capture the concurrent features of Web services formally (including parameters, states and non-functional attributes). We use a process calculus to present the process model of the composite service. The process calculus is attached to the LL inference rules in the style of type theory. Thus, the process model for a composite service can be generated directly from the complete proof. We introduce a set of subtyping rules that defines a valid dataflow for composite services. The subtyping rules that are used for semantic reasoning are presented with LL inference figures. We propose a system architecture where the DAML-S Translator, LL Theorem Prover and Semantic Reasoner can operate together. This architecture has been implemented in Java. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Rao2006,
      author = {Rao, JH and Kungas, P and Matskin, M},
      title = {Composition of semantic web services using linear logic theorem proving},
      journal = {INFORMATION SYSTEMS},
      year = {2006},
      volume = {31},
      number = {4-5},
      pages = {340-360},
      doi = {{10.1016/j.is.2005.02.005}}
    }
    
    Raskin, R. & Pan, M. Knowledge representation in the semantic web for Earth and environmental terminology (SWEET) {2005} COMPUTERS & GEOSCIENCES
    Vol. {31}({9}), pp. {1119-1125} 
    article DOI  
    Abstract: The semantic web for Earth and environmental terminology (SWEET) is an investigation in improving discovery and use of Earth science data, through software understanding of the semantics of web resources. Semantic understanding is enabled through the use of ontologies, or formal representations of technical concepts and their interrelations in a form that supports domain knowledge. The ultimate vision of the semantic web consists of web pages with XML namespace tags around terms, enabling search tools to ascertain their meanings by following the link to the defining ontologies. Such a scenario both reduces the number of false hits (where a search returns alternative, unintended meanings of a term) and increases the number of successful hits (where the searcher and information provider use different syntax for the same concept). For SWEET, we developed a collection of ontologies using the web ontology language (OWL) that include both orthogonal concepts (space, time, Earth realms, physical quantities, etc.) and integrative science knowledge concepts (phenomena, events, etc.). This paper describes the development of a knowledge space for Earth system science and related concepts (such as data properties). Some of the ontology contents are ``virtual'' by means of an OWL wrapper associated with terms in large external databases (including gazetteers and earthquake databases). We developed a search tool that finds alternative search terms (based on the semantics) and redirects the expanded set of terms to a search engine. (C) 2005 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Raskin2005,
      author = {Raskin, RG and Pan, MJ},
      title = {Knowledge representation in the semantic web for Earth and environmental terminology (SWEET)},
      journal = {COMPUTERS & GEOSCIENCES},
      year = {2005},
      volume = {31},
      number = {9},
      pages = {1119-1125},
      doi = {{10.1016/j.cageo.2004.12.004}}
    }
    
    Ravasz, E. & Barabasi, A. Hierarchical organization in complex networks {2003} PHYSICAL REVIEW E
    Vol. {67}({2, Part 2}) 
    article DOI  
    Abstract: Many real networks in nature and society share two generic properties: they are scale-free and they display a high degree of clustering. We show that these two features are the consequence of a hierarchical organization, implying that small groups of nodes organize in a hierarchical manner into increasingly large groups, while maintaining a scale-free topology. In hierarchical networks, the degree of clustering characterizing the different groups follows a strict scaling law, which can be used to identify the presence of a hierarchical organization in real networks. We find that several real networks, such as the World Wide Web, the actor network, the Internet at the domain level, and the semantic web obey this scaling law, indicating that hierarchy is a fundamental characteristic of many complex systems.
    BibTeX:
    @article{Ravasz2003,
      author = {Ravasz, E and Barabasi, AL},
      title = {Hierarchical organization in complex networks},
      journal = {PHYSICAL REVIEW E},
      year = {2003},
      volume = {67},
      number = {2, Part 2},
      doi = {{10.1103/PhysRevE.67.026112}}
    }
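
    The scaling law reported here, C(k) ~ k^(-1) for the average clustering coefficient of degree-k nodes, can be measured on any graph. A minimal networkx sketch, run on an arbitrary synthetic graph rather than the paper's data sets:

    # Measure the average clustering coefficient per degree; a slope near -1
    # on a log-log plot of (k, C(k)) would signal hierarchical organization.
    from collections import defaultdict
    import networkx as nx

    G = nx.powerlaw_cluster_graph(n=2000, m=4, p=0.3, seed=42)  # toy graph
    clustering = nx.clustering(G)

    by_degree = defaultdict(list)
    for node, c in clustering.items():
        by_degree[G.degree(node)].append(c)

    for k in sorted(by_degree)[:10]:
        ck = sum(by_degree[k]) / len(by_degree[k])
        print(f"k={k:3d}  C(k)={ck:.3f}")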
    
    Razmerita, L., Angehrn, A. & Maedche, A. Ontology-based user modeling for knowledge management systems {2003}
    Vol. {2702}USER MODELING 2003, PROCEEDINGS, pp. {213-217} 
    inproceedings  
    Abstract: This paper presents a generic ontology-based user modeling architecture (OntobUM), applied in the context of a Knowledge Management System (KMS). Due to their powerful knowledge representation formalism and associated inference mechanisms, ontology-based systems are emerging as a natural choice for the next generation of KMSs operating in organizational, interorganizational as well as community contexts. User models, often addressed as user profiles, have been included in KMSs mainly as simple ways of capturing the user preferences and/or competencies. We extend this view by including other characteristics of the users relevant in the KM context and we explain the reasons for doing so. The proposed user modeling system relies on a user ontology, using Semantic Web technologies, based on the IMS LIP specifications, and it is integrated in an ontology-based KMS called Ontologging. We present a generic framework for implicit and explicit ontology-based user modeling.
    BibTeX:
    @inproceedings{Razmerita2003,
      author = {Razmerita, L and Angehrn, A and Maedche, A},
      title = {Ontology-based user modeling for knowledge management systems},
      booktitle = {USER MODELING 2003, PROCEEDINGS},
      year = {2003},
      volume = {2702},
      pages = {213-217},
      note = {9th International Conference on User Modeling, JOHNSTOWN, PENNSYLVANIA, JUN 22-26, 2003}
    }
    
    Rezgui, Y. Ontology-centered knowledge management using information retrieval techniques {2006} JOURNAL OF COMPUTING IN CIVIL ENGINEERING
    Vol. {20}({4}), pp. {261-270} 
    article DOI  
    Abstract: The paper argues that an effective solution to information and knowledge management (KM) needs of practitioners in the construction industry can be found in the provision of an adapted knowledge environment that makes use of user profiling and document summarization techniques based on information retrieval sciences. The conceptualization of the domain through ontology takes a pivotal role in the proposed knowledge environment and provides a semantic referential to ensure relevance, accuracy, and completeness of information. A set of KM services articulated around the selected ontology have been developed, using the Web services model, tested, and validated in real organizational settings. This provided the basis for formulating recommendations and key success factors for any KM project development.
    BibTeX:
    @article{Rezgui2006,
      author = {Rezgui, Y},
      title = {Ontology-centered knowledge management using information retrieval techniques},
      journal = {JOURNAL OF COMPUTING IN CIVIL ENGINEERING},
      year = {2006},
      volume = {20},
      number = {4},
      pages = {261-270},
      doi = {{10.1061/(ASCE)0887-3801(2006)20:4(261)}}
    }
    
    Richard, A., Gold, L. & Nicklaus, M. Chemical structure indexing of toxicity data on the Internet: Moving toward a flat world {2006} CURRENT OPINION IN DRUG DISCOVERY & DEVELOPMENT
    Vol. {9}({3}), pp. {314-325} 
    article  
    Abstract: Standardized chemical structure annotation of public toxicity databases and information resources is playing an increasingly important role in the `flattening' and integration of diverse sets of biological activity data on the Internet. This review discusses public initiatives that are accelerating the pace of this transformation, with particular reference to toxicology-related chemical information. Chemical content annotators, structure locator services, large structure/data aggregator web sites, structure browsers, International Union of Pure and Applied Chemistry (IUPAC) International Chemical Identifier (InChI) codes, toxicity data models and public chemical/biological activity profiling initiatives are all playing a role in overcoming barriers to the integration of toxicity data, and are bringing researchers closer to the reality of a mineable chemical Semantic Web. An example of this integration of data is provided by the collaboration among researchers involved with the Distributed Structure-Searchable Toxicity (DSSTox) project, the Carcinogenic Potency Project, projects at the National Cancer Institute and the PubChem database.
    BibTeX:
    @article{Richard2006,
      author = {Richard, AM and Gold, LS and Nicklaus, MC},
      title = {Chemical structure indexing of toxicity data on the Internet: Moving toward a flat world},
      journal = {CURRENT OPINION IN DRUG DISCOVERY & DEVELOPMENT},
      year = {2006},
      volume = {9},
      number = {3},
      pages = {314-325}
    }
    
    Richardson, M., Agrawal, R. & Domingos, P. Trust management for the semantic web {2003}
    Vol. {2870}SEMANTIC WEB - ISWC 2003, pp. {351-368} 
    inproceedings  
    Abstract: Though research on the Semantic Web has progressed at a steady pace, its promise has yet to be realized. One major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each source. We cannot expect each user to know the trustworthiness of each source, nor would we want to assign top-down or global credibility values due to the subjective nature of trust. We tackle this problem by employing a web of trust, in which each user maintains trusts in a small number of other users. We then compose these trusts into trust values for all other users. The result of our computation is not an agglomerate ``trustworthiness'' of each user. Instead, each user receives a personalized set of trusts, which may vary widely from person to person. We define properties for combination functions which merge such trusts, and define a class of functions for which merging may be done locally while maintaining these properties. We give examples of specific functions and apply them to data from Epinions and our BibServ bibliography server. Experiments confirm that the methods are robust to noise, and do not put unreasonable expectations on users. We hope that these methods will help move the Semantic Web closer to fulfilling its promise.
    BibTeX:
    @inproceedings{Richardson2003,
      author = {Richardson, M and Agrawal, R and Domingos, P},
      title = {Trust management for the semantic web},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {351-368},
      note = {2nd International Semantic Web Conference, SANIBEL, FLORIDA, OCT 20-23, 2003}
    }
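
    The path-based merging of personal trust values can be sketched compactly. The combination functions below (multiplication along a path, maximum across parallel paths) are one example from the class of functions the paper studies; the network and numbers are invented.

    # Best-path trust: multiply along a path, take the max across paths.
    TRUST = {  # direct trust statements: truster -> {trustee: value}
        "alice": {"bob": 0.9, "carol": 0.4},
        "bob": {"dave": 0.8},
        "carol": {"dave": 0.9},
    }

    def propagated_trust(source, target, seen=frozenset()):
        if source == target:
            return 1.0
        best = 0.0
        for neighbor, direct in TRUST.get(source, {}).items():
            if neighbor not in seen:
                best = max(best, direct * propagated_trust(
                    neighbor, target, seen | {source}))
        return best

    print(propagated_trust("alice", "dave"))  # max(0.9*0.8, 0.4*0.9) = 0.72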
    
    Rodriguez, M. & Egenhofer, M. Comparing geospatial entity classes: an asymmetric and context-dependent similarity measure {2004} INTERNATIONAL JOURNAL OF GEOGRAPHICAL INFORMATION SCIENCE
    Vol. {18}({3}), pp. {229-256} 
    article DOI  
    Abstract: Semantic similarity plays an important role in geographic information systems as it supports the identification of objects that are conceptually close, but not identical. Similarity assessments are particularly important for retrieval of geospatial data in such settings as digital libraries, heterogeneous databases, and the World Wide Web. Although some computational models for semantic similarity assessment exist, these models are typically limited by their inability to handle such important cognitive properties of similarity judgements as their inherent asymmetry and their dependence on context. This paper defines the Matching-Distance Similarity Measure (MDSM) for determining semantic similarity among spatial entity classes, taking into account the distinguishing features of these classes (parts, functions, and attributes) and their semantic interrelations (is-a and part-whole relations). A matching process is combined with a semantic-distance calculation to obtain asymmetric values of similarity that depend on the degree of generalization of entity classes. MDSM's matching process is also driven by contextual considerations, where the context determines the relative importance of distinguishing features. Based on a human-subject experiment, MDSM results correlate well with people's judgements of similarity. When contextual information is used for determining the importance of distinguishing features, this correlation increases; however, the major component of the correlation between MDSM results and people's judgements is due to a detailed definition of entity classes.
    BibTeX:
    @article{Rodriguez2004,
      author = {Rodriguez, MA and Egenhofer, MJ},
      title = {Comparing geospatial entity classes: an asymmetric and context-dependent similarity measure},
      journal = {INTERNATIONAL JOURNAL OF GEOGRAPHICAL INFORMATION SCIENCE},
      year = {2004},
      volume = {18},
      number = {3},
      pages = {229-256},
      doi = {{10.1080/13658810310001629592}}
    }
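
    At the core of MDSM is an asymmetric, Tversky-style comparison of feature sets. A minimal sketch, assuming a fixed asymmetry parameter and invented feature sets; in MDSM itself, alpha is derived from the relative depths of the two classes in the hierarchy, and the per-feature-type scores (parts, functions, attributes) are combined with context-dependent weights.

    # Asymmetric set comparison: the direction of comparison matters.
    def asymmetric_similarity(a_features, b_features, alpha):
        common = len(a_features & b_features)
        a_only = len(a_features - b_features)
        b_only = len(b_features - a_features)
        return common / (common + alpha * a_only + (1 - alpha) * b_only)

    stadium = {"roof", "field", "seats", "hosts_events", "track"}
    arena = {"roof", "seats", "hosts_events"}

    print(asymmetric_similarity(stadium, arena, alpha=0.7))  # ~0.68
    print(asymmetric_similarity(arena, stadium, alpha=0.7))  # ~0.83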
    
    Rosati, R. Semantic and computational advantages of the safe integration of ontologies and rules {2005}
    Vol. {3703}PRINCIPLES AND PRACTICE OF SEMANTIC WEB REASONING, PROCEEDINGS, pp. {50-64} 
    inproceedings  
    Abstract: Description Logics (DLs) are playing a central role in ontologies and in the Semantic Web, since they are currently the most used formalisms for building ontologies. Both semantic and computational issues arise when extending DLs with rule-based components. In particular, integrating DLs with nonmonotonic rules requires properly dealing with two semantic discrepancies: (a) DLs are based on the Open World Assumption, while rules are based on (various forms of) the Closed World Assumption; (b) the DLs specifically designed for the Semantic Web, i.e., OWL and OWL-DL, are not based on the Unique Name Assumption, while rule-based systems typically adopt the Unique Name Assumption. In this paper we present the following contributions: (1) we define safe hybrid knowledge bases, a general formal framework for integrating ontologies and rules, which provides for a clear treatment of the above semantic issues; (2) we present a reasoning algorithm and establish general decidability and complexity results for reasoning in safe hybrid KBs; (3) as a consequence of these general results, we close a problem left open in [18], i.e., the decidability of OWL-DL with DL-safe rules.
    BibTeX:
    @inproceedings{Rosati2005a,
      author = {Rosati, R},
      title = {Semantic and computational advantages of the safe integration of ontologies and rules},
      booktitle = {PRINCIPLES AND PRACTICE OF SEMANTIC WEB REASONING, PROCEEDINGS},
      year = {2005},
      volume = {3703},
      pages = {50-64},
      note = {3rd International Workshop on Principles and Practice of Semantic Web Reasoning, Dagstuhl Castle, GERMANY, SEP 11-16, 2005}
    }
    
    Rosati, R. On the decidability and complexity of integrating ontologies and rules {2005} JOURNAL OF WEB SEMANTICS
    Vol. {3}({1}), pp. {61-73} 
    article DOI  
    Abstract: We define the formal framework of r-hybrid knowledge bases (KBs) integrating ontologies and rules. A r-hybrid KB has a structural component (ontology) and a rule component. Such a framework is very general, in the sense that: (i) the construction is parametric with respect to the logic used to specify the structural component; (ii) the rule component is very expressive, since it consists of a Datalog¬∨ program, i.e., a Datalog program with negation as failure and disjunction; (iii) the rule component is constrained in its interaction with the structural component according to a safeness condition: such a safe interaction between rules and structural KB captures (and is a generalization of) several previous proposals. As a consequence, we are able to show that such a framework of r-hybrid KBs comprises many systems proposed for combining rules and Description Logics. Then, we study reasoning in r-hybrid KBs. We provide a general algorithm for reasoning in r-hybrid KBs, and prove that, under very general conditions, decidability of reasoning is preserved when we add safe Datalog¬∨ rules to a KB: in other words, if reasoning in the logic L used to specify the structural component T is decidable, then reasoning in the extension of T with safe Datalog¬∨ rules is still decidable. We also show that an analogous property holds for the complexity of reasoning in r-hybrid KBs. Our decidability and complexity results generalize in a broad sense previous results obtained in recent research on this topic. In particular, we prove that reasoning in r-hybrid KBs whose structural component is specified in the Web Ontology Language OWL-DL is decidable. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Rosati2005,
      author = {Rosati, R},
      title = {On the decidability and complexity of integrating ontologies and rules},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2005},
      volume = {3},
      number = {1},
      pages = {61-73},
      doi = {{10.1016/j.websem.2005.05.002}}
    }
    
    Roussinov, D. & Zhao, J. Automatic discovery of similarity relationships through Web mining {2003} DECISION SUPPORT SYSTEMS
    Vol. {35}({1}), pp. {149-166} 
    article  
    Abstract: This work demonstrates how the World Wide Web can be mined in a fully automated manner for discovering the semantic similarity relationships among the concepts surfaced during an electronic brainstorming session, thus improving the accuracy of automatically clustering meeting messages. Our novel Context Sensitive Similarity Discovery (CSSD) method takes advantage of the meeting context when selecting a subset of Web pages for data mining, and then conducts regular concept co-occurrence analysis within that subset. Our results have implications for reducing information overload in applications of text technologies such as email filtering, document retrieval, text summarization, and knowledge management. (C) 2002 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Roussinov2003,
      author = {Roussinov, D and Zhao, JL},
      title = {Automatic discovery of similarity relationships through Web mining},
      journal = {DECISION SUPPORT SYSTEMS},
      year = {2003},
      volume = {35},
      number = {1},
      pages = {149-166}
    }
    
    Roy, U. & Kodkani, S. Product modeling within the framework of the World Wide Web {1999} IIE TRANSACTIONS
    Vol. {31}({7}), pp. {667-677} 
    article  
    Abstract: This paper presents an approach to the development of an open collaborative design environment in the Computer Aided Design (CAD) setting of a networked enterprise. Demand for high quality and variety in low-to-medium-quantity products, or `mass customization', has led to the concept of `virtual organizations'. The de-centralized design teams of such an organization require a framework to manage the CAD product information. By integrating the emerging standard for 3D geometry on the World Wide Web with conventional CAD packages, we describe a framework for the development of a product model with various levels of abstraction. The syntactic content of the product model, accessible through any Internet interface such as Netscape Navigator, associates itself with a centralized database for embedding technological information at the face and feature levels. The semantic and syntactic content of the product model can then be accessed and manipulated through the single Internet interface. The system is deployed on a test-bed utilizing both a UNIX box and a machine operating under the Windows NT system, to demonstrate its open architecture and interoperability.
    BibTeX:
    @article{Roy1999,
      author = {Roy, U and Kodkani, SS},
      title = {Product modeling within the framework of the World Wide Web},
      journal = {IIE TRANSACTIONS},
      year = {1999},
      volume = {31},
      number = {7},
      pages = {667-677}
    }
    
    Ruts, W., De Deyne, S., Ameel, E., VanPaemel, W., Verbeemen, T. & Storms, G. Dutch norm data for 13 semantic categories and 338 exemplars {2004} BEHAVIOR RESEARCH METHODS INSTRUMENTS & COMPUTERS
    Vol. {36}({3}), pp. {506-515} 
    article  
    Abstract: A data set is described that includes eight variables gathered for 13 common superordinate natural language categories and a representative set of 338 exemplars in Dutch. The category set contains 6 animal categories (reptiles, amphibians, mammals, birds, fish, and insects), 3 artifact categories (musical instruments, tools, and vehicles), 2 borderline artifact-natural-kind categories (vegetables and fruit), and 2 activity categories (sports and professions). In an exemplar and a feature generation task for the category nouns, frequency data were collected. For each of the 13 categories, a representative sample of 5-30 exemplars was selected. For all exemplars, feature generation frequencies, typicality ratings, pairwise similarity ratings, age-of-acquisition ratings, word frequencies, and word associations were gathered. Reliability estimates and some additional measures are presented. The full set of these norms is available in Excel format at the Psychonomic Society Web archive, www.psychonomic.org/archive/.
    BibTeX:
    @article{Ruts2004,
      author = {Ruts, W and De Deyne, S and Ameel, E and VanPaemel, W and Verbeemen, T and Storms, G},
      title = {Dutch norm data for 13 semantic categories and 338 exemplars},
      journal = {BEHAVIOR RESEARCH METHODS INSTRUMENTS & COMPUTERS},
      year = {2004},
      volume = {36},
      number = {3},
      pages = {506-515}
    }
    
    Ruttenberg, A., Clark, T., Bug, W., Samwald, M., Bodenreider, O., Chen, H., Doherty, D., Forsberg, K., Gao, Y., Kashyap, V., Kinoshita, J., Luciano, J., Marshall, M.S., Ogbuji, C., Rees, J., Stephens, S., Wong, G.T., Wu, E., Zaccagnini, D., Hongsermeier, T., Neumann, E., Herman, I. & Cheung, K.-H. Methodology - Advancing translational research with the Semantic Web {2007} BMC BIOINFORMATICS
    Vol. {8}({Suppl. 3}) 
    article DOI  
    Abstract: Background: A fundamental goal of the U.S. National Institutes of Health (NIH) ``Roadmap'' is to strengthen Translational Research, defined as the movement of discoveries in basic research to application at the clinical level. A significant barrier to translational research is the lack of uniformly structured data across related biomedical domains. The Semantic Web is an extension of the current Web that enables navigation and meaningful use of digital resources by automatic processes. It is based on common formats that support aggregation and integration of data drawn from diverse sources. A variety of technologies have been built on this foundation that, together, support identifying, representing, and reasoning across a wide range of biomedical data. The Semantic Web Health Care and Life Sciences Interest Group (HCLSIG), set up within the framework of the World Wide Web Consortium, was launched to explore the application of these technologies in a variety of areas. Subgroups focus on making biomedical data available in RDF, working with biomedical ontologies, prototyping clinical decision support systems, working on drug safety and efficacy communication, and supporting disease researchers navigating and annotating the large amount of potentially relevant literature. Results: We present a scenario that shows the value of the information environment the Semantic Web can support for aiding neuroscience researchers. We then report on several projects by members of the HCLSIG, in the process illustrating the range of Semantic Web technologies that have applications in areas of biomedicine. Conclusion: Semantic Web technologies present both promise and challenges. Current tools and standards are already adequate to implement components of the bench-to-bedside vision. On the other hand, these technologies are young. Gaps in standards and implementations still exist and adoption is limited by typical problems with early technology, such as the need for a critical mass of practitioners and installed base, and growing pains as the technology is scaled up. Still, the potential of interoperable knowledge sources for biomedicine, at the scale of the World Wide Web, merits continued work.
    BibTeX:
    @article{Ruttenberg2007,
      author = {Ruttenberg, Alan and Clark, Tim and Bug, William and Samwald, Matthias and Bodenreider, Olivier and Chen, Helen and Doherty, Donald and Forsberg, Kerstin and Gao, Yong and Kashyap, Vipul and Kinoshita, June and Luciano, Joanne and Marshall, M. Scott and Ogbuji, Chimezie and Rees, Jonathan and Stephens, Susie and Wong, Gwendolyn T. and Wu, Elizabeth and Zaccagnini, Davide and Hongsermeier, Tonya and Neumann, Eric and Herman, Ivan and Cheung, Kei-Hoi},
      title = {Methodology - Advancing translational research with the Semantic Web},
      journal = {BMC BIOINFORMATICS},
      year = {2007},
      volume = {8},
      number = {Suppl. 3},
      doi = {{10.1186/1471-2105-8-S3-S2}}
    }
    
    Salaun, G., Bordeaux, L. & Schaerf, M. Describing and reasoning on web services using process algebra {2004} IEEE INTERNATIONAL CONFERENCE ON WEB SERVICES, PROCEEDINGS, pp. {43-50}  inproceedings  
    Abstract: We argue that essential facets of web services, and especially those useful to understand their interaction, can be described using process-algebraic notations. Web service description and execution languages such as BPEL are essentially process description languages; they are based on primitives for behaviour description and message exchange which can also be found in more abstract process algebras. One legitimate question is therefore whether the formal approach and the sophisticated tools introduced for process algebra can be used to improve the effectiveness and the reliability of web service development. Our investigations suggest a positive answer, and we claim that process algebras provide a very complete and satisfactory assistance to the whole process of web service development. We show on a case study that readily available tools based on process algebra are effective at verifying that web services conform to their requirements and respect desired properties. We advocate their use both at the design stage and for reverse-engineering issues. More prospectively, we discuss how they can be helpful in tackling choreography issues.
    BibTeX:
    @inproceedings{Salaun2004,
      author = {Salaun, G and Bordeaux, L and Schaerf, M},
      title = {Describing and reasoning on web services using process algebra},
      booktitle = {IEEE INTERNATIONAL CONFERENCE ON WEB SERVICES, PROCEEDINGS},
      year = {2004},
      pages = {43-50},
      note = {IEEE International Conference on Web Services (ICWS 2004), San Diego, CA, JUL 06-09, 2004}
    }
    
    Sampson, D., Lytras, M., Wagner, G. & Diaz, P. Ontologies and the Semantic Web for e-learning {2004} EDUCATIONAL TECHNOLOGY & SOCIETY
    Vol. {7}({4}), pp. {26-28} 
    article  
    BibTeX:
    @article{Sampson2004,
      author = {Sampson, DG and Lytras, MD and Wagner, G and Diaz, P},
      title = {Ontologies and the Semantic Web for e-learning},
      journal = {EDUCATIONAL TECHNOLOGY & SOCIETY},
      year = {2004},
      volume = {7},
      number = {4},
      pages = {26-28}
    }
    
    Schatz, B., Mischo, W., Cole, T., Bishop, A., Harum, S., Johnson, E., Neumann, L., Chen, H. & Ng, D. Federated search of scientific literature {1999} COMPUTER
    Vol. {32}({2}), pp. {51+} 
    article  
    Abstract: The Internet of the 21st century will radically transform how we interact with knowledge. The rise of the World Wide Web and the information infrastructure have rapidly developed the technologies of collections for independent communities. In the future, online information will be dominated by small collections. The information infrastructure must similarly be radically different to support indexing of community collections and searching across such small collections. Users will consider themselves to be navigating in the Interspace, across logical spaces of semantic indexes, rather than in the Internet, across physical networks of computer servers. The Digital Libraries Initiative (DLI) project at the University of Illinois at Urbana-Champaign (UIUC) was one of six sponsored by the NSF, DARPA, and NASA from 1994 through 1998. The goal: develop widely usable Web technology to effectively search technical documents on the Internet. This article details their efforts.
    BibTeX:
    @article{Schatz1999,
      author = {Schatz, B and Mischo, W and Cole, T and Bishop, A and Harum, S and Johnson, E and Neumann, L and Chen, HC and Ng, D},
      title = {Federated search of scientific literature},
      journal = {COMPUTER},
      year = {1999},
      volume = {32},
      number = {2},
      pages = {51+}
    }
    
    Schweiger, R., Brumhard, M., Hoelzer, S. & Dudeck, J. Implementing health care systems using XML standards {2005} INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS
    Vol. {74}({2-4}), pp. {267-277} 
    article DOI  
    Abstract: Most healthcare data is narrative text and often not accessible and easy to find at the clinical workstation. XML-related standards (XML Schema, XForms, XSL, Topic Maps, etc.) provide an infrastructure that might change the situation. Yet, it is up to the application developers to combine the given standards and tools into a running system. The cost of development is often underestimated and may explain the absence of comprehensive XML applications. Our goal is the clinical application of these standards. We have, therefore, implemented the idea of ``plug-and-play XML'', i.e. the development of new applications by means of XML standards. This paper will communicate our experience using such an approach with the example of a clinical drug information system. (C) 2004 Elsevier Ireland Ltd. All rights reserved.
    BibTeX:
    @article{Schweiger2005,
      author = {Schweiger, R and Brumhard, M and Hoelzer, S and Dudeck, J},
      title = {Implementing health care systems using XML standards},
      journal = {INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS},
      year = {2005},
      volume = {74},
      number = {2-4},
      pages = {267-277},
      doi = {{10.1016/j.ijmedinf.2004.04.019}}
    }
    
    Schweiger, R., Hoelzer, S., Rudolf, D., Rieger, J. & Dudeck, J. Linking clinical data using XML topic maps {2003} ARTIFICIAL INTELLIGENCE IN MEDICINE
    Vol. {28}({1}), pp. {105-115} 
    article DOI  
    Abstract: Most clinical data is narrative text and often not accessible and searchable at the clinical workstation. We have therefore developed a search engine that allows indexing, searching and linking different kinds of data using web technologies. Text matching methods fail to represent implicit relationships between data, e.g. the relationship between HIV and AIDS. The international organization for standardization (ISO) topic maps standard provides a data model that allows representing arbitrary relationships between resources. Such relationships form the basis for a context sensitive search and accurate search results. The extensible markup language (XML) standards are used for the interchange of data relationships. The approach has been applied to medical classification systems and clinical practice guidelines. The search engine is compared to other XML retrieval methods and the prospect of a ``semantic web'' is discussed. (C) 2003 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Schweiger2003,
      author = {Schweiger, R and Hoelzer, S and Rudolf, D and Rieger, J and Dudeck, J},
      title = {Linking clinical data using XML topic maps},
      journal = {ARTIFICIAL INTELLIGENCE IN MEDICINE},
      year = {2003},
      volume = {28},
      number = {1},
      pages = {105-115},
      doi = {{10.1016/S0933-3657(03)00038-1}}
    }
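
    The context-sensitive search that explicit topic associations enable (e.g., a query for HIV also surfacing AIDS-linked resources despite no text match) can be illustrated with a bare-bones stand-in for the topic maps data model; the topics, associations, and file names below are invented.

    # Bare-bones topic map: associations between topics, occurrences per topic.
    ASSOCIATIONS = {("HIV", "causes", "AIDS"), ("AIDS", "treated-by", "HAART")}
    OCCURRENCES = {"AIDS": ["guideline-017.xml"], "HAART": ["drug-info-042.xml"]}

    def related(topic):
        out = set()
        for a, _, b in ASSOCIATIONS:
            if a == topic:
                out.add(b)
            elif b == topic:
                out.add(a)
        return out

    def search(topic, depth=1):
        """Resources for a topic plus topics reachable through associations."""
        frontier, results = {topic}, set(OCCURRENCES.get(topic, []))
        for _ in range(depth):
            frontier = set().union(*(related(t) for t in frontier)) - {topic}
            for t in frontier:
                results.update(OCCURRENCES.get(t, []))
        return results

    print(search("HIV"))  # finds the AIDS guideline despite no literal match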
    
    Sclaroff, S., La Cascia, M., Sethi, S. & Taycher, L. Unifying textual and visual cues for content-based image retrieval on the World Wide Web {1999} COMPUTER VISION AND IMAGE UNDERSTANDING
    Vol. {75}({1-2}), pp. {86-98} 
    article  
    Abstract: A system is proposed that combines textual and visual statistics in a single index vector for content-based search of a WWW image database. Textual statistics are captured in vector form using latent semantic indexing based on text in the containing HTML document. Visual statistics are captured in vector form using color and orientation histograms. By using an integrated approach, it becomes possible to take advantage of possible statistical couplings between the content of the document (latent semantic content) and the contents of images (visual statistics). The combined approach allows improved performance in conducting content-based search. Search performance experiments are reported for a database containing 350,000 images collected from the WWW. (C) 1999 Academic Press.
    BibTeX:
    @article{Sclaroff1999,
      author = {Sclaroff, S and La Cascia, M and Sethi, S and Taycher, L},
      title = {Unifying textual and visual cues for content-based image retrieval on the World Wide Web},
      journal = {COMPUTER VISION AND IMAGE UNDERSTANDING},
      year = {1999},
      volume = {75},
      number = {1-2},
      pages = {86-98}
    }
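
    The single combined index vector described above can be sketched by normalizing each modality and concatenating. Dimensions, weights, and data below are invented for illustration; the paper's actual vectors come from latent semantic indexing and from color/orientation histograms of real pages.

    # Concatenate a text vector and visual histograms into one search vector.
    import numpy as np

    def unit(v):
        n = np.linalg.norm(v)
        return v / n if n else v

    def combined_index(lsi_vec, color_hist, orient_hist, w_text=0.5, w_vis=0.5):
        # Normalize each modality so neither dominates, then weight and stack.
        visual = unit(np.concatenate([color_hist, orient_hist]))
        return np.concatenate([w_text * unit(lsi_vec), w_vis * visual])

    rng = np.random.default_rng(0)
    db = [combined_index(rng.random(16), rng.random(8), rng.random(4))
          for _ in range(100)]
    query = combined_index(rng.random(16), rng.random(8), rng.random(4))

    scores = [float(np.dot(unit(d), unit(query))) for d in db]  # cosine search
    print("best match:", int(np.argmax(scores)))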
    
    Shadbolt, N., Gibbins, N., Glaser, H., Harris, S. & Schraefel, R. CS AKTive space, or how we learned to stop worrying and love the Semantic Web {2004} IEEE INTELLIGENT SYSTEMS
    Vol. {19}({3}), pp. {41-47} 
    article  
    BibTeX:
    @article{Shadbolt2004,
      author = {Shadbolt, N and Gibbins, N and Glaser, H and Harris, S and Schraefel, RC},
      title = {CS AKTive space, or how we learned to stop worrying and love the Semantic Web},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2004},
      volume = {19},
      number = {3},
      pages = {41-47}
    }
    
    Shadbolt, N., Hall, W. & Berners-Lee, T. The Semantic Web revisited {2006} IEEE INTELLIGENT SYSTEMS
    Vol. {21}({3}), pp. {96-101} 
    article  
    BibTeX:
    @article{Shadbolt2006,
      author = {Shadbolt, N and Hall, W and Berners-Lee, T},
      title = {The Semantic Web revisited},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2006},
      volume = {21},
      number = {3},
      pages = {96-101}
    }
    
    Shahar, Y., Young, O., Shalom, E., Galperin, M., Mayaffit, A., Moskovitch, R. & Hessing, A. A framework for a distributed, hybrid, multiple-ontology clinical-guideline library, and automated guideline-support tools {2004} JOURNAL OF BIOMEDICAL INFORMATICS
    Vol. {37}({5}), pp. {325-344} 
    article DOI  
    Abstract: Clinical guidelines are a major tool in improving the quality of medical care. However, most guidelines are in free text, not in a formal, executable format, and are not easily accessible to clinicians at the point of care. We introduce a Web-based, modular, distributed architecture, the Digital Electronic Guideline Library (DeGeL), which facilitates gradual conversion of clinical guidelines from text to a formal representation in a chosen target guideline ontology. The architecture supports guideline classification, semantic markup, context-sensitive search, browsing, run-time application, and retrospective quality assessment. The DeGeL hybrid meta-ontology includes elements common to all guideline ontologies, such as semantic classification and domain knowledge; it also includes four content-representation formats: free text, semi-structured text, semi-formal representation, and a formal representation. These formats support increasingly sophisticated computational tasks. The DeGeL tools for support of guideline-based care operate, at some level, on all guideline ontologies. We have demonstrated the feasibility of the architecture and the tools for several guideline ontologies, including Asbru and GEM. (C) 2004 Elsevier Inc. All rights reserved.
    BibTeX:
    @article{Shahar2004,
      author = {Shahar, Y and Young, O and Shalom, E and Galperin, M and Mayaffit, A and Moskovitch, R and Hessing, A},
      title = {A framework for a distributed, hybrid, multiple-ontology clinical-guideline library, and automated guideline-support tools},
      journal = {JOURNAL OF BIOMEDICAL INFORMATICS},
      year = {2004},
      volume = {37},
      number = {5},
      pages = {325-344},
      doi = {{10.1016/j.jbi.2004.07.001}}
    }
    
    Shamsfard, M. & Barforoush, A. Learning ontologies from natural language texts {2004} INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES
    Vol. {60}({1}), pp. {17-63} 
    article DOI  
    Abstract: Research on ontology is becoming increasingly widespread in the computer science community. The major problems in building ontologies are the bottleneck of knowledge acquisition and the time-consuming construction of various ontologies for various domains/applications. Moving toward automation of ontology construction is one solution, and we propose an automatic ontology-building approach. In this approach, the system starts from a small ontology kernel and constructs the ontology through text understanding automatically. The kernel contains the primitive concepts, relations and operators needed to build an ontology. The features of our proposed model are domain/application independence, building ontologies upon a small primary kernel, learning words, concepts, taxonomic and non-taxonomic relations and axioms, and applying a symbolic, hybrid ontology-learning approach consisting of logical, linguistic-based, template-driven and semantic analysis methods. Hasti is an ongoing project to implement and test the automatic ontology-building approach. It extracts lexical and ontological knowledge from Persian (Farsi) texts. In this paper, we first describe some ontology engineering problems, which motivated our approach. In the next sections, after a brief description of Hasti, its features and its architecture, we discuss its components in detail, describing the learning algorithms in each part. Then some experimental results are discussed; finally, we give an overview of related work, introduce a general framework for comparing ontology learning systems, and compare Hasti with related work according to that framework. (C) 2003 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Shamsfard2004,
      author = {Shamsfard, M and Barforoush, AA},
      title = {Learning ontologies from natural language texts},
      journal = {INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES},
      year = {2004},
      volume = {60},
      number = {1},
      pages = {17-63},
      doi = {{10.1016/j.ijhcs.2003.08.001}}
    }
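    
    Among the methods this abstract lists is template-driven extraction. Below is a loose, hypothetical English-language sketch of that single ingredient (Hasti itself processes Persian text and combines this with logical, linguistic and semantic analysis):
    
      import re
      
      # Hypothetical lexico-syntactic templates; each match yields a candidate
      # (hyponym, hypernym) taxonomic link.
      PATTERNS = [
          (re.compile(r"(\w+?)s? such as (\w+)"), "hypernym_first"),
          (re.compile(r"(\w+) is a (?:kind|type) of (\w+)"), "hyponym_first"),
      ]
      
      def extract_taxonomy(text):
          links = set()
          for pat, order in PATTERNS:
              for m in pat.finditer(text):
                  a, b = m.group(1), m.group(2)
                  links.add((b, a) if order == "hypernym_first" else (a, b))
          return links
      
      print(extract_taxonomy("Vehicles such as cars need fuel. A car is a kind of vehicle."))
      # candidate links: ('cars', 'Vehicle') and ('car', 'vehicle')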
    
    Shannon, P., Reiss, D., Bonneau, R. & Baliga, N. The Gaggle: An open-source software system for integrating bioinformatics software and data sources {2006} BMC BIOINFORMATICS
    Vol. {7} 
    article DOI  
    Abstract: Background: Systems biologists work with many kinds of data, from many different sources, using a variety of software tools. Each of these tools typically excels at one type of analysis, such as of microarrays, of metabolic networks and of predicted protein structure. A crucial challenge is to combine the capabilities of these (and other forthcoming) data resources and tools to create a data exploration and analysis environment that does justice to the variety and complexity of systems biology data sets. A solution to this problem should recognize that data types, formats and software in this high throughput age of biology are constantly changing. Results: In this paper we describe the Gaggle - a simple, open-source Java software environment that helps to solve the problem of software and database integration. Guided by the classic software engineering strategy of separation of concerns and a policy of semantic flexibility, it integrates existing popular programs and web resources into a user-friendly, easily-extended environment. We demonstrate that four simple data types (names, matrices, networks, and associative arrays) are sufficient to bring together diverse databases and software. We highlight some capabilities of the Gaggle with an exploration of Helicobacter pylori pathogenesis genes, in which we identify a putative ricin-like protein - a discovery made possible by simultaneous data exploration using a wide range of publicly available data and a variety of popular bioinformatics software tools. Conclusion: We have integrated diverse databases (for example, KEGG, BioCyc, String) and software (Cytoscape, DataMatrixViewer, R statistical environment, and TIGR Microarray Expression Viewer). Through this loose coupling of diverse software and databases the Gaggle enables simultaneous exploration of experimental data (mRNA and protein abundance, protein-protein and protein-DNA interactions), functional associations (operon, chromosomal proximity, phylogenetic pattern), metabolic pathways (KEGG) and Pubmed abstracts (STRING web resource), creating an exploratory environment useful to 'web browser and spreadsheet biologists', to statistically savvy computational biologists, and those in between. The Gaggle uses Java RMI and Java Web Start technologies and can be found at http://gaggle.systemsbiology.net.
    BibTeX:
    @article{Shannon2006,
      author = {Shannon, PT and Reiss, DJ and Bonneau, R and Baliga, NS},
      title = {The Gaggle: An open-source software system for integrating bioinformatics software and data sources},
      journal = {BMC BIOINFORMATICS},
      year = {2006},
      volume = {7},
      doi = {{10.1186/1471-2105-7-176}}
    }
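    
    The abstract's claim that four simple data types (names, matrices, networks, associative arrays) suffice can be illustrated with a small sketch; here the Java RMI plumbing is replaced by an in-process dispatcher, and all names are hypothetical stand-ins rather than the Gaggle API.
    
      from dataclasses import dataclass, field
      
      @dataclass
      class Boss:
          """Toy stand-in for the hub that relays broadcasts between tools."""
          geese: list = field(default_factory=list)  # registered tools ("geese")
      
          def register(self, goose):
              self.geese.append(goose)
      
          def broadcast(self, kind, payload):
              # kind is one of: "namelist", "matrix", "network", "tuple"
              for goose in self.geese:
                  goose(kind, payload)
      
      boss = Boss()
      boss.register(lambda kind, data: print(f"viewer received {kind}: {data}"))
      boss.broadcast("namelist", ["VNG0101", "VNG0102"])    # gene names
      boss.broadcast("network", [("VNG0101", "VNG0102")])   # interaction edges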
    
    Shen, Z., Ma, K.-L. & Eliassi-Rad, T. Visual analysis of large heterogeneous social networks by semantic and structural abstraction {2006} IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS
    Vol. {12}({6}), pp. {1427-1439} 
    article  
    Abstract: Social network analysis is an active area of study beyond sociology. It uncovers the invisible relationships between actors in a network and provides understanding of social processes and behaviors. It has become an important technique in a variety of application areas such as the Web, organizational studies, and homeland security. This paper presents a visual analytics tool, OntoVis, for understanding large, heterogeneous social networks, in which nodes and links could represent different concepts and relations, respectively. These concepts and relations are related through an ontology (also known as a schema). OntoVis is named such because it uses information in the ontology associated with a social network to semantically prune a large, heterogeneous network. In addition to semantic abstraction, OntoVis also allows users to do structural abstraction and importance filtering to make large networks manageable and to facilitate analytic reasoning. All these unique capabilities of OntoVis are illustrated with several case studies.
    BibTeX:
    @article{Shen2006,
      author = {Shen, Zeqian and Ma, Kwan-Liu and Eliassi-Rad, Tina},
      title = {Visual analysis of large heterogeneous social networks by semantic and structural abstraction},
      journal = {IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS},
      year = {2006},
      volume = {12},
      number = {6},
      pages = {1427-1439}
    }
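    
    Semantic abstraction of the OntoVis kind can be approximated as filtering a typed node-link graph down to the node types the analyst selects; the sketch below uses hypothetical types and is not the OntoVis implementation.
    
      # Keep only nodes whose ontology type is selected, plus surviving links.
      nodes = {"p1": "Person", "p2": "Person", "m1": "Movie", "s1": "Studio"}
      links = [("p1", "m1"), ("p2", "m1"), ("m1", "s1")]
      
      def semantic_prune(nodes, links, keep_types):
          kept = {n for n, t in nodes.items() if t in keep_types}
          return kept, [(a, b) for a, b in links if a in kept and b in kept]
      
      print(semantic_prune(nodes, links, {"Person", "Movie"}))
      # the Studio node and its incident link are abstracted away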
    
    Sheth, A., Aleman-Meza, B., Arpinar, I., Bertram, C., Warke, Y., Ramakrishanan, C., Halaschek, C., Anyanwu, K., Avant, D., Arpinar, F. & Kochut, K. Semantic association identification and knowledge discovery for national security applications {2005} JOURNAL OF DATABASE MANAGEMENT
    Vol. {16}({1}), pp. {33-53} 
    article  
    Abstract: Public and private organizations have access to a vast amount of internal, deep Web and open Web information. Transforming this heterogeneous and distributed information into actionable and insightful information is the key to the emerging new classes of business intelligence and national security applications. Although the role of semantics in search and integration has often been discussed, in this paper we discuss semantic approaches to support analytics on vast amounts of heterogeneous data. In particular, we bring together novel academic research and commercialized Semantic Web technology. The academic research related to semantic association identification is built upon commercial Semantic Web technology for semantic metadata extraction. A prototypical demonstration of this research and technology is presented in the context of an aviation security application of significance to national security.
    BibTeX:
    @article{Sheth2005,
      author = {Sheth, A and Aleman-Meza, B and Arpinar, IB and Bertram, C and Warke, Y and Ramakrishanan, C and Halaschek, C and Anyanwu, K and Avant, D and Arpinar, FS and Kochut, K},
      title = {Semantic association identification and knowledge discovery for national security applications},
      journal = {JOURNAL OF DATABASE MANAGEMENT},
      year = {2005},
      volume = {16},
      number = {1},
      pages = {33-53}
    }
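    
    At its simplest, a semantic association is a property path connecting two entities in the metadata graph; the sketch below finds such paths by breadth-first search over an RDF-style edge list (entities and properties are hypothetical, and this is not the authors' system).
    
      from collections import deque
      
      triples = [
          ("personA", "worksFor", "orgX"),
          ("orgX", "locatedIn", "cityY"),
          ("personB", "livesIn", "cityY"),
      ]
      
      def associations(start, goal, triples, max_len=4):
          """Yield property paths linking start to goal (graph viewed undirected)."""
          adj = {}
          for s, p, o in triples:
              adj.setdefault(s, []).append((p, o))
              adj.setdefault(o, []).append((p, s))
          queue = deque([(start, [])])
          while queue:
              node, path = queue.popleft()
              if node == goal and path:
                  yield path
                  continue
              if len(path) < max_len:
                  for p, nxt in adj.get(node, []):
                      queue.append((nxt, path + [(node, p, nxt)]))
      
      print(next(associations("personA", "personB", triples)))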
    
    Sheth, A., Bertram, C., Avant, D., Hammond, B., Kochut, K. & Warke, Y. Managing semantic content for the Web {2002} IEEE INTERNET COMPUTING
    Vol. {6}({4}), pp. {80-87} 
    article  
    BibTeX:
    @article{Sheth2002,
      author = {Sheth, A and Bertram, C and Avant, D and Hammond, B and Kochut, K and Warke, Y},
      title = {Managing semantic content for the Web},
      journal = {IEEE INTERNET COMPUTING},
      year = {2002},
      volume = {6},
      number = {4},
      pages = {80-87}
    }
    
    Sheth, A., Henson, C. & Sahoo, S.S. Semantic sensor web {2008} IEEE INTERNET COMPUTING
    Vol. {12}({4}), pp. {78-83} 
    article  
    BibTeX:
    @article{Sheth2008,
      author = {Sheth, Amit and Henson, Cory and Sahoo, Satya S.},
      title = {Semantic sensor web},
      journal = {IEEE INTERNET COMPUTING},
      year = {2008},
      volume = {12},
      number = {4},
      pages = {78-83}
    }
    
    Shi, Z., Dong, M., Jiang, Y. & Zhang, H. A logical foundation for the semantic Web {2005} SCIENCE IN CHINA SERIES F-INFORMATION SCIENCES
    Vol. {48}({2}), pp. {161-178} 
    article DOI  
    Abstract: This paper analyzes the current research progress and problems of the semantic Web, including the insufficiency of description logic as its logical foundation. According to the characteristics and requirements of the semantic Web, a new dynamic description logic (DDL) framework is presented. The representation and reasoning of static knowledge and dynamic knowledge are integrated in this framework. In particular, an action description method is proposed and, following description logic theory, the action semantics is described, so DDL is a formal logical framework that can process both static and dynamic knowledge. The DDL has clear and formally defined semantics. It provides decidable reasoning services, and it supports effective representation and reasoning over static knowledge, dynamic processes and running mechanisms (realization and subsumption relations of actions). Therefore, DDL provides a reasonable logical foundation for the semantic Web and overcomes the insufficiency of description logic in this role.
    BibTeX:
    @article{Shi2005,
      author = {Shi, ZZ and Dong, MK and Jiang, YC and Zhang, HJ},
      title = {A logical foundation for the semantic Web},
      journal = {SCIENCE IN CHINA SERIES F-INFORMATION SCIENCES},
      year = {2005},
      volume = {48},
      number = {2},
      pages = {161-178},
      doi = {{10.1360/03yf0506}}
    }
    
    Shilane, P., Min, P., Kazhdan, M. & Funkhouser, T. The Princeton shape benchmark {2004} PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON SHAPE MODELING AND APPLICATIONS, pp. {167-178}  inproceedings  
    Abstract: In recent years, many shape representations and geometric algorithms have been proposed for matching 3D shapes. Usually, each algorithm is tested on a different (small) database of 3D models, and thus no direct comparison is available for competing methods. In this paper, we describe the Princeton Shape Benchmark (PSB), a publicly available database of polygonal models collected from the World Wide Web and a suite of tools for comparing shape matching and classification algorithms. One feature of the benchmark is that it provides multiple semantic labels for each 3D model. For instance, it includes one classification of the 3D models based on function, another that considers function and form, and others based on how the object was constructed (e.g., man-made versus natural objects). We find that experiments with these classifications can expose different properties of shape-based retrieval algorithms. For example, out of 12 shape descriptors tested, Extended Gaussian Images [13] performed best for distinguishing man-made from natural objects, while they performed among the worst for distinguishing specific object types. Based on experiments with several different shape descriptors, we conclude that no single descriptor is best for all classifications, and thus the main contribution of this paper is to provide a framework to determine the conditions under which each descriptor performs best.
    BibTeX:
    @inproceedings{Shilane2004,
      author = {Shilane, P and Min, P and Kazhdan, M and Funkhouser, T},
      title = {The Princeton shape benchmark},
      booktitle = {PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON SHAPE MODELING AND APPLICATIONS},
      year = {2004},
      pages = {167-178},
      note = {6th International Conference on Shape Modeling and Applications, Genoa, ITALY, JUN 07-09, 2004}
    }
    
    Shvaiko, P. & Euzenat, J. A survey of schema-based matching approaches {2005}
    Vol. {3730}JOURNAL ON DATA SEMANTICS IV, pp. {146-171} 
    incollection  
    Abstract: Schema and ontology matching is a critical problem in many application domains, such as the semantic web, schema/ontology integration, data warehouses, e-commerce, etc. Many different matching solutions have been proposed so far. In this paper we present a new classification of schema-based matching techniques that builds on top of the state of the art in both schema and ontology matching. Some innovations are in introducing new criteria which are based on (i) general properties of matching techniques, (ii) interpretation of input information, and (iii) the kind of input information. In particular, we distinguish between approximate and exact techniques at schema-level; and syntactic, semantic, and external techniques at element- and structure-level. Based on the classification proposed we overview some of the recent schema/ontology matching systems, pointing out which part of the solution space they cover. The proposed classification provides a common conceptual basis and, hence, can be used for comparing different existing schema/ontology matching techniques and systems as well as for designing new ones, taking advantage of state-of-the-art solutions.
    BibTeX:
    @incollection{Shvaiko2005,
      author = {Shvaiko, P and Euzenat, J},
      title = {A survey of schema-based matching approaches},
      booktitle = {JOURNAL ON DATA SEMANTICS IV},
      year = {2005},
      volume = {3730},
      pages = {146-171}
    }
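    
    One family the survey classifies is element-level, string-based (syntactic) matchers. A minimal sketch using Python's standard difflib; the similarity threshold is an arbitrary illustrative choice, not one taken from the survey.
    
      from difflib import SequenceMatcher
      
      def name_matches(elems_a, elems_b, threshold=0.7):
          """Pair element names whose string similarity exceeds the threshold."""
          pairs = []
          for a in elems_a:
              for b in elems_b:
                  score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
                  if score >= threshold:
                      pairs.append((a, b, round(score, 2)))
          return pairs
      
      print(name_matches(["PostalAddress", "phone"], ["postal_address", "telephone"]))
      # [('PostalAddress', 'postal_address', 0.96), ('phone', 'telephone', 0.71)]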
    
    Siau, K. & Tian, Y. Supply chains integration: Architecture and enabling technologies {2004} JOURNAL OF COMPUTER INFORMATION SYSTEMS
    Vol. {44}({3}), pp. {67-72} 
    article  
    Abstract: An effective and efficient supply chain is vital to the competitiveness and the survival of an organization. With the emergence of the e-business era, supply chain systems need to be able to extend beyond the traditional boundaries. This paper proposes an integrated supply chain architecture (SCA) that combines the benefits of Enterprise Resource Planning (ERP) and the various supply chain applications. Some necessary criteria for an integrated supply chain include completeness, security, flexibility, scalability, and interoperability. The enabling technologies for such a supply chain system include XML, DCOM, CORBA, SOAP, .NET, and the Semantic Web. Wireless and mobile technologies could further extend the supply chain by enabling any time, any place accessibility.
    BibTeX:
    @article{Siau2004,
      author = {Siau, K and Tian, YH},
      title = {Supply chains integration: Architecture and enabling technologies},
      journal = {JOURNAL OF COMPUTER INFORMATION SYSTEMS},
      year = {2004},
      volume = {44},
      number = {3},
      pages = {67-72}
    }
    
    da Silva, P., McGuinness, D. & Fikes, R. A proof markup language for Semantic Web services {2006} INFORMATION SYSTEMS
    Vol. {31}({4-5}), pp. {381-395} 
    article DOI  
    Abstract: The Semantic Web is being designed to enable automated reasoners to be used as core components in a wide variety of Web applications and services. In order for a client to accept and trust a result produced by perhaps an unfamiliar Web service, the result needs to be accompanied by a justification that is understandable and usable by the client. In this paper, we describe the proof markup language (PML), an interlingua representation for justifications of results produced by Semantic Web services. We also introduce our Inference Web infrastructure that uses PML as the foundation for providing explanations of Web services to end users. We additionally show how PML is critical for and provides the foundation for hybrid reasoning where results are produced cooperatively by multiple reasoners. Our contributions in this paper focus on technological foundations for capturing formal representations of term meaning and justification descriptions thereby facilitating trust and reuse of answers from web agents. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Silva2006,
      author = {da Silva, PP and McGuinness, DL and Fikes, R},
      title = {A proof markup language for Semantic Web services},
      journal = {INFORMATION SYSTEMS},
      year = {2006},
      volume = {31},
      number = {4-5},
      pages = {381-395},
      doi = {{10.1016/j.is.2005.02.003}}
    }
    
    Sintek, M. & Decker, S. TRIPLE - A query, inference, and transformation language for the Semantic Web {2002}
    Vol. {2342}SEMANTIC WEB - ISWC 2002, pp. {364-378} 
    inproceedings  
    Abstract: This paper presents TRIPLE, a layered and modular rule language for the Semantic Web [1]. TRIPLE is based on Horn logic and borrows many basic features from F-Logic [11] but is especially designed for querying and transforming RDF models [20]. TRIPLE can be viewed as a successor of SiLRI (Simple Logic-based RDF Interpreter [5]). One of the most important differences from F-Logic and SiLRI is that TRIPLE does not have a fixed semantics for object-oriented features like classes and inheritance. Its layered architecture allows such features to be easily defined for different object-oriented and other data models like UML, Topic Maps, or RDF Schema [19]. Description logic extensions of RDF (Schema) like OIL [17] and DAML+OIL [3] that cannot be fully handled by Horn logic are provided as modules that interact with a description logic classifier, e.g. FaCT [9], resulting in a hybrid rule language. This paper sketches the syntax and semantics of TRIPLE.
    BibTeX:
    @inproceedings{Sintek2002,
      author = {Sintek, M and Decker, S},
      title = {TRIPLE - A query, inference, and transformation language for the Semantic Web},
      booktitle = {SEMANTIC WEB - ISWC 2002},
      year = {2002},
      volume = {2342},
      pages = {364-378},
      note = {1st International Semantic Web Conference (ISWC), SARDINIA, ITALY, JUN 09-12, 2002}
    }
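    
    TRIPLE's Horn-rule core can be illustrated, in spirit rather than in its actual syntax, by naive forward chaining over RDF-style triples, here deriving class membership from subclass links; the data are hypothetical.
    
      # Horn rule (TRIPLE-like in spirit): type(I, Y) :- subClassOf(X, Y), type(I, X).
      triples = {("Dog", "subClassOf", "Animal"), ("rex", "type", "Dog")}
      
      def saturate(ts):
          while True:
              derived = {(i, "type", y)
                         for (x, p, y) in ts if p == "subClassOf"
                         for (i, q, c) in ts if q == "type" and c == x}
              if derived <= ts:       # fixpoint reached: nothing new derivable
                  return ts
              ts |= derived
      
      print(saturate(set(triples)))   # derives ('rex', 'type', 'Animal')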
    
    Sirin, E., Parsia, B. & Hendler, J. Filtering and selecting semantic Web Services with interactive composition techniques {2004} IEEE INTELLIGENT SYSTEMS
    Vol. {19}({4}), pp. {42-49} 
    article  
    BibTeX:
    @article{Sirin2004,
      author = {Sirin, E and Parsia, B and Hendler, J},
      title = {Filtering and selecting semantic Web Services with interactive composition techniques},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2004},
      volume = {19},
      number = {4},
      pages = {42-49}
    }
    
    Sivashanmugam, K., Miller, J., Sheth, A. & Verma, K. Framework for semantic web process composition {2004} INTERNATIONAL JOURNAL OF ELECTRONIC COMMERCE
    Vol. {9}({2}), pp. {71-106} 
    article  
    Abstract: Web services have the potential to revolutionize e-commerce by enabling businesses to interact with each other on the fly. To date, however, Web processes using Web services have been created mostly at the syntactic level. Current composition standards focus on building processes based on the interface description of the participating services. This rigid approach, with its strong coupling between the process and the interface of the participating services, does not allow businesses to dynamically change partners and services. As shown in this article, Web process composition techniques can be enhanced by using semantic process templates to capture the semantic requirements of the process. The semantic process templates act as configurable modules for common industry processes maintaining the semantics of the participating activities, control flow, intermediate calculations, and conditional branches, and exposing them in an industry-accepted interface. The templates are instantiated to form executable processes according to the semantics of the activities in the templates. The use of ontologies in template definition allows much richer description of activity requirements and a more effective way of locating services to carry out activities in the executable Web process. Discovery of services considers not only functionality, but also the quality of service (QoS) of the corresponding activities. This unique approach combines the expressive power of present Web service composition standards with the advantages of Semantic Web techniques for process template definition and Web service discovery. The prototype implementation of the framework for building the templates carries out Semantic Web service discovery and generates the processes.
    BibTeX:
    @article{Sivashanmugam2004,
      author = {Sivashanmugam, K and Miller, JA and Sheth, AP and Verma, K},
      title = {Framework for semantic web process composition},
      journal = {INTERNATIONAL JOURNAL OF ELECTRONIC COMMERCE},
      year = {2004},
      volume = {9},
      number = {2},
      pages = {71-106}
    }
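    
    The discovery step the abstract describes weighs functionality together with quality of service. A minimal sketch of QoS-based ranking over candidates assumed to have already passed the semantic (functional) filter; the attributes, normalizations and weights are hypothetical.
    
      candidates = [
          {"name": "svcA", "latency_ms": 120, "availability": 0.99, "cost": 0.05},
          {"name": "svcB", "latency_ms": 60,  "availability": 0.95, "cost": 0.09},
      ]
      
      def qos_score(c, w_lat=0.4, w_avail=0.4, w_cost=0.2):
          # Map each attribute to [0, 1] so that higher is better, then combine.
          return (w_lat * 1.0 / (1.0 + c["latency_ms"] / 100.0)
                  + w_avail * c["availability"]
                  + w_cost * (1.0 - min(c["cost"] * 10.0, 1.0)))
      
      for c in sorted(candidates, key=qos_score, reverse=True):
          print(c["name"], round(qos_score(c), 3))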
    
    Sivashanmugam, K., Verma, K., Sheth, A. & Miller, J. Adding semantics to Web services standards {2003} ICWS'03: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON WEB SERVICES, pp. {395-401}  inproceedings  
    Abstract: With the increasing growth in popularity of Web services, discovery of relevant Web services becomes a significant challenge. One approach is to develop semantic Web services where by the Web services are annotated based on shared ontologies, and use these annotations for semantics-based discovery of relevant Web services. We discuss one such approach that involves adding semantics to WSDL using DAML+OIL ontologies. Our approach also uses UDDI to store these semantic annotations and search for Web services based on them. We compare our approach with another initiative to add semantics to support Web service discovery, and show how our approach may fit current standards-based industry approach better.
    BibTeX:
    @inproceedings{Sivashanmugam2003,
      author = {Sivashanmugam, K and Verma, K and Sheth, A and Miller, J},
      title = {Adding semantics to Web services standards},
      booktitle = {ICWS'03: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON WEB SERVICES},
      year = {2003},
      pages = {395-401},
      note = {International Conference on Web Services, LAS VEGAS, NV, JUN 23-26, 2003}
    }
    
    Smith, A.K., Cheung, K.-H., Yip, K.Y., Schultz, M. & Gerstein, M.K. LinkHub: a Semantic Web system that facilitates cross-database queries and information retrieval in proteomics {2007} BMC BIOINFORMATICS
    Vol. {8}({Suppl. 3}) 
    article DOI  
    Abstract: Background: A key abstraction in representing proteomics knowledge is the notion of unique identifiers for individual entities (e.g. proteins) and the massive graph of relationships among them. These relationships are sometimes simple (e.g. synonyms) but are often more complex (e.g. one-to-many relationships in protein family membership). Results: We have built a software system called LinkHub using Semantic Web RDF that manages the graph of identifier relationships and allows exploration with a variety of interfaces. For efficiency, we also provide relational-database access and translation between the relational and RDF versions. LinkHub is practically useful in creating small, local hubs on common topics and then connecting these to major portals in a federated architecture; we have used LinkHub to establish such a relationship between UniProt and the North East Structural Genomics Consortium. LinkHub also facilitates queries and access to information and documents related to identifiers spread across multiple databases, acting as "connecting glue" between different identifier spaces. We demonstrate this with example queries discovering "interologs" of yeast protein interactions in the worm and exploring the relationship between gene essentiality and pseudogene content. We also show how "protein family based" retrieval of documents can be achieved. LinkHub is available at hub.gersteinlab.org and hub.nesg.org with supplement, database models and full-source code. Conclusion: LinkHub leverages Semantic Web standards-based integrated data to provide novel information retrieval to identifier-related documents through relational graph queries, simplifies and manages connections to major hubs such as UniProt, and provides useful interactive and query interfaces for exploring the integrated data.
    BibTeX:
    @article{Smith2007,
      author = {Smith, Andrew K. and Cheung, Kei-Hoi and Yip, Kevin Y. and Schultz, Martin and Gerstein, Mark K.},
      title = {LinkHub: a Semantic Web system that facilitates cross-database queries and information retrieval in proteomics},
      journal = {BMC BIOINFORMATICS},
      year = {2007},
      volume = {8},
      number = {Suppl. 3},
      doi = {{10.1186/1471-2105-8-S3-S5}}
    }
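    
    The identifier-graph idea can be sketched with the rdflib library; the namespace and predicates below are hypothetical stand-ins, not LinkHub's actual vocabulary.
    
      from rdflib import Graph, Namespace, URIRef
      
      EX = Namespace("http://example.org/linkhub#")   # hypothetical vocabulary
      g = Graph()
      g.add((EX.pir_A12345, EX.sameEntityAs, EX.uniprot_Q9XYZ1))
      g.add((EX.uniprot_Q9XYZ1, EX.memberOf, EX.familyF))
      g.add((EX.familyF, EX.hasDocument, URIRef("http://example.org/doc/42")))
      
      # "Connecting glue": hop across identifier spaces out to documents.
      q = """
      SELECT ?doc WHERE {
        ?id   <http://example.org/linkhub#sameEntityAs> ?prot .
        ?prot <http://example.org/linkhub#memberOf>     ?fam .
        ?fam  <http://example.org/linkhub#hasDocument>  ?doc .
      }"""
      for row in g.query(q):
          print(row.doc)   # http://example.org/doc/42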
    
    Specia, L. & Motta, E. Integrating folksonomies with the semantic web {2007}
    Vol. {4519}Semantic Web: Research and Applications, Proceedings, pp. {624-639} 
    inproceedings  
    Abstract: While tags in collaborative tagging systems serve primarily an indexing purpose, facilitating search and navigation of resources, the use of the same tags by more than one individual can yield a collective classification schema. We present an approach for making explicit the semantics behind the tag space in social tagging systems, so that this collaborative organization can emerge in the form of groups of concepts and partial ontologies. This is achieved by using a combination of shallow pre-processing strategies and statistical techniques together with knowledge provided by ontologies available on the semantic web. Preliminary results on the del.icio.us and Flickr tag sets show that the approach is very promising: it generates clusters with highly related tags corresponding to concepts in ontologies and meaningful relationships among subsets of these tags can be identified.
    BibTeX:
    @inproceedings{Specia2007,
      author = {Specia, Lucia and Motta, Enrico},
      title = {Integrating folksonomies with the semantic web},
      booktitle = {Semantic Web: Research and Applications, Proceedings},
      year = {2007},
      volume = {4519},
      pages = {624-639},
      note = {4th European Semantic Web Conference, Innsbruck, AUSTRIA, JUN 03-07, 2007}
    }
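    
    The statistical backbone of the approach is tag co-occurrence. A minimal sketch computing a cosine-style similarity between tags over the resources they index (the tagging data are hypothetical; the paper's pipeline adds pre-processing and ontology grounding on top of this).
    
      from itertools import combinations
      from math import sqrt
      
      tagged = {   # hypothetical resource -> tag sets
          "url1": {"python", "programming", "code"},
          "url2": {"python", "code"},
          "url3": {"travel", "photos"},
          "url4": {"travel", "photos", "holiday"},
      }
      
      occ, co = {}, {}
      for tags in tagged.values():
          for t in tags:
              occ[t] = occ.get(t, 0) + 1
          for a, b in combinations(sorted(tags), 2):
              co[(a, b)] = co.get((a, b), 0) + 1
      
      def sim(a, b):
          """Cosine-style co-occurrence similarity between two tags."""
          return co.get(tuple(sorted((a, b))), 0) / sqrt(occ[a] * occ[b])
      
      print(sim("python", "code"), sim("python", "travel"))   # 1.0 0.0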
    
    Spinellis, D. Global analysis and transformations in preprocessed languages {2003} IEEE TRANSACTIONS ON SOFTWARE ENGINEERING
    Vol. {29}({11}), pp. {1019-1030} 
    article  
    Abstract: Tool support for refactoring code written in mainstream languages such as C and C++ is currently lacking due to the complexity introduced by the mandatory preprocessing phase that forms part of the C/C++ compilation cycle. The definition and use of macros complicates the notions of scope and of identifier boundaries. The concept of token equivalence classes can be used to bridge the gap between the language proper semantic analysis and the nonpreprocessed source code. The CScout toolchest uses the developed theory to analyze large interdependent program families. A Web-based interactive front end allows the precise realization of rename and remove refactorings on the original C source code. In addition, CScout can convert programs into a portable obfuscated format or store a complete and accurate representation of the code and its identifiers in a relational database.
    BibTeX:
    @article{Spinellis2003,
      author = {Spinellis, D},
      title = {Global analysis and transformations in preprocessed languages},
      journal = {IEEE TRANSACTIONS ON SOFTWARE ENGINEERING},
      year = {2003},
      volume = {29},
      number = {11},
      pages = {1019-1030}
    }
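    
    Token equivalence classes map naturally onto a union-find structure: token sites that preprocessing ties together are merged, and a rename must be applied to a whole class at once. A sketch with hypothetical source locations (not the CScout implementation):
    
      parent = {}
      
      def find(x):
          parent.setdefault(x, x)
          while parent[x] != x:
              parent[x] = parent[parent[x]]   # path halving
              x = parent[x]
          return x
      
      def union(x, y):
          parent[find(x)] = find(y)
      
      # Macro expansion forces these occurrences of `len` to stay consistent:
      union("file.c:10:len", "file.c:25:len")
      union("file.c:25:len", "macro.h:3:len")
      
      classes = {}
      for site in list(parent):
          classes.setdefault(find(site), []).append(site)
      print(classes)   # one class: renaming any site renames all three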
    
    Splendiani, A. RDFScape: Semantic Web meets systems biology {2008} BMC BIOINFORMATICS
    Vol. {9}({Suppl. 4}) 
    article DOI  
    Abstract: Background: The recent availability of high-throughput data in molecular biology has increased the need for a formal representation of this knowledge domain. New ontologies are being developed to formalize knowledge, e.g. about the functions of proteins. As the Semantic Web is being introduced into the Life Sciences, the basis for a distributed knowledge-base that can foster biological data analysis is laid. However, there still is a dichotomy, in tools and methodologies, between the use of ontologies in biological investigation, that is, in relation to experimental observations, and their use as a knowledge-base. Results: RDFScape is a plugin that has been developed to extend software oriented to biological analysis with support for reasoning on ontologies in the semantic web framework. We show with this plugin how the use of ontological knowledge in biological analysis can be extended through the use of inference. In particular, we present two examples relative to ontologies representing biological pathways: we demonstrate how these can be abstracted and visualized as interaction networks, and how reasoning on causal dependencies within elements of pathways can be implemented. Conclusions: The use of ontologies for the interpretation of high-throughput biological data can be improved through the use of inference. This allows the use of ontologies not only as annotations, but as a knowledge-base from which new information relevant for specific analysis can be derived.
    BibTeX:
    @article{Splendiani2008,
      author = {Splendiani, Andrea},
      title = {RDFScape: Semantic Web meets systems biology},
      journal = {BMC BIOINFORMATICS},
      year = {2008},
      volume = {9},
      number = {Suppl. 4},
      note = {7th International Workshop on Network Tools and Applications in Biology, Pisa, ITALY, JUN 12-15, 2007},
      doi = {{10.1186/1471-2105-9-S4-S6}}
    }
    
    Spyns, P., Oberle, D., Volz, R., Zheng, J., Jarrar, M., Sure, Y., Studer, R. & Meersman, R. OntoWeb - A semantic Web community portal {2002}
    Vol. {2569}PRACTICAL ASPECTS OF KNOWLEDGE MANAGEMENT, pp. {189-200} 
    inproceedings  
    Abstract: This paper describes a semantic portal through which knowledge can be gathered, stored, secured and accessed by members of a certain community. In particular, this portal takes into account companies and research institutes participating in the E.U. funded thematic network called OntoWeb. Ontology-based annotation of information is a prerequisite in order to offer the possibility of knowledge retrieval and extraction. The usage of well-defined semantics allows for the knowledge exchange between different OntoWeb community members. Thus, members are able to publish annotated information on the web, which is then crawled by a syndicator and stored in the portal's knowledge base. The backbone of the portal architecture consists of a knowledge base in which the ontology and the instances are stored and maintained. In addition, ontology-boosted query mechanisms and presentation facilities are provided.
    BibTeX:
    @inproceedings{Spyns2002,
      author = {Spyns, P and Oberle, D and Volz, R and Zheng, JJ and Jarrar, M and Sure, Y and Studer, R and Meersman, R},
      title = {OntoWeb - A semantic Web community portal},
      booktitle = {PRACTICAL ASPECTS OF KNOWLEDGE MANAGEMENT},
      year = {2002},
      volume = {2569},
      pages = {189-200},
      note = {4th International Conference on Practical Aspects of Knowledge Management, VIENNA, AUSTRIA, DEC 02-03, 2002}
    }
    
    Staab, S., Angele, J., Decker, S., Erdmann, M., Hotho, A., Maedche, A., Schnurr, H., Studer, R. & Sure, Y. Semantic community Web portals {2000} COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING
    Vol. {33}({1-6}), pp. {473-491} 
    article  
    Abstract: Community Web portals serve as portals for the information needs of particular communities on the Web. We here discuss how a comprehensive and flexible strategy for building and maintaining a high-value community Web portal has been conceived and implemented. The strategy includes collaborative information provisioning by the community members. It is based on an ontology as a semantic backbone for accessing information on the portal, for contributing information, as well as for developing and maintaining the portal. We have also implemented a set of ontology-based tools that have facilitated the construction of our show case - the community Web portal of the knowledge acquisition community. (C) 2000 Published by Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Staab2000,
      author = {Staab, S and Angele, J and Decker, S and Erdmann, M and Hotho, A and Maedche, A and Schnurr, HP and Studer, R and Sure, Y},
      title = {Semantic community Web portals},
      journal = {COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING},
      year = {2000},
      volume = {33},
      number = {1-6},
      pages = {473-491},
      note = {9th International World Wide Web Conference (WWW9), AMSTERDAM, NETHERLANDS, MAY 15-19, 2000}
    }
    
    Steyvers, M. & Tenenbaum, J. The large-scale structure of semantic networks: Statistical analyses and a model of semantic growth {2005} COGNITIVE SCIENCE
    Vol. {29}({1}), pp. {41-78} 
    article  
    Abstract: We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of connections follow power laws that indicate a scale-free pattern of connectivity, with most nodes having relatively few connections joined together through a small number of hubs with many connections. These regularities have also been found in certain other complex natural networks, such as the World Wide Web, but they are not consistent with many conventional models of semantic organization, based on inheritance hierarchies, arbitrarily structured networks, or high-dimensional vector spaces. We propose that these structures reflect the mechanisms by which semantic networks grow. We describe a simple model for semantic growth, in which each new word or concept is connected to an existing network by differentiating the connectivity pattern of an existing node. This model generates appropriate small-world statistics and power-law connectivity distributions, and it also suggests one possible mechanistic basis for the effects of learning history variables (age of acquisition, usage frequency) on behavioral performance in semantic processing tasks.
    BibTeX:
    @article{Steyvers2005,
      author = {Steyvers, M and Tenenbaum, JB},
      title = {The large-scale structure of semantic networks: Statistical analyses and a model of semantic growth},
      journal = {COGNITIVE SCIENCE},
      year = {2005},
      volume = {29},
      number = {1},
      pages = {41-78}
    }
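    
    A loose sketch of the growth-by-differentiation idea: each new node copies a few links from the neighbourhood of an existing node. This simplifies the paper's model (for instance, the host node is chosen uniformly here), so it illustrates the mechanism, not the exact published variant.
    
      import random
      
      def grow(n=500, m=3, seed=1):
          """Each new node connects to m nodes drawn from the neighbourhood
          of a randomly chosen existing node (simplified differentiation)."""
          rng = random.Random(seed)
          adj = {i: set(range(m + 1)) - {i} for i in range(m + 1)}  # seed clique
          for new in range(m + 1, n):
              host = rng.choice(list(adj))
              nbrs = list(adj[host]) + [host]
              adj[new] = set(rng.sample(nbrs, min(m, len(nbrs))))
              for t in adj[new]:
                  adj[t].add(new)
          return adj
      
      degrees = sorted((len(v) for v in grow().values()), reverse=True)
      print(degrees[:10])   # heavy tail: a few hubs, most nodes near degree m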
    
    Stoilos, G., Simou, N., Stamou, G. & Kollias, S. Uncertainty and the semantic web {2006} IEEE INTELLIGENT SYSTEMS
    Vol. {21}({5}), pp. {84-87} 
    article  
    BibTeX:
    @article{Stoilos2006,
      author = {Stoilos, Giorgos and Simou, Nikos and Stamou, Giorgos and Kollias, Stefanos},
      title = {Uncertainty and the semantic web},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2006},
      volume = {21},
      number = {5},
      pages = {84-87}
    }
    
    Straccia, U. Towards a fuzzy description logic for the semantic web (preliminary report) {2005}
    Vol. {3532}SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS, pp. {167-181} 
    inproceedings  
    Abstract: In this paper we present a fuzzy version of SHOIN(D), the corresponding Description Logic of the ontology description language OWL DL. We show that the representation and reasoning capabilities of fuzzy SHOIN(D) go clearly beyond classical SHOIN(D). We present its syntax and semantics. Interesting features are that concrete domains are fuzzy and entailment and subsumption relationships may hold to some degree in the unit interval [0, 1].
    BibTeX:
    @inproceedings{Straccia2005,
      author = {Straccia, U},
      title = {Towards a fuzzy description logic for the semantic web (preliminary report)},
      booktitle = {SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS},
      year = {2005},
      volume = {3532},
      pages = {167-181},
      note = {2nd European Semantic Web Conference, Heraklion, GREECE, MAY 29-JUN 01, 2005}
    }
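    
    Under the Zadeh-style semantics commonly used in fuzzy description logics, conjunction is min, disjunction is max, negation is 1-x, and assertions hold to a degree in [0, 1]. A tiny evaluation sketch (the membership degrees are hypothetical; full fuzzy SHOIN(D) reasoning is far richer):
    
      degree = {   # fuzzy assertions: (individual, concept) -> degree
          ("rose1", "Red"): 0.8,
          ("rose1", "Flower"): 1.0,
          ("rose1", "Cheap"): 0.3,
      }
      
      AND = min
      OR = max
      NOT = lambda x: 1.0 - x
      
      # Degree to which rose1 is a red flower that is not cheap:
      d = AND(degree[("rose1", "Red")],
              degree[("rose1", "Flower")],
              NOT(degree[("rose1", "Cheap")]))
      print(d)   # 0.7 - entailment can hold to a degree, as the abstract notes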
    
    Stroulia, E. & Hatch, M. An intelligent-agent architecture for flexible service integration on the web {2003} IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART C-APPLICATIONS AND REVIEWS
    Vol. {33}({4}), pp. {468-479} 
    article DOI  
    Abstract: A plethora of information and services is available on the World Wide Web; the challenge has now become to enable the interoperation of these services in the context of high-quality, integrated applications, providing personalized value-added services to the end user. TaMeX is a software framework that supports the development of intelligent multiagent applications, integrating services of existing web applications. The TaMeX applications rely on a set of specifications of the domain model, the integration workflow, their semantic constraints, the end-user profiles, and the services of the existing web applications; all these models are declaratively represented in the XML-based TaMeX integration-specification language. At run-time, the TaMeX agents use these models to flexibly interact with the end users, monitor and control the execution of the underlying applications' services and coordinate the information exchange among them, and to collaborate with each other to react to failures and effectively accomplish the desired user request. In this paper, we describe the TaMeX framework and we illustrate its capabilities with an integrated book-finding application as a case study.
    BibTeX:
    @article{Stroulia2003,
      author = {Stroulia, E and Hatch, MP},
      title = {An intelligent-agent architecture for flexible service integration on the web},
      journal = {IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART C-APPLICATIONS AND REVIEWS},
      year = {2003},
      volume = {33},
      number = {4},
      pages = {468-479},
      doi = {{10.1109/TSMCC.2003.818475}}
    }
    
    Stroulia, E. & Wang, Y. Structural and semantic matching for assessing web-service similarity {2005} INTERNATIONAL JOURNAL OF COOPERATIVE INFORMATION SYSTEMS
    Vol. {14}({4}), pp. {407-437} 
    article  
    Abstract: The web-services stack of standards is designed to support the reuse and interoperation of software components on the web. A critical step in the process of developing applications based on web services is service discovery, i.e. the identification of existing web services that can potentially be used in the context of a new web application. Discovery through catalog-style browsing (such as currently supported by web-service registries) is clearly insufficient. To support programmatic service discovery, we have developed a suite of methods that assess the similarity between two WSDL (Web Service Description Language) specifications based on the structure of their data types and operations and the semantics of their natural language descriptions and identifiers. Given only a textual description of the desired service, a semantic information-retrieval method can be used to identify and order the most relevant WSDL specifications based on the similarity of the element descriptions of the available specifications with the query. If a (potentially partial) specification of the desired service behavior is also available, this set of likely candidates can be further refined by a semantic structure-matching step, assessing the structural similarity of the desired vs. the retrieved services and the semantic similarity of their identifiers. In this paper, we describe and experimentally evaluate our suite of service-similarity assessment methods.
    BibTeX:
    @article{Stroulia2005,
      author = {Stroulia, E and Wang, YQ},
      title = {Structural and semantic matching for assessing web-service similarity},
      journal = {INTERNATIONAL JOURNAL OF COOPERATIVE INFORMATION SYSTEMS},
      year = {2005},
      volume = {14},
      number = {4},
      pages = {407-437}
    }
    
    Stumme, G. Off to new shores: conceptual knowledge discovery and processing {2003} INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES
    Vol. {59}({3}), pp. {287-325} 
    article DOI  
    Abstract: In the last years, the main orientation of formal concept analysis (FCA) has turned from mathematics towards computer science. This article provides a review of this new orientation and analyses why and how FCA and computer science attracted each other. It discusses FCA as a knowledge representation formalism using five knowledge representation principles provided by Davis et al. (1993). It then studies how and why mathematics-based researchers got attracted by computer science. We will argue for continuing this trend by integrating the two research areas FCA and ontology engineering. The second part of the article discusses three lines of research which witness the new orientation of FCA: FCA as a conceptual clustering technique and its application for supporting the merging of ontologies; the efficient computation of association rules and the structuring of the results; and the visualization and management of conceptual hierarchies and ontologies including its application in an email management system. (C) 2003 Elsevier Science Ltd. All rights reserved.
    BibTeX:
    @article{Stumme2003,
      author = {Stumme, G},
      title = {Off to new shores: conceptual knowledge discovery and processing},
      journal = {INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES},
      year = {2003},
      volume = {59},
      number = {3},
      pages = {287-325},
      doi = {{10.1016/S1071-5819(03)00044-2}}
    }
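    
    Formal concept analysis derives formal concepts (maximal extent/intent pairs) from a binary object-attribute context via the two derivation operators. A naive enumeration over a tiny hypothetical context, fine for illustration though not for efficiency:
    
      from itertools import chain, combinations
      
      ctx = {   # hypothetical formal context: object -> attributes
          "duck":  {"flies", "swims"},
          "eagle": {"flies", "hunts"},
          "shark": {"swims", "hunts"},
      }
      attrs = set().union(*ctx.values())
      
      def extent(B):   # objects having every attribute in B
          return {g for g, A in ctx.items() if B <= A}
      
      def intent(G):   # attributes shared by every object in G
          return set.intersection(*(ctx[g] for g in G)) if G else set(attrs)
      
      concepts = set()
      for B in chain.from_iterable(combinations(sorted(attrs), r)
                                   for r in range(len(attrs) + 1)):
          G = extent(set(B))
          concepts.add((frozenset(G), frozenset(intent(G))))
      for G, B in sorted(concepts, key=lambda c: -len(c[0])):
          print(sorted(G), "<->", sorted(B))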
    
    Subasic, P. & Huettner, A. Affect analysis of text using fuzzy semantic typing {2001} IEEE TRANSACTIONS ON FUZZY SYSTEMS
    Vol. {9}({4}), pp. {483-496} 
    article  
    Abstract: We propose a novel, convenient fusion of natural language processing and fuzzy logic techniques for analyzing the affect content in free text. Our main goals are fast analysis and visualization of affect content for decision making. The main linguistic resource for fuzzy semantic typing is the fuzzy-affect lexicon, from which other important resources - the fuzzy thesaurus and affect category groups - are generated. Free text is tagged with affect categories from the lexicon and the affect categories' centralities and intensities are combined using techniques from fuzzy logic to produce affect sets - fuzzy sets representing the affect quality of a document. We show different aspects of affect analysis using news content and movie reviews. Our experiments show a good correspondence between affect sets and human judgments of affect content. We ascribe this to the representation of ambiguity in our fuzzy affect lexicon and the ability of fuzzy logic to deal successfully with the ambiguity of words in a natural language. Planned extensions of the system include personalized profiles for Web-based content dissemination, fuzzy retrieval, clustering, and classification.
    BibTeX:
    @article{Subasic2001,
      author = {Subasic, P and Huettner, A},
      title = {Affect analysis of text using fuzzy semantic typing},
      journal = {IEEE TRANSACTIONS ON FUZZY SYSTEMS},
      year = {2001},
      volume = {9},
      number = {4},
      pages = {483-496}
    }
    
    Sugumaran, V. & Storey, V.C. The role of domain ontologies in database design: An ontology management and conceptual modeling environment {2006} ACM TRANSACTIONS ON DATABASE SYSTEMS
    Vol. {31}({3}), pp. {1064-1094} 
    article  
    Abstract: Database design is difficult because it involves a database designer understanding an application and translating the design requirements into a conceptual model. However, the designer may have little or no knowledge about the application or task for which the database is being designed. This research presents a methodology for supporting database design creation and evaluation that makes use of domain-specific knowledge about an application stored in the form of domain ontologies. The methodology is implemented in a prototype system, the Ontology Management and Database Design Environment. Initial testing of the prototype illustrates that the incorporation and use of ontologies is effective in creating entity-relationship models.
    BibTeX:
    @article{Sugumaran2006,
      author = {Sugumaran, Vijayan and Storey, Veda C.},
      title = {The role of domain ontologies in database design: An ontology management and conceptual modeling environment},
      journal = {ACM TRANSACTIONS ON DATABASE SYSTEMS},
      year = {2006},
      volume = {31},
      number = {3},
      pages = {1064-1094},
      note = {International Conference on Information Systems (ICIS 2003), Seattle, WA, DEC 15, 2003}
    }
    
    Sure, Y., Erdmann, M., Angele, J., Staab, S., Studer, R. & Wenke, D. OntoEdit: Collaborative ontology development for the Semantic Web {2002}
    Vol. {2342}SEMANTIC WEB - ISWC 2002, pp. {221-235} 
    inproceedings  
    Abstract: Ontologies now play an important role for enabling the semantic web. They provide a source of precisely defined terms e.g. for knowledge-intensive applications. The terms are used for concise communication across people and applications. Typically the development of ontologies involves collaborative efforts of multiple persons. OntoEdit is an ontology editor that integrates numerous aspects of ontology engineering. This paper focuses on collaborative development of ontologies with OntoEdit which is guided by a comprehensive methodology.
    BibTeX:
    @inproceedings{Sure2002,
      author = {Sure, Y and Erdmann, M and Angele, J and Staab, S and Studer, R and Wenke, D},
      title = {OntoEdit: Collaborative ontology development for the Semantic Web},
      booktitle = {SEMANTIC WEB - ISWC 2002},
      year = {2002},
      volume = {2342},
      pages = {221-235},
      note = {1st International Semantic Web Conference (ISWC), SARDINIA, ITALY, JUN 09-12, 2002}
    }
    
    Sycara, K., Paolucci, M., Soudry, J. & Srinivasan, N. Dynamic discovery and coordination of agent-based Semantic Web Services {2004} IEEE INTERNET COMPUTING
    Vol. {8}({3}), pp. {66-73} 
    article  
    BibTeX:
    @article{Sycara2004,
      author = {Sycara, K and Paolucci, M and Soudry, J and Srinivasan, N},
      title = {Dynamic discovery and coordination of agent-based Semantic Web Services},
      journal = {IEEE INTERNET COMPUTING},
      year = {2004},
      volume = {8},
      number = {3},
      pages = {66-73}
    }
    
    Tamma, V., Phelps, S., Dickinson, I. & Wooldridge, M. Ontologies for supporting negotiation in e-commerce {2005} ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
    Vol. {18}({2}), pp. {223-236} 
    article DOI  
    Abstract: In this paper we present our experience in applying Semantic Web technology to automated negotiation. The result is a novel approach to automated negotiation that is particularly suited to open environments such as the Internet. In this approach, agents can negotiate in any type of marketplace regardless of the negotiation mechanism in use. In order to support a wide variety of negotiation mechanisms, protocols are not hard-coded in the agents participating in negotiations, but are expressed in terms of a shared ontology, thus making this approach particularly suitable for applications such as electronic commerce. The paper describes a novel approach to negotiation, where the negotiation protocol does not need to be hard-coded in agents, but is represented by an ontology: an explicit and declarative representation of the negotiation protocol. In this approach, agents need very little prior knowledge of the protocol, and acquire this knowledge directly from the marketplace. The ontology is also used to tune agents' strategies to the specific protocol used. The paper presents this novel approach and describes the experience gained in implementing the ontology and the learning mechanism to tune the strategy. (c) 2004 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Tamma2005,
      author = {Tamma, V and Phelps, S and Dickinson, I and Wooldridge, M},
      title = {Ontologies for supporting negotiation in e-commerce},
      journal = {ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE},
      year = {2005},
      volume = {18},
      number = {2},
      pages = {223-236},
      doi = {{10.1016/j.engappai.2004.11.011}}
    }
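    
    The key design move here is protocol-as-data: instead of hard-coding a mechanism, the agent downloads a declarative description and interprets it. A toy sketch of that idea only; the paper uses a proper shared ontology, for which this dict is a hypothetical stand-in.
    
      # Hypothetical declarative protocol an agent could fetch from a marketplace.
      english_auction = {
          "roles": ["auctioneer", "bidder"],
          "start": "open",
          "states": {                       # state -> {action: next state}
              "open":   {"bid": "open", "close": "closed"},
              "closed": {},
          },
      }
      
      def legal_actions(protocol, state):
          return sorted(protocol["states"][state])
      
      state = english_auction["start"]
      print(legal_actions(english_auction, state))            # ['bid', 'close']
      state = english_auction["states"][state]["close"]
      print(legal_actions(english_auction, state))            # []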
    
    Taylor, K., Gledhill, R., Essex, J., Frey, J., Harris, S. & De Roure, D. Bringing chemical data onto the Semantic Web {2006} JOURNAL OF CHEMICAL INFORMATION AND MODELING
    Vol. {46}({3}), pp. {939-952} 
    article DOI  
    Abstract: Present chemical data storage methodologies place many restrictions on the use of the stored data. The absence of sufficient high-quality metadata prevents intelligent computer access to the data without human intervention. This creates barriers to the automation of data mining in activities such as quantitative structure-activity relationship modelling. The application of Semantic Web technologies to chemical data is shown to reduce these limitations. The use of unique identifiers and relationships (represented as uniform resource identifiers, URIs, and the resource description framework, RDF) held in a triplestore provides for greater detail and flexibility in the sharing and storage of molecular structures and properties.
    BibTeX:
    @article{Taylor2006,
      author = {Taylor, KR and Gledhill, RJ and Essex, JW and Frey, JG and Harris, SW and De Roure, DC},
      title = {Bringing chemical data onto the Semantic Web},
      journal = {JOURNAL OF CHEMICAL INFORMATION AND MODELING},
      year = {2006},
      volume = {46},
      number = {3},
      pages = {939-952},
      doi = {{10.1021/ci050378m}}
    }
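    
    The triplestore approach can be sketched with the rdflib library; the URIs and property names below are hypothetical placeholders, not the paper's actual vocabulary.
    
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import XSD
      
      CHEM = Namespace("http://example.org/chem#")   # hypothetical vocabulary
      g = Graph()
      mol = CHEM.benzene
      g.add((mol, CHEM.formula, Literal("C6H6")))
      g.add((mol, CHEM.meltingPointK, Literal(278.6, datatype=XSD.double)))
      
      # Consumers query by property; no knowledge of any file layout is needed.
      for _, _, value in g.triples((mol, CHEM.meltingPointK, None)):
          print(f"{mol} melts at {value} K")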
    
    Tempich, C., Pinto, H., Sure, Y. & Staab, S. An argumentation ontology for DIstributed, loosely-controlled and evolvInG engineering processes of oNTologies (DILIGENT) {2005}
    Vol. {3532}SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS, pp. {241-256} 
    inproceedings  
    Abstract: A prerequisite to the success of the Semantic Web is shared ontologies which enable the seamless exchange of information between different parties. Engineering a shared ontology is a social process. Since its participants have slightly different views on the world, a harmonization effort requires discussing the resulting ontology. During the discussion, participants exchange arguments which may support or object to certain ontology engineering decisions. Experience from software engineering shows that tracking exchanged arguments can help users at a later stage to better understand the assumptions underlying the design decisions. Furthermore, as the constructed ontology becomes larger, ontology engineers might argue in a contradictory way without knowing so. In this paper we present an ontology which formalizes the main concepts which are used in a DILIGENT ontology engineering discussion and thus enables tracking arguments and allows for inconsistency detection. We provide an example which is drawn from experiments in an ontology engineering process to construct an ontology for knowledge management in our institute. Having constructed the ontology we also show how automated ontology learning algorithms could be taken as participants in the OE discussion. Hence, we enable the integration of manual, semi-automatic and automatic ontology creation approaches.
    BibTeX:
    @inproceedings{Tempich2005,
      author = {Tempich, C and Pinto, HS and Sure, Y and Staab, S},
      title = {An argumentation ontology for DIstributed, loosely-controlled and evolvInG engineering processes of oNTologies (DILIGENT)},
      booktitle = {SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS},
      year = {2005},
      volume = {3532},
      pages = {241-256},
      note = {2nd European Semantic Web Conference, Heraklion, GREECE, MAY 29-JUN 01, 2005}
    }
    
    Theobald, A. & Weikum, G. The index-based XXL search engine for querying XML data with relevance ranking {2002}
    Vol. {2287}ADVANCES IN DATABASE TECHNOLOGY - EDBT 2002, pp. {477-495} 
    inproceedings  
    Abstract: Query languages for XML such as XPath or XQuery support Boolean retrieval: a query result is a (possibly restructured) subset of XML elements or entire documents that satisfy the search conditions of the query. This search paradigm works for highly schematic XML data collections such as electronic catalogs. However, for searching information in open environments such as the Web or intranets of large corporations, ranked retrieval is more appropriate: a query result is a ranked list of XML elements in descending order of (estimated) relevance. Web search engines, which are based on the ranked retrieval paradigm, do, however, not consider the additional information and rich annotations provided by the structure of XML documents and their element names. This paper presents the XXL search engine that supports relevance ranking on XML data. XXL is particularly geared for path queries with wildcards that can span multiple XML collections and contain both exact-match as well as semantic-similarity search conditions. In addition, ontological information and suitable index structures are used to improve the search efficiency and effectiveness. XXL is fully implemented as a suite of Java servlets. Experiments with a variety of structurally diverse XML data demonstrate the efficiency of the XXL search engine and underline its effectiveness for ranked retrieval.
    BibTeX:
    @inproceedings{Theobald2002,
      author = {Theobald, A and Weikum, G},
      title = {The index-based XXL search engine for querying XML data with relevance ranking},
      booktitle = {ADVANCES IN DATABASE TECHNOLOGY - EDBT 2002},
      year = {2002},
      volume = {2287},
      pages = {477-495},
      note = {8th International Conference on Extending Database Technology, PRAGUE, CZECH REPUBLIC, MAR 25-27, 2002}
    }
    
    Tho, Q., Hui, S., Fong, A. & Cao, T. Automatic fuzzy ontology generation for Semantic Web {2006} IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
    Vol. {18}({6}), pp. {842-856} 
    article  
    Abstract: Ontology is an effective conceptualism commonly used for the Semantic Web. Fuzzy logic can be incorporated into ontologies to represent uncertainty information. Typically, a fuzzy ontology is generated from a predefined concept hierarchy. However, constructing a concept hierarchy for a certain domain can be a difficult and tedious task. To tackle this problem, this paper proposes FOGA (Fuzzy Ontology Generation frAmework) for automatic generation of fuzzy ontology on uncertainty information. The FOGA framework comprises the following components: Fuzzy Formal Concept Analysis, Concept Hierarchy Generation, and Fuzzy Ontology Generation. We also discuss approximate reasoning for incremental enrichment of the ontology with new upcoming data. Finally, a fuzzy-based technique for integrating other attributes of a database into the ontology is proposed.
    BibTeX:
    @article{Tho2006,
      author = {Tho, QT and Hui, SC and Fong, ACM and Cao, TH},
      title = {Automatic fuzzy ontology generation for Semantic Web},
      journal = {IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING},
      year = {2006},
      volume = {18},
      number = {6},
      pages = {842-856}
    }
    
    Thompson, C., Pazandak, P. & Tennant, H. Talk to your semantic Web {2005} IEEE INTERNET COMPUTING
    Vol. {9}({6}), pp. {75-78} 
    article  
    BibTeX:
    @article{Thompson2005,
      author = {Thompson, CW and Pazandak, P and Tennant, HR},
      title = {Talk to your semantic Web},
      journal = {IEEE INTERNET COMPUTING},
      year = {2005},
      volume = {9},
      number = {6},
      pages = {75-78}
    }
    
    Thorisson, G.A., Muilu, J. & Brookes, A.J. Genotype-phenotype databases: challenges and solutions for the post-genomic era {2009} NATURE REVIEWS GENETICS
    Vol. {10}({1}), pp. {9-18} 
    article DOI  
    Abstract: The flow of research data concerning the genetic basis of health and disease is rapidly increasing in speed and complexity. In response, many projects are seeking to ensure that there are appropriate informatics tools, systems and databases available to manage and exploit this flood of information. Previous solutions, such as central databases, journal-based publication and manually intensive data curation, are now being enhanced with new systems for federated databases, database publication, and more automated management of data flows and quality control. Along with emerging technologies that enhance connectivity and data retrieval, these advances should help to create a powerful knowledge environment for genotype-phenotype information.
    BibTeX:
    @article{Thorisson2009,
      author = {Thorisson, Gudmundur A. and Muilu, Juha and Brookes, Anthony J.},
      title = {Genotype-phenotype databases: challenges and solutions for the post-genomic era},
      journal = {NATURE REVIEWS GENETICS},
      year = {2009},
      volume = {10},
      number = {1},
      pages = {9-18},
      doi = {{10.1038/nrg2483}}
    }
    
    Tijerino, Y., Embley, D., Lonsdale, D., Ding, Y. & Nagy, G. Towards ontology generation from tables {2005} WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS
    Vol. {8}({3}), pp. {261-285} 
    article DOI  
    Abstract: At the heart of today's information-explosion problems are issues involving semantics, mutual understanding, concept matching, and interoperability. Ontologies and the Semantic Web are offered as a potential solution, but creating ontologies for real-world knowledge is nontrivial. If we could automate the process, we could significantly improve our chances of making the Semantic Web a reality. While understanding natural language is difficult, tables and other structured information make it easier to interpret new items and relations. In this paper we introduce an approach to generating ontologies based on table analysis. We thus call our approach TANGO (Table ANalysis for Generating Ontologies). Based on conceptual modeling extraction techniques, TANGO attempts to (i) understand a table's structure and conceptual content; (ii) discover the constraints that hold between concepts extracted from the table; (iii) match the recognized concepts with ones from a more general specification of related concepts; and (iv) merge the resulting structure with other similar knowledge representations. TANGO is thus a formalized method of processing the format and content of tables that can serve to incrementally build a relevant reusable conceptual ontology.
    BibTeX:
    @article{Tijerino2005,
      author = {Tijerino, YA and Embley, DW and Lonsdale, DW and Ding, YH and Nagy, G},
      title = {Towards ontology generation from tables},
      journal = {WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS},
      year = {2005},
      volume = {8},
      number = {3},
      pages = {261-285},
      note = {4th International Conference on Web Information Systems Engineering (WISE 2003), ROME, ITALY, DEC 10-12, 2003},
      doi = {{10.1007/s11280-005-0360-8}}
    }
    
    Toms, E. & Taves, A. Measuring user perceptions of Web site reputation {2004} INFORMATION PROCESSING & MANAGEMENT
    Vol. {40}({2}), pp. {291-317} 
    article DOI  
    Abstract: In this study, we compare a search tool, TOPIC, with three other widely used tools that retrieve information from the Web: AltaVista, Google, and Lycos. These tools use different techniques for outputting and ranking Web sites: external link structure (TOPIC and Google) and semantic content analysis (AltaVista and Lycos). TOPIC purports to output, and highly rank within its hit list, reputable Web sites for searched topics. In this study, 80 participants reviewed the output (i.e., highly ranked sites) from each tool and assessed the quality of retrieved sites. The 4800 individual assessments of 240 sites that represent 12 topics indicated that Google tends to identify and highly rank significantly more reputable Web sites than TOPIC, which, in turn, outputs more than AltaVista and Lycos, but this was not consistent from topic to topic. Metrics derived from reputation research were used in the assessment and a factor analysis was employed to identify a key factor, which we call `repute'. The results of this research include insight into the factors that Web users consider in formulating perceptions of Web site reputation, and insight into which search tools are outputting reputable sites for Web users. Our findings, we believe, have implications for Web users and suggest the need for future research to assess the relationship between Web page characteristics and their perceived reputation. (C) 2003 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Toms2004,
      author = {Toms, EG and Taves, AR},
      title = {Measuring user perceptions of Web site reputation},
      journal = {INFORMATION PROCESSING & MANAGEMENT},
      year = {2004},
      volume = {40},
      number = {2},
      pages = {291-317},
      doi = {{10.1016/j.ipm.2003.08.007}}
    }
    
    Tonti, G., Bradshaw, J., Jeffers, R., Montanari, R., Suri, N. & Uszok, A. Semantic Web languages for policy representation and reasoning: A comparison of KAoS, Rei, and Ponder {2003}
    Vol. {2870}, SEMANTIC WEB - ISWC 2003, pp. {419-437}
    inproceedings  
    Abstract: Policies are being increasingly used for automated system management and controlling the behavior of complex systems. The use of policies allows administrators to modify system behavior without changing source code or requiring the consent or cooperation of the components being governed. Early approaches to policy representation have been restrictive in many ways. However, semantically rich policy representations can reduce human error, simplify policy analysis, reduce policy conflicts, and facilitate interoperability. In this paper, we compare three approaches to policy representation, reasoning, and enforcement. We highlight similarities and differences between Ponder, KAoS, and Rei, and sketch out some general criteria and properties for more adequate approaches to policy semantics in the future.
    BibTeX:
    @inproceedings{Tonti2003,
      author = {Tonti, G and Bradshaw, JM and Jeffers, R and Montanari, R and Suri, N and Uszok, A},
      title = {Semantic Web languages for policy representation and reasoning: A comparison of KAoS, Rei, and Ponder},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {419-437},
      note = {2nd International Semantic Web Conference, SANIBEL, FLORIDA, OCT 20-23, 2003}
    }
    
    Torralba, A., Fergus, R. & Freeman, W.T. 80 million tiny images: A large data set for nonparametric object and scene recognition {2008} IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
    Vol. {30}({11}), pp. {1958-1970} 
    article DOI  
    Abstract: With the advent of the Internet, billions of images are now freely available online and constitute a dense sampling of the visual world. Using a variety of nonparametric methods, we explore this world with the aid of a large data set of 79,302,017 images collected from the Web. Motivated by psychophysical results showing the remarkable tolerance of the human visual system to degradations in image resolution, the images in the data set are stored as 32 x 32 color images. Each image is loosely labeled with one of the 75,062 nonabstract nouns in English, as listed in the WordNet lexical database. Hence, the image database gives comprehensive coverage of all object categories and scenes. The semantic information from WordNet can be used in conjunction with the nearest neighbor methods to perform object classification over a range of semantic levels, minimizing the effects of labeling noise. For certain classes that are particularly prevalent in the data set, such as people, we are able to demonstrate a recognition performance comparable to class-specific Viola-Jones style detectors.
    (A minimal code sketch follows this entry's BibTeX record.)
    BibTeX:
    @article{Torralba2008,
      author = {Torralba, Antonio and Fergus, Rob and Freeman, William T.},
      title = {80 million tiny images: A large data set for nonparametric object and scene recognition},
      journal = {IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE},
      year = {2008},
      volume = {30},
      number = {11},
      pages = {1958-1970},
      doi = {{10.1109/TPAMI.2008.128}}
    }
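
    A minimal sketch of the nonparametric idea in the Torralba et al. entry above, assuming synthetic 32 x 32 stand-in images and invented class prototypes in place of the paper's 79-million-image collection and WordNet label propagation:

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical stand-in for the tiny-image set: each class is a mean
      # colour plus per-pixel noise, stored as a 32 x 32 x 3 image in [0, 1].
      def make_image(mean_rgb):
          return np.clip(mean_rgb + rng.normal(0.0, 0.1, (32, 32, 3)), 0.0, 1.0)

      prototypes = {"sky": (0.4, 0.6, 0.9), "grass": (0.2, 0.7, 0.2), "road": (0.4, 0.4, 0.4)}
      dataset = [(label, make_image(np.array(rgb)))
                 for label, rgb in prototypes.items() for _ in range(50)]

      X = np.stack([img.ravel() for _, img in dataset])   # 150 x 3072 pixel vectors
      y = np.array([label for label, _ in dataset])

      def knn_label(img, k=7):
          # plain L2 distance in raw pixel space, then a majority vote
          d = np.linalg.norm(X - img.ravel(), axis=1)
          nearest = y[np.argsort(d)[:k]]
          labels, counts = np.unique(nearest, return_counts=True)
          return labels[np.argmax(counts)]

      print(knn_label(make_image(np.array(prototypes["grass"]))))   # expected: grass

    The paper's point is that with a dense enough sampling of the visual world even this crude distance supports recognition; the sketch only shows the lookup mechanics.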
    
    Trastour, D., Bartolini, C. & Preist, C. Semantic Web support for the business-to-business e-commerce pre-contractual lifecycle {2003} COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING
    Vol. {42}({5}), pp. {661-673} 
    article DOI  
    Abstract: If an e-services approach to electronic commerce is to become widespread, standardisation of ontologies, message content and message protocols will be necessary. In this paper, we present a lifecycle of a business-to-business (B2B) e-commerce interaction, and show how the Semantic Web can support a service description language that can be used throughout this lifecycle. DAML+OIL is a sufficiently expressive and flexible service description language to be used not only in advertisements, but also in matchmaking queries, negotiation proposals and agreements. We also identify which operations must be carried out on this description language if the B2B lifecycle is to be fully supported. We do not propose specific standard protocols, but instead argue that our operators are able to support a wide variety of interaction protocols, and so will be fundamental irrespective of which protocols are finally adopted. (C) 2003 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Trastour2003,
      author = {Trastour, D and Bartolini, C and Preist, C},
      title = {Semantic Web support for the business-to-business e-commerce pre-contractual lifecycle},
      journal = {COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING},
      year = {2003},
      volume = {42},
      number = {5},
      pages = {661-673},
      doi = {{10.1016/S1389-1286(03)00229-9}}
    }
    
    Traverso, P. & Pistore, M. Automated composition of semantic web services into executable processes {2004}
    Vol. {3298}, SEMANTIC WEB - ISWC 2004, PROCEEDINGS, pp. {380-394}
    inproceedings  
    Abstract: Different planning techniques have been applied to the problem of automated composition of web services. However, in realistic cases, this planning problem is far from trivial: the planner needs to deal with the nondeterministic behavior of web services, the partial observability of their internal status, and with complex goals expressing temporal conditions and preference requirements. We propose a planning technique for the automated composition of web services described in OWL-S process models, which can deal effectively with nondeterminism, partial observability, and complex goals. The technique allows for the synthesis of plans that encode compositions of web services with the usual programming constructs, like conditionals and iterations. The generated plans can thus be translated into executable processes, e.g., BPEL4WS programs. We implement our solution in a planner and perform preliminary experimental evaluations that show the potential of our approach and the performance gain of automating the composition at the semantic level rather than at the level of executable processes.
    BibTeX:
    @inproceedings{Traverso2004,
      author = {Traverso, P and Pistore, M},
      title = {Automated composition of semantic web services into executable processes},
      booktitle = {SEMANTIC WEB - ISWC 2004, PROCEEDINGS},
      year = {2004},
      volume = {3298},
      pages = {380-394},
      note = {3rd International Semantic Web Conference, Hiroshima, JAPAN, NOV 07-11, 2004}
    }
    
    Troncy, R. Integrating structure and semantics into audio-visual documents {2003}
    Vol. {2870}, SEMANTIC WEB - ISWC 2003, pp. {566-581}
    inproceedings  
    Abstract: Describing audio-visual documents amounts to considering documentary aspects (the structure) as well as conceptual aspects (the content). In this paper, we propose an architecture that formally describes the content of videos and constrains the structure of their descriptions. This work is based on the languages and technologies underlying the Semantic Web, in particular ontologies. Therefore, we propose to combine emerging Web standards, namely MPEG-7/XML Schema for the structural part and OWL/RDF for the knowledge part of the description. Finally, our work offers reasoning support on both aspects when querying a database of videos.
    BibTeX:
    @inproceedings{Troncy2003,
      author = {Troncy, R},
      title = {Integrating structure and semantics into audio-visual documents},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {566-581},
      note = {2nd International Semantic Web Conference, SANIBEL, FLORIDA, OCT 20-23, 2003}
    }
    
    Turney, P. & Littman, M. Measuring praise and criticism: Inference of semantic orientation from association {2003} ACM TRANSACTIONS ON INFORMATION SYSTEMS
    Vol. {21}({4}), pp. {315-346} 
    article  
    Abstract: The evaluative character of a word is called its semantic orientation. Positive semantic orientation indicates praise (e.g., ``honest'', ``intrepid'') and negative semantic orientation indicates criticism (e.g., ``disturbing'', ``superfluous''). Semantic orientation varies in both direction (positive or negative) and degree (mild to strong). An automated system for measuring semantic orientation would have application in text classification, text filtering, tracking opinions in online discussions, analysis of survey responses, and automated chat systems (chatbots). This article introduces a method for inferring the semantic orientation of a word from its statistical association with a set of positive and negative paradigm words. Two instances of this approach are evaluated, based on two different statistical measures of word association: pointwise mutual information (PMI) and latent semantic analysis (LSA). The method is experimentally tested with 3,596 words (including adjectives, adverbs, nouns, and verbs) that have been manually labeled positive (1,614 words) and negative (1,982 words). The method attains an accuracy of 82.8% on the full test set, but the accuracy rises above 95% when the algorithm is allowed to abstain from classifying mild words.
    (A minimal code sketch follows this entry's BibTeX record.)
    BibTeX:
    @article{Turney2003,
      author = {Turney, PD and Littman, ML},
      title = {Measuring praise and criticism: Inference of semantic orientation from association},
      journal = {ACM TRANSACTIONS ON INFORMATION SYSTEMS},
      year = {2003},
      volume = {21},
      number = {4},
      pages = {315-346}
    }
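
    The SO-PMI scheme in the Turney & Littman entry above reduces to a few lines. A sketch assuming a toy in-memory corpus and two-word paradigm lists (the paper uses search-engine hit counts and seven paradigm words per pole); counts are smoothed so unseen pairs stay finite:

      import math

      POSITIVE = ["good", "excellent"]   # illustrative paradigm words
      NEGATIVE = ["bad", "poor"]

      corpus = [
          "the honest clerk gave excellent and good advice",
          "a disturbing and bad outcome of poor planning",
          "an intrepid and excellent explorer with good instincts",
          "superfluous detail made the poor report disturbing",
      ]
      docs = [set(text.split()) for text in corpus]
      N = len(docs)

      def count(*words):
          # number of documents containing all given words, smoothed by 0.5
          return sum(all(w in d for w in words) for d in docs) + 0.5

      def pmi(w1, w2):
          # pointwise mutual information estimated from document co-occurrence
          return math.log(count(w1, w2) * N / (count(w1) * count(w2)))

      def semantic_orientation(word):
          # association with the positive pole minus association with the negative pole
          return (sum(pmi(word, p) for p in POSITIVE)
                  - sum(pmi(word, n) for n in NEGATIVE))

      for w in ["honest", "disturbing"]:
          print(w, round(semantic_orientation(w), 3))   # positive vs. negative SO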
    
    Uren, V., Cimiano, P., Iria, J., Handschuh, S., Vargas-Vera, M., Motta, E. & Ciravegna, F. Semantic annotation for knowledge management: Requirements and a survey of the state of the art {2006} JOURNAL OF WEB SEMANTICS
    Vol. {4}({1}), pp. {14-28} 
    article DOI  
    Abstract: While much of a company's knowledge can be found in text repositories, current content management systems have limited capabilities for structuring and interpreting documents. In the emerging Semantic Web, search, interpretation and aggregation can be addressed by ontology-based semantic mark-up. In this paper, we examine semantic annotation, identify a number of requirements, and review the current generation of semantic annotation systems. This analysis shows that, while there is still some way to go before semantic annotation tools will be able to fully address all knowledge management needs, research in the area is active and making good progress. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Uren2006,
      author = {Uren, Victoria and Cimiano, Philipp and Iria, Jose and Handschuh, Siegfried and Vargas-Vera, Maria and Motta, Enrico and Ciravegna, Fabio},
      title = {Semantic annotation for knowledge management: Requirements and a survey of the state of the art},
      journal = {JOURNAL OF WEB SEMANTICS},
      year = {2006},
      volume = {4},
      number = {1},
      pages = {14-28},
      doi = {{10.1016/j.websem.2005.10.002}}
    }
    
    Uschold, M. Where are the semantics in the semantic web? {2003} AI MAGAZINE
    Vol. {24}({3}), pp. {25-36} 
    article  
    Abstract: The most widely accepted defining feature of the semantic web is machine-usable content. By this definition, the semantic web is already manifest in shopping agents that automatically access and use web content to find the lowest air fares or book prices. However, where are the semantics? Most people regard the semantic web as a vision, not a reality, so shopping agents should not ``count.'' To use web content, machines need to know what to do when they encounter it, which, in turn, requires the machine to know what the content means (that is, its semantics). The challenge of developing the semantic web is how to put this knowledge into the machine. The manner in which it is done is at the heart of the confusion about the semantic web. The goal of this article is to clear up some of this confusion. I explain that shopping agents work in the complete absence of any explicit account of the semantics of web content because the meaning of the web content that the agents are expected to encounter can be determined by the human programmers who hardwire it into the web application software. I therefore regard shopping agents as a degenerate case of the semantic web. I note various shortcomings of this approach. I conclude by presenting some ideas about how the semantic web will likely evolve.
    BibTeX:
    @article{Uschold2003,
      author = {Uschold, M},
      title = {Where are the semantics in the semantic web?},
      journal = {AI MAGAZINE},
      year = {2003},
      volume = {24},
      number = {3},
      pages = {25-36}
    }
    
    Uszok, A., Bradshaw, J., Johnson, M., Jeffers, R., Tate, A., Dalton, J. & Aitken, S. KAoS policy management for semantic Web services {2004} IEEE INTELLIGENT SYSTEMS
    Vol. {19}({4}), pp. {32-41} 
    article  
    BibTeX:
    @article{Uszok2004,
      author = {Uszok, A and Bradshaw, JM and Johnson, M and Jeffers, R and Tate, A and Dalton, J and Aitken, S},
      title = {KAoS policy management for semantic Web services},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2004},
      volume = {19},
      number = {4},
      pages = {32-41}
    }
    
    Vallet, D., Fernandez, M. & Castells, P. An ontology-based information retrieval model {2005}
    Vol. {3532}, SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS, pp. {455-470}
    inproceedings  
    Abstract: Semantic search has been one of the motivations of the Semantic Web since it was envisioned. We propose a model for the exploitation of ontology-based knowledge bases (KBs) to improve search over large document repositories. Our approach includes an ontology-based scheme for the semi-automatic annotation of documents, and a retrieval system. The retrieval model is based on an adaptation of the classic vector-space model, including an annotation weighting algorithm, and a ranking algorithm. Semantic search is combined with keyword-based search to achieve tolerance to KB incompleteness. Our proposal is illustrated with sample experiments showing improvements with respect to keyword-based search, and providing ground for further research and discussion.
    (A minimal code sketch follows this entry's BibTeX record.)
    BibTeX:
    @inproceedings{Vallet2005,
      author = {Vallet, D and Fernandez, M and Castells, P},
      title = {An ontology-based information retrieval model},
      booktitle = {SEMANTIC WEB: RESEARCH AND APPLICATIONS, PROCEEDINGS},
      year = {2005},
      volume = {3532},
      pages = {455-470},
      note = {2nd European Semantic Web Conference, Heraklion, GREECE, MAY 29-JUN 01, 2005}
    }
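
    A sketch of the adapted vector-space model from the Vallet et al. entry above, assuming hand-written concept annotations (in the paper these come from the semi-automatic annotation step) and an illustrative TF-IDF-style weighting; the final score mixes semantic and keyword similarity so an incomplete KB degrades gracefully toward keyword search:

      import math
      from collections import Counter

      # document -> ontology concepts annotated on it (invented for illustration)
      annotations = {
          "d1": ["Hotel", "Beach", "Hotel"],
          "d2": ["Museum", "Painting"],
          "d3": ["Hotel", "Museum"],
      }
      N = len(annotations)

      def concept_weights(concepts):
          # TF-IDF-style weighting of concept annotations
          tf = Counter(concepts)
          weights = {}
          for c, f in tf.items():
              df = max(1, sum(c in a for a in annotations.values()))
              weights[c] = (f / len(concepts)) * math.log(N / df)
          return weights

      doc_vecs = {d: concept_weights(a) for d, a in annotations.items()}

      def cosine(u, v):
          dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in set(u) | set(v))
          nu = math.sqrt(sum(x * x for x in u.values()))
          nv = math.sqrt(sum(x * x for x in v.values()))
          return dot / (nu * nv) if nu and nv else 0.0

      def rank(query_concepts, keyword_scores=None, alpha=0.7):
          # combine semantic similarity with a keyword-based score
          q = concept_weights(query_concepts)
          kw = keyword_scores or {}
          scores = {d: alpha * cosine(q, v) + (1 - alpha) * kw.get(d, 0.0)
                    for d, v in doc_vecs.items()}
          return sorted(scores.items(), key=lambda kv: -kv[1])

      print(rank(["Hotel"], keyword_scores={"d2": 0.3}))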
    
    Vargas-Vera, M., Motta, E., Domingue, J., Lanzoni, M., Stutt, A. & Ciravegna, F. MnM: Ontology driven semi-automatic and automatic support for semantic markup {2002}
    Vol. {2473}, KNOWLEDGE ENGINEERING AND KNOWLEDGE MANAGEMENT, PROCEEDINGS - ONTOLOGIES AND THE SEMANTIC WEB, pp. {379-391}
    inproceedings  
    Abstract: An important precondition for realizing the goal of a semantic web is the ability to annotate web resources with semantic information. In order to carry out this task, users need appropriate representation languages, ontologies, and support tools. In this paper we present MnM, an annotation tool which provides both automated and semi-automated support for annotating web pages with semantic contents. MnM integrates a web browser with an ontology editor and provides open APIs to link to ontology servers and for integrating information extraction tools. MnM can be seen as an early example of the next generation of ontology editors, being web-based, oriented to semantic markup and providing mechanisms for large-scale automatic markup of web pages.
    BibTeX:
    @inproceedings{Vargas-Vera2002,
      author = {Vargas-Vera, M and Motta, E and Domingue, J and Lanzoni, M and Stutt, A and Ciravegna, F},
      title = {MnM: Ontology driven semi-automatic and automatic support for semantic markup},
      booktitle = {KNOWLEDGE ENGINEERING AND KNOWLEDGE MANAGEMENT, PROCEEDINGS - ONTOLOGIES AND THE SEMANTIC WEB},
      year = {2002},
      volume = {2473},
      pages = {379-391},
      note = {13th International Conference on Knowledge Engineering and Knowledge Management (EKAW 2002), Siguenza, SPAIN, OCT 01-04, 2002}
    }
    
    Venkatasubramanian, V., Zhao, C., Joglekar, G., Jain, A., Hailemariam, L., Suresh, P., Akkisetty, P., Morris, K. & Reklaitis, G.V. Ontological informatics infrastructure for pharmaceutical product development and manufacturing {2006} COMPUTERS & CHEMICAL ENGINEERING
    Vol. {30}({10-12}), pp. {1482-1496} 
    article DOI  
    Abstract: Informatics infrastructure plays a crucial role in supporting different decision making activities related to pharmaceutical product development, pilot plant and commercial scale manufacturing by streamlining information gathering, data integration, model development and managing all these for easy and timely access and reuse. The foundation of such an infrastructure is the explicitly and formally modeled information. This foundation enables knowledge in different forms, and best manufacturing practices, to be modeled and captured into tools to support the product lifecycle management. This paper discusses the development of ontologies, Semantic Web infrastructure and Web related technologies that make such an infrastructure development possible. While many of the issues addressed in this paper are applicable to a wide spectrum of molecular-based products, we focus our work on the development of pharmaceutical informatics to support Active Pharmaceutical Ingredient (API) as well as drug product development as case studies to illustrate the various aspects of this infrastructure. (c) 2006 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Venkatasubramanian2006,
      author = {Venkatasubramanian, Venkat and Zhao, Chunhua and Joglekar, Girish and Jain, Ankur and Hailemariam, Leaelaf and Suresh, Pradeep and Akkisetty, Pavankumar and Morris, Ken and Reklaitis, G. V.},
      title = {Ontological informatics infrastructure for pharmaceutical product development and manufacturing},
      journal = {COMPUTERS & CHEMICAL ENGINEERING},
      year = {2006},
      volume = {30},
      number = {10-12},
      pages = {1482-1496},
      note = {7th International Conference on Chemical Process Control (CPC 7), Lake Louise, CANADA, JAN 08-13, 2006},
      doi = {{10.1016/j.compchemeng.2006.05.036}}
    }
    
    de Vergara, J., Villagra, V., Asensio, J. & Berrocal, J. Ontologies: Giving semantics to network management models {2003} IEEE NETWORK
    Vol. {17}({3}), pp. {15-21} 
    article  
    Abstract: The multiplicity of network management models may imply in some scenarios the use of multiple management information languages defining the resources to be managed. Each language has a different level of semantic expressiveness, which is not easily measurable. Also, these management information models cannot be easily integrated due to the difficulty of translating the semantics they contain. This article proposes the use of ontologies as a new approach to improve the semantic expressiveness of management information languages. Ontologies are currently used, for instance, to provide Web pages and Web services with the semantics they usually lack (known today as the Semantic Web). Applying ontologies to management information languages can also be useful for the integration of information definitions specified by different management languages and for adding behavior information to them.
    BibTeX:
    @article{Vergara2003,
      author = {de Vergara, JEL and Villagra, VA and Asensio, JI and Berrocal, J},
      title = {Ontologies: Giving semantics to network management models},
      journal = {IEEE NETWORK},
      year = {2003},
      volume = {17},
      number = {3},
      pages = {15-21}
    }
    
    de Vergara, J., Villagra, V. & Berrocal, J. Applying the Web ontology language to management information definitions {2004} IEEE COMMUNICATIONS MAGAZINE
    Vol. {42}({7}), pp. {68-74} 
    article  
    Abstract: The Extensible Markup Language (XML) has emerged in the Internet world as a standard representation format, which can be useful to describe and transmit management information. However, XML formats alone do not give it formal semantics. To address this issue, ontology languages based on the Resource Description Framework can be used to improve the expressiveness of management information specifications. This article presents an approach that uses an XML-based ontology language to define network and system management information. For this, the structures of the Web Ontology Language (OWL) are analyzed and compared to those used in management definitions, also studying the advantages ontology languages can provide in this area.
    BibTeX:
    @article{Vergara2004,
      author = {de Vergara, JEL and Villagra, VA and Berrocal, J},
      title = {Applying the Web ontology language to management information definitions},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {2004},
      volume = {42},
      number = {7},
      pages = {68-74}
    }
    
    Vernadat, F.B. Interoperable enterprise systems: Principles, concepts, and methods {2007} ANNUAL REVIEWS IN CONTROL
    Vol. {31}({1}), pp. {137-145} 
    article DOI  
    Abstract: Interoperable enterprise systems (be they supply chains, extended enterprises, or any form of virtual organizations) must be designed, controlled, and appraised from a holistic and systemic point of view. Systems interoperability is a key to enterprise integration, which recommends that the IT architecture and infrastructure be aligned with business process organization and control, themselves designed according to a strategic view expressed in an enterprise architecture. The paper discusses architectures and methods to build interoperable enterprise systems, advocating a mixed service and process orientation to support synchronous and/or asynchronous operations, both at the business level (business events, business services, business processes) and at the application level (workflow, IT and Web services, application programs). (c) 2007 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Vernadat2007,
      author = {Vernadat, F. B.},
      title = {Interoperable enterprise systems: Principles, concepts, and methods},
      journal = {ANNUAL REVIEWS IN CONTROL},
      year = {2007},
      volume = {31},
      number = {1},
      pages = {137-145},
      note = {12th IFAC Symposium on Information Control Problems in Manufacturing (INCOM 2006), St Etienne, FRANCE, MAY 17-JUL 19, 2006},
      doi = {{10.1016/j.arcontrol.2007.03.004}}
    }
    
    Villa, F. Integrating modelling architecture: a declarative framework for multi-paradigm, multi-scale ecological modelling {2001} ECOLOGICAL MODELLING
    Vol. {137}({1}), pp. {23-42} 
    article  
    Abstract: Multiple modelling paradigms are necessary to formulate crucial modelling problems in modern environmental science. Modelling paradigms help researchers to conceive, formulate and solve problems by providing semantic structures to organise their view of a system or process. An unusually large array of different paradigms is used in Ecology, reflecting the complexity and variety of the natural world. As a result, multi-disciplinary problems in particular suffer from representational difficulties that prevent them from being approached efficiently with available software toolkits. In this paper I outline the theoretical aspects of model compatibility in the operational aspects of representation, scale and domain, and I describe the Integrating Modelling Architecture (IMA), a declarative framework and an open-source software toolkit for integrated meta-modelling. The IMA allows generic model components to be specified using a common markup language, and it loads paradigm-specific grammars that can be extended to support multiple paradigms. Among the project's goals are: (1) to allow web-based integration of models and state-of-the-art resources distributed across a wide area network; (2) to integrate and reuse existing simulation programs and toolkits; (3) to allow integration between independently developed models adopting different modelling paradigms, scales, and domains; and (4) to provide extendible, efficient and clear abstractions to conceptualise and solve complex, multiple-paradigm modelling problems in environmental science. At the end of the paper I argue that an integrative meta-modelling paradigm allows us to formulate and solve important new problems, and I illustrate some of the new modelling scenarios enabled by the availability of these new concepts and tools. (C) 2001 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Villa2001,
      author = {Villa, F},
      title = {Integrating modelling architecture: a declarative framework for multi-paradigm, multi-scale ecological modelling},
      journal = {ECOLOGICAL MODELLING},
      year = {2001},
      volume = {137},
      number = {1},
      pages = {23-42}
    }
    
    Villanueva-Rosales, N. & Dumontier, M. yOWL: An ontology-driven knowledge base for yeast biologists {2008} JOURNAL OF BIOMEDICAL INFORMATICS
    Vol. {41}({5, Sp. Iss. SI}), pp. {779-789} 
    article DOI  
    Abstract: Knowledge management is an ongoing challenge for the biological community: large, diverse and continuously growing information requires increasingly sophisticated methods to store, integrate and query knowledge. The semantic web initiative provides a new knowledge engineering framework to represent, share and discover information. In this paper, we describe our efforts towards the development of an ontology-based knowledge base, covering aspects from ontology design and population using ``semantic'' data mashups to automated reasoning and semantic query answering. Based on yeast data obtained from the Saccharomyces Genome Database and UniProt, we discuss the challenges encountered during the building of the knowledge base and how they were overcome. (C) 2008 Elsevier Inc. All rights reserved.
    BibTeX:
    @article{Villanueva-Rosales2008,
      author = {Villanueva-Rosales, Natalia and Dumontier, Michel},
      title = {yOWL: An ontology-driven knowledge base for yeast biologists},
      journal = {JOURNAL OF BIOMEDICAL INFORMATICS},
      year = {2008},
      volume = {41},
      number = {5, Sp. Iss. SI},
      pages = {779-789},
      doi = {{10.1016/j.jbi.2008.05.001}}
    }
    
    Vu, L., Hauswirth, M. & Aberer, K. QoS-based service selection and ranking with trust and reputation management {2005}
    Vol. {3760}, ON THE MOVE TO MEANINGFUL INTERNET SYSTEMS 2005: COOPIS, DOA, AND ODBASE, PT 1, PROCEEDINGS, pp. {466-483}
    inproceedings  
    Abstract: QoS-based service selection mechanisms will play an essential role in service-oriented architectures, as e-Business applications want to use services that most accurately meet their requirements. Standard approaches in this field are typically based on the prediction of services' performance from the quality advertised by providers as well as from feedback of users on the actual levels of QoS delivered to them. The key issue in this setting is to detect and deal with false ratings by dishonest providers and users, which has only received limited attention so far. In this paper, we present a new QoS-based semantic web service selection and ranking solution that applies a trust and reputation management method to address this problem. We give a formal description of our approach and validate it with experiments which demonstrate that our solution yields high-quality results under various realistic cheating behaviors.
    (A minimal code sketch follows this entry's BibTeX record.)
    BibTeX:
    @inproceedings{Vu2005,
      author = {Vu, LH and Hauswirth, M and Aberer, K},
      title = {QoS-based service selection and ranking with trust and reputation management},
      booktitle = {ON THE MOVE TO MEANINGFUL INTERNET SYSTEMS 2005: COOPIS, DOA, AND ODBASE, PT 1, PROCEEDINGS},
      year = {2005},
      volume = {3760},
      pages = {466-483},
      note = {OTM Confederated International Conference and Workshop, Agia Napa, CYPRUS, OCT 31-NOV 04, 2005}
    }
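
    An illustrative loop in the spirit of the Vu et al. entry above (not the paper's exact algorithm): the predicted QoS is a trust-weighted mean of user reports, and reporters whose feedback deviates from the consensus lose trust, so badmouthing or overly rosy ratings are progressively discounted. All names and numbers are invented:

      reports = {
          "u1": [0.92, 0.90, 0.95],   # honest user
          "u2": [0.91, 0.89, 0.93],   # honest user
          "u3": [0.20, 0.15, 0.10],   # dishonest user badmouthing the service
      }
      advertised = 0.95               # QoS level claimed by the provider
      trust = {u: 1.0 for u in reports}

      def mean(xs):
          return sum(xs) / len(xs)

      def predict_qos(rounds=5, sharpness=2.0):
          estimate = advertised
          for _ in range(rounds):
              weighted = sum(trust[u] * mean(r) for u, r in reports.items())
              total = sum(trust.values())
              estimate = weighted / total if total else advertised
              for u, r in reports.items():
                  # trust decays with distance from the current consensus
                  trust[u] = max(0.0, 1.0 - sharpness * abs(mean(r) - estimate))
          return estimate

      print(round(predict_qos(), 3))                      # ~0.917, near the honest reports
      print({u: round(t, 2) for u, t in trust.items()})   # u3 ends with zero trust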
    
    Wang, J., Li, J. & Wiederhold, G. SIMPLIcity: Semantics-sensitive integrated matching for picture libraries {2001} IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
    Vol. {23}({9}), pp. {947-963} 
    article  
    Abstract: The need for efficient content-based image retrieval has increased tremendously in many application areas such as biomedicine, military, commerce, education, and Web image classification and searching. We present here SIMPLIcity (Semantics-sensitive Integrated Matching for Picture Libraries), an image retrieval system which uses semantics classification methods, a wavelet-based approach for feature extraction, and integrated region matching based upon image segmentation. As in other region-based retrieval systems, an image is represented by a set of regions, roughly corresponding to objects, which are characterized by color, texture, shape, and location. The system classifies images into semantic categories, such as textured versus nontextured and graph versus photograph. Potentially, the categorization enhances retrieval by permitting semantically-adaptive searching methods and narrowing down the searching range in a database. A measure for the overall similarity between images is developed using a region-matching scheme that integrates properties of all the regions in the images. Compared with retrieval based on individual regions, the overall similarity approach 1) reduces the adverse effect of inaccurate segmentation, 2) helps to clarify the semantics of a particular region, and 3) enables a simple querying interface for region-based image retrieval systems. The application of SIMPLIcity to several databases, including a database of about 200,000 general-purpose images, has demonstrated that our system performs significantly better and faster than existing ones. The system is fairly robust to image alterations.
    (A minimal code sketch follows this entry's BibTeX record.)
    BibTeX:
    @article{Wang2001,
      author = {Wang, JZ and Li, J and Wiederhold, G},
      title = {SIMPLIcity: Semantics-sensitive integrated matching for picture libraries},
      journal = {IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE},
      year = {2001},
      volume = {23},
      number = {9},
      pages = {947-963}
    }
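
    The integrated region matching (IRM) step in the SIMPLIcity entry above admits a compact greedy formulation: regions carry area-based significance weights, the closest region pairs are matched first, and each match consumes as much shared significance as both sides still have. A sketch with two hand-made "images" of hypothetical colour-texture features:

      import numpy as np

      # an image = list of (region feature vector, area fraction) pairs
      img_a = [(np.array([0.20, 0.70, 0.30]), 0.6), (np.array([0.50, 0.50, 0.90]), 0.4)]
      img_b = [(np.array([0.25, 0.65, 0.35]), 0.5), (np.array([0.90, 0.10, 0.10]), 0.5)]

      def irm_distance(a, b):
          wa = [w for _, w in a]          # remaining significance per region of a
          wb = [w for _, w in b]
          pairs = sorted(
              (float(np.linalg.norm(fa - fb)), i, j)
              for i, (fa, _) in enumerate(a)
              for j, (fb, _) in enumerate(b))
          total = 0.0
          for d, i, j in pairs:           # "most similar, highest priority"
              s = min(wa[i], wb[j])       # significance credit still available
              if s > 0:
                  total += s * d
                  wa[i] -= s
                  wb[j] -= s
          return total

      print(round(irm_distance(img_a, img_b), 4))   # small value = similar images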
    
    Wang, S., Shen, W. & Hao, Q. An agent-based Web service workflow model for inter-enterprise collaboration {2006} EXPERT SYSTEMS WITH APPLICATIONS
    Vol. {31}({4}), pp. {787-799} 
    article DOI  
    Abstract: The service-orientated computing paradigm is transforming traditional workflow management from a closed and centralized control system into a worldwide dynamic business process. A complete workflow serving inter-enterprise collaboration should include both internal processes and ad hoc external processes. This paper presents an agent-based workflow model to address this challenge. In the proposed model, agent-based technology provides the workflow coordination at both inter- and intra-enterprise levels while Web service-based technology provides infrastructures for messaging, service description and workflow enactment. A proof-of-concept prototype system simulating the order entry, partner search and selection, and contracting in a virtual enterprise creation scenario is implemented to demonstrate the dynamic workflow definition and execution for inter-enterprise collaboration. Crown Copyright (c) 2006 Published by Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Wang2006a,
      author = {Wang, Shuying and Shen, Weiming and Hao, Qi},
      title = {An agent-based Web service workflow model for inter-enterprise collaboration},
      journal = {EXPERT SYSTEMS WITH APPLICATIONS},
      year = {2006},
      volume = {31},
      number = {4},
      pages = {787-799},
      note = {9th International Conference on Computer Supported Cooperative Work in Design, Coventry, ENGLAND, MAY 24-26, 2005},
      doi = {{10.1016/j.eswa.2006.01.011}}
    }
    
    Wang, X., Gorlitsky, R. & Almeida, J. From XML to RDF: how semantic web technologies will change the design of `omic' standards {2005} NATURE BIOTECHNOLOGY
    Vol. {23}({9}), pp. {1099-1103} 
    article  
    Abstract: With the ongoing rapid increase in both volume and diversity of `omic' data (genomics, transcriptomics, proteomics, and others), the development and adoption of data standards is of paramount importance to realize the promise of systems biology. A recent trend in data standard development has been to use extensible markup language (XML) as the preferred mechanism to define data representations. But as illustrated here with a few examples from proteomics data, the syntactic and document-centric XML cannot achieve the level of interoperability required by the highly dynamic and integrated bioinformatics applications. In the present article, we discuss why semantic web technologies, as recommended by the World Wide Web Consortium (W3C), expand current data standard technology for biological data representation and management.
    BibTeX:
    @article{Wang2005,
      author = {Wang, XS and Gorlitsky, R and Almeida, JS},
      title = {From XML to RDF: how semantic web technologies will change the design of `omic' standards},
      journal = {NATURE BIOTECHNOLOGY},
      year = {2005},
      volume = {23},
      number = {9},
      pages = {1099-1103}
    }
    
    Wang, X., Vitvar, T., Kerrigan, M. & Toma, I. A QoS-aware selection model for semantic Web services {2006}
    Vol. {4294}, Service Oriented Computing - ICSOC 2006, pp. {390-401}
    inproceedings  
    Abstract: Automating Service Oriented Architectures by augmenting them with semantics will form the basis of the next generation of computing. Service selection remains an important challenge: once a set of services fulfilling the user's capability requirements has been discovered, which of these services is eventually invoked by the user is critical, and generally depends on a combined evaluation of qualities of service (QoS). This paper proposes a QoS-based selection of services. Initially we specify a QoS ontology and its vocabulary using the Web Services Modeling Ontology (WSMO) for annotating service descriptions with QoS data. We continue by defining quality attributes and their respective measurements along with a QoS selection model. Finally, we present a fair and dynamic selection mechanism, using an optimum normalization algorithm.
    (A minimal code sketch follows this entry's BibTeX record.)
    BibTeX:
    @inproceedings{Wang2006,
      author = {Wang, Xia and Vitvar, Tomas and Kerrigan, Mick and Toma, Ioan},
      title = {A QoS-aware selection model for semantic Web services},
      booktitle = {Service Oriented Computing - ICSOC 2006},
      year = {2006},
      volume = {4294},
      pages = {390-401},
      note = {4th International Conference on Service-Oriented Computing, Chicago, IL, DEC 04-07, 2006}
    }
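
    The abstract above culminates in a selection mechanism built on a normalization algorithm. A sketch of one plausible reading, assuming min-max normalization per QoS attribute (with a benefit-versus-cost direction) followed by a weighted sum; attribute names, weights and values are invented:

      services = {
          "s1": {"latency_ms": 120, "availability": 0.990, "price": 0.02},
          "s2": {"latency_ms": 80,  "availability": 0.950, "price": 0.05},
          "s3": {"latency_ms": 200, "availability": 0.999, "price": 0.01},
      }
      direction = {"latency_ms": -1, "availability": +1, "price": -1}   # -1: lower is better
      weights = {"latency_ms": 0.4, "availability": 0.4, "price": 0.2}  # sum to 1

      def normalizer(attr):
          vals = [qos[attr] for qos in services.values()]
          lo, hi = min(vals), max(vals)
          def norm(v):
              if hi == lo:
                  return 1.0                      # attribute does not discriminate
              x = (v - lo) / (hi - lo)
              return x if direction[attr] > 0 else 1.0 - x
          return norm

      def rank():
          norms = {a: normalizer(a) for a in direction}
          scores = {s: sum(weights[a] * norms[a](qos[a]) for a in direction)
                    for s, qos in services.items()}
          return sorted(scores.items(), key=lambda kv: -kv[1])

      for name, score in rank():
          print(name, round(score, 3))   # s1 wins on the balanced trade-off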
    
    Wei, C., Hu, P. & Dong, Y. Managing document categories in e-commerce environments: an evolution-based approach {2002} EUROPEAN JOURNAL OF INFORMATION SYSTEMS
    Vol. {11}({3}), pp. {208-222} 
    article DOI  
    Abstract: Management of textual documents obtained from various online sources represents a challenge in emerging e-commerce environments, where individuals and organisations have to perform continual surveillance of important events or trends pertinent to multiple topic areas of interest. Observations of textual document management by individuals and organisations have suggested the popularity of using categories to organise, archive and access documents. The sheer volume and availability of documents obtained from the internet make manual document-category management prohibitively tedious, if practicable or effective at all. An automated approach underpinned by appropriate artificial intelligence techniques has potential for solving this problem. In this vein, a critical challenge is the preservation of the user's perspective on semantic coherence in different documents, and thus support for his or her preferred practice for document groupings. Motivated by the significance of, and the need for, automated document-category management, the current research proposed and experimentally examined an evolution-based approach for supporting user-centric document-category management in e-commerce environments. Specifically, we designed and implemented the Category Evolution (CE) technique, capable of supporting personalised document-category management by taking into account categories previously established by the user. Our evaluation results suggest that CE exhibited satisfactory effectiveness and reasonable robustness in different scenarios and achieved a performance level better than that recorded by the benchmark technique using complete category discovery.
    BibTeX:
    @article{Wei2002,
      author = {Wei, CP and Hu, PJ and Dong, YX},
      title = {Managing document categories in e-commerce environments: an evolution-based approach},
      journal = {EUROPEAN JOURNAL OF INFORMATION SYSTEMS},
      year = {2002},
      volume = {11},
      number = {3},
      pages = {208-222},
      doi = {{10.1057/palgrave.ejis.3000429}}
    }
    
    Wei, C.-P., Chiang, R.H.L. & Wu, C.-C. Accommodating individual preferences in the categorization of documents: A personalized clustering approach {2006} JOURNAL OF MANAGEMENT INFORMATION SYSTEMS
    Vol. {23}({2}), pp. {173-201} 
    article DOI  
    Abstract: As electronic commerce and knowledge economy environments proliferate, both individuals and organizations increasingly generate and consume large amounts of online information, typically available as textual documents. To manage this ever-increasing volume of documents, individuals and organizations frequently organize their documents into categories that facilitate document management and subsequent access and browsing. Document clustering is an intentional act that should reflect individual preferences with regard to the semantic coherency and relevant categorization of documents. Hence, effective document clustering must consider individual preferences and needs to support personalization in document categorization. In this paper, we present an automatic document-clustering approach that incorporates an individual's partial clustering as preferential information. Combining two document representation methods, feature refinement and feature weighting, with two clustering methods, precluster-based hierarchical agglomerative clustering (HAC) and atomic-based HAC, we establish four personalized document-clustering techniques. Using a traditional content-based document-clustering technique as a performance benchmark, we find that the proposed personalized document-clustering techniques improve clustering effectiveness, as measured by cluster precision and cluster recall.
    (A minimal code sketch follows this entry's BibTeX record.)
    BibTeX:
    @article{Wei2006,
      author = {Wei, Chih-Ping and Chiang, Roger H. L. and Wu, Chia-Chen},
      title = {Accommodating individual preferences in the categorization of documents: A personalized clustering approach},
      journal = {JOURNAL OF MANAGEMENT INFORMATION SYSTEMS},
      year = {2006},
      volume = {23},
      number = {2},
      pages = {173-201},
      doi = {{10.2753/MIS0742-1222230208}}
    }
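
    A sketch of precluster-based HAC as described in the Wei, Chiang & Wu entry above: the user's partial clustering seeds the initial clusters, unfiled documents start as singletons, and agglomeration proceeds by centroid cosine similarity. Vectors and the partial clustering are invented for illustration:

      import numpy as np

      docs = {   # toy term-frequency vectors
          "d1": [1.0, 1.0, 0.0, 0.0], "d2": [1.0, 0.8, 0.0, 0.0],
          "d3": [0.0, 0.0, 1.0, 1.0], "d4": [0.0, 0.0, 0.9, 1.0],
          "d5": [1.0, 0.9, 0.1, 0.0], "d6": [0.0, 0.1, 1.0, 0.9],
      }
      vec = {d: np.asarray(v) for d, v in docs.items()}

      # the user's preferential information: d1/d2 and d3/d4 were already filed
      clusters = [["d1", "d2"], ["d3", "d4"], ["d5"], ["d6"]]

      def centroid(c):
          return np.mean([vec[d] for d in c], axis=0)

      def cos(u, v):
          return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

      def hac(clusters, k=2):
          while len(clusters) > k:
              # merge the pair of clusters with the most similar centroids
              i, j = max(((i, j) for i in range(len(clusters))
                          for j in range(i + 1, len(clusters))),
                         key=lambda ij: cos(centroid(clusters[ij[0]]),
                                            centroid(clusters[ij[1]])))
              clusters[i] += clusters.pop(j)
          return clusters

      print(hac(clusters))   # d5 joins {d1,d2}; d6 joins {d3,d4}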
    
    Weng, S., Tsai, H., Liu, S. & Hsu, C. Ontology construction for information classification {2006} EXPERT SYSTEMS WITH APPLICATIONS
    Vol. {31}({1}), pp. {1-12} 
    article DOI  
    Abstract: Following the advent of the Internet technology and the rapid growth of its applications, users have spent long periods of time browsing through the ocean of information found on the Internet. This time-consuming hunt, however, makes searching, retrieving, displaying, integrating and maintaining data arduous tasks. One way to solve this problem is to study the concept behind the Semantic Web in accordance with the principles of ontology. Apart from facilitating the process of information search in the Semantic Web, ontology also provides a method that will enable computers to exchange, search and identify text information. But establishing an ontology necessitates a great deal of expert assistance; manually setting it up would entail a lot of time, not to mention that there are only a handful of experts available. For this reason, using automatic technology to construct the ontology is a subject worth pursuing. This research uses the theory of formal concept analysis to serve as the groundwork in assembling the different levels of ontological concepts in an automated fashion. An ontology diagram will be presented to show the correlation of concepts and their corresponding significance. Moreover, the experiments of this research select a collection of different concepts in an attempt to classify the relationships between documents and concepts. The objective is to develop an automated technology of ontology construction that will support the present information classification system, as well as to upgrade the ontological aspect of the Semantic Web. (c) 2005 Elsevier Ltd. All rights reserved.
    (A minimal code sketch follows this entry's BibTeX record.)
    BibTeX:
    @article{Weng2006,
      author = {Weng, SS and Tsai, HJ and Liu, SC and Hsu, CH},
      title = {Ontology construction for information classification},
      journal = {EXPERT SYSTEMS WITH APPLICATIONS},
      year = {2006},
      volume = {31},
      number = {1},
      pages = {1-12},
      doi = {{10.1016/j.eswa.2005.09.007}}
    }
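
    The groundwork named in the Weng et al. entry above is formal concept analysis. A brute-force sketch that derives all formal concepts (extent, intent) from a small binary document-term context; ordering concepts by extent size exposes the concept hierarchy from which the paper assembles ontology levels. The context is invented:

      from itertools import combinations

      context = {   # document -> terms it contains
          "doc1": {"ontology", "semantic_web"},
          "doc2": {"ontology", "classification"},
          "doc3": {"ontology", "semantic_web", "classification"},
      }
      objects = set(context)
      attributes = set().union(*context.values())

      def common_attrs(objs):
          return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

      def common_objs(attrs):
          return {o for o in objects if attrs <= context[o]}

      # a formal concept is a pair (A, B) with B = attrs(A) and A = objs(B)
      concepts = set()
      for r in range(len(objects) + 1):
          for objs in combinations(sorted(objects), r):
              intent = common_attrs(set(objs))
              extent = common_objs(intent)
              concepts.add((frozenset(extent), frozenset(intent)))

      # larger extents sit higher in the lattice (more general concepts)
      for extent, intent in sorted(concepts, key=lambda c: (-len(c[0]), sorted(c[1]))):
          print(sorted(extent), "->", sorted(intent))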
    
    Wermter, S. Neural network agents for learning semantic text classification {2000} INFORMATION RETRIEVAL
    Vol. {3}({2}), pp. {87-103} 
    article  
    Abstract: The research project AgNeT develops Agents for Neural Text routing in the internet. Unrestricted, potentially faulty text messages arrive at a certain delivery point (e.g., an email address or world wide web address). These text messages are scanned and then distributed to one of several expert agents according to a certain task criterion. Possible specific scenarios within this framework include learning the routing of publication titles or news titles. In this paper we describe extensive experiments for semantic text routing based on classified library titles and newswire titles. This task is challenging since incoming messages may contain constructions which have not been anticipated. Therefore, the contributions of this research are in learning and generalizing neural architectures for the robust interpretation of potentially noisy unrestricted messages. Neural networks were developed and examined for this topic since they support robustness and learning in noisy unrestricted real-world texts. We describe and compare different sets of experiments. The first set of experiments tests a recurrent neural network for the task of library title classification. Then we describe a larger, more difficult newswire classification task from information retrieval. The comparison of the examined models demonstrates that techniques from information retrieval integrated into recurrent plausibility networks performed well even under noise and for different corpora.
    BibTeX:
    @article{Wermter2000,
      author = {Wermter, S},
      title = {Neural network agents for learning semantic text classification},
      journal = {INFORMATION RETRIEVAL},
      year = {2000},
      volume = {3},
      number = {2},
      pages = {87-103}
    }
    
    Wielemaker, J., Schreiber, G. & Wielinga, B. Prolog-based infrastructure for RDF: Scalability and performance {2003}
    Vol. {2870}, SEMANTIC WEB - ISWC 2003, pp. {644-658}
    inproceedings  
    Abstract: The semantic web is a promising application area for the Prolog programming language, given the language's non-determinism and pattern-matching. In this paper we outline an infrastructure for loading and saving RDF/XML, storing triples, elementary reasoning with triples and visualization. A predecessor of the infrastructure described here has been used in various applications for ontology-based annotation of multimedia objects using semantic web languages. Our library aims at fast parsing, fast access and scalability for fairly large but not unbounded applications of up to 40 million triples. The RDF parser is distributed with SWI-Prolog under the LGPL Free Software licence. The other components will be added to the distribution as they become stable and documented.
    (A minimal code sketch follows this entry's BibTeX record.)
    BibTeX:
    @inproceedings{Wielemaker2003,
      author = {Wielemaker, J and Schreiber, G and Wielinga, B},
      title = {Prolog-based infrastructure for RDF: Scalability and performance},
      booktitle = {SEMANTIC WEB - ISWC 2003},
      year = {2003},
      volume = {2870},
      pages = {644-658},
      note = {2nd International Semantic Web Conference, SANIBEL, FLORIDA, OCT 20-23, 2003}
    }
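
    The core of the library in the Wielemaker et al. entry above is an indexed rdf(Subject, Predicate, Object) store queried by partial instantiation. The paper's implementation is in Prolog; to keep this document's sketches in one language, here is the same query pattern in Python, with None standing in for an unbound Prolog variable:

      from collections import defaultdict

      class TripleStore:
          def __init__(self):
              self.triples = set()
              self.by_s = defaultdict(set)   # per-position indexes for fast access
              self.by_p = defaultdict(set)
              self.by_o = defaultdict(set)

          def add(self, s, p, o):
              t = (s, p, o)
              self.triples.add(t)
              self.by_s[s].add(t)
              self.by_p[p].add(t)
              self.by_o[o].add(t)

          def query(self, s=None, p=None, o=None):
              # pick an index from the first bound argument, then filter the rest
              if s is not None:
                  candidates = self.by_s.get(s, set())
              elif p is not None:
                  candidates = self.by_p.get(p, set())
              elif o is not None:
                  candidates = self.by_o.get(o, set())
              else:
                  candidates = self.triples
              for ts, tp, to in candidates:
                  if s in (None, ts) and p in (None, tp) and o in (None, to):
                      yield ts, tp, to

      store = TripleStore()
      store.add("ex:painting1", "rdf:type", "ex:Painting")
      store.add("ex:painting1", "ex:painter", "ex:Rembrandt")
      store.add("ex:painting2", "rdf:type", "ex:Painting")
      print(sorted(store.query(p="rdf:type", o="ex:Painting")))

    A production store would choose the most selective index rather than the first bound one; the sketch keeps the choice trivial.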
    
    Wilkinson, M.D., Senger, M., Kawas, E., Bruskiewich, R., Gouzy, J., Noirot, C., Bardou, P., Ng, A., Haase, D., Saiz, E. d.A., Wang, D., Gibbons, F., Gordon, P.M.K., Sensen, C.W., Carrasco, J.M.R., Fernandez, J.M., Shen, L., Links, M., Ng, M., Opushneva, N., Neerincx, P.B.T., Leunissen, J.A.M., Ernst, R., Twigger, S., Usadel, B., Good, B., Wong, Y., Stein, L., Crosby, W., Karlsson, J., Royo, R., Parraga, I., Ramirez, S., Gelpi, J.L., Trelles, O., Pisano, D.G., Jimenez, N., Kerhornou, A., Rosset, R., Zamacola, L., Tarraga, J., Huerta-Cepas, J., Carazo, J.M., Dopazo, J., Guigo, R., Navarro, A., Orozco, M., Valencia, A., Claros, M.G., Perez, A.J., Aldana, J., Rojano, M.M., Cruz, R.F.-S., Navas, I., Schiltz, G., Farmer, A., Gessler, D., Schoof, H., Groscurth, A. & BioMoby Consortium Interoperability with Moby 1.0 - It's better than sharing your toothbrush {2008} BRIEFINGS IN BIOINFORMATICS
    Vol. {9}({3}), pp. {220-231} 
    article DOI  
    Abstract: The BioMoby project was initiated in 2001 from within the model organism database community. It aimed to standardize methodologies to facilitate information exchange and access to analytical resources, using a consensus driven approach. Six years later, the BioMoby development community is pleased to announce the release of the 1.0 version of the interoperability framework, registry Application Programming Interface and supporting Perl and Java code-bases. Together, these provide interoperable access to over 1400 bioinformatics resources worldwide through the BioMoby platform, and this number continues to grow. Here we highlight and discuss the features of BioMoby that make it distinct from other Semantic Web Service and interoperability initiatives, and that have been instrumental to its deployment and use by a wide community of bioinformatics service providers. The standard, client software, and supporting code libraries are all freely available at http://www.biomoby.org/.
    BibTeX:
    @article{Wilkinson2008,
      author = {Wilkinson, Mark D. and Senger, Martin and Kawas, Edward and Bruskiewich, Richard and Gouzy, Jerome and Noirot, Celine and Bardou, Philippe and Ng, Ambrose and Haase, Dirk and Saiz, Enrique de Andres and Wang, Dennis and Gibbons, Frank and Gordon, Paul M. K. and Sensen, Christoph W. and Carrasco, Jose Manuel Rodriguez and Fernandez, Jose M. and Shen, Lixin and Links, Matthew and Ng, Michael and Opushneva, Nina and Neerincx, Pieter B. T. and Leunissen, Jack A. M. and Ernst, Rebecca and Twigger, Simon and Usadel, Bjorn and Good, Benjamin and Wong, Yan and Stein, Lincoln and Crosby, William and Karlsson, Johan and Royo, Romina and Parraga, Ivan and Ramirez, Sergio and Gelpi, Josep Lluis and Trelles, Oswaldo and Pisano, David G. and Jimenez, Natalia and Kerhornou, Arnaud and Rosset, Roman and Zamacola, Leire and Tarraga, Joaquin and Huerta-Cepas, Jaime and Carazo, Jose Maria and Dopazo, Joaquin and Guigo, Roderic and Navarro, Arcadi and Orozco, Modesto and Valencia, Alfonso and Claros, M. Gonzalo and Perez, Antonio J. and Aldana, Jose and Rojano, M. Mar and Cruz, Raul Fernandez-Santa and Navas, Ismael and Schiltz, Gary and Farmer, Andrew and Gessler, Damian and Schoof, Heiko and Groscurth, Andreas and BioMoby Consortium},
      title = {Interoperability with Moby 1.0 - It's better than sharing your toothbrush},
      journal = {BRIEFINGS IN BIOINFORMATICS},
      year = {2008},
      volume = {9},
      number = {3},
      pages = {220-231},
      doi = {{10.1093/bib/bbn003}}
    }
    
    Williams, A. Learning to share meaning in a multi-agent system {2004} AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS
    Vol. {8}({2}), pp. {165-193} 
    article  
    Abstract: The development of the semantic Web will require agents to use common domain ontologies to facilitate communication of conceptual knowledge. However, the proliferation of domain ontologies may also result in conflicts between the meanings assigned to the various terms. That is, agents with diverse ontologies may use different terms to refer to the same meaning or the same term to refer to different meanings. Agents will need a method for learning and translating similar semantic concepts between diverse ontologies. Only recently have researchers diverged from the last decade's ``common ontology'' paradigm to a paradigm involving agents that can share knowledge using diverse ontologies. This paper describes how we address this knowledge-sharing problem by introducing a methodology and algorithms for multi-agent knowledge sharing and learning in a peer-to-peer setting. We demonstrate how this approach will enable multi-agent systems to assist groups of people in locating, translating, and sharing knowledge using our Distributed Ontology Gathering Group Integration Environment (DOGGIE) and describe our proof-of-concept experiments. DOGGIE synthesizes agent communication, machine learning, and reasoning for information sharing in the Web domain.
    BibTeX:
    @article{Williams2004,
      author = {Williams, AB},
      title = {Learning to share meaning in a multi-agent system},
      journal = {AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS},
      year = {2004},
      volume = {8},
      number = {2},
      pages = {165-193}
    }
    
    Williams, A.J. Internet-based tools for communication and collaboration in chemistry {2008} DRUG DISCOVERY TODAY
    Vol. {13}({11-12}), pp. {502-506} 
    article DOI  
    Abstract: Web-based technologies, coupled with a drive for improved communication between scientists, have resulted in the proliferation of scientific opinion, data and knowledge at an ever-increasing rate. The availability of tools to host wikis and blogs has provided the necessary building blocks for scientists with only a rudimentary understanding of computer software to communicate to the masses. This newfound freedom makes it possible to speed up research, share results, develop extensive collaborations, and conduct science in public and in near-real time. The technologies supporting chemistry, while immature, are fast developing to support chemical structures and reactions, analytical data, and integration with related data sources via supporting software technologies. Communication in chemistry is already witnessing a new revolution.
    BibTeX:
    @article{Williams2008,
      author = {Williams, Antony J.},
      title = {Internet-based tools for communication and collaboration in chemistry},
      journal = {DRUG DISCOVERY TODAY},
      year = {2008},
      volume = {13},
      number = {11-12},
      pages = {502-506},
      doi = {{10.1016/j.drudis.2008.03.015}}
    }
    
    Winn, J., Criminisi, A. & Minka, T. Object categorization by learned universal visual dictionary {2005} TENTH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, VOLS 1 AND 2, PROCEEDINGS, pp. {1800-1807}  inproceedings  
    Abstract: This paper presents a new algorithm for the automatic recognition of object classes from images (categorization). Compact yet discriminative appearance-based object class models are automatically learned from a set of training images. The method is simple and extremely fast, making it suitable for many applications such as semantic image retrieval, web search, and interactive image editing. It classifies a region according to the proportions of different visual words (clusters in feature space). The specific visual words and the typical proportions in each object are learned from a segmented training set. The main contribution of this paper is twofold: i) an optimally compact visual dictionary is learned by pair-wise merging of visual words from an initially large dictionary, with the final visual words described by GMMs; ii) a novel statistical measure of discrimination is proposed which is optimized by each merge operation. High classification accuracy is demonstrated for nine object classes on photographs of real objects viewed under general lighting conditions, poses and viewpoints. The set of test images used for validation comprises: i) photographs acquired by us, ii) images from the web, and iii) images from the recently released PASCAL dataset. The proposed algorithm performs well on both texture-rich objects (e.g. grass, sky, trees) and structure-rich ones (e.g. cars, bikes, planes). (A short illustrative sketch follows this entry.)
    BibTeX:
    @inproceedings{Winn2005,
      author = {Winn, J and Criminisi, A and Minka, T},
      title = {Object categorization by learned universal visual dictionary},
      booktitle = {TENTH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, VOLS 1 AND 2, PROCEEDINGS},
      year = {2005},
      pages = {1800-1807},
      note = {10th IEEE International Conference on Computer Vision (ICCV 2005), Beijing, PEOPLES R CHINA, OCT 17-20, 2005}
    }
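
    Illustrative sketch: the abstract's dictionary-compaction idea can be imitated with a greedy pairwise merge of word histograms. The sketch below is not the authors' algorithm or their discrimination measure; the class counts, word names, and the crude separability score are all invented for illustration.

      # Greedy pairwise merging of "visual words", echoing the dictionary
      # compaction idea above. Each word carries per-class counts; we merge
      # the pair whose merge least hurts a simple discrimination score
      # (a crude stand-in, not the paper's statistical measure).
      import itertools

      # word -> counts per class [grass, car] (made-up numbers)
      words = {
          "w1": [9, 1],
          "w2": [8, 2],
          "w3": [1, 9],
          "w4": [2, 8],
      }

      def discrimination(ws):
          """Sum over words of |grass_count - car_count|: a crude separability proxy."""
          return sum(abs(g - c) for g, c in ws.values())

      def merge_once(ws):
          """Try every pair; keep the merge that preserves the most discrimination."""
          best = None
          for a, b in itertools.combinations(ws, 2):
              merged = dict(ws)
              merged[a + "+" + b] = [x + y for x, y in zip(ws[a], ws[b])]
              del merged[a], merged[b]
              s = discrimination(merged)
              if best is None or s > best[0]:
                  best = (s, merged)
          return best[1]

      while len(words) > 2:
          words = merge_once(words)
      print(words)  # grass-like words end up merged together, car-like together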
    
    Wong, A., Ray, P., Parameswaran, N. & Strassner, J. Ontology mapping for the interoperability problem in network management {2005} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {23}({10}), pp. {2058-2068} 
    article DOI  
    Abstract: Interoperability between different network management domains, heterogeneous devices, and various management systems is one of the main requirements for managing complex enterprise services. While substantial advances have been made in low-level device and data interoperability using common data formats and specifications such as the simple network management protocol's (SNMP's) SMI and the TMF's SID, various interoperability issues, including semantic interoperability, offer interesting research challenges. While semantic interoperability is a difficult problem in its own right, the semantic web, which incorporates intelligent agents, necessitates an interoperability solution requiring agents to communicate unambiguously and reason intelligently to perform cooperative management tasks. Agents need a formal representation of knowledge; an ontology is capable of modeling the rich semantics of the managed environment (and especially relationships between managed entities) so that agents can act on them. This paper presents an ontology-driven approach for solving the semantic interoperability problem in the management of enterprise services, illustrated here with a router configuration management application.
    BibTeX:
    @article{Wong2005,
      author = {Wong, AKY and Ray, P and Parameswaran, N and Strassner, J},
      title = {Ontology mapping for the interoperability problem in network management},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {2005},
      volume = {23},
      number = {10},
      pages = {2058-2068},
      doi = {{10.1109/JSAC.2005.854130}}
    }
    
    Wroe, C., Goble, C., Greenwood, M., Lord, P., Miles, S., Papay, J., Payne, T. & Moreau, L. Automating experiments using semantic data on a bioinformatics grid {2004} IEEE INTELLIGENT SYSTEMS
    Vol. {19}({1}), pp. {48-55} 
    article  
    BibTeX:
    @article{Wroe2004,
      author = {Wroe, C and Goble, C and Greenwood, M and Lord, P and Miles, S and Papay, J and Payne, T and Moreau, L},
      title = {Automating experiments using semantic data on a bioinformatics grid},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2004},
      volume = {19},
      number = {1},
      pages = {48-55}
    }
    
    Wroe, C., Stevens, R., Goble, C., Roberts, A. & Greenwood, M. A suite of DAML plus OIL ontologies to describe bioinformatics web services and data {2003} INTERNATIONAL JOURNAL OF COOPERATIVE INFORMATION SYSTEMS
    Vol. {12}({2}), pp. {197-224} 
    article  
    Abstract: The growing quantity and distribution of bioinformatics resources means that finding and utilizing them requires a great deal of expert knowledge, especially as many resources need to be tied together into a workflow to accomplish a useful goal. We want to formally capture at least some of this knowledge within a virtual workbench and middleware framework to assist a wider range of biologists in utilizing these resources. Different activities require different representations of knowledge. Finding or substituting a service within a workflow is often best supported by a classification. Marshalling and configuring services is best accomplished using a formal description. Both representations are highly interdependent, and maintaining consistency between the two by hand is difficult. We report on a description logic approach using the web ontology language DAML+OIL that uses property-based service descriptions. The ontology is founded on DAML-S to dynamically create service classifications. These classifications are then used to support semantic service matching and discovery in a large grid-based middleware project, (my)Grid. We describe the extensions necessary to DAML-S in order to support bioinformatics service description; the utility of DAML+OIL in creating dynamic classifications based on formal descriptions; and the implementation of a DAML+OIL ontology service to support partial user-driven service matching and composition.
    BibTeX:
    @article{Wroe2003,
      author = {Wroe, C and Stevens, R and Goble, C and Roberts, A and Greenwood, M},
      title = {A suite of DAML plus OIL ontologies to describe bioinformatics web services and data},
      journal = {INTERNATIONAL JOURNAL OF COOPERATIVE INFORMATION SYSTEMS},
      year = {2003},
      volume = {12},
      number = {2},
      pages = {197-224}
    }
    
    Wu, H., Gordon, M., DeMaagd, K. & Fan, W. Mining web navigations for intelligence {2006} DECISION SUPPORT SYSTEMS
    Vol. {41}({3}), pp. {574-591} 
    article DOI  
    Abstract: The Internet is one of the fastest growing areas of intelligence gathering. We present a statistical approach, called principal clusters analysis, for analyzing millions of user navigations on the Web. This technique identifies prominent navigation clusters on different topics. Furthermore, it can determine information items that are useful starting points to explore a topic, as well as key documents to explore the topic in greater detail. Trends can be detected by observing navigation prominence over time. We apply this technique on a large popular website. The results show promise in web intelligence mining.
    BibTeX:
    @article{Wu2006,
      author = {Wu, H and Gordon, M and DeMaagd, K and Fan, WG},
      title = {Mining web navigations for intelligence},
      journal = {DECISION SUPPORT SYSTEMS},
      year = {2006},
      volume = {41},
      number = {3},
      pages = {574-591},
      doi = {{10.1016/j.dss.2004.06.011}}
    }
    
    Wu, Z., Chen, H. & Xu, J. Knowledge Base Grid: A generic Grid Architecture for semantic web {2003} JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY
    Vol. {18}({4}), pp. {462-473} 
    article  
    Abstract: The emergence of the semantic web will result in an enormous amount of knowledge base resources on the web. In this paper, a generic Knowledge Base Grid Architecture (KB-Grid) for building large-scale knowledge systems on the semantic web is presented. KB-Grid suggests a paradigm that emphasizes how to organize, discover, utilize, and manage web knowledge base resources. Four principal components are under development: a semantic browser for retrieving and browsing semantically enriched information, a knowledge server acting as the web container for knowledge, an ontology server for managing web ontologies, and a knowledge base directory server acting as the registry and catalog of KBs. Also, a referential model of knowledge service and the mechanisms required for semantic communication within KB-Grid are defined. To verify the design rationale underlying KB-Grid, an implementation in the domain of Traditional Chinese Medicine (TCM) is described.
    BibTeX:
    @article{Wu2003,
      author = {Wu, ZH and Chen, HJ and Xu, JF},
      title = {Knowledge Base Grid: A generic Grid Architecture for semantic web},
      journal = {JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY},
      year = {2003},
      volume = {18},
      number = {4},
      pages = {462-473},
      note = {1st International Workshop on Grid and Cooperative Computing (GCC2002), HAINAN, PEOPLES R CHINA, DEC, 2002}
    }
    
    Wuwongse, V., Anutariya, C., Akama, K. & Nantajeewarawat, E. XML declarative description: A language for the semantic web {2001} IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS
    Vol. {16}({3}), pp. {54-65} 
    article  
    BibTeX:
    @article{Wuwongse2001,
      author = {Wuwongse, V and Anutariya, C and Akama, K and Nantajeewarawat, E},
      title = {XML declarative description: A language for the semantic web},
      journal = {IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS},
      year = {2001},
      volume = {16},
      number = {3},
      pages = {54-65}
    }
    
    Xu, C., Wang, J., Lu, H. & Zhang, Y. A novel framework for semantic annotation and personalized retrieval of sports video {2008} IEEE TRANSACTIONS ON MULTIMEDIA
    Vol. {10}({3}), pp. {421-436} 
    article DOI  
    Abstract: Sports video annotation is important for sports video semantic analysis such as event detection and personalization. In this paper, we propose a novel approach for sports video semantic annotation and personalized retrieval. Unlike state-of-the-art sports video analysis methods, which rely heavily on audio/visual features, the proposed approach incorporates web-casting text into sports video analysis. Compared with previous approaches, the contributions of our approach include the following. 1) The event detection accuracy is significantly improved due to the incorporation of web-casting text analysis. 2) The proposed approach is able to detect exact event boundaries and extract event semantics that are very difficult or impossible for previous approaches to handle. 3) The proposed method is able to create a personalized summary, from both general and specific points of view, related to a particular game, event, player or team according to the user's preference. We present the framework of our approach and details of text analysis, video analysis, text/video alignment, and personalized retrieval. The experimental results on event boundary detection in sports video are encouraging and comparable to manually selected events. The evaluation on personalized retrieval shows it is effective in helping meet users' expectations.
    BibTeX:
    @article{Xu2008,
      author = {Xu, Changsheng and Wang, Jinjun and Lu, Hanqing and Zhang, Yifan},
      title = {A novel framework for semantic annotation and personalized retrieval of sports video},
      journal = {IEEE TRANSACTIONS ON MULTIMEDIA},
      year = {2008},
      volume = {10},
      number = {3},
      pages = {421-436},
      doi = {{10.1109/TMM.2008.917346}}
    }
    
    Yang, C. & Luk, J. Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws {2003} JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY
    Vol. {54}({7}), pp. {671-682} 
    article DOI  
    Abstract: The information available in languages other than English on the World Wide Web is increasing significantly. According to a 1999 report from Computer Economics, 54% of Internet users are English speakers (``English Will Dominate Web for Only Three More Years,'' Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it was predicted that the number of English-speaking Internet users would grow by only 60% versus 150% growth among non-English speakers over the following five years, so that by 2005, 57% of Internet users would be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had doubled from 8.9 million to 16.9 million between January and June 2000 (``Report: China Internet users double to 17 million,'' CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002, making China the second largest global at-home Internet population in 2002 (the US Internet population was 166 million) (Robyn Greenspan, ``China Pulls Ahead of Japan,'' Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911-1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has in the past focused on structural and semantic interoperability; searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49). However, research on crossing language boundaries, especially between European and Oriental languages, is still at an initial stage. In this work, we focus on cross-lingual semantic interoperability through the automatic generation of a cross-lingual thesaurus from an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult a thesaurus to identify other relevant vocabulary. For searching across language boundaries, a cross-lingual thesaurus generated by co-occurrence analysis and a Hopfield network can supply additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Owing to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents, so English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus using a Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in the language different from that of the input term; the direct translation of the input term can also be retrieved in most cases. (A short illustrative sketch follows this entry.)
    BibTeX:
    @article{Yang2003,
      author = {Yang, CC and Luk, J},
      title = {Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws},
      journal = {JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY},
      year = {2003},
      volume = {54},
      number = {7},
      pages = {671-682},
      doi = {{10.1002/asi.10259}}
    }
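
    Illustrative sketch: the co-occurrence analysis step described above can be imitated on a toy aligned corpus. The Hopfield-network refinement the paper applies is omitted here, and all segments, terms, and counts are invented.

      # Toy sketch of co-occurrence-based cross-lingual term association,
      # in the spirit of the approach above (not the authors' code).
      # "Documents" are aligned English/Chinese segment pairs.
      from collections import Counter
      from itertools import product

      aligned_segments = [
          (["court", "justice"], ["法院", "司法"]),
          (["court", "appeal"], ["法院", "上诉"]),
          (["justice", "department"], ["司法", "部门"]),
      ]

      # Count how often each English term co-occurs with each Chinese term.
      cooc = Counter()
      for en_terms, zh_terms in aligned_segments:
          for e, z in product(en_terms, zh_terms):
              cooc[(e, z)] += 1

      def associated_terms(term, top_n=3):
          """Rank cross-lingual terms by raw co-occurrence with `term`."""
          scores = {z: c for (e, z), c in cooc.items() if e == term}
          return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

      print(associated_terms("court"))  # [('法院', 2), ('司法', 1), ('上诉', 1)]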
    
    Yang, Q.Z. & Zhang, Y. Semantic interoperability in building design: Methods and tools {2006} COMPUTER-AIDED DESIGN
    Vol. {38}({10}), pp. {1099-1112} 
    article DOI  
    Abstract: Semantic interoperability is a crucial element in making building information models understandable and model data sharable across multiple design disciplines and heterogeneous computer systems. This paper presents a new approach and its software implementation for the development of building design objects with semantics of interoperable information to support semantic interoperability in building design. The novelty of the approach lies in its incorporation of building design domain ontology, object-based CAD information modeling, and interoperability standards to make building information models and model data semantically interoperable. A set of methods is proposed to address object-based building information representation compliant with the Industry Foundation Classes (IFC), extension of IFC models with supplementary information, and semantic annotation of the interoperable and extensible information sets. The prototype implementation of these methods provides a set of Web-enabled software tools for effectively generating, managing, and reusing semantically interoperable building objects in design applications of architectural CAD, structural analysis, and building code conformance checking.
    BibTeX:
    @article{Yang2006,
      author = {Yang, Q. Z. and Zhang, Y.},
      title = {Semantic interoperability in building design: Methods and tools},
      journal = {COMPUTER-AIDED DESIGN},
      year = {2006},
      volume = {38},
      number = {10},
      pages = {1099-1112},
      doi = {{10.1016/j.cad.2006.06.003}}
    }
    
    Yang, S., Chen, I. & Shao, N. Ontology enabled annotation and knowledge management for collaborative learning in virtual learning community {2004} EDUCATIONAL TECHNOLOGY & SOCIETY
    Vol. {7}({4}), pp. {70-81} 
    article  
    Abstract: The nature of collaborative learning involves intensive interactions among collaborators, such as articulating knowledge into written, verbal or symbolic forms, authoring articles or posting messages to the community's discussion forum, and responding or adding comments to messages or articles posted by others. Knowledge collaborators' capabilities to provide knowledge and their motivation to collaborate in the learning process influence the quantity and quality of the knowledge that flows into the virtual learning community. In this paper, we have developed an ontology-enabled annotation and knowledge management system to provide semantic web services from three perspectives: personalized annotation, real-time discussion, and semantic content retrieval. Personalized annotation equips collaborators with Web-based authoring tools for commenting, knowledge articulation and exertion, by extracting metadata from both the annotated content and the annotation itself and establishing an ontological relation between them. Real-time discussion serves as a bridge linking collaborators and knowledge and motivates collaborators to share knowledge, by building profiles for collaborators and knowledge (in the forms of content and annotation) during every discussion session and establishing ontological relations between the collaborators and knowledge for use in semantic content retrieval. Semantic content retrieval then utilizes the ontological relations constructed from personalized annotation and real-time discussion to find more relevant collaborators and knowledge.
    BibTeX:
    @article{Yang2004,
      author = {Yang, SJH and Chen, IYL and Shao, NWY},
      title = {Ontology enabled annotation and knowledge management for collaborative learning in virtual learning community},
      journal = {EDUCATIONAL TECHNOLOGY & SOCIETY},
      year = {2004},
      volume = {7},
      number = {4},
      pages = {70-81}
    }
    
    Yen, J., Fan, X., Sun, S., Hanratty, T. & Dumer, J. Agents with shared mental models for enhancing team decision makings {2006} DECISION SUPPORT SYSTEMS
    Vol. {41}({3}), pp. {634-653} 
    article DOI  
    Abstract: Proactive information sharing is a challenging issue faced by intelligence agencies in effectively making critical decisions under time pressure in areas related to homeland security. Motivated by psychological studies on human teams, a team-oriented agent architecture, Collaborative Agents for Simulating Teamwork (CAST), was implemented to allow agents in a team to anticipate the information needs of teammates and help them with their information needs proactively and effectively. In this paper, we extend CAST with a decision-making module. Through two sets of experiments in a simulated battlefield, we evaluate the effectiveness of the decision-theoretic proactive communication strategy in improving team performance, and the effectiveness of information fusion as an approach to alleviating the information overload problem faced by distributed decision makers.
    BibTeX:
    @article{Yen2006,
      author = {Yen, J and Fan, XC and Sun, S and Hanratty, T and Dumer, J},
      title = {Agents with shared mental models for enhancing team decision makings},
      journal = {DECISION SUPPORT SYSTEMS},
      year = {2006},
      volume = {41},
      number = {3},
      pages = {634-653},
      doi = {{10.1016/j.dss.2004.06.008}}
    }
    
    Yu, Q., Liu, X., Bouguettaya, A. & Medjahed, B. Deploying and managing Web services: issues, solutions, and directions {2008} VLDB JOURNAL
    Vol. {17}({3}), pp. {537-572} 
    article DOI  
    Abstract: Web services are expected to be the key technology in enabling the next installment of the Web in the form of the Service Web. In this paradigm shift, Web services would be treated as first-class objects that can be manipulated much like data is now manipulated using a database management system. Hitherto, Web services have largely been driven by standards. However, there is a strong impetus for defining a solid and integrated foundation that would facilitate the kind of innovations witnessed in other fields, such as databases. This survey focuses on investigating the different research problems, solutions, and directions to deploying Web services that are managed by an integrated Web Service Management System (WSMS). The survey identifies the key features of a WSMS and conducts a comparative study on how current research approaches and projects fit in.
    BibTeX:
    @article{Yu2008,
      author = {Yu, Qi and Liu, Xumin and Bouguettaya, Athman and Medjahed, Brahim},
      title = {Deploying and managing Web services: issues, solutions, and directions},
      journal = {VLDB JOURNAL},
      year = {2008},
      volume = {17},
      number = {3},
      pages = {537-572},
      doi = {{10.1007/s00778-006-0020-3}}
    }
    
    Yue, P., Di, L., Yang, W., Yu, G. & Zhao, P. Semantics-based automatic composition of geospatial Web service chains {2007} COMPUTERS & GEOSCIENCES
    Vol. {33}({5}), pp. {649-665} 
    article DOI  
    Abstract: Recent developments in Web service technologies and the semantic Web have shown promise for automatic discovery, access, and use of Web services to quickly and efficiently solve particular application problems. One such application area is in the geospatial discipline, where Web services can significantly reduce the data volume and required computing resources at the end-user side. A key challenge in promoting widespread use of Web services in geospatial applications is to automate the construction of a chain or process flow that involves multiple services and highly diversified and distributed data. This work presents an approach for automating geospatial Web service composition by employing geospatial semantics in the service-oriented architecture (SOA). It shows how ontology-based geospatial semantics are used in a prototype system for enabling the automatic discovery, access, and chaining of geospatial Web services. A case study of the chaining process for deriving a landslide susceptibility index illustrates the applicability of ontology-driven automatic Web service composition for geospatial applications.
    BibTeX:
    @article{Yue2007,
      author = {Yue, Peng and Di, Liping and Yang, Wenli and Yu, Genong and Zhao, Peisheng},
      title = {Semantics-based automatic composition of geospatial Web service chains},
      journal = {COMPUTERS & GEOSCIENCES},
      year = {2007},
      volume = {33},
      number = {5},
      pages = {649-665},
      doi = {{10.1016/j.cageo.2006.09.003}}
    }
    
    Zeng, Q., Crowell, J., Plovnick, R., Kim, E., Ngo, L. & Dibble, E. Assisting consumer health information retrieval with query recommendations {2006} JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION
    Vol. {13}({1}), pp. {80-90} 
    article DOI  
    Abstract: Objective: Health information retrieval (HIR) on the Internet has become an important practice for millions of people, many of whom have problems forming effective queries. We have developed and evaluated a tool to assist people in health-related query formation. Design: We developed the Health Information Query Assistant (HIQuA) system. The system suggests alternative/additional query terms related to the user's initial query that can be used as building blocks to construct a better, more specific query. The recommended terms are selected according to their semantic distance from the original query, which is calculated on the basis of concept co-occurrences in medical literature and log data as well as semantic relations in medical vocabularies. Measurements: An evaluation of the HIQuA system was conducted and a total of 213 subjects participated in the study. The subjects were randomized into 2 groups. One group was given query recommendations and the other was not. Each subject performed HIR for both a predefined and a self-defined task. Results: The study showed that providing HIQuA recommendations resulted in statistically significantly higher rates of successful queries (odds ratio = 1.66, 95% confidence interval = 1.16-2.38), although no statistically significant impact on user satisfaction or the users' ability to accomplish the predefined retrieval task was found. Conclusion: Providing semantic-distance-based query recommendations can help consumers with query formation during HIR. (A short illustrative sketch follows this entry.)
    BibTeX:
    @article{Zeng2006,
      author = {Zeng, QT and Crowell, J and Plovnick, RM and Kim, E and Ngo, L and Dibble, E},
      title = {Assisting consumer health information retrieval with query recommendations},
      journal = {JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION},
      year = {2006},
      volume = {13},
      number = {1},
      pages = {80-90},
      doi = {{10.1197/jamia.M1820}}
    }
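
    Illustrative sketch: a minimal stand-in for semantic-distance-based term recommendation, using cosine similarity over made-up co-occurrence vectors. HIQuA's actual distance additionally draws on log data and medical vocabularies, which this sketch omits.

      # Recommend related query terms by cosine similarity of
      # co-occurrence vectors (a stand-in for the semantic distance
      # the abstract derives from literature and log co-occurrences).
      # All vocabulary and counts below are invented.
      import math

      cooccurrence_vectors = {
          "diabetes": {"insulin": 9, "glucose": 8, "diet": 4},
          "insulin": {"diabetes": 9, "glucose": 7, "injection": 3},
          "hypertension": {"blood pressure": 9, "diet": 5},
      }

      def cosine(u, v):
          keys = set(u) | set(v)
          dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
          nu = math.sqrt(sum(x * x for x in u.values()))
          nv = math.sqrt(sum(x * x for x in v.values()))
          return dot / (nu * nv) if nu and nv else 0.0

      def recommend(query, top_n=2):
          """Suggest the terms semantically closest to the user's query term."""
          q = cooccurrence_vectors[query]
          scores = {t: cosine(q, v)
                    for t, v in cooccurrence_vectors.items() if t != query}
          return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

      print(recommend("diabetes"))  # "insulin" ranks closest here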
    
    Zeng, Q., Kogan, S., Ash, N., Greenes, R. & Boxwala, A. Characteristics of consumer terminology for health information retrieval {2002} METHODS OF INFORMATION IN MEDICINE
    Vol. {41}({4}), pp. {289-298} 
    article  
    Abstract: Objectives: As millions of consumers perform health information retrieval online, the mismatch between their terminology and the terminologies of the information sources could become a major barrier to successful retrievals. To address this problem, we studied the characteristics of consumer terminology for health information retrieval. Methods: Our study focused on consumer queries that were used on a consumer health service Web site and a consumer health information Web site. We analyzed data from the site-usage logs and conducted interviews with patients. Results: Our findings show that consumers' information retrieval performance is very poor. There are significant mismatches at all levels (lexical, semantic and mental models) between the consumer terminology and both the information source terminology and standard medical vocabularies. Conclusions: Comprehensive terminology support on all levels is needed for consumer health information retrieval.
    BibTeX:
    @article{Zeng2002,
      author = {Zeng, Q and Kogan, S and Ash, N and Greenes, RA and Boxwala, AA},
      title = {Characteristics of consumer terminology for health information retrieval},
      journal = {METHODS OF INFORMATION IN MEDICINE},
      year = {2002},
      volume = {41},
      number = {4},
      pages = {289-298}
    }
    
    Zhang, H., Chen, Z., Li, M. & Su, Z. Relevance feedback and learning in content-based image search {2003} WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS
    Vol. {6}({2}), pp. {131-155} 
    article  
    Abstract: A major bottleneck in content-based image retrieval (CBIR) systems or search engines is the large gap between low-level image features used to index images and high-level semantic contents of images. One solution to this bottleneck is to apply relevance feedback to refine the query or similarity measures in the image search process. In this paper, we first address the key issues involved in relevance feedback of CBIR systems and present a brief overview of a set of commonly used relevance feedback algorithms. We then present a framework of relevance feedback and semantic learning in CBIR, into which almost all of the previously proposed methods fit well. In this framework, low-level features and keyword annotations are integrated in image retrieval and in feedback processes to improve the retrieval performance. We have also extended the framework to a content-based web image search engine in which hosting web pages are used to collect relevant annotations for images and users' feedback logs are used to refine annotations. A prototype system has been developed to evaluate our proposed schemes, and our experimental results indicate that our approach outperforms traditional CBIR systems and relevance feedback approaches. (A short illustrative sketch follows this entry.)
    BibTeX:
    @article{Zhang2003,
      author = {Zhang, HJ and Chen, Z and Li, MJ and Su, Z},
      title = {Relevance feedback and learning in content-based image search},
      journal = {WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS},
      year = {2003},
      volume = {6},
      number = {2},
      pages = {131-155},
      note = {6th IFIP 2.6 Working Conference on Visual Database Systems (VDB6), BRISBANE, AUSTRALIA, MAY 29-31, 2002}
    }
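
    Illustrative sketch: classic Rocchio-style relevance feedback, a common instance of the query-refinement loop the abstract surveys (not the paper's specific framework). The feature vectors and the alpha/beta/gamma weights are conventional, illustrative choices.

      # Rocchio relevance feedback: move the query vector toward
      # examples the user marked relevant and away from non-relevant
      # ones. Vectors here stand for low-level image features.
      import numpy as np

      def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
          """Return the refined query vector after one feedback round."""
          q = alpha * np.asarray(query, dtype=float)
          if len(relevant):
              q += beta * np.mean(relevant, axis=0)
          if len(nonrelevant):
              q -= gamma * np.mean(nonrelevant, axis=0)
          return q

      query = [0.2, 0.8, 0.0]                  # initial feature-space query
      relevant = np.array([[0.3, 0.9, 0.1], [0.1, 0.7, 0.0]])
      nonrelevant = np.array([[0.9, 0.1, 0.8]])
      print(rocchio(query, relevant, nonrelevant))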
    
    Zhang, P., Zhang, J., Sheng, H., Russo, J., Osborne, B. & Buetow, K. Gene functional similarity search tool (GFSST) {2006} BMC BIOINFORMATICS
    Vol. {7} 
    article DOI  
    Abstract: Background: With the completion of the genome sequences of human, mouse, and other species and the advent of high-throughput functional genomic research technologies such as biomicroarray chips, more and more genes and their products have been discovered and their functions have begun to be understood. Increasing amounts of data about genes, gene products and their functions have been stored in databases. To facilitate selection of candidate genes for gene-disease research, genetic association studies, biomarker and drug target selection, and animal models of human diseases, it is essential to have search engines that can retrieve genes by their functions from proteome databases. In recent years, the development of Gene Ontology (GO) has established structured, controlled vocabularies describing gene functions, which makes it possible to develop novel tools to search genes by functional similarity. Results: By using a statistical model to measure the functional similarity of genes based on the Gene Ontology directed acyclic graph, we developed a novel Gene Functional Similarity Search Tool (GFSST) to identify genes with related functions from annotated proteome databases. This search engine lets users design their search targets by gene functions. Conclusion: An implementation of GFSST which works on UniProt (Universal Protein Resource) for the human and mouse proteomes is available at the GFSST Web Server. GFSST provides functions not only for similar gene retrieval but also for gene search by one or more GO terms. This represents a powerful new approach for selecting similar genes and gene products from proteome databases according to their functions. (A short illustrative sketch follows this entry.)
    BibTeX:
    @article{Zhang2006,
      author = {Zhang, PS and Zhang, JH and Sheng, HT and Russo, JJ and Osborne, B and Buetow, K},
      title = {Gene functional similarity search tool (GFSST)},
      journal = {BMC BIOINFORMATICS},
      year = {2006},
      volume = {7},
      doi = {{10.1186/1471-2105-7-135}}
    }
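
    Illustrative sketch: GFSST's statistical similarity model is not reproduced here; instead, a simple Jaccard overlap of ancestor sets on a hand-made miniature DAG conveys the idea of measuring functional similarity through the GO graph. The GO identifiers are fake.

      # Toy GO-style similarity: compare two terms by the overlap of
      # their ancestor sets in a small hand-made DAG.
      parents = {
          "GO:kinase": ["GO:catalytic"],
          "GO:phosphatase": ["GO:catalytic"],
          "GO:catalytic": ["GO:molecular_function"],
          "GO:molecular_function": [],
      }

      def ancestors(term):
          """All ancestors of `term` in the DAG, including itself."""
          seen, stack = set(), [term]
          while stack:
              t = stack.pop()
              if t not in seen:
                  seen.add(t)
                  stack.extend(parents[t])
          return seen

      def go_similarity(a, b):
          """Jaccard overlap of the two ancestor sets."""
          sa, sb = ancestors(a), ancestors(b)
          return len(sa & sb) / len(sa | sb)

      print(go_similarity("GO:kinase", "GO:phosphatase"))  # 0.5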
    
    Zhang, Z., Cheung, K.-H. & Townsend, J.P. Bringing Web 2.0 to bioinformatics {2009} BRIEFINGS IN BIOINFORMATICS
    Vol. {10}({1}), pp. {1-10} 
    article DOI  
    Abstract: Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.
    BibTeX:
    @article{Zhang2009,
      author = {Zhang, Zhang and Cheung, Kei-Hoi and Townsend, Jeffrey P.},
      title = {Bringing Web 2.0 to bioinformatics},
      journal = {BRIEFINGS IN BIOINFORMATICS},
      year = {2009},
      volume = {10},
      number = {1},
      pages = {1-10},
      doi = {{10.1093/bib/bbn041}}
    }
    
    Zhao, J., Wroe, C., Goble, C., Stevens, R., Quan, D. & Greenwood, M. Using semantic web technologies for representing e-Science provenance {2004}
    Vol. {3298}SEMANTIC WEB - ISWC 2004, PROCEEDINGS, pp. {92-106} 
    inproceedings  
    Abstract: Life science researchers increasingly rely on the web as a primary source of data, forcing them to apply the same rigor to its use as to an experiment in the laboratory. The (my)Grid project is developing the use of workflows to explicitly capture web-based procedures, and provenance to describe how and why results were produced. Experience within (my)Grid has shown that this provenance metadata is formed from a complex web of heterogeneous resources that impact on the production of a result. We have therefore explored the use of Semantic Web technologies such as RDF and ontologies to support its representation, and used existing initiatives such as Jena and LSID to generate and store such material. The effective presentation of complex RDF graphs is challenging. Haystack has been used to provide multiple views of provenance metadata that can be further annotated. This work therefore forms a case study showing how existing Semantic Web tools can effectively support the emerging requirements of life science research. (A short illustrative sketch follows this entry.)
    BibTeX:
    @inproceedings{Zhao2004,
      author = {Zhao, J and Wroe, C and Goble, C and Stevens, R and Quan, D and Greenwood, M},
      title = {Using semantic web technologies for representing e-Science provenance},
      booktitle = {SEMANTIC WEB - ISWC 2004, PROCEEDINGS},
      year = {2004},
      volume = {3298},
      pages = {92-106},
      note = {3rd International Semantic Web Conference, Hiroshima, JAPAN, NOV 07-11, 2004}
    }
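
    Illustrative sketch: a few RDF triples describing a workflow run, in the spirit of the provenance metadata discussed above. It requires the rdflib library; the namespace and property names are invented, not the (my)Grid vocabulary.

      # Build a tiny provenance graph (run -> service used, outputs
      # produced) and query it, using rdflib.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF

      EX = Namespace("http://example.org/provenance#")

      g = Graph()
      g.add((EX.run42, RDF.type, EX.WorkflowRun))
      g.add((EX.run42, EX.usedService, EX.blastService))
      g.add((EX.run42, EX.produced, EX.result7))
      g.add((EX.result7, EX.description, Literal("BLAST hits for sequence X")))

      # Ask which artifacts a given run produced.
      for _, _, output in g.triples((EX.run42, EX.produced, None)):
          print(output)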
    
    Zhao, R. & Grosky, W. Narrowing the semantic gap - Improved text-based web document retrieval using visual features {2002} IEEE TRANSACTIONS ON MULTIMEDIA
    Vol. {4}({2}), pp. {189-200} 
    article  
    Abstract: In this paper, we present the results of our work that seeks to negotiate the gap between low-level features and high-level concepts in the domain of web document retrieval. This work concerns latent semantic indexing (LSI), a technique which has been used for textual information retrieval for many years. In this environment, LSI determines clusters of co-occurring keywords (sometimes called concepts) so that a query using a particular keyword can retrieve documents that perhaps do not contain this keyword, but contain other keywords from the same cluster. In this paper, we examine the use of this technique for content-based web document retrieval, using both keywords and image features to represent the documents. Two different approaches to image feature representation, namely color histograms and color anglograms, are adopted and evaluated. Experimental results show that LSI, together with both textual and visual features, is able to extract the underlying semantic structure of web documents, thus helping to improve the retrieval performance significantly, even when querying is done using only keywords. (A short illustrative sketch follows this entry.)
    BibTeX:
    @article{Zhao2002,
      author = {Zhao, R and Grosky, WI},
      title = {Narrowing the semantic gap - Improved text-based web document retrieval using visual features},
      journal = {IEEE TRANSACTIONS ON MULTIMEDIA},
      year = {2002},
      volume = {4},
      number = {2},
      pages = {189-200}
    }
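
    Illustrative sketch: the LSI machinery the abstract relies on reduces to a truncated SVD of a term-document matrix. The toy matrix below uses keyword counts only; the paper's contribution is to add visual-feature rows alongside keywords, which this sketch omits, as does tf-idf weighting.

      # Minimal latent semantic indexing: truncated SVD of a tiny
      # term-document matrix, then querying in the reduced space.
      import numpy as np

      terms = ["web", "retrieval", "image", "color"]
      # Columns are documents; entries are raw term counts (made up).
      A = np.array([
          [2, 1, 0],   # web
          [1, 2, 0],   # retrieval
          [0, 1, 2],   # image
          [0, 0, 2],   # color
      ], dtype=float)

      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      k = 2                           # keep the two strongest "concepts"
      Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

      def fold_in(query_counts):
          """Project a query vector into the k-dimensional concept space."""
          return np.asarray(query_counts, dtype=float) @ Uk / sk

      q = fold_in([1, 0, 0, 0])       # query: "web"
      doc_vecs = Vtk.T                # documents in concept space (rows)
      sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
      print(sims)                     # document 0 scores highest for this query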
    
    Zhong, N. Toward Web Intelligence {2003}
    Vol. {2663}ADVANCES IN WEB INTELLIGENCE, pp. {1-14} 
    inproceedings  
    Abstract: Web Intelligence (WI) presents excellent opportunities and challenges for the research and development of a new generation of Web-based information processing technology, as well as for exploiting Web-based advanced applications. Based on two perspectives of WI research, an intelligent Web-based business-centric schematic diagram and the conceptual levels of WI, we investigate various ways to study WI and potential applications.
    BibTeX:
    @inproceedings{Zhong2003,
      author = {Zhong, N},
      title = {Toward Web Intelligence},
      booktitle = {ADVANCES IN WEB INTELLIGENCE},
      year = {2003},
      volume = {2663},
      pages = {1-14},
      note = {1st International Atlantic Web Intelligence Conference (AWIC 2003), MADRID, SPAIN, MAY 05-06, 2003}
    }
    
    Zhou, L. Ontology learning: state of the art and open issues {2007} INFORMATION TECHNOLOGY & MANAGEMENT
    Vol. {8}({3, Sp. Iss. SI}), pp. {241-252} 
    article DOI  
    Abstract: Ontology is one of the fundamental cornerstones of the semantic Web. The pervasive use of ontologies in information sharing and knowledge management calls for efficient and effective approaches to ontology development. Ontology learning, which seeks to discover ontological knowledge from various forms of data automatically or semi-automatically, can overcome the bottleneck of ontology acquisition in ontology development. Despite the significant progress in ontology learning research over the past decade, there remain a number of open problems in this field. This paper provides a comprehensive review and discussion of major issues, challenges, and opportunities in ontology learning. We propose a new learning-oriented model for ontology development and a framework for ontology learning. Moreover, we identify and discuss important dimensions for classifying ontology learning approaches and techniques. In light of the impact of domain on choosing ontology learning approaches, we summarize domain characteristics that can facilitate future ontology learning effort. The paper offers a road map and a variety of insights about this fast-growing field.
    BibTeX:
    @article{Zhou2007,
      author = {Zhou, Lina},
      title = {Ontology learning: state of the art and open issues},
      journal = {INFORMATION TECHNOLOGY & MANAGEMENT},
      year = {2007},
      volume = {8},
      number = {3, Sp. Iss. SI},
      pages = {241-252},
      doi = {{10.1007/s10799-007-0019-5}}
    }
    
    Zhuge, H. Communities and Emerging Semantics in Semantic Link Network: Discovery and Learning {2009} IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
    Vol. {21}({6, Sp. Iss. SI}), pp. {785-799} 
    article DOI  
    Abstract: The World Wide Web provides plentiful content for Web-based learning, but its hyperlink-based architecture connects Web resources for free browsing rather than for effective learning. To support effective learning, an e-learning system should be able to discover and make use of the semantic communities and the emerging semantic relations in a dynamic complex network of learning resources. Previous graph-based community discovery approaches are limited in their ability to discover semantic communities. This paper first suggests the Semantic Link Network (SLN), a loosely coupled semantic data model that can semantically link resources and derive implicit semantic links according to a set of relational reasoning rules. By studying the intrinsic relationship between semantic communities and the semantic space of the SLN, approaches to discovering reasoning-constraint, rule-constraint, and classification-constraint semantic communities are proposed. Further, the approaches, principles, and strategies for discovering emerging semantics in dynamic SLNs are studied. The basic laws of semantic link network motion are revealed for the first time. An e-learning environment incorporating the proposed approaches, principles, and strategies to support effective discovery and learning is suggested. (A short illustrative sketch follows this entry.)
    BibTeX:
    @article{Zhuge2009,
      author = {Zhuge, Hai},
      title = {Communities and Emerging Semantics in Semantic Link Network: Discovery and Learning},
      journal = {IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING},
      year = {2009},
      volume = {21},
      number = {6, Sp. Iss. SI},
      pages = {785-799},
      note = {6th International Conference on Web-Based Learning (ICWL 07), Edinburgh, SCOTLAND, AUG, 2007},
      doi = {{10.1109/TKDE.2008.141}}
    }
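
    Illustrative sketch: deriving implicit semantic links by rule-based composition until a fixed point, loosely following the SLN idea summarized above. The rule table and resources are an invented example, not the paper's rules.

      # Typed links between resources plus a composition rule table;
      # derive implicit links until no new link can be added.
      rules = {
          ("partOf", "partOf"): "partOf",           # part-of is transitive
          ("instanceOf", "subtypeOf"): "instanceOf",
      }

      links = {
          ("wheel", "partOf", "car"),
          ("car", "partOf", "fleet"),
          ("myCar", "instanceOf", "car"),
          ("car", "subtypeOf", "vehicle"),
      }

      changed = True
      while changed:
          changed = False
          for (a, r1, b) in list(links):
              for (b2, r2, c) in list(links):
                  if b == b2 and (r1, r2) in rules:
                      derived = (a, rules[(r1, r2)], c)
                      if derived not in links:
                          links.add(derived)
                          changed = True

      for triple in sorted(links):
          print(triple)
      # Derives ('wheel', 'partOf', 'fleet') and ('myCar', 'instanceOf', 'vehicle').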
    
    Zhuge, H. China's E-science knowledge grid environment {2004} IEEE INTELLIGENT SYSTEMS
    Vol. {19}({1}), pp. {13-17} 
    article  
    BibTeX:
    @article{Zhuge2004,
      author = {Zhuge, H},
      title = {China's E-science knowledge grid environment},
      journal = {IEEE INTELLIGENT SYSTEMS},
      year = {2004},
      volume = {19},
      number = {1},
      pages = {13-17}
    }
    
    Zhuge, H. Semantics, resource and grid {2004} FUTURE GENERATION COMPUTER SYSTEMS
    Vol. {20}({1}), pp. {1-5} 
    article DOI  
    Abstract: The future interconnection environment will be a platform-independent Virtual Grid consisting of requirements, roles and resources. With machine-understandable semantics, a resource can actively and dynamically cluster relevant resources to provide on-demand services by understanding each other's requirements and functions. Versatile resources are encapsulated to provide services in the form of a single semantic image by using the uniform resource model. A resource can intelligently assist people to accomplish complex tasks and solve problems by participating in versatile resource flow cycles through virtual roles to use proper knowledge, information, and computing resources. From this Virtual Grid point of view, this paper conceptualizes the Social Grid, the Semantic Resource Grid and the Knowledge Grid, and then points out the key research issues of the future interconnection environment.
    BibTeX:
    @article{Zhuge2004a,
      author = {Zhuge, H},
      title = {Semantics, resource and grid},
      journal = {FUTURE GENERATION COMPUTER SYSTEMS},
      year = {2004},
      volume = {20},
      number = {1},
      pages = {1-5},
      doi = {{10.1016/S0167-739X(03)00159-6}}
    }
    
    Zhuge, H. Resource space model, its design method and applications {2004} JOURNAL OF SYSTEMS AND SOFTWARE
    Vol. {72}({1}), pp. {71-81} 
    article DOI  
    Abstract: A resource space model (RSM) is a model for specifying, sharing and managing versatile Web resources with a universal resource view. A normal resource space is a semantic coordinate system with independent coordinates and mutually orthogonal axes. This paper first introduces the main viewpoint and basic content of the RSM, and then proposes a four-step method for designing logical-level resource spaces: resource analysis, top-down resource partition, design of two-dimensional resource spaces, and joins between resource spaces. Design strategies and tools include a reference model, an analogy and abstraction strategy, a resource dictionary, an independency checking tool, and an orthogonality checking tool. A study of using the RSM to manage relational tables shows that the RSM is also suitable for managing structured resources. Applications show that the RSM, together with the proposed development method, is an applicable solution for normal and effective management of versatile web resources. Comparisons show the differences between the proposed model and the relational data model. (A short illustrative sketch follows this entry.)
    BibTeX:
    @article{Zhuge2004b,
      author = {Zhuge, H},
      title = {Resource space model, its design method and applications},
      journal = {JOURNAL OF SYSTEMS AND SOFTWARE},
      year = {2004},
      volume = {72},
      number = {1},
      pages = {71-81},
      doi = {{10.1016/S0164-1212(03)00058-X}}
    }
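
    Illustrative sketch: a resource space as a small semantic coordinate system, with resources filed and retrieved by coordinates. Axis names and resources are invented; the RSM's normal forms and checking tools are not modeled.

      # A two-dimensional "resource space": axes define the coordinate
      # system, and resources are filed at coordinate points.
      axes = {
          "category": ["paper", "dataset", "tool"],
          "area": ["bioinformatics", "geoscience"],
      }

      space = {}  # coordinate tuple -> set of resources

      def put(resource, category, area):
          """File a resource at a point of the resource space."""
          assert category in axes["category"] and area in axes["area"]
          space.setdefault((category, area), set()).add(resource)

      put("GFSST server", "tool", "bioinformatics")
      put("landslide index chain", "paper", "geoscience")

      # Retrieval by coordinates, analogous to a point query in the RSM.
      print(space[("tool", "bioinformatics")])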
    
    Zhuge, H. Resource Space Grid: model, method and platform {2004} CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE
    Vol. {16}({14}), pp. {1385-1413} 
    article DOI  
    Abstract: A Resource Space Grid is a virtual Grid that aims at effectively sharing, using and managing versatile resources across the Internet. The kernel of the Resource Space Grid includes a Resource Space Model (RSM) and a uniform Resource Using Mechanism (RUM). This paper presents the Resource Space Grid's core scientific issues and methodology, architecture, model and theory, design criteria and method, and practice. A normal form theory is proposed to normalize the resource space, a coordinate system for uniformly specifying and organizing resources. The RUM provides not only end-users with an operable resource browser for operating on resources using the built-in Resource Operation Language (ROL), but also application developers with a ROL-based programming environment. A prototype platform based on the proposed model and method has been implemented and used for sharing and managing resources in distributed research teams. Operations on Resource Spaces can constitute the virtual communities of Resource Space Grids, a platform-independent resource-sharing environment.
    BibTeX:
    @article{Zhuge2004c,
      author = {Zhuge, H},
      title = {Resource Space Grid: model, method and platform},
      journal = {CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE},
      year = {2004},
      volume = {16},
      number = {14},
      pages = {1385-1413},
      doi = {{10.1002/cpe.867}}
    }
    
    Zhuge, H. Active e-document framework ADF: model and tool {2003} INFORMATION & MANAGEMENT
    Vol. {41}({1}), pp. {87-97} 
    article DOI  
    Abstract: An active document framework is a self-representable, self-explainable, and self-executable document mechanism. A document's content is reflected in four aspects: granularity hierarchy, template hierarchy, background knowledge, and semantic links between fragments. An active document has a set of built-in engines for browsing, retrieving, and reasoning, which can work in the way best suited to the document's content. Besides browsing and retrieval services, the active document supports intelligent information services such as complex question answering, online teaching, and assisted problem solving. The client-side service provider is only responsible for the retrieval of the required active document; the detailed information services are provided by the document mechanism. This improves the current Web information retrieval approach by raising the efficiency of information retrieval, enhancing the preciseness and mobility of information services, and enabling intelligent information services. A tool for making semantic links in a document and an intelligent browser have been developed to support the proposed approach, which provides a new type of web information service.
    BibTeX:
    @article{Zhuge2003,
      author = {Zhuge, H},
      title = {Active e-document framework ADF: model and tool},
      journal = {INFORMATION & MANAGEMENT},
      year = {2003},
      volume = {41},
      number = {1},
      pages = {87-97},
      doi = {{10.1016/S0378-7206(03)00029-6}}
    }
    
    Zhuge, H. A knowledge grid model and platform for global knowledge sharing {2002} EXPERT SYSTEMS WITH APPLICATIONS
    Vol. {22}({4}), pp. {313-320} 
    article  
    Abstract: This paper proposes a knowledge grid model for sharing and managing globally distributed knowledge resources. The model organizes knowledge in a three-dimensional knowledge space and provides a knowledge grid operation language, KGOL. Internet users can use the KGOL to create their knowledge grids, put knowledge into them, edit knowledge, partially or wholly open their grids to all or some particular grids, and get the required knowledge from the open knowledge of all the knowledge grids. The model enables people to conveniently share knowledge with each other when they work on the Internet. A software platform based on the proposed model has been implemented and used for knowledge sharing in research teams.
    BibTeX:
    @article{Zhuge2002,
      author = {Zhuge, H},
      title = {A knowledge grid model and platform for global knowledge sharing},
      journal = {EXPERT SYSTEMS WITH APPLICATIONS},
      year = {2002},
      volume = {22},
      number = {4},
      pages = {313-320}
    }
    
    Zhuge, H. A knowledge flow model for peer-to-peer team knowledge sharing and management {2002} EXPERT SYSTEMS WITH APPLICATIONS
    Vol. {23}({1}), pp. {23-30} 
    article  
    Abstract: To realize effective knowledge sharing in teamwork, this paper proposes a knowledge flow model for peer-to-peer knowledge sharing and management in cooperative teams. The model consists of the concepts, rules and methods of the knowledge flow, the knowledge flow process model, and the knowledge flow engine. A reference model for coordinating the knowledge flow process with the workflow process is suggested to provide an integrated approach to modeling teamwork processes. We also discuss the peer-to-peer knowledge-sharing paradigm in large-scale teams and propose an approach for constructing a knowledge flow network from the corresponding workflow. The proposed model provides a new way to model and manage teamwork processes.
    BibTeX:
    @article{Zhuge2002a,
      author = {Zhuge, H},
      title = {A knowledge flow model for peer-to-peer team knowledge sharing and management},
      journal = {EXPERT SYSTEMS WITH APPLICATIONS},
      year = {2002},
      volume = {23},
      number = {1},
      pages = {23-30}
    }
    
    Ziegler, C. & Lausen, G. Propagation models for trust and distrust in social networks {2005} INFORMATION SYSTEMS FRONTIERS
    Vol. {7}({4-5}), pp. {337-358} 
    article DOI  
    Abstract: Semantic Web endeavors have mainly focused on issues pertaining to knowledge representation and ontology design. However, besides understanding information metadata stated by subjects, knowing about their credibility becomes equally crucial. Hence, trust and trust metrics, conceived as computational means to evaluate trust relationships between individuals, come into play. Our major contribution to Semantic Web trust management through this work is twofold. First, we introduce a classification scheme for trust metrics along various axes and discuss the advantages and drawbacks of existing approaches for Semantic Web scenarios. In doing so, we make the case for local group trust metrics, leading to the second part, which presents Appleseed, our novel proposal for local group trust computation. Compelling in its simplicity, Appleseed borrows many ideas from spreading activation models in psychology and relates their concepts to trust evaluation in an intuitive fashion. Moreover, we provide extensions to the Appleseed nucleus that make our trust metric handle distrust statements. (A short illustrative sketch follows this entry.)
    BibTeX:
    @article{Ziegler2005,
      author = {Ziegler, CN and Lausen, G},
      title = {Propagation models for trust and distrust in social networks},
      journal = {INFORMATION SYSTEMS FRONTIERS},
      year = {2005},
      volume = {7},
      number = {4-5},
      pages = {337-358},
      doi = {{10.1007/s10796-005-4807-3}}
    }
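
    Illustrative sketch: a simplified spreading-activation trust metric in the spirit of local group trust metrics such as Appleseed. It omits distrust handling and Appleseed's normalization details; the graph and parameters are made up.

      # Inject trust "energy" at a source node and spread it along
      # weighted edges, decaying at each hop.
      trust_edges = {
          "alice": [("bob", 0.9), ("carol", 0.5)],
          "bob":   [("dave", 0.8)],
          "carol": [("dave", 0.4)],
          "dave":  [],
      }

      def spread_trust(source, energy=1.0, decay=0.85, rounds=10):
          """Accumulate trust scores by spreading energy from `source`."""
          scores = {source: energy}
          frontier = {source: energy}
          for _ in range(rounds):
              next_frontier = {}
              for node, e in frontier.items():
                  edges = trust_edges.get(node, [])
                  total = sum(w for _, w in edges)
                  for neigh, w in edges:
                      share = decay * e * w / total if total else 0.0
                      scores[neigh] = scores.get(neigh, 0.0) + share
                      next_frontier[neigh] = next_frontier.get(neigh, 0.0) + share
              frontier = next_frontier
              if not frontier:
                  break
          return scores

      print(spread_trust("alice"))  # dave accumulates trust via two paths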
    

    Created by JabRef on 15/11/2010.