    Abadi, M. & Needham, R. Prudent engineering practice for cryptographic protocols {1996} IEEE TRANSACTIONS ON SOFTWARE ENGINEERING
    Vol. {22}({1}), pp. {6-15} 
    article  
    Abstract: We present principles for designing cryptographic protocols. The principles are neither necessary nor sufficient for correctness. They are however helpful, in that adherence to them would have prevented a number of published errors. Our principles are informal guidelines; they complement formal methods, but do not assume them. In order to demonstrate the actual applicability of these guidelines, we discuss some instructive examples from the literature.
    BibTeX:
    @article{Abadi1996,
      author = {Abadi, M and Needham, R},
      title = {Prudent engineering practice for cryptographic protocols},
      journal = {IEEE TRANSACTIONS ON SOFTWARE ENGINEERING},
      year = {1996},
      volume = {22},
      number = {1},
      pages = {6-15},
      note = {1994 IEEE-Computer-Society Symposium on Research in Security and Privacy, OAKLAND, CA, MAY 16-18, 1994}
    }
    
    Abdelzaher, T., Shin, K. & Bhatti, N. Performance guarantees for Web server end-systems: A control-theoretical approach {2002} IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
    Vol. {13}({1}), pp. {80-96} 
    article  
    Abstract: The Internet is undergoing substantial changes from a communication and browsing infrastructure to a medium for conducting business and marketing a myriad of services. The World Wide Web provides a uniform and widely-accepted application interface used by these services to reach multitudes of clients. These changes place the Web server at the center of a gradually emerging e-service infrastructure with increasing requirements for service quality and reliability guarantees in an unpredictable and highly-dynamic environment. This paper describes performance control of a Web server using classical feedback control theory. We use feedback control theory to achieve overload protection, performance guarantees, and service differentiation in the presence of load unpredictability. We show that feedback control theory offers a promising analytic foundation for providing service differentiation and performance guarantees. We demonstrate how a general Web server may be modeled for purposes of performance control, present the equivalents of sensors and actuators, formulate a simple feedback loop, describe how it can leverage real-time scheduling and feedback-control theories to achieve per-class response-time and throughput guarantees, and evaluate the efficacy of the scheme on an experimental testbed using the most popular Web server, Apache. Experimental results indicate that control-theoretic techniques offer a sound way of achieving desired performance in performance-critical Internet applications. Our QoS (Quality-of-Service) management solutions can be implemented either in middleware that is transparent to the server, or as a library called by server code.
    BibTeX:
    @article{Abdelzaher2002,
      author = {Abdelzaher, TF and Shin, KG and Bhatti, N},
      title = {Performance guarantees for Web server end-systems: A control-theoretical approach},
      journal = {IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS},
      year = {2002},
      volume = {13},
      number = {1},
      pages = {80-96}
    }
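
    The feedback loop described here can be illustrated in a few lines. Below is a minimal sketch, not the paper's Apache testbed or controller design: a simple proportional controller adjusts the fraction of admitted requests (the actuator) so that an invented linear response-time model (the sensor) tracks a set point.

      # Sketch of a proportional admission controller. The linear "server"
      # model resp_time = 0.3 + 0.8 * admit is a stand-in for a real sensor.
      def simulate(set_point=0.5, steps=60, gain=0.5):
          admit = 1.0                            # actuator: admitted fraction
          for _ in range(steps):
              resp_time = 0.3 + 0.8 * admit      # toy sensor reading
              error = resp_time - set_point      # positive means too slow
              admit = min(1.0, max(0.0, admit - gain * error))
          return admit

      print(simulate())  # converges to about 0.25, i.e. resp_time = 0.5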
    
    Akyildiz, I., McNair, J., Ho, J., Uzunalioglu, H. & Wang, W. Mobility management in next-generation wireless systems {1999} PROCEEDINGS OF THE IEEE
    Vol. {87}({8}), pp. {1347-1384} 
    article  
    Abstract: This paper describes current and proposed protocols for mobility management for public land mobile network (PLMN)-based networks, mobile Internet protocol (IP), wireless asynchronous transfer mode (ATM), and satellite networks. The integration of these networks will be discussed in the context of the next evolutionary step of wireless communication networks. First, a review is provided of location management algorithms for personal communication systems (PCS) implemented over a PLMN network. The latest protocol changes for location registration and handoff are investigated for Mobile IP, followed by a discussion of proposed protocols for wireless ATM and satellite networks. Finally, an outline of open problems to be addressed by the next generation of wireless network service is discussed.
    BibTeX:
    @article{Akyildiz1999,
      author = {Akyildiz, IF and McNair, J and Ho, JSM and Uzunalioglu, H and Wang, WY},
      title = {Mobility management in next-generation wireless systems},
      journal = {PROCEEDINGS OF THE IEEE},
      year = {1999},
      volume = {87},
      number = {8},
      pages = {1347-1384}
    }
    
    Akyildiz, I., Wang, X. & Wang, W. Wireless mesh networks: a survey {2005} COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING
    Vol. {47}({4}), pp. {445-487} 
    article DOI  
    Abstract: Wireless mesh networks (WMNs) consist of mesh routers and mesh clients, where mesh routers have minimal mobility and form the backbone of WMNs. They provide network access for both mesh and conventional clients. The integration of WMNs with other networks such as the Internet, cellular, IEEE 802.11, IEEE 802.15, IEEE 802.16, sensor networks, etc., can be accomplished through the gateway and bridging functions in the mesh routers. Mesh clients can be either stationary or mobile, and can form a client mesh network among themselves and with mesh routers. WMNs are anticipated to resolve the limitations and to significantly improve the performance of ad hoc networks, wireless local area networks (WLANs), wireless personal area networks (WPANs), and wireless metropolitan area networks (WMANs). They are undergoing rapid progress and inspiring numerous deployments. WMNs will deliver wireless services for a large variety of applications in personal, local, campus, and metropolitan areas. Despite recent advances in wireless mesh networking, many research challenges remain in all protocol layers. This paper presents a detailed study on recent advances and open research issues in WMNs. System architectures and applications of WMNs are described, followed by discussing the critical factors influencing protocol design. Theoretical network capacity and the state-of-the-art protocols for WMNs are explored with an objective to point out a number of open research issues. Finally, test-beds, industrial practice, and current standard activities related to WMNs are highlighted. (C) 2004 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Akyildiz2005,
      author = {Akyildiz, IF and Wang, XD and Wang, WL},
      title = {Wireless mesh networks: a survey},
      journal = {COMPUTER NETWORKS-THE INTERNATIONAL JOURNAL OF COMPUTER AND TELECOMMUNICATIONS NETWORKING},
      year = {2005},
      volume = {47},
      number = {4},
      pages = {445-487},
      doi = {{10.1016/j.comnet.2004.12.001}}
    }
    
    Aladwani, A. & Palvia, P. Developing and validating an instrument for measuring user-perceived web quality {2002} INFORMATION & MANAGEMENT
    Vol. {39}({6}), pp. {467-476} 
    article  
    Abstract: Many of the instruments to measure information and system quality were developed in the context of mainframe and PC-based technologies of yesteryears. With the proliferation of the Internet and World Wide Web applications, users are increasingly interfacing and interacting with web-based applications. It is, therefore, important to develop new instruments and scales, which are directly targeted to these new interfaces and applications. In this article, we report on the development of an instrument that captures key characteristics of web site quality from the user's perspective. The 25-item instrument measures four dimensions of web quality: specific content, content quality, appearance and technical adequacy. While improvements are possible, the instrument exhibits excellent psychometric properties. The instrument would be useful to organizations and web designers as it provides an aggregate measure of web quality, and to researchers in related web research. (C) 2002 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Aladwani2002,
      author = {Aladwani, AM and Palvia, PC},
      title = {Developing and validating an instrument for measuring user-perceived web quality},
      journal = {INFORMATION & MANAGEMENT},
      year = {2002},
      volume = {39},
      number = {6},
      pages = {467-476}
    }
    
    Albert, R., Albert, I. & Nakarado, G. Structural vulnerability of the North American power grid {2004} PHYSICAL REVIEW E
    Vol. {69}({2, Part 2}) 
    article DOI  
    Abstract: The magnitude of the August 2003 blackout affecting the United States has put the challenges of energy transmission and distribution into the limelight. Despite all the interest and concerted effort, the complexity and interconnectivity of the electric infrastructure precluded us for a long time from understanding why certain events happened. In this paper we study the power grid from a network perspective and determine its ability to transfer power between generators and consumers when certain nodes are disrupted. We find that the power grid is robust to most perturbations, yet disturbances affecting key transmission substations greatly reduce its ability to function. We emphasize that the global properties of the underlying network must be understood as they greatly affect local behavior.
    BibTeX:
    @article{Albert2004,
      author = {Albert, R and Albert, I and Nakarado, GL},
      title = {Structural vulnerability of the North American power grid},
      journal = {PHYSICAL REVIEW E},
      year = {2004},
      volume = {69},
      number = {2, Part 2},
      doi = {{10.1103/PhysRevE.69.025103}}
    }
    
    Albert, R. & Barabasi, A. Statistical mechanics of complex networks {2002} REVIEWS OF MODERN PHYSICS
    Vol. {74}({1}), pp. {47-97} 
    article  
    Abstract: Complex networks describe a wide range of systems in nature and society. Frequently cited examples include the cell, a network of chemicals linked by chemical reactions, and the Internet, a network of routers and computers connected by physical links. While traditionally these systems have been modeled as random graphs, it is increasingly recognized that the topology and evolution of real networks are governed by robust organizing principles. This article reviews the recent advances in the field of complex networks, focusing on the statistical mechanics of network topology and dynamics. After reviewing the empirical data that motivated the recent interest in networks, the authors discuss the main models and analytical tools, covering random graphs, small-world and scale-free networks, the emerging theory of evolving networks, and the interplay between topology and the network's robustness against failures and attacks.
    BibTeX:
    @article{Albert2002,
      author = {Albert, R and Barabasi, AL},
      title = {Statistical mechanics of complex networks},
      journal = {REVIEWS OF MODERN PHYSICS},
      year = {2002},
      volume = {74},
      number = {1},
      pages = {47-97}
    }
    
    Albert, R. & Barabasi, A. Topology of evolving networks: Local events and universality {2000} PHYSICAL REVIEW LETTERS
    Vol. {85}({24}), pp. {5234-5237} 
    article  
    Abstract: Networks grow and evolve by local events, such as the addition of new nodes and links, or rewiring of links from one node to another. We show that depending on the frequency of these processes two topologically different networks can emerge, the connectivity distribution following either a generalized power law or an exponential. We propose a continuum theory that predicts these two regimes as well as the scaling function and the exponents, in good agreement with numerical results. Finally, we use the obtained predictions to fit the connectivity distribution of the network describing the professional links between movie actors.
    BibTeX:
    @article{Albert2000a,
      author = {Albert, R and Barabasi, AL},
      title = {Topology of evolving networks: Local events and universality},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2000},
      volume = {85},
      number = {24},
      pages = {5234-5237}
    }
    
    Albert, R., Jeong, H. & Barabasi, A. Error and attack tolerance of complex networks {2000} NATURE
    Vol. {406}({6794}), pp. {378-382} 
    article  
    Abstract: Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network(1). Complex communication networks(2) display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web(3-5), the Internet(6), social networks(7) and cells(8). We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.
    BibTeX:
    @article{Albert2000,
      author = {Albert, R and Jeong, H and Barabasi, AL},
      title = {Error and attack tolerance of complex networks},
      journal = {NATURE},
      year = {2000},
      volume = {406},
      number = {6794},
      pages = {378-382}
    }
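
    The error-versus-attack contrast reported here is straightforward to reproduce in outline. A minimal sketch (not the authors' code), assuming the networkx library: grow a scale-free graph, delete 5% of its nodes either uniformly at random (``error'') or by highest degree (``attack''), and compare the surviving giant component.

      import random
      import networkx as nx

      def giant_fraction(G):
          # largest connected component, relative to the remaining nodes
          return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

      def remove_nodes(G, fraction, targeted):
          G = G.copy()
          k = int(fraction * G.number_of_nodes())
          if targeted:   # attack: highest-degree nodes first
              victims = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:k]
              G.remove_nodes_from(n for n, _ in victims)
          else:          # error: uniformly random failures
              G.remove_nodes_from(random.sample(list(G.nodes), k))
          return giant_fraction(G)

      G = nx.barabasi_albert_graph(10_000, 2)
      print(remove_nodes(G, 0.05, targeted=False))  # stays near 1: robust
      print(remove_nodes(G, 0.05, targeted=True))   # drops sharply: fragile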
    
    Albert, R., Jeong, H. & Barabasi, A. Internet - Diameter of the World-Wide Web {1999} NATURE
    Vol. {401}({6749}), pp. {130-131} 
    article  
    BibTeX:
    @article{Albert1999,
      author = {Albert, R and Jeong, H and Barabasi, AL},
      title = {Internet - Diameter of the World-Wide Web},
      journal = {NATURE},
      year = {1999},
      volume = {401},
      number = {6749},
      pages = {130-131}
    }
    
    Allcock, B., Bester, J., Bresnahan, J., Chervenak, A., Foster, I., Kesselman, C., Meder, S., Nefedova, V., Quesnel, D. & Tuecke, S. Data management and transfer in high-performance computational grid environments {2002} PARALLEL COMPUTING
    Vol. {28}({5}), pp. {749-771} 
    article  
    Abstract: An emerging class of data-intensive applications involves the geographically dispersed extraction of complex scientific information from very large collections of measured or computed data. Such applications arise, for example, in experimental physics, where the data in question is generated by accelerators, and in simulation science, where the data is generated by supercomputers. So-called Data Grids provide essential infrastructure for such applications, much as the Internet provides essential services for applications such as e-mail and the Web. We describe here two services that we believe are fundamental to any Data Grid: reliable, high-speed transport and replica management. Our high-speed transport service, GridFTP, extends the popular FTP protocol with new features required for Data Grid applications, such as striping and partial file access. Our replica management service integrates a replica catalog with GridFTP transfers to provide for the creation, registration, location, and management of dataset replicas. We present the design of both services and also preliminary performance results. Our implementations exploit security and other services provided by the Globus Toolkit. (C) 2002 Published by Elsevier Science B.V.
    BibTeX:
    @article{Allcock2002,
      author = {Allcock, B and Bester, J and Bresnahan, J and Chervenak, AL and Foster, I and Kesselman, C and Meder, S and Nefedova, V and Quesnel, D and Tuecke, S},
      title = {Data management and transfer in high-performance computational grid environments},
      journal = {PARALLEL COMPUTING},
      year = {2002},
      volume = {28},
      number = {5},
      pages = {749-771}
    }
    
    Alvarez, L., Weickert, J. & Sanchez, J. Reliable estimation of dense optical flow fields with large displacements {2000} INTERNATIONAL JOURNAL OF COMPUTER VISION
    Vol. {39}({1}), pp. {41-56} 
    article  
    Abstract: In this paper we show that a classic optical flow technique by Nagel and Enkelmann (1986, IEEE Trans. Pattern Anal. Mach. Intell., Vol. 8, pp. 565-593) can be regarded as an early anisotropic diffusion method with a diffusion tensor. We introduce three improvements into the model formulation that (i) avoid inconsistencies caused by centering the brightness term and the smoothness term in different images, (ii) use a linear scale-space focusing strategy from coarse to fine scales for avoiding convergence to physically irrelevant local minima, and (iii) create an energy functional that is invariant under linear brightness changes. Applying a gradient descent method to the resulting energy functional leads to a system of diffusion-reaction equations. We prove that this system has a unique solution under realistic assumptions on the initial data, and we present an efficient linear implicit numerical scheme in detail. Our method creates flow fields with 100% density over the entire image domain, it is robust under a large range of parameter variations, and it can recover displacement fields that are far beyond the typical one-pixel limits which are characteristic for many differential methods for determining optical flow. We show that it performs better than the optical flow methods with 100% density that are evaluated by Barron et al. (1994, Int. J. Comput. Vision, Vol. 12, pp. 43-47). Our software is available from the Internet.
    BibTeX:
    @article{Alvarez2000,
      author = {Alvarez, L and Weickert, J and Sanchez, J},
      title = {Reliable estimation of dense optical flow fields with large displacements},
      journal = {INTERNATIONAL JOURNAL OF COMPUTER VISION},
      year = {2000},
      volume = {39},
      number = {1},
      pages = {41-56}
    }
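
    For orientation, the energy-functional machinery this abstract refers to can be written down in its simplest isotropic form. The following is the classical Horn-Schunck special case, not the Nagel-Enkelmann tensor model the paper builds on (there, an anisotropic diffusion tensor replaces the scalar \alpha in the diffusion term):

      E(u, v) = \int_\Omega (I_x u + I_y v + I_t)^2 + \alpha \left( |\nabla u|^2 + |\nabla v|^2 \right) \, dx

    Gradient descent on E yields a coupled diffusion-reaction system of the kind mentioned above,

      \partial_t u = \alpha \, \Delta u - (I_x u + I_y v + I_t) \, I_x
      \partial_t v = \alpha \, \Delta v - (I_x u + I_y v + I_t) \, I_y

    in which the diffusion terms smooth the flow field (u, v) and the reaction terms pull it toward the brightness-constancy constraint.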
    
    Amaral, L., Scala, A., Barthelemy, M. & Stanley, H. Classes of small-world networks {2000} PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
    Vol. {97}({21}), pp. {11149-11152} 
    article  
    Abstract: We study the statistical properties of a variety of diverse real-world networks. We present evidence of the occurrence of three classes of small-world networks: (a) scale-free networks, characterized by a vertex connectivity distribution that decays as a power law; (b) broad-scale networks, characterized by a connectivity distribution that has a power law regime followed by a sharp cutoff; and (c) single-scale networks, characterized by a connectivity distribution with a fast decaying tail. Moreover, we note for the classes of broad-scale and single-scale networks that there are constraints limiting the addition of new links. Our results suggest that the nature of such constraints may be the controlling factor for the emergence of different classes of networks.
    BibTeX:
    @article{Amaral2000,
      author = {Amaral, LAN and Scala, A and Barthelemy, M and Stanley, HE},
      title = {Classes of small-world networks},
      journal = {PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA},
      year = {2000},
      volume = {97},
      number = {21},
      pages = {11149-11152}
    }
    
    Anderson, K. Targeting recovery: Priorities of the spinal cord-injured population {2004} JOURNAL OF NEUROTRAUMA
    Vol. {21}({10}), pp. {1371-1383} 
    article  
    Abstract: In the United States alone, there are more than 200,000 individuals living with a chronic spinal cord injury (SCI). Healthcare for these individuals creates a significant economic burden for the country, not to mention the physiological, psychological, and social suffering these people endure every day. Regaining partial function can lead to greater independence, thereby improving quality of life. To ascertain what functions are most important to the SCI population, in regard to enhancing quality of life, a novel survey was performed in which subjects were asked to rank seven functions in order of importance to their quality of life. The survey was distributed via email, postal mail, the internet, interview, and word of mouth to the SCI community at large. A total of 681 responses were completed. Regaining arm and hand function was most important to quadriplegics, while regaining sexual function was the highest priority for paraplegics. Improving bladder and bowel function was of shared importance to both injury groups. A longitudinal analysis revealed only slight differences between individuals injured <3 years compared to those injured >3 years. The majority of participants indicated that exercise was important to functional recovery, yet more than half either did not have access to exercise or did not have access to a trained therapist to oversee that exercise. In order to improve the relevance of research in this area, the concerns of the SCI population must be better known and taken into account. This approach is consistent with and emphasized by the new NIH roadmap to discovery.
    BibTeX:
    @article{Anderson2004,
      author = {Anderson, KD},
      title = {Targeting recovery: Priorities of the spinal cord-injured population},
      journal = {JOURNAL OF NEUROTRAUMA},
      year = {2004},
      volume = {21},
      number = {10},
      pages = {1371-1383}
    }
    
    Andersson, C. & Bro, R. The N-way Toolbox for MATLAB {2000} CHEMOMETRICS AND INTELLIGENT LABORATORY SYSTEMS
    Vol. {52}({1}), pp. {1-4} 
    article  
    Abstract: This communication describes a free toolbox for MATLAB(R) for analysis of multiway data. The toolbox is called ``The N-way Toolbox for MATLAB'' and is available on the internet at http://www.models.kvl.dk/source/. This communication is by no means an attempt to summarize or review the extensive work done in multiway data analysis but is intended solely for informing the reader of the existence, functionality, and applicability of the N-way Toolbox for MATLAB. (C) 2000 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Andersson2000,
      author = {Andersson, CA and Bro, R},
      title = {The N-way Toolbox for MATLAB},
      journal = {CHEMOMETRICS AND INTELLIGENT LABORATORY SYSTEMS},
      year = {2000},
      volume = {52},
      number = {1},
      pages = {1-4}
    }
    
    Andrade, J., Herrmann, H., Andrade, R. & da Silva, L. Apollonian networks: Simultaneously scale-free, small world, Euclidean, space filling, and with matching graphs {2005} PHYSICAL REVIEW LETTERS
    Vol. {94}({1}) 
    article DOI  
    Abstract: We introduce a new family of networks, the Apollonian networks, that are simultaneously scale-free, small-world, Euclidean, space filling, and with matching graphs. These networks describe force chains in polydisperse granular packings and could also be applied to the geometry of fully fragmented porous media, hierarchical road systems, and area-covering electrical supply networks. Some of the properties of these networks, namely, the connectivity exponent, the clustering coefficient, and the shortest path are calculated and found to be particularly rich. The percolation, the electrical conduction, and the Ising models on such networks are also studied and found to be quite peculiar. Consequences for applications are also discussed.
    BibTeX:
    @article{Andrade2005,
      author = {Andrade, JS and Herrmann, HJ and Andrade, RFS and da Silva, LR},
      title = {Apollonian networks: Simultaneously scale-free, small world, Euclidean, space filling, and with matching graphs},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2005},
      volume = {94},
      number = {1},
      doi = {{10.1103/PhysRevLett.94.018702}}
    }
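
    The construction behind these networks is deterministic and compact to code. A sketch, assuming the standard recursive construction: start from a triangle, then at every generation place a new node inside each existing triangle and connect it to that triangle's three corners, splitting it into three smaller triangles.

      def apollonian(generations):
          # nodes 0, 1, 2 form the initial triangle
          edges = {(0, 1), (0, 2), (1, 2)}
          triangles = [(0, 1, 2)]
          next_node = 3
          for _ in range(generations):
              new_triangles = []
              for a, b, c in triangles:
                  d, next_node = next_node, next_node + 1
                  edges |= {(a, d), (b, d), (c, d)}
                  new_triangles += [(a, b, d), (a, c, d), (b, c, d)]
              triangles = new_triangles
          return edges

      print(len(apollonian(3)))  # 42 edges: 3 * (3**g + 1) // 2 for g = 3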
    
    Androutsellis-Theotokis, S. & Spinellis, D. A survey of peer-to-peer content distribution technologies {2004} ACM COMPUTING SURVEYS
    Vol. {36}({4}), pp. {335-371} 
    article  
    Abstract: Distributed computer architectures labeled ``peer-to-peer'' are designed for the sharing of computer resources (content, storage, CPU cycles) by direct exchange, rather than requiring the intermediation or support of a centralized server or authority. Peer-to-peer architectures are characterized by their ability to adapt to failures and accommodate transient populations of nodes while maintaining acceptable connectivity and performance. Content distribution is an important peer-to-peer application on the Internet that has received considerable research attention. Content distribution applications typically allow personal computers to function in a coordinated manner as a distributed storage medium by contributing, searching, and obtaining digital content. In this survey, we propose a framework for analyzing peer-to-peer content distribution technologies. Our approach focuses on nonfunctional characteristics such as security, scalability, performance, fairness, and resource management potential, and examines the way in which these characteristics are reflected in - and affected by - the architectural design decisions adopted by current peer-to-peer systems. We study current peer-to-peer systems and infrastructure technologies in terms of their distributed object location and routing mechanisms, their approach to content replication, caching and migration, their support for encryption, access control, authentication and identity, anonymity, deniability, accountability and reputation, and their use of resource trading and management schemes.
    BibTeX:
    @article{Androutsellis-Theotokis2004,
      author = {Androutsellis-Theotokis, S and Spinellis, D},
      title = {A survey of peer-to-peer content distribution technologies},
      journal = {ACM COMPUTING SURVEYS},
      year = {2004},
      volume = {36},
      number = {4},
      pages = {335-371}
    }
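
    As a concrete illustration of the ``distributed object location and routing mechanisms'' the survey analyzes, the following toy consistent-hashing ring shows the core idea structured peer-to-peer systems build on; the peer names and key are invented. Each key belongs to the first node clockwise from its hash, so nodes joining or leaving only move a small fraction of the keys.

      import hashlib
      from bisect import bisect_right

      def h(s: str) -> int:
          return int(hashlib.sha1(s.encode()).hexdigest(), 16)

      class Ring:
          def __init__(self, nodes):
              self.ring = sorted((h(n), n) for n in nodes)
          def lookup(self, key):
              points = [p for p, _ in self.ring]
              i = bisect_right(points, h(key)) % len(self.ring)
              return self.ring[i][1]   # first node clockwise from the key

      ring = Ring(["peerA", "peerB", "peerC"])
      print(ring.lookup("some-file.dat"))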
    
    Ansari, A., Essegaier, S. & Kohli, R. Internet recommendation systems {2000} JOURNAL OF MARKETING RESEARCH
    Vol. {37}({3}), pp. {363-375} 
    article  
    Abstract: Several online firms, including Yahoo!, Amazon.com, and Movie Critic, recommend documents and products to consumers. Typically, the recommendations are based on content and/or collaborative filtering methods. The authors examine the merits of these methods, suggest that preference models used in marketing offer good alternatives, and describe a Bayesian preference model that allows statistical integration of five types of information useful for making recommendations: a person's expressed preferences, preferences of other consumers, expert evaluations, item characteristics, and individual characteristics. The proposed method accounts for not only preference heterogeneity across users but also unobserved product heterogeneity by introducing the interaction of unobserved product attributes with customer characteristics. The authors describe estimation by means of Markov chain Monte Carlo methods and use the model with a large data set to recommend movies either when collaborative filtering methods are viable alternatives or when no recommendations can be made by these methods.
    BibTeX:
    @article{Ansari2000,
      author = {Ansari, A and Essegaier, S and Kohli, R},
      title = {Internet recommendation systems},
      journal = {JOURNAL OF MARKETING RESEARCH},
      year = {2000},
      volume = {37},
      number = {3},
      pages = {363-375}
    }
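
    The Bayesian model itself is too involved to sketch here, but the collaborative filtering baseline the authors compare against fits in a few lines. A minimal user-based sketch with made-up ratings: predict a user's rating of an item as a similarity-weighted average of the ratings given by other users.

      import numpy as np

      # rows: users, columns: items, 0 = unrated (all values invented)
      R = np.array([[5, 4, 0, 1],
                    [4, 5, 1, 0],
                    [1, 0, 5, 4]], dtype=float)

      def predict(R, user, item):
          rated = R[:, item] > 0          # users who rated this item
          sims = np.array([np.dot(R[user], R[u]) /
                           (np.linalg.norm(R[user]) * np.linalg.norm(R[u]) + 1e-9)
                           for u in range(len(R))])
          sims[user] = 0.0                # exclude the user themselves
          w = sims * rated
          return np.dot(w, R[:, item]) / (w.sum() + 1e-9)

      print(round(predict(R, 0, 2), 2))  # low: user 0 resembles user 1, who disliked item 2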
    
    Appel, R., Palagi, P., Walther, D., Vargas, J., Sanchez, J., Ravier, F., Pasquali, C. & Hochstrasser, D. Melanie II - a third-generation software package for analysis of two-dimensional electrophoresis images: I. Features and user interface {1997} ELECTROPHORESIS
    Vol. {18}({15}), pp. {2724-2734} 
    article  
    Abstract: Although two-dimensional electrophoresis (2-DE) computer analysis software packages have existed ever since 2-DE technology was developed, it is only now that the hardware and software technology allows large-scale studies to be performed on low-cost personal computers or workstations, and that setting up a 2-DE computer analysis system in a small laboratory is no longer considered a luxury. After a first attempt in the seventies and early eighties to develop 2-DE analysis software systems on hardware that had poor or even no graphical capabilities, followed in the late eighties by a wave of innovative software developments that were possible thanks to new graphical interface standards such as XWindows, a third generation of 2-DE analysis software packages has now come to maturity. It can be run on a variety of low-cost, general-purpose personal computers, thus making the purchase of a 2-DE analysis system easily attainable for even the smallest laboratory that is involved in proteome research. Melanie II 2-D PAGE, developed at the University Hospital of Geneva, is such a third-generation software system for 2-DE analysis. Based on unique image processing algorithms, this user-friendly object-oriented software package runs on multiple platforms, including Unix, MS-Windows 95 and NT, and Power Macintosh. It provides efficient spot detection and quantitation, state-of-the-art image comparison, statistical data analysis facilities, and is Internet-ready. Linked to proteome databases such as those available on the World Wide Web, it represents a valuable tool for the ``Virtual Lab'' of the post-genome era.
    BibTeX:
    @article{Appel1997,
      author = {Appel, RD and Palagi, PM and Walther, D and Vargas, JR and Sanchez, JC and Ravier, F and Pasquali, C and Hochstrasser, DF},
      title = {Melanie II - a third-generation software package for analysis of two-dimensional electrophoresis images: I. Features and user interface},
      journal = {ELECTROPHORESIS},
      year = {1997},
      volume = {18},
      number = {15},
      pages = {2724-2734}
    }
    
    Argenziano, G., Soyer, H., Chimenti, S., Talamini, R., Corona, R., Sera, F., Binder, M., Cerroni, L., De Rosa, G., Ferrara, G., Hofmann-Wellenhof, R., Landthater, M., Menzies, S., Pehamberger, H., Piccolo, D., Rabinovitz, H., Schiffner, R., Staibano, S., Stolz, W., Bartenjev, I., Blum, A., Braun, R., Cabo, H., Carli, P., De Giorgi, V., Fleming, M., Grichnik, J., Grin, C., Halpern, A., Johr, R., Katz, B., Kenet, R., Kittler, H., Kreusch, J., Malvehy, J., Mazzocchetti, G., Oliviero, M., Ozdemir, F., Peris, K., Perotti, R., Perusquia, A., Pizzichetta, M., Puig, S., Rao, B., Rubegni, P., Saida, T., Scalvenzi, M., Seidenari, S., Stanganelli, I., Tanaka, M., Westerhoff, K., Wolf, I., Braun-Falco, O., Kerl, H., Nishikawa, T. & Wolff, K. Dermoscopy of pigmented skin lesions: Results of a consensus meeting via the Internet {2003} JOURNAL OF THE AMERICAN ACADEMY OF DERMATOLOGY
    Vol. {48}({5}), pp. {679-693} 
    article DOI  
    Abstract: Background: There is a need for better standardization of the dermoscopic terminology in assessing pigmented skin lesions. Objective: The virtual Consensus Net Meeting on Dermoscopy was organized to investigate reproducibility and validity of the various features and diagnostic algorithms. Methods: Dermoscopic images of 108 lesions were evaluated via the Internet by 40 experienced dermoscopists using a 2-step diagnostic procedure. The first-step algorithm distinguished melanocytic versus nonmelanocytic lesions. The second step in the diagnostic procedure used 4 algorithms (pattern analysis, ABCD rule, Menzies method, and 7-point checklist) to distinguish melanoma versus benign melanocytic lesions. Kappa values, log odds ratios, sensitivity, specificity, and positive likelihood ratios were estimated for all diagnostic algorithms and dermoscopic features. Results: Interobserver agreement was fair to good for all diagnostic methods, but it was poor for the majority of dermoscopic criteria. Intraobserver agreement was good to excellent for all algorithms and features considered. Pattern analysis allowed the best diagnostic performance (positive likelihood ratio: 5.1), whereas alternative algorithms revealed comparable sensitivity but less specificity. Interobserver agreement on management decisions made by dermoscopy was fairly good (mean kappa value: 0.53). Conclusion: The virtual Consensus Net Meeting on Dermoscopy represents a valid tool for better standardization of the dermoscopic terminology and, moreover, opens up a new territory for diagnosing and managing pigmented skin lesions.
    BibTeX:
    @article{Argenziano2003,
      author = {Argenziano, G and Soyer, HP and Chimenti, S and Talamini, R and Corona, R and Sera, F and Binder, M and Cerroni, L and De Rosa, G and Ferrara, G and Hofmann-Wellenhof, R and Landthater, M and Menzies, SW and Pehamberger, H and Piccolo, D and Rabinovitz, HS and Schiffner, R and Staibano, S and Stolz, W and Bartenjev, I and Blum, A and Braun, R and Cabo, H and Carli, P and De Giorgi, V and Fleming, MG and Grichnik, JM and Grin, CM and Halpern, AC and Johr, R and Katz, B and Kenet, RO and Kittler, H and Kreusch, J and Malvehy, J and Mazzocchetti, G and Oliviero, M and Ozdemir, F and Peris, K and Perotti, R and Perusquia, A and Pizzichetta, MA and Puig, S and Rao, B and Rubegni, P and Saida, T and Scalvenzi, M and Seidenari, S and Stanganelli, I and Tanaka, M and Westerhoff, K and Wolf, IH and Braun-Falco, O and Kerl, H and Nishikawa, T and Wolff, K},
      title = {Dermoscopy of pigmented skin lesions: Results of a consensus meeting via the Internet},
      journal = {JOURNAL OF THE AMERICAN ACADEMY OF DERMATOLOGY},
      year = {2003},
      volume = {48},
      number = {5},
      pages = {679-693},
      doi = {{10.1067/mjd.2003.281}}
    }
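
    The evaluation statistics quoted above (sensitivity, specificity, positive likelihood ratio, Cohen's kappa) all compute directly from 2x2 count tables. A small sketch with invented counts, not the study's data:

      def diagnostics(tp, fn, fp, tn):
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          return sens, spec, sens / (1 - spec)   # positive likelihood ratio

      def cohen_kappa(a, b, c, d):
          # a = both raters say yes, b = only rater 1, c = only rater 2, d = both no
          n = a + b + c + d
          p_obs = (a + d) / n
          p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
          return (p_obs - p_exp) / (1 - p_exp)

      print(diagnostics(80, 20, 15, 85))        # e.g. sens 0.80, spec 0.85, LR+ 5.33
      print(round(cohen_kappa(40, 10, 10, 40), 2))   # 0.6: "good" agreement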
    
    Arlitt, M. & Williamson, C. Internet Web servers: Workload characterization and performance implications {1997} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {5}({5}), pp. {631-645} 
    article  
    Abstract: This paper presents a workload characterization study for Internet Web servers. Six different data sets are used in the study: three from academic environments, two from scientific research organizations, and one from a commercial Internet provider. These data sets represent three different orders of magnitude in server activity, and two different orders of magnitude in time duration, ranging from one week of activity to one year. The workload characterization focuses on the document type distribution, the document size distribution, the document referencing behavior, and the geographic distribution of server requests. Throughout the study, emphasis is placed on finding workload characteristics that are common to all the data sets studied. Ten such characteristics are identified. The paper concludes with a discussion of caching and performance issues, using the observed workload characteristics to suggest performance enhancements that seem promising for Internet Web servers.
    BibTeX:
    @article{Arlitt1997,
      author = {Arlitt, MF and Williamson, CL},
      title = {Internet Web servers: Workload characterization and performance implications},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {1997},
      volume = {5},
      number = {5},
      pages = {631-645}
    }
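
    A sketch of the kind of log crunching behind such a characterization, assuming a Common Log Format file named access.log (both the file name and the format are assumptions, not details from the paper): tally document types by extension and collect transfer sizes.

      from collections import Counter

      types, sizes = Counter(), []
      with open("access.log") as f:
          for line in f:
              parts = line.split()
              # CLF: host ident user [date tz] "method url proto" status bytes
              if len(parts) < 10 or parts[-1] == "-":
                  continue
              url, size = parts[6], int(parts[-1])
              ext = url.rsplit(".", 1)[-1].lower() if "." in url else "(none)"
              types[ext] += 1
              sizes.append(size)

      print(types.most_common(5))            # dominant document types
      print(sorted(sizes)[len(sizes) // 2])  # median transfer size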
    
    Bahl, A., Brunk, B., Crabtree, J., Fraunholz, M., Gajria, B., Grant, G., Ginsburg, H., Gupta, D., Kissinger, J., Labo, P., Li, L., Mailman, M., Milgram, A., Pearson, D., Roos, D., Schug, J., Stoeckert, C. & Whetzel, P. PlasmoDB: the Plasmodium genome resource. A database integrating experimental and computational data {2003} NUCLEIC ACIDS RESEARCH
    Vol. {31}({1}), pp. {212-215} 
    article DOI  
    Abstract: PlasmoDB (http://PlasmoDB.org) is the official database of the Plasmodium falciparum genome sequencing consortium. This resource incorporates the recently completed P. falciparum genome sequence and annotation, as well as draft sequence and annotation emerging from other Plasmodium sequencing projects. PlasmoDB currently houses information from five parasite species and provides tools for intra- and inter-species comparisons. Sequence information is integrated with other genomic-scale data emerging from the Plasmodium research community, including gene expression analysis from EST, SAGE and microarray projects and proteomics studies. The relational schema used to build PlasmoDB, GUS (Genomics Unified Schema) employs a highly structured format to accommodate the diverse data types generated by sequence and expression projects. A variety of tools allow researchers to formulate complex, biologically-based, queries of the database. A stand-alone version of the database is also available on CD-ROM (P. falciparum GenePlot), facilitating access to the data in situations where internet access is difficult (e.g. by malaria researchers working in the field). The goal of PlasmoDB is to facilitate utilization of the vast quantities of genomic-scale data produced by the global malaria research community. The software used to develop PlasmoDB has been used to create a second Apicomplexan parasite genome database, ToxoDB (http://ToxoDB.org).
    BibTeX:
    @article{Bahl2003,
      author = {Bahl, A and Brunk, B and Crabtree, J and Fraunholz, MJ and Gajria, B and Grant, GR and Ginsburg, H and Gupta, D and Kissinger, JC and Labo, P and Li, L and Mailman, MD and Milgram, AJ and Pearson, DS and Roos, DS and Schug, J and Stoeckert, CJ and Whetzel, P},
      title = {PlasmoDB: the Plasmodium genome resource. A database integrating experimental and computational data},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {2003},
      volume = {31},
      number = {1},
      pages = {212-215},
      doi = {{10.1093/nar/gkg081}}
    }
    
    Bajari, P. & Hortacsu, A. The winner's curse, reserve prices, and endogenous entry: empirical insights from eBay auctions {2003} RAND JOURNAL OF ECONOMICS
    Vol. {34}({2}), pp. {329-355} 
    article  
    Abstract: Internet auctions have recently gained widespread popularity and are one of the most successful forms of electronic commerce. We examine a unique dataset of eBay coin auctions to explore the determinants of bidder and seller behavior. We first document a number of empirical regularities. We then specify and estimate a structural econometric model of bidding on eBay. Using our parameter estimates from this model, we measure the extent of the winner's curse and simulate seller revenue under different reserve prices.
    BibTeX:
    @article{Bajari2003,
      author = {Bajari, P and Hortacsu, A},
      title = {The winner's curse, reserve prices, and endogenous entry: empirical insights from eBay auctions},
      journal = {RAND JOURNAL OF ECONOMICS},
      year = {2003},
      volume = {34},
      number = {2},
      pages = {329-355},
      note = {Conference on the Economics-of-the-Internet-and-Software-Industries, TOULOUSE, FRANCE, JAN, 2001}
    }
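
    The winner's curse the authors quantify can be illustrated with a toy Monte Carlo, not their structural econometric model: bidders receive noisy signals of a common value and naively bid their signals, so the winning bid is systematically biased upward.

      import random

      def winners_curse(n_bidders=5, trials=10_000, noise=1.0):
          overpay = 0.0
          for _ in range(trials):
              value = random.gauss(10.0, 1.0)            # common value of the item
              bids = [value + random.gauss(0.0, noise)   # naive bid = own signal
                      for _ in range(n_bidders)]
              overpay += max(bids) - value
          return overpay / trials

      print(winners_curse())  # positive: the highest signal overestimates the value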
    
    Baker, L., Wagner, T., Singer, S. & Bundorf, M. Use of the Internet and e-mail for health care information - Results from a national survey {2003} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {289}({18}), pp. {2400-2406} 
    article  
    Abstract: Context The Internet has attracted considerable attention as a means to improve health and health care delivery, but it is not clear how prevalent Internet use for health care really is or what impact it has on health care utilization. Available estimates of use and impact vary widely. Without accurate estimates of use and effects, it is difficult to focus policy discussions or design appropriate policy activities. Objectives To measure the extent of Internet use for health care among a representative sample of the US population, to examine the prevalence of e-mail use for health care, and to examine the effects that Internet and e-mail use has on users' knowledge about health care matters and their use of the health care system. Design, Setting, and Participants Survey conducted in December 2001 and January 2002 among a sample drawn from a research panel of more than 60,000 US households developed and maintained by Knowledge Networks. Responses were analyzed from 4764 individuals aged 21 years or older who were self-reported Internet users. Main Outcome Measures Self-reported rates in the past year of Internet and e-mail use to obtain information related to health, contact health care professionals, and obtain prescriptions; perceived effects of Internet and e-mail use on health care use. Results Approximately 40% of respondents with Internet access reported using the Internet to look for advice or information about health or health care in 2001. Six percent reported using e-mail to contact a physician or other health care professional. About one third of those using the Internet for health reported that using the Internet affected a decision about health or their health care, but very few reported impacts on measurable health care utilization; 94% said that Internet use had no effect on the number of physician visits they had and 93% said it had no effect on the number of telephone contacts. Five percent or less reported use of the Internet to obtain prescriptions or purchase pharmaceutical products. Conclusions Although many people use the Internet for health information, use is not as common as is sometimes reported. Effects on actual health care utilization are also less substantial than some have claimed. Discussions of the role of the Internet in health care and the development of policies that might influence this role should not presume that use of the Internet for health information is universal or that the Internet strongly influences health care utilization.
    BibTeX:
    @article{Baker2003,
      author = {Baker, L and Wagner, TH and Singer, S and Bundorf, MK},
      title = {Use of the Internet and e-mail for health care information - Results from a national survey},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {2003},
      volume = {289},
      number = {18},
      pages = {2400-2406}
    }
    
    Bakos, J. Reducing buyer search costs: Implications for electronic marketplaces {1997} MANAGEMENT SCIENCE
    Vol. {43}({12}), pp. {1676-1692} 
    article  
    Abstract: Information systems can serve as intermediaries between the buyers and the sellers in a market, creating an ``electronic marketplace'' that lowers the buyers' cost to acquire information about seller prices and product offerings. As a result, electronic marketplaces reduce the inefficiencies caused by buyer search costs, in the process reducing the ability of sellers to extract monopolistic profits while increasing the ability of markets to optimally allocate productive resources. This article models the role of buyer search costs in markets with differentiated product offerings. The impact of reducing these search costs is analyzed in the context of an electronic marketplace, and the allocational efficiencies such a reduction can bring to a differentiated market are formalized. The resulting implications for the incentives of buyers, sellers, and independent intermediaries to invest in electronic marketplaces are explored. Finally, the possibility to separate price information from product attribute information is introduced, and the implications of designing markets promoting competition along each of these dimensions are discussed.
    BibTeX:
    @article{Bakos1997,
      author = {Bakos, JY},
      title = {Reducing buyer search costs: Implications for electronic marketplaces},
      journal = {MANAGEMENT SCIENCE},
      year = {1997},
      volume = {43},
      number = {12},
      pages = {1676-1692}
    }
    
    Bakos, Y. The emerging role of electronic marketplaces on the Internet {1998} COMMUNICATIONS OF THE ACM
    Vol. {41}({8}), pp. {35-42} 
    article  
    BibTeX:
    @article{Bakos1998,
      author = {Bakos, Y},
      title = {The emerging role of electronic marketplaces on the Internet},
      journal = {COMMUNICATIONS OF THE ACM},
      year = {1998},
      volume = {41},
      number = {8},
      pages = {35-42}
    }
    
    Bakos, Y. & Brynjolfsson, E. Bundling information goods: Pricing, profits, and efficiency {1999} MANAGEMENT SCIENCE
    Vol. {45}({12}), pp. {1613-1630} 
    article  
    Abstract: We study the strategy of bundling a large number of information goods, such as those increasingly available on the Internet, and selling them for a fixed price. We analyze the optimal bundling strategies for a multiproduct monopolist, and we find that bundling very large numbers of unrelated information goods can be surprisingly profitable. The reason is that the law of large numbers makes it much easier to predict consumers' valuations for a bundle of goods than their valuations for the individual goods when sold separately. As a result, this ``predictive value of bundling'' makes it possible to achieve greater sales, greater economic efficiency, and greater profits per good from a bundle of information goods than can be attained when the same goods are sold separately. Our main results do not extend to most physical goods, as the marginal costs of production for goods not used by the buyer typically negate any benefits from the predictive value of large-scale bundling. While determining optimal bundling strategies for more than two goods is a notoriously difficult problem, we use statistical techniques to provide strong asymptotic results and bounds on profits for bundles of any arbitrary size. We show how our model can be used to analyze the bundling of complements and substitutes, bundling in the presence of budget constraints, and bundling of goods with various types of correlations and how each of these conditions can lead to limits on optimal bundle size. In particular we find that when different market segments of consumers differ systematically in their valuations for goods, simple bundling will no longer be optimal. However, by offering a menu of different bundles aimed at each market segment, bundling makes traditional price discrimination strategies more powerful by reducing the role of unpredictable idiosyncratic components of valuations. The predictions of our analysis appear to be consistent with empirical observations of the markets for Internet and online content, cable television programming, and copyrighted music.
    BibTeX:
    @article{Bakos1999,
      author = {Bakos, Y and Brynjolfsson, E},
      title = {Bundling information goods: Pricing, profits, and efficiency},
      journal = {MANAGEMENT SCIENCE},
      year = {1999},
      volume = {45},
      number = {12},
      pages = {1613-1630}
    }
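
    The law-of-large-numbers argument at the heart of this abstract is easy to see numerically. A sketch under assumed i.i.d. uniform valuations (the distributions are an illustration, not the authors' model): the per-good valuation of a 1,000-good bundle concentrates around its mean, so a bundle priced just below that mean sells to nearly every consumer, while a single good priced at its mean sells to about half.

      import numpy as np

      rng = np.random.default_rng(0)
      v = rng.uniform(0, 1, size=(2_000, 1_000))   # consumers x goods valuations

      single_buyers = (v[:, 0] >= 0.5).mean()      # one good, priced at its mean
      per_good = v.mean(axis=1)                    # per-good valuation of the bundle
      bundle_buyers = (per_good >= 0.48).mean()    # bundle priced just below the mean

      print(single_buyers)   # about 0.50
      print(bundle_buyers)   # about 0.99: valuations have concentrated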
    
    Banerjee, A., Drake, J., Lang, J., Turner, B., Kompella, K. & Rekhter, Y. Generalized multiprotocol label switching: An overview of routing and management enhancements {2001} IEEE COMMUNICATIONS MAGAZINE
    Vol. {39}({1}), pp. {144-150} 
    article  
    Abstract: Generalized multiprotocol label switching, also referred to as multiprotocol lambda switching, supports not only devices that perform packet switching, but also those that perform switching in the time, wavelength, and space domains. The development of GMPLS requires modifications to current signaling and routing protocols. It has also triggered the development of new protocols such as the Link Management Protocol. In this article, we present the traffic engineering enhancements to the Open Shortest Path First Internet routing protocol [1] and the IS-IS Intradomain Routing Protocol [2, 3], two popular routing protocols, to support GMPLS. We present the concepts of generalized interfaces, label-switched path hierarchy, and link bundling intended to improve GMPLS scalability. We also discuss the Link Management Protocol, which can be used to make the underlying links more manageable.
    BibTeX:
    @article{Banerjee2001,
      author = {Banerjee, A and Drake, J and Lang, JP and Turner, B and Kompella, K and Rekhter, Y},
      title = {Generalized multiprotocol label switching: An overview of routing and management enhancements},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {2001},
      volume = {39},
      number = {1},
      pages = {144-150}
    }
    
    Barabasi, A. & Albert, R. Emergence of scaling in random networks {1999} SCIENCE
    Vol. {286}({5439}), pp. {509-512} 
    article  
    Abstract: Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.
    BibTeX:
    @article{Barabasi1999,
      author = {Barabasi, AL and Albert, R},
      title = {Emergence of scaling in random networks},
      journal = {SCIENCE},
      year = {1999},
      volume = {286},
      number = {5439},
      pages = {509-512}
    }
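
    The two mechanisms named in this abstract, growth and preferential attachment, translate directly into code. A compact sketch (one common implementation, not the authors' original program): keeping a list that contains every endpoint of every edge makes sampling from it automatically proportional to degree.

      import random

      def ba_graph(n, m):
          # start with m nodes, each seeded once in the sampling pool
          pool = list(range(m))
          edges = []
          for new in range(m, n):
              targets = set()
              while len(targets) < m:                # m distinct targets
                  targets.add(random.choice(pool))   # degree-proportional choice
              for t in targets:
                  edges.append((new, t))
                  pool += [new, t]
              # the new node is now in the pool, so it can attract later links
          return edges

      edges = ba_graph(10_000, 2)   # degree distribution approaches P(k) ~ k^-3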
    
    Barabasi, A., Jeong, H., Neda, Z., Ravasz, E., Schubert, A. & Vicsek, T. Evolution of the social network of scientific collaborations {2002} PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS
    Vol. {311}({3-4}), pp. {590-614} 
    article  
    Abstract: The co-authorship network of scientists represents a prototype of complex evolving networks. In addition, it offers one of the most extensive databases to date on social networks. By mapping the electronic database containing all relevant journals in mathematics and neuroscience for an 8-year period (1991-98), we infer the dynamic and the structural mechanisms that govern the evolution and topology of this complex system. Three complementary approaches allow us to obtain a detailed characterization. First, empirical measurements allow us to uncover the topological measures that characterize the network at a given moment, as well as the time evolution of these quantities. The results indicate that the network is scale-free, and that the network evolution is governed by preferential attachment, affecting both internal and external links. However, in contrast with most model predictions the average degree increases in time, and the node separation decreases. Second, we propose a simple model that captures the network's time evolution. In some limits the model can be solved analytically, predicting a two-regime scaling in agreement with the measurements. Third, numerical simulations are used to uncover the behavior of quantities that could not be predicted analytically. The combined numerical and analytical results underline the important role internal links play in determining the observed scaling behavior and network topology. The results and methodologies developed in the context of the co-authorship network could be useful for a systematic study of other complex evolving networks as well, such as the world wide web, Internet, or other social networks. (C) 2002 Published by Elsevier Science B.V.
    BibTeX:
    @article{Barabasi2002,
      author = {Barabasi, AL and Jeong, H and Neda, Z and Ravasz, E and Schubert, A and Vicsek, T},
      title = {Evolution of the social network of scientific collaborations},
      journal = {PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS},
      year = {2002},
      volume = {311},
      number = {3-4},
      pages = {590-614}
    }
    
    Barabasi, A., Ravasz, E. & Vicsek, T. Deterministic scale-free networks {2001} PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS
    Vol. {299}({3-4}), pp. {559-564} 
    article  
    Abstract: Scale-free networks are abundant in nature and society, describing such diverse systems as the world wide web, the web of human sexual contacts, or the chemical network of a cell. All models used to generate a scale-free topology are stochastic, that is they create networks in which the nodes appear to be randomly connected to each other. Here we propose a simple model that generates scale-free networks in a deterministic fashion. We solve exactly the model, showing that the tail of the degree distribution follows a power law. (C) 2001 Published by Elsevier Science B.V.
    BibTeX:
    @article{Barabasi2001,
      author = {Barabasi, AL and Ravasz, E and Vicsek, T},
      title = {Deterministic scale-free networks},
      journal = {PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS},
      year = {2001},
      volume = {299},
      number = {3-4},
      pages = {559-564}
    }
    
    Bargh, J., McKenna, K. & Fitzsimons, G. Can you see the real me? Activation and expression of the ``true self'' on the Internet {2002} JOURNAL OF SOCIAL ISSUES
    Vol. {58}({1}), pp. {33-48} 
    article  
    Abstract: Those who feel better able to express their ``true selves'' in Internet rather than face-to-face interaction settings are more likely to form close relationships with people met on the Internet (McKenna, Green, & Gleason, this issue). Building on these correlational findings from survey data, we conducted three laboratory experiments to directly test the hypothesized causal role of differential self-expression in Internet relationship formation. Experiments 1 and 2, using a reaction time task, found that for university undergraduates, the true-self concept is more accessible in memory during Internet interactions, and the actual self more accessible during face-to-face interactions. Experiment 3 confirmed that people randomly assigned to interact over the Internet (vs. face to face) were better able to express their true-self qualities to their partners.
    BibTeX:
    @article{Bargh2002,
      author = {Bargh, JA and McKenna, KYA and Fitzsimons, GM},
      title = {Can you see the real me? Activation and expression of the ``true self'' on the Internet},
      journal = {JOURNAL OF SOCIAL ISSUES},
      year = {2002},
      volume = {58},
      number = {1},
      pages = {33-48}
    }
    
    Bellomo, R., Ronco, C., Kellum, J., Mehta, R., Palevsky, P. & ADQI Workgroup Acute renal failure - definition, outcome measures, animal models, fluid therapy and information technology needs: the Second International Consensus Conference of the Acute Dialysis Quality Initiative (ADQI) Group {2004} CRITICAL CARE
    Vol. {8}({4}), pp. {R204-R212} 
    article DOI  
    Abstract: Introduction There is no consensus definition of acute renal failure (ARF) in critically ill patients. More than 30 different definitions have been used in the literature, creating much confusion and making comparisons difficult. Similarly, strong debate exists on the validity and clinical relevance of animal models of ARF; on choices of fluid management and of end-points for trials of new interventions in this field; and on how information technology can be used to assist this process. Accordingly, we sought to review the available evidence, make recommendations and delineate key questions for future studies. Methods We undertook a systematic review of the literature using Medline and PubMed searches. We determined a list of key questions and convened a 2-day consensus conference to develop summary statements via a series of alternating breakout and plenary sessions. In these sessions, we identified supporting evidence and generated recommendations and/or directions for future research. Results We found sufficient consensus on 47 questions to allow the development of recommendations. Importantly, we were able to develop a consensus definition for ARF. In some cases it was also possible to issue useful consensus recommendations for future investigations. We present a summary of the findings. (Full versions of the six workgroups' findings are available on the internet at http://www.ADQI.net) Conclusion Despite limited data, broad areas of consensus exist for the physiological and clinical principles needed to guide the development of consensus recommendations for defining ARF, selection of animal models, methods of monitoring fluid therapy, choice of physiological and clinical end-points for trials, and the possible role of information technology.
    BibTeX:
    @article{Bellomo2004,
      author = {Bellomo, R and Ronco, C and Kellum, JA and Mehta, RL and Palevsky, P and ADQI Workgroup},
      title = {Acute renal failure - definition, outcome measures, animal models, fluid therapy and information technology needs: the Second International Consensus Conference of the Acute Dialysis Quality Initiative (ADQI) Group},
      journal = {CRITICAL CARE},
      year = {2004},
      volume = {8},
      number = {4},
      pages = {R204-R212},
      doi = {{10.1186/cc2872}}
    }
    
    Bender, P., Black, P., Grob, M., Padovani, R., Sindhushayana, N. & Viterbi, A. CDMA/HDR: A bandwidth-efficient high-speed wireless data service for nomadic users {2000} IEEE COMMUNICATIONS MAGAZINE
    Vol. {38}({7}), pp. {70-77} 
    article  
    Abstract: This article presents an approach to providing very high-data-rate downstream Internet access by nomadic users within the current CDMA physical layer architecture. Means for considerably increasing throughput by optimizing packet data protocols and by other network and coding techniques are presented and supported by simulations and laboratory measurements. The network architecture, based on Internet protocols adapted to the mobile environment, is described, followed by a brief discussion of economic considerations in comparison to cable and DSL services.
    BibTeX:
    @article{Bender2000,
      author = {Bender, P and Black, P and Grob, M and Padovani, R and Sindhushayana, N and Viterbi, A},
      title = {CDMA/HDR: A bandwidth-efficient high-speed wireless data service for nomadic users},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {2000},
      volume = {38},
      number = {7},
      pages = {70-77}
    }
    
    Benkler, Y. Coase's penguin, or, Linux and The Nature of the Firm {2002} YALE LAW JOURNAL
    Vol. {112}({3}), pp. {369+} 
    article  
    Abstract: For decades our common understanding of the organization of economic production has been that individuals order their productive activities in one of two ways: either as employees in firms, following the directions of managers, or as individuals in markets, following price signals. This dichotomy was first identified in the early work of Ronald Coase and was developed most explicitly in the work of institutional economist Oliver Williamson. Recently, public attention has focused on a fifteen-year-old phenomenon called free software or open source software. This phenomenon involves thousands, or even tens of thousands, of computer programmers who collaborate on large- and small-scale projects without traditional firm-based or market-based ownership of the resulting product. This Article explains why free software is only one example of a much broader social-economic phenomenon emerging in the digitally networked environment, a third mode of production that the author calls ``commons-based peer production.'' The Article begins by demonstrating the widespread use of commons-based peer production on the Internet through a number of detailed examples, such as Wikipedia, Slashdot, the Open Directory Project, and Google. The Article uses these examples to reveal fundamental characteristics of commons-based peer production that distinguish it from the property- and contract-based modes of firms and markets. The central distinguishing characteristic is that groups of individuals successfully collaborate on large-scale projects following a diverse cluster of motivational drives and social signals rather than market prices or managerial commands. The Article then explains why this mode has systematic advantages over markets and managerial hierarchies in the digitally networked environment when the object of production is information or culture. First, peer production has an advantage in what the author calls ``information opportunity cost,'' because it loses less information about who might be the best person for a given job. Second, there are substantial increasing allocation gains to be captured from allowing large clusters of potential contributors to interact with large clusters of information resources in search of new projects and opportunities for collaboration. The Article concludes with an overview of how these models use a variety of technological, social, and formal strategies to overcome the collective action problems usually solved in managerial and market-based systems by property, contract, and managerial commands.
    BibTeX:
    @article{Benkler2002,
      author = {Benkler, Y},
      title = {Coase's penguin, or, Linux and The Nature of the Firm},
      journal = {YALE LAW JOURNAL},
      year = {2002},
      volume = {112},
      number = {3},
      pages = {369+}
    }
    
    Bennett, J., Partridge, C. & Shectman, N. Packet reordering is not pathological network behavior {1999} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {7}({6}), pp. {789-798} 
    article  
    Abstract: It is a widely held belief that packet reordering in the Internet is a pathological behavior, or more precisely, that it is an uncommon behavior caused by incorrect or malfunctioning network components. Some studies of Internet traffic have reported seeing occasional packet reordering events and ascribed these events to ``route fluttering'', router ``pauses'' or simply to broken equipment. We have found, however, that parallelism in Internet components and links is causing packet reordering under normal operation and that the incidence of packet reordering appears to be substantially higher than previously reported. More importantly, we observe that in the presence of massive packet reordering Transmission Control Protocol (TCP) performance can be profoundly affected. Perhaps the most disturbing observation about TCP's behavior is that large scale and largely random reordering on the part of the network can lead to self-reinforcingly poor performance from TCP.
    BibTeX:
    @article{Bennett1999,
      author = {Bennett, JCR and Partridge, C and Shectman, N},
      title = {Packet reordering is not pathological network behavior},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {1999},
      volume = {7},
      number = {6},
      pages = {789-798}
    }
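
    The effect the authors measure is easy to reproduce on a packet trace: count arrivals whose sequence number is lower than one already seen. A minimal Python sketch, with an invented trace standing in for real measurements:

        # count arrivals whose sequence number is lower than the highest
        # sequence number already seen, i.e., packets that were reordered
        def count_reordered(seq_nums):
            reordered, highest = 0, -1
            for s in seq_nums:
                if s < highest:
                    reordered += 1      # a later-numbered packet beat this one
                else:
                    highest = s
            return reordered

        trace = [0, 1, 3, 2, 4, 6, 5, 7]    # parallel paths swap 2/3 and 5/6
        print(count_reordered(trace))       # -> 2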
    
    Bennett-Lovsey, R.M., Herbert, A.D., Sternberg, M.J.E. & Kelley, L.A. Exploring the extremes of sequence/structure space with ensemble fold recognition in the program Phyre {2008} PROTEINS-STRUCTURE FUNCTION AND BIOINFORMATICS
    Vol. {70}({3}), pp. {611-625} 
    article DOI  
    Abstract: Structural and functional annotation of the large and growing database of genomic sequences is a major problem in modern biology. Protein structure prediction by detecting remote homology to known structures is a well-established and successful annotation technique. However, the broad spectrum of evolutionary change that accompanies the divergence of close homologues to become remote homologues cannot easily be captured with a single algorithm. Recent advances to tackle this problem have involved the use of multiple predictive algorithms available on the Internet. Here we demonstrate how such ensembles of predictors can be designed in-house under controlled conditions and permit significant improvements in recognition by using a concept taken from protein loop energetics and applying it to the general problem of 3D clustering. We have developed a stringent test that simulates the situation where a protein sequence of interest is submitted to multiple different algorithms and not one of these algorithms can make a confident (95% precision) correct assignment. A method of meta-server prediction (Phyre) that exploits the benefits of a controlled environment for the component methods was implemented. At 95% precision or higher, Phyre identified 64.0% of all correct homologous query-template relationships, and 84.0% of the individual test query proteins could be accurately annotated. In comparison to the improvement that the single best fold recognition algorithm (according to training) has over PSI-Blast, this represents a 29.6% increase in the number of correct homologous query-template relationships, and a 46.2% increase in the number of accurately annotated queries. It has been well recognised in fold prediction, other bioinformatics applications, and in many other areas, that ensemble predictions generally are superior in accuracy to any of the component individual methods. However there is a paucity of information as to why the ensemble methods are superior and indeed this has never been systematically addressed in fold recognition. Here we show that the source of ensemble power stems from noise reduction in filtering out false positive matches. The results indicate greater coverage of sequence space and improved model quality, which can consequently lead to a reduction in the experimental workload of structural genomics initiatives.
    BibTeX:
    @article{Bennett-Lovsey2008,
      author = {Bennett-Lovsey, Riccardo M. and Herbert, Alex D. and Sternberg, Michael J. E. and Kelley, Lawrence A.},
      title = {Exploring the extremes of sequence/structure space with ensemble fold recognition in the program Phyre},
      journal = {PROTEINS-STRUCTURE FUNCTION AND BIOINFORMATICS},
      year = {2008},
      volume = {70},
      number = {3},
      pages = {611-625},
      doi = {{10.1002/prot.21688}}
    }
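
    The noise-reduction argument lends itself to a toy illustration: hits proposed independently by several predictors survive, while singleton hits are discarded. This is a minimal consensus-filter sketch, not the actual Phyre pipeline; the predictor outputs and template identifiers are invented:

        # keep query->template hits supported by several independent
        # predictors; unsupported hits (likely false positives) drop out
        from collections import defaultdict

        def ensemble_consensus(predictions, min_support=2):
            # predictions: one {template_id: score} dict per predictor
            support = defaultdict(list)
            for pred in predictions:
                for template, score in pred.items():
                    support[template].append(score)
            kept = [(t, sum(s) / len(s)) for t, s in support.items()
                    if len(s) >= min_support]
            return sorted(kept, key=lambda ts: -ts[1])

        hits = ensemble_consensus([
            {"1abc": 0.91, "2xyz": 0.40},   # hypothetical outputs of three
            {"1abc": 0.87, "3foo": 0.35},   # component predictors for one
            {"1abc": 0.78, "2xyz": 0.55},   # query sequence
        ])
        print(hits)   # "1abc" (3 votes) leads; "3foo" (1 vote) is filtered out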
    
    Benson, D., Boguski, M., Lipman, D. & Ostell, J. GenBank {1996} NUCLEIC ACIDS RESEARCH
    Vol. {24}({1}), pp. {1-5} 
    article  
    Abstract: The GenBank sequence database continues to expand its data coverage, quality control, annotation content and retrieval services. GenBank is comprised of DNA sequences submitted directly by authors as well as sequences from the other major public databases. An integrated retrieval system, known as Entrez, contains data from GenBank and from the major protein sequence and structural databases, as well as related MEDLINE abstracts. Users may access GenBank over the Internet through the World Wide Web and through special client-server programs for text and sequence similarity searching. FTP, CD-ROM and e-mail servers are alternate means of access.
    BibTeX:
    @article{Benson1996,
      author = {Benson, DA and Boguski, M and Lipman, DJ and Ostell, J},
      title = {GenBank},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {1996},
      volume = {24},
      number = {1},
      pages = {1-5}
    }
    
    Bentler, P. & Dudgeon, P. Covariance structure analysis: Statistical practice, theory, and directions {1996} ANNUAL REVIEW OF PSYCHOLOGY
    Vol. {47}, pp. {563-592} 
    article  
    Abstract: Although covariance structure analysis is used increasingly to analyze nonexperimental data, important statistical requirements for its proper use are frequently ignored. Valid conclusions about the adequacy of a model as an acceptable representation of data, which are based on goodness-of-fit test statistics and standard errors of parameter estimates, rely on the model estimation procedure being appropriate for the data. Using analogies to linear regression and ANOVA, this review examines conditions under which conclusions drawn from various estimation methods will be correct and the consequences of ignoring these conditions. A distinction is made between estimation methods that are either correctly or incorrectly specified for the distribution of data being analyzed, and it is shown that valid conclusions are possible even under misspecification. A brief example illustrates the ideas. Internet access is given to computer code for several methods that are not available in programs such as EQS or LISREL.
    BibTeX:
    @article{Bentler1996,
      author = {Bentler, PM and Dudgeon, P},
      title = {Covariance structure analysis: Statistical practice, theory, and directions},
      journal = {ANNUAL REVIEW OF PSYCHOLOGY},
      year = {1996},
      volume = {47},
      pages = {563-592}
    }
    
    Berland, G., Elliott, M., Morales, L., Algazy, J., Kravitz, R., Broder, M., Kanouse, D., Munoz, J., Puyol, J., Lara, M., Watkins, K., Yang, H. & McGlynn, E. Health information on the Internet - Accessibility, quality, and readability in English and Spanish {2001} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {285}({20}), pp. {2612-2621} 
    article  
    Abstract: Context: Despite the substantial amount of health-related information available on the Internet, little is known about the accessibility, quality, and reading grade level of that health information. Objective: To evaluate health information on breast cancer, depression, obesity, and childhood asthma available through English- and Spanish-language search engines and Web sites. Design and Setting: Three unique studies were performed from July 2000 through December 2000. Accessibility of 14 search engines was assessed using a structured search experiment. Quality of 25 health Web sites and content provided by 1 search engine was evaluated by 34 physicians using structured implicit review (interrater reliability >0.90). The reading grade level of text selected for structured implicit review was established using the Fry Readability Graph method. Main Outcome Measures: For the accessibility study, proportion of links leading to relevant content; for quality, coverage and accuracy of key clinical elements; and grade level reading formulas. Results: Less than one quarter of the search engines' first pages of links led to relevant content (20% of English and 12% of Spanish). On average, 45% of the clinical elements on English- and 22% on Spanish-language Web sites were more than minimally covered and completely accurate and 24% of the clinical elements on English- and 53% on Spanish-language Web sites were not covered at all. All English and 86% of Spanish Web sites required high school level or greater reading ability. Conclusion: Accessing health information using search engines and simple search terms is not efficient. Coverage of key information on English- and Spanish-language Web sites is poor and inconsistent, although the accuracy of the information provided is generally good. High reading levels are required to comprehend Web-based health information.
    BibTeX:
    @article{Berland2001,
      author = {Berland, GK and Elliott, MN and Morales, LS and Algazy, JI and Kravitz, RL and Broder, MS and Kanouse, DE and Munoz, JA and Puyol, JA and Lara, M and Watkins, KE and Yang, H and McGlynn, EA},
      title = {Health information on the Internet - Accessibility, quality, and readability in English and Spanish},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {2001},
      volume = {285},
      number = {20},
      pages = {2612-2621}
    }
    
    Berry, M., Drmac, Z. & Jessup, E. Matrices, vector spaces, and information retrieval {1999} SIAM REVIEW
    Vol. {41}({2}), pp. {335-362} 
    article  
    Abstract: The evolution of digital libraries and the Internet has dramatically transformed the processing, storage, and retrieval of information. Efforts to digitize text, images, video, and audio now consume a substantial portion of both academic and industrial activity. Even when there is no shortage of textual materials on a particular topic, procedures for indexing or extracting the knowledge or conceptual information contained in them can be lacking. Recently developed information retrieval technologies are based on the concept of a vector space. Data are modeled as a matrix, and a user's query of the database is represented as a vector. Relevant documents in the database are then identified via simple vector operations. Orthogonal factorizations of the matrix provide mechanisms for handling uncertainty in the database itself. The purpose of this paper is to show how such fundamental mathematical concepts from linear algebra can be used to manage and index large text collections.
    BibTeX:
    @article{Berry1999,
      author = {Berry, MW and Drmac, Z and Jessup, ER},
      title = {Matrices, vector spaces, and information retrieval},
      journal = {SIAM REVIEW},
      year = {1999},
      volume = {41},
      number = {2},
      pages = {335-362}
    }
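
    The core mechanism the survey describes fits in a few lines: documents are columns of a term-document matrix, the query is a vector in the same term space, and relevance is cosine similarity. A minimal sketch with an invented three-document corpus:

        # build a term-document matrix, then rank documents by the cosine
        # of the angle between each column and the query vector
        import numpy as np

        docs = ["internet routing protocols",
                "linear algebra for retrieval",
                "matrix factorizations and retrieval"]
        terms = sorted({w for d in docs for w in d.split()})
        A = np.array([[d.split().count(t) for d in docs] for t in terms],
                     dtype=float)                  # one row per term

        q = np.array([1.0 if t in ("matrix", "retrieval") else 0.0
                      for t in terms])             # query in the same space

        sims = (A.T @ q) / (np.linalg.norm(A, axis=0) * np.linalg.norm(q))
        for rank, d in enumerate(np.argsort(-sims)):
            print(rank + 1, docs[d], round(float(sims[d]), 3))

    Replacing A with a truncated SVD A_k = U_k S_k V_k^T gives the orthogonal-factorization variant (latent semantic indexing) that the paper proposes for handling uncertainty in the database itself.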
    
    Bianconi, G. & Barabasi, A. Competition and multiscaling in evolving networks {2001} EUROPHYSICS LETTERS
    Vol. {54}({4}), pp. {436-442} 
    article  
    Abstract: The rate at which nodes in a network increase their connectivity depends on their fitness to compete for links. For example, in social networks some individuals acquire more social links than others, or on the www some webpages attract considerably more links than others. We find that this competition for links translates into multiscaling, i.e. a fitness-dependent dynamic exponent, allowing fitter nodes to overcome the more connected but less fit ones. Uncovering this fitter-gets-richer phenomenon can help us understand in quantitative terms the evolution of many competitive systems in nature and society.
    BibTeX:
    @article{Bianconi2001,
      author = {Bianconi, G and Barabasi, AL},
      title = {Competition and multiscaling in evolving networks},
      journal = {EUROPHYSICS LETTERS},
      year = {2001},
      volume = {54},
      number = {4},
      pages = {436-442}
    }
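
    A direct simulation makes the fitter-gets-richer effect visible: each new node attaches to an existing node i with probability proportional to eta_i * k_i. A short sketch, with uniform fitnesses and one link per new node chosen purely for illustration:

        # growth with fitness-weighted preferential attachment
        import random

        def grow(n):
            eta = [random.random() for _ in range(n)]   # quenched fitnesses
            deg = [0] * n
            deg[0] = deg[1] = 1                         # seed edge 0-1
            for new in range(2, n):
                # attach to i with probability ~ eta_i * deg_i
                i = random.choices(range(new),
                                   weights=[eta[j] * deg[j]
                                            for j in range(new)])[0]
                deg[new] += 1
                deg[i] += 1
            return eta, deg

        eta, deg = grow(5000)
        top = max(range(5000), key=deg.__getitem__)
        # the top-degree node is typically a high-fitness node, not merely
        # one of the oldest
        print(top, round(eta[top], 2), deg[top])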
    
    Bianconi, G. & Barabasi, A. Bose-Einstein condensation in complex networks {2001} PHYSICAL REVIEW LETTERS
    Vol. {86}({24}), pp. {5632-5635} 
    article  
    Abstract: The evolution of many complex systems, including the World Wide Web, business, and citation networks, is encoded in the dynamic web describing the interactions between the system's constituents. Despite their irreversible and nonequilibrium nature these networks follow Bose statistics and can undergo Bose-Einstein condensation. Addressing the dynamical properties of these nonequilibrium systems within the framework of equilibrium quantum gases predicts that the ``first-mover-advantage,'' ``fit-get-rich,'' and ``winner-takes-all'' phenomena observed in competitive systems are thermodynamically distinct phases of the underlying evolving networks.
    BibTeX:
    @article{Bianconi2001a,
      author = {Bianconi, G and Barabasi, AL},
      title = {Bose-Einstein condensation in complex networks},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2001},
      volume = {86},
      number = {24},
      pages = {5632-5635}
    }
    
    Biermann, J., Golladay, G., Greenfield, M. & Baker, L. Evaluation of cancer information on the Internet {1999} CANCER
    Vol. {86}({3}), pp. {381-390} 
    article  
    BibTeX:
    @article{Biermann1999,
      author = {Biermann, JS and Golladay, GJ and Greenfield, MLVH and Baker, LH},
      title = {Evaluation of cancer information on the Internet},
      journal = {CANCER},
      year = {1999},
      volume = {86},
      number = {3},
      pages = {381-390}
    }
    
    Bilal, D. Children's use of the Yahooligans! Web search engine: I. Cognitive, physical, and affective behaviors on fact-based search tasks {2000} JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE
    Vol. {51}({7}), pp. {646-665} 
    article  
    Abstract: This study reports on the first part of a research project that investigated children's cognitive, affective, and physical behaviors as they use the Yahooligans! search engine to find information on a specific search task. Twenty-two seventh-grade science students from a middle school located in Knoxville, Tennessee participated in the project. Their cognitive and physical behaviors were captured using Lotus ScreenCam, a Windows-based software package that captures and replays activities recorded in Web browsers, such as Netscape. Their affective states were captured via a one-on-one exit interview. A new measure called ``Web Traversal Measure'' was developed to measure children's ``weighted'' traversal effectiveness and efficiency scores, as well as their quality moves in Yahooligans! Children's prior experience in using the Internet/Web and their knowledge of the Yahooligans! interface were gathered via a questionnaire. The findings provided insights into children's behaviors and success, as reflected in their weighted traversal effectiveness and efficiency scores, as well as their quality moves. Implications for user training and system design are discussed.
    BibTeX:
    @article{Bilal2000,
      author = {Bilal, D},
      title = {Children's use of the Yahooligans! Web search engine: I. Cognitive, physical, and affective behaviors on fact-based search tasks},
      journal = {JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE},
      year = {2000},
      volume = {51},
      number = {7},
      pages = {646-665}
    }
    
    Birman, K., Hayden, M., Ozkasap, O., Xiao, Z., Budiu, M. & Minsky, Y. Bimodal multicast {1999} ACM TRANSACTIONS ON COMPUTER SYSTEMS
    Vol. {17}({2}), pp. {41-88} 
    article  
    Abstract: There are many methods for making a multicast protocol ``reliable.'' At one end of the spectrum, a reliable multicast protocol might offer atomicity guarantees, such as all-or-nothing delivery, delivery ordering, and perhaps additional properties such as virtually synchronous addressing. At the other are protocols that use local repair to overcome transient packet loss in the network, offering ``best effort'' reliability. Yet none of this prior work has treated stability of multicast delivery as a basic reliability property, such as might be needed in an internet radio, television, or conferencing application. This article looks at reliability with a new goal: development of a multicast protocol which is reliable in a sense that can be rigorously quantified and includes throughput stability guarantees. We characterize this new protocol as a ``bimodal multicast'' in reference to its reliability model, which corresponds to a family of bimodal probability distributions. Here, we introduce the protocol, provide a theoretical analysis of its behavior, review experimental results, and discuss some candidate applications. These confirm that bimodal multicast is reliable, scalable, and that the protocol provides remarkably stable delivery throughput.
    BibTeX:
    @article{Birman1999,
      author = {Birman, KP and Hayden, M and Ozkasap, O and Xiao, Z and Budiu, M and Minsky, Y},
      title = {Bimodal multicast},
      journal = {ACM TRANSACTIONS ON COMPUTER SYSTEMS},
      year = {1999},
      volume = {17},
      number = {2},
      pages = {41-88}
    }
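
    The protocol's two-phase structure can be caricatured in a few lines: a lossy best-effort multicast followed by rounds of anti-entropy gossip in which each process pulls missed messages from one random peer. A toy sketch, not the paper's pbcast implementation; the loss rate and round count are invented:

        # phase 1: unreliable multicast; phase 2: gossip-based repair
        import random

        def pbcast(n_procs, n_msgs, loss=0.3, rounds=6):
            # each process initially misses each message with prob. `loss`
            have = [{m for m in range(n_msgs) if random.random() > loss}
                    for _ in range(n_procs)]
            for _ in range(rounds):
                for p in range(n_procs):
                    q = random.randrange(n_procs)
                    have[p] |= have[q]      # pull anything the peer has
            return have

        have = pbcast(100, 50)
        print(min(len(h) for h in have))    # almost surely 50: delivery
                                            # stabilizes after a few rounds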
    
    Birnbaum, M. Human research and data collection via the Internet {2004} ANNUAL REVIEW OF PSYCHOLOGY
    Vol. {55}, pp. {803-832} 
    article DOI  
    Abstract: Advantages and disadvantages of Web and lab research are reviewed. Via the World Wide Web, one can efficiently recruit large, heterogeneous samples quickly, recruit specialized samples (people with rare characteristics), and standardize procedures, making studies easy to replicate. Alternative programming techniques (procedures for data collection) are compared, including client-side as opposed to server-side programming. Web studies have methodological problems; for example, higher rates of drop out and of repeated participation. Web studies must be thoroughly analyzed and tested before launching on-line. Many studies compared data obtained in Web versus lab. These two methods usually reach the same conclusions; however, there are significant differences between college students tested in the lab and people recruited and tested via the Internet. Reasons that Web researchers are enthusiastic about the potential of the new methods are discussed.
    BibTeX:
    @article{Birnbaum2004,
      author = {Birnbaum, MH},
      title = {Human research and data collection via the Internet},
      journal = {ANNUAL REVIEW OF PSYCHOLOGY},
      year = {2004},
      volume = {55},
      pages = {803-832},
      doi = {{10.1146/annurev.psych.55.090902.141601}}
    }
    
    Blankertz, B., Muller, K., Curio, G., Vaughan, T., Schalk, G., Wolpaw, J., Schlogl, A., Neuper, C., Pfurtscheller, G., Hinterberger, T., Schroder, M. & Birbaumer, N. The BCI competition 2003: Progress and perspectives in detection and discrimination of EEG single trials {2004} IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING
    Vol. {51}({6}), pp. {1044-1051} 
    article DOI  
    Abstract: Interest in developing a new method of man-to-machine communication-a brain-computer interface (BCI)-has grown steadily over the past few decades. BCIs create a new communication channel between the brain and an output device by bypassing conventional motor output pathways of nerves and muscles. These systems use signals recorded from the scalp, the surface of the cortex, or from inside the brain to enable users to control a variety of applications including simple word-processing software and orthotics. BCI technology could therefore provide a new communication and control option for individuals who cannot otherwise express their wishes to the outside world. Signal processing and classification methods are essential tools in the development of improved BCI technology. We organized the BCI Competition 2003 to evaluate the current state of the art of these tools. Four laboratories well versed in EEG-based BCI research provided six data sets in a documented format. We made these data sets (i.e., labeled training sets and unlabeled test sets) and their descriptions available on the Internet. The goal in the competition was to maximize the performance measure for the test labels. Researchers worldwide tested their algorithms and competed for the best classification results. This paper describes the six data sets and the results and function of the most successful algorithms.
    BibTeX:
    @article{Blankertz2004,
      author = {Blankertz, B and Muller, KR and Curio, G and Vaughan, TM and Schalk, G and Wolpaw, JR and Schlogl, A and Neuper, C and Pfurtscheller, G and Hinterberger, T and Schroder, M and Birbaumer, N},
      title = {The BCI competition 2003: Progress and perspectives in detection and discrimination of EEG single trials},
      journal = {IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING},
      year = {2004},
      volume = {51},
      number = {6},
      pages = {1044-1051},
      doi = {{10.1109/TBME.2004.826692}}
    }
    
    Blanton, H. & Jaccard, J. Arbitrary metrics in psychology {2006} AMERICAN PSYCHOLOGIST
    Vol. {61}({1}), pp. {27-41} 
    article DOI  
    Abstract: Many psychological tests have arbitrary metrics but are appropriate for testing psychological theories. Metric arbitrariness is a concern, however, when researchers wish to draw inferences about the true, absolute standing of a group or individual on the latent psychological dimension being measured. The authors illustrate this in the context of 2 case studies in which psychologists need to develop inventories with nonarbitrary metrics. One example comes from social psychology, where researchers have begun using the Implicit Association Test to provide the lay public with feedback about their ``hidden biases'' via popular Internet Web pages. The other example comes from clinical psychology, where researchers often wish to evaluate the real-world importance of interventions. As the authors show, both pursuits require researchers to conduct formal research that makes their metrics nonarbitrary by linking test scores to meaningful real-world events.
    BibTeX:
    @article{Blanton2006,
      author = {Blanton, H and Jaccard, J},
      title = {Arbitrary metrics in psychology},
      journal = {AMERICAN PSYCHOLOGIST},
      year = {2006},
      volume = {61},
      number = {1},
      pages = {27-41},
      doi = {{10.1037/0003-066X.61.1.27}}
    }
    
    Blinov, B., Moehring, D., Duan, L. & Monroe, C. Observation of entanglement between a single trapped atom and a single photon {2004} NATURE
    Vol. {428}({6979}), pp. {153-157} 
    article DOI  
    Abstract: An outstanding goal in quantum information science is the faithful mapping of quantum information between a stable quantum memory and a reliable quantum communication channel(1). This would allow, for example, quantum communication over remote distances(2), quantum teleportation(3) of matter and distributed quantum computing over a `quantum internet'. Because quantum states cannot in general be copied, quantum information can only be distributed in these and other applications by entangling the quantum memory with the communication channel. Here we report quantum entanglement between an ideal quantum memory-represented by a single trapped Cd-111(+) ion-and an ideal quantum communication channel, provided by a single photon that is emitted spontaneously from the ion. Appropriate coincidence measurements between the quantum states of the photon polarization and the trapped ion memory are used to verify their entanglement directly. Our direct observation of entanglement between stationary and `flying' qubits(4) is accomplished without using cavity quantum electrodynamic techniques(5-7) or prepared non-classical light sources(3). We envision that this source of entanglement may be used for a variety of quantum communication protocols(2,8) and for seeding large-scale entangled states of trapped ion qubits for scalable quantum computing(9).
    BibTeX:
    @article{Blinov2004,
      author = {Blinov, BB and Moehring, DL and Duan, LM and Monroe, C},
      title = {Observation of entanglement between a single trapped atom and a single photon},
      journal = {NATURE},
      year = {2004},
      volume = {428},
      number = {6979},
      pages = {153-157},
      doi = {{10.1038/nature02377}}
    }
    
    Blom, N., Gammeltoft, S. & Brunak, S. Sequence and structure-based prediction of eukaryotic protein phosphorylation sites {1999} JOURNAL OF MOLECULAR BIOLOGY
    Vol. {294}({5}), pp. {1351-1362} 
    article  
    Abstract: Protein phosphorylation at serine, threonine or tyrosine residues affects a multitude of cellular signaling processes. How is specificity in substrate recognition and phosphorylation by protein kinases achieved? Here, we present an artificial neural network method that predicts phosphorylation sites in independent sequences with a sensitivity in the range from 69% to 96%. As an example, we predict novel phosphorylation sites in the p300/CBP protein that may regulate interaction with transcription factors and histone acetyltransferase activity. In addition, serine and threonine residues in p300/CBP that can be modified by O-linked glycosylation with N-acetylglucosamine are identified. Glycosylation may prevent phosphorylation at these sites, a mechanism named yin-yang regulation. The prediction server is available on the Internet at http://www.cbs.dtu.dk/services/NetPhos/ or via e-mail to NetPhos@cbs.dtu.dk. (C) 1999 Academic Press.
    BibTeX:
    @article{Blom1999,
      author = {Blom, N and Gammeltoft, S and Brunak, S},
      title = {Sequence and structure-based prediction of eukaryotic protein phosphorylation sites},
      journal = {JOURNAL OF MOLECULAR BIOLOGY},
      year = {1999},
      volume = {294},
      number = {5},
      pages = {1351-1362}
    }
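
    The input side of such a predictor is straightforward to sketch: every Ser/Thr/Tyr is represented by a one-hot-encoded window of flanking residues, which a trained network would then score. The sketch below shows only this encoding step (the trained NetPhos network itself is not reproduced), and the example sequence is invented:

        # encode candidate phosphorylation sites as fixed-width windows
        AMINO = "ACDEFGHIKLMNPQRSTVWY"

        def encode_window(seq, pos, half=4):
            # one-hot encode the 9-residue window centered at pos,
            # padding the sequence ends with the dummy symbol "X"
            left = seq[max(0, pos - half):pos]
            right = seq[pos + 1:pos + half + 1]
            window = ("X" * (half - len(left)) + left + seq[pos] + right
                      + "X" * (half - len(right)))
            return [1.0 if aa == a else 0.0 for aa in window for a in AMINO]

        def candidate_sites(seq):
            # every Ser/Thr/Tyr is a candidate; a trained classifier would
            # score each encoded window
            return [(i, encode_window(seq, i)) for i, aa in enumerate(seq)
                    if aa in "STY"]

        for i, vec in candidate_sites("MKRSAETSPYLLDK"):
            print(i, len(vec))   # site position and 9x20 input vector length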
    
    Blumenthal, D., Olsson, B., Rossi, G., Dimmick, T., Ran, L., Masanovi, M., Lavrova, O., Doshi, R., Jerphagnon, O., Bowers, J., Kaman, V., Coldren, L. & Barton, J. All-optical label swapping networks and technologies {2000} JOURNAL OF LIGHTWAVE TECHNOLOGY
    Vol. {18}({12}), pp. {2058-2075} 
    article  
    Abstract: All-optical label swapping is a promising approach to ultra-high packet-rate routing and forwarding directly in the optical layer. In this paper, we review results of the DARPA Next Generation Internet program in all-optical label swapping at the University of California at Santa Barbara (UCSB). We describe the overall network approach to encapsulate packets with optical labels and process forwarding and routing functions independent of packet bit rate and format. Various approaches to label coding using serial and subcarrier multiplexing addressing and the associated techniques for label erasure and rewriting, packet regeneration and packet-rate wavelength conversion are reviewed. These functions have been implemented using both fiber and semiconductor-based technologies and the ongoing effort at UCSB to integrate these functions is reported. We describe experimental results for various components and label swapping functions and the demonstration of 40 Gb/s optical label swapping. The advantages and disadvantages of using the various coding techniques and implementation technologies are discussed.
    BibTeX:
    @article{Blumenthal2000,
      author = {Blumenthal, DJ and Olsson, BE and Rossi, G and Dimmick, TE and Ran, L and Masanovi, M and Lavrova, O and Doshi, R and Jerphagnon, O and Bowers, JE and Kaman, V and Coldren, LA and Barton, J},
      title = {All-optical label swapping networks and technologies},
      journal = {JOURNAL OF LIGHTWAVE TECHNOLOGY},
      year = {2000},
      volume = {18},
      number = {12},
      pages = {2058-2075}
    }
    
    Boccaletti, S., Latora, V., Moreno, Y., Chavez, M. & Hwang, D.U. Complex networks: Structure and dynamics {2006} PHYSICS REPORTS-REVIEW SECTION OF PHYSICS LETTERS
    Vol. {424}({4-5}), pp. {175-308} 
    article DOI  
    Abstract: Coupled biological and chemical systems, neural networks, social interacting species, the Internet and the World Wide Web, are only a few examples of systems composed of a large number of highly interconnected dynamical units. The first approach to capture the global properties of such systems is to model them as graphs whose nodes represent the dynamical units, and whose links stand for the interactions between them. On the one hand, scientists have to cope with structural issues, such as characterizing the topology of a complex wiring architecture, revealing the unifying principles that are at the basis of real networks, and developing models to mimic the growth of a network and reproduce its structural properties. On the other hand, many relevant questions arise when studying complex networks' dynamics, such as learning how a large ensemble of dynamical systems that interact through a complex wiring topology can behave collectively. We review the major concepts and results recently achieved in the study of the structure and dynamics of complex networks, and summarize the relevant applications of these ideas in many different disciplines, ranging from nonlinear science to biology, from statistical mechanics to medicine and engineering. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Boccaletti2006,
      author = {Boccaletti, S. and Latora, V. and Moreno, Y. and Chavez, M. and Hwang, D. -U.},
      title = {Complex networks: Structure and dynamics},
      journal = {PHYSICS REPORTS-REVIEW SECTION OF PHYSICS LETTERS},
      year = {2006},
      volume = {424},
      number = {4-5},
      pages = {175-308},
      doi = {{10.1016/j.physrep.2005.10.009}}
    }
    
    Boguna, M. & Pastor-Satorras, R. Class of correlated random networks with hidden variables {2003} PHYSICAL REVIEW E
    Vol. {68}({3, Part 2}) 
    article DOI  
    Abstract: We study a class of models of correlated random networks in which vertices are characterized by hidden variables controlling the establishment of edges between pairs of vertices. We find analytical expressions for the main topological properties of these models as a function of the distribution of hidden variables and the probability of connecting vertices. The expressions obtained are checked by means of numerical simulations in a particular example. The general model is extended to describe a practical algorithm to generate random networks with an a priori specified correlation structure. We also present an extension of the class to map nonequilibrium growing networks to networks with hidden variables that represent the time at which each vertex was introduced in the system.
    BibTeX:
    @article{Boguna2003,
      author = {Boguna, M and Pastor-Satorras, R},
      title = {Class of correlated random networks with hidden variables},
      journal = {PHYSICAL REVIEW E},
      year = {2003},
      volume = {68},
      number = {3, Part 2},
      doi = {{10.1103/PhysRevE.68.036112}}
    }
    
    Borowitz, S. & Wyatt, J. The origin, content, and workload of e-mail consultations {1998} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {280}({15}), pp. {1321-1324} 
    article  
    Abstract: Context.-Despite the common use of e-mail, little beyond anecdote or impressions has been published on patient-clinician e-mail consultation. Objective.-To report our experiences with free-of-charge e-mail consultations. Design.-Retrospective review of all e-mail consultation requests received between November 1, 1995, and June 31, 1998. Setting and Participants.-Consecutive e-mail consultation requests sent to the Division of Pediatric Gastroenterology at the Children's Medical Center of the University of Virginia in Charlottesville. Main Outcome Measures.-Number of consultation requests per month, time required to respond, who initiated the request and their geographic origin, and the kind of information requested in the consultation. Results.-During the 33-month period studied, we received 1239 requests, an average (SD) of 37.6 (15.9) each month. A total of 1001 consultation requests (81%) were initiated by parents, relatives, or guardians, 126 (10%) by physicians, and 112 (9%) by other health care professionals. Consultation requests were received from 39 states and 37 other countries. In 855 requests (69%), there was a specific question about the cause of a particular child's symptoms, diagnostic tests, and/or therapeutic interventions. In 112 (9%), the requester sought a second opinion about diagnosis or treatment for a particular child, and 272 consultations (22%) requested general information concerning a disorder, treatment, or medication without reference to a particular child. A total of 1078 requests (87%) were answered within 48 hours of the initial request. On average, reading and responding to each e-mail took slightly less than 4 minutes. Conclusion.-E-mail provides a means for parents, guardians, and health care professionals to obtain patient and disease-specific information from selected medical consultants in a timely manner.
    BibTeX:
    @article{Borowitz1998,
      author = {Borowitz, SM and Wyatt, JC},
      title = {The origin, content, and workload of e-mail consultations},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {1998},
      volume = {280},
      number = {15},
      pages = {1321-1324}
    }
    
    Boyd, S., Ghosh, A., Prabhakar, B. & Shah, D. Randomized gossip algorithms {2006} IEEE TRANSACTIONS ON INFORMATION THEORY
    Vol. {52}({6}), pp. {2508-2530} 
    article DOI  
    Abstract: Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of ``gossip'' algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance and scaling of gossip algorithms on two popular networks: Wireless Sensor Networks, which are modeled as Geometric Random Graphs, and the Internet graph under the so-called Preferential Connectivity (PC) model.
    BibTeX:
    @article{Boyd2006,
      author = {Boyd, S and Ghosh, A and Prabhakar, B and Shah, D},
      title = {Randomized gossip algorithms},
      journal = {IEEE TRANSACTIONS ON INFORMATION THEORY},
      year = {2006},
      volume = {52},
      number = {6},
      pages = {2508-2530},
      doi = {{10.1109/TIT.2006.874516}}
    }
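
    The basic averaging primitive the paper analyzes is simple to state in code: at each tick a random node averages its value with a random neighbor, and all values converge to the global mean. A minimal sketch on a ring (the topology and tick count are illustrative; the paper's contribution is relating the convergence rate to the second largest eigenvalue of the expected averaging matrix):

        # randomized pairwise gossip averaging
        import random

        def gossip_average(values, neighbors, ticks=20000):
            x = list(values)
            for _ in range(ticks):
                i = random.randrange(len(x))
                j = random.choice(neighbors[i])
                # a pairwise average preserves the sum, hence the mean
                x[i] = x[j] = (x[i] + x[j]) / 2.0
            return x

        n = 50
        ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
        x = gossip_average([float(i) for i in range(n)], ring)
        print(max(x) - min(x))   # spread contracts toward 0 over time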
    
    BRAKMO, L. & PETERSON, L. TCP VEGAS - END-TO-END CONGESTION AVOIDANCE ON A GLOBAL INTERNET {1995} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {13}({8}), pp. {1465-1480} 
    article  
    Abstract: Vegas is an implementation of TCP that achieves between 37 and 71% better throughput on the Internet, with one-fifth to one-half the losses, as compared to the implementation of TCP in the Reno distribution of BSD Unix. This paper motivates and describes the three key techniques employed by Vegas, and presents the results of a comprehensive experimental performance study-using both simulations and measurements on the Internet-of the Vegas and Reno implementations of TCP.
    BibTeX:
    @article{BRAKMO1995,
      author = {BRAKMO, LS and PETERSON, LL},
      title = {TCP VEGAS - END-TO-END CONGESTION AVOIDANCE ON A GLOBAL INTERNET},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {1995},
      volume = {13},
      number = {8},
      pages = {1465-1480}
    }
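
    The congestion-avoidance rule at the heart of Vegas compares the throughput expected with empty queues against the throughput actually achieved, and keeps the difference between two thresholds. A hedged sketch of that update (the threshold values and one-packet adjustment are illustrative, not the exact BSD implementation):

        # Vegas-style window update from expected vs. actual throughput
        def vegas_update(cwnd, base_rtt, rtt, alpha=1.0, beta=3.0):
            expected = cwnd / base_rtt              # rate with empty queues
            actual = cwnd / rtt                     # rate actually achieved
            diff = (expected - actual) * base_rtt   # est. packets queued
            if diff < alpha:
                return cwnd + 1     # little queuing: grow the window
            if diff > beta:
                return cwnd - 1     # queues building: shrink the window
            return cwnd             # inside the target band: hold steady

        # 20 packets in flight, 100 ms base RTT, 125 ms measured RTT:
        # about 4 packets are queued, so the window is reduced
        print(vegas_update(cwnd=20, base_rtt=0.100, rtt=0.125))   # -> 19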
    
    Brodie, M., Flournoy, R., Altman, D., Blendon, R., Benson, J. & Rosenbaum, M. Health information, the Internet, and the digital divide {2000} HEALTH AFFAIRS
    Vol. {19}({6}), pp. {255-265} 
    article  
    Abstract: Through an analysis of recent data on adults' and children's computer use and experiences, this DataWatch shows that use of computers and the Internet is widespread and that significant percentages of the public are already using the Internet to get health information. The surveys also show that the Internet is already a useful vehicle for reaching large numbers of lower-income, less-educated, and minority Americans. However, a substantial digital divide continues to characterize computer and Internet use, with lower-income blacks especially affected. Implications for the future of health communication on the Internet also are explored.
    BibTeX:
    @article{Brodie2000,
      author = {Brodie, M and Flournoy, RE and Altman, DE and Blendon, RJ and Benson, JM and Rosenbaum, MD},
      title = {Health information, the Internet, and the digital divide},
      journal = {HEALTH AFFAIRS},
      year = {2000},
      volume = {19},
      number = {6},
      pages = {255-265}
    }
    
    Bruno, R., Conti, M. & Gregori, E. Mesh networks: Commodity multihop ad hoc networks {2005} IEEE COMMUNICATIONS MAGAZINE
    Vol. {43}({3}), pp. {123-131} 
    article  
    Abstract: In spite of the massive efforts in researching and developing mobile ad hoc networks in the last decade, this type of network has not yet witnessed mass market deployment. The low commercial penetration of products based on ad hoc networking technology could be explained by noting that the ongoing research is mainly focused on implementing military or specialized civilian applications. On the other hand, users are interested in general-purpose applications where high bandwidth and open access to the Internet are consolidated and cheap commodities. To turn mobile ad hoc networks into a commodity, we should move to more pragmatic ``opportunistic ad hoc networking'' in which multihop ad hoc networks are not isolated self-configured networks, but rather emerge as a flexible and low-cost extension of wired infrastructure networks coexisting with them. Indeed, a new class of networks is emerging from this view: mesh networks. This article provides an overview of mesh networking technology. In particular, starting from commercial case studies we describe the core building blocks and distinct features on which wireless mesh networks should be based. We provide a survey of the current state of the art in off-the-shelf and proprietary solutions to build wireless mesh networks. Finally, we address the challenges of designing a high-performance, scalable, and cost-effective wireless mesh network.
    BibTeX:
    @article{Bruno2005,
      author = {Bruno, R and Conti, M and Gregori, E},
      title = {Mesh networks: Commodity multihop ad hoc networks},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {2005},
      volume = {43},
      number = {3},
      pages = {123-131}
    }
    
    Brusic, V., Rudy, G. & Harrison, L. MHCPEP, a database of MHC-binding peptides: update 1997 {1998} NUCLEIC ACIDS RESEARCH
    Vol. {26}({1}), pp. {368-371} 
    article  
    Abstract: MHCPEP (http://wehih.wehi.edu.au/mhcpep/) is a curated database comprising over 13 000 peptide sequences known to bind MHC molecules. Entries are compiled from published reports as well as from direct submissions of experimental data. Each entry contains the peptide sequence, its MHC specificity and, where available, experimental method, observed activity, binding affinity, source protein and anchor positions, as well as publication references. The present format of the database allows text string matching searches but can easily be converted for use in conjunction with sequence analysis packages. The database can be accessed via the Internet using WWW or FTP.
    BibTeX:
    @article{Brusic1998,
      author = {Brusic, V and Rudy, G and Harrison, LC},
      title = {MHCPEP, a database of MHC-binding peptides: update 1997},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {1998},
      volume = {26},
      number = {1},
      pages = {368-371}
    }
    
    Bruzzone, R., White, T. & Goodenough, D. The cellular Internet: On-line with connexins {1996} BIOESSAYS
    Vol. {18}({9}), pp. {709-718} 
    article  
    Abstract: Most cells communicate with their immediate neighbors through the exchange of cytosolic molecules such as ions, second messengers and small metabolites. This activity is made possible by clusters of intercellular channels called gap junctions, which connect adjacent cells. In terms of molecular architecture, intercellular channels consist of two half-channels, called connexons, which interact to span the plasma membranes of two adjacent cells and directly join the cytoplasm of one cell to another. Connexons are made of structural proteins named connexins, which compose a multigene family. Connexin channels participate in the regulation of signaling between developing and differentiated cell types, and recently there have been some unexpected findings. First, unique ionic- and size-selectivities are determined by each connexin; second, the establishment of intercellular communication is defined by the expression of compatible connexins; third, the discovery of connexin mutations associated with human diseases and the study of knockout mice have illustrated the vital role of cell-cell communication in a diverse array of tissue functions.
    BibTeX:
    @article{Bruzzone1996,
      author = {Bruzzone, R and White, TW and Goodenough, DA},
      title = {The cellular Internet: On-line with connexins},
      journal = {BIOESSAYS},
      year = {1996},
      volume = {18},
      number = {9},
      pages = {709-718}
    }
    
    Brynjolfsson, E. & Smith, M. Frictionless commerce? A comparison of Internet and conventional retailers {2000} MANAGEMENT SCIENCE
    Vol. {46}({4}), pp. {563-585} 
    article  
    Abstract: There have been many claims that the Internet represents a new nearly ``frictionless market.'' Our research empirically analyzes the characteristics of the Internet as a channel for two categories of homogeneous products-books and CDs. Using a data set of over 8,500 price observations collected over a period of 15 months, we compare pricing behavior at 41 Internet and conventional retail outlets. We find that prices on the Internet are 9-16% lower than prices in conventional outlets, depending on whether taxes, shipping, and shopping costs are included in the price. Additionally, we find that Internet retailers' price adjustments over time are up to 100 times smaller than conventional retailers' price adjustments-presumably reflecting lower menu costs in Internet channels. We also find that levels of price dispersion depend importantly on the measures employed. When we compare the prices posted by different Internet retailers we find substantial dispersion. Internet retailer prices differ by an average of 33% for books and 25% for CDs. However, when we weight these prices by proxies for market share, we find dispersion is lower in Internet channels than in conventional channels, reflecting the dominance of certain heavily branded retailers. We conclude that while there is lower friction in many dimensions of Internet competition, branding, awareness, and trust remain important sources of heterogeneity among Internet retailers.
    BibTeX:
    @article{Brynjolfsson2000,
      author = {Brynjolfsson, E and Smith, MD},
      title = {Frictionless commerce? A comparison of Internet and conventional retailers},
      journal = {MANAGEMENT SCIENCE},
      year = {2000},
      volume = {46},
      number = {4},
      pages = {563-585}
    }
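
    The dispersion comparison hinges on weighting posted prices by proxies for market share. A toy calculation, with invented prices and shares, showing how a dominant branded retailer shrinks weighted dispersion even when posted prices differ widely:

        # mean absolute deviation around the (weighted) mean price
        def dispersion(prices, weights=None):
            n = len(prices)
            w = weights if weights is not None else [1.0 / n] * n
            mean = sum(p * wi for p, wi in zip(prices, w))
            return sum(wi * abs(p - mean) for p, wi in zip(prices, w))

        prices = [12.99, 15.49, 18.00, 21.95]   # one title at four e-tailers
        share = [0.70, 0.20, 0.07, 0.03]        # market-share proxies (invented)
        print(round(dispersion(prices), 2))         # unweighted: wide
        print(round(dispersion(prices, share), 2))  # share-weighted: narrower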
    
    Buchanan, T. & Smith, J. Using the Internet for psychological research: Personality testing on the World Wide Web {1999} BRITISH JOURNAL OF PSYCHOLOGY
    Vol. {90}({Part 1}), pp. {125-144} 
    article  
    Abstract: The Internet is increasingly being used as a medium for psychological research. To assess the validity of such efforts, an electronic version of Gangestad & Snyder's (1985) revised self-monitoring questionnaire was placed at a site on the World Wide Web. In all, 963 responses were obtained through the Internet and these were compared with those from a group of 224 undergraduates who completed a paper-and-pencil version. Comparison of model fit indices obtained through confirmatory factor analyses indicated that the Internet-mediated version had similar psychometric properties to its conventional equivalent and compared favourably as a measure of self-monitoring. Reasons for possible superiority of Internet data are discussed. Results support the notion that Web-based personality assessment is possible, but stringent validation of test instruments is urged.
    BibTeX:
    @article{Buchanan1999,
      author = {Buchanan, T and Smith, JL},
      title = {Using the Internet for psychological research: Personality testing on the World Wide Web},
      journal = {BRITISH JOURNAL OF PSYCHOLOGY},
      year = {1999},
      volume = {90},
      number = {Part 1},
      pages = {125-144}
    }
    
    Bulun, S., Sebastian, S., Takayama, K., Suzuki, T., Sasano, H. & Shozu, M. The human CYP19 (aromatase P450) gene: update on physiologic roles and genomic organization of promoters {2003} JOURNAL OF STEROID BIOCHEMISTRY AND MOLECULAR BIOLOGY
    Vol. {86}({3-5}), pp. {219-224} 
    article DOI  
    Abstract: The human CYP19 (P450arom) gene is located in the chromosome 15q21.2 region and is comprised of a 30 kb coding region and a 93 kb regulatory region. The Internet-based Human Genome Project data enabled us to elucidate its complex organization. The unusually large regulatory region contains 10 tissue-specific promoters that are alternatively used in various cell types. Each promoter is regulated by a distinct set of regulatory sequences in DNA and transcription factors that bind to these specific sequences. In most mammals, P450arom expression is under the control of gonad- and brain-specific promoters. In the human, however, there are at least eight additional promoters that seemed to have been recruited throughout the evolution possibly via alterations in DNA. One of the key mechanisms that permit the recruitment of such a large number of promoters seems to be the extremely promiscuous nature of the common splice acceptor site, since activation of each promoter gives rise to splicing of an untranslated first exon onto this common junction immediately upstream of the translation start site in the coding region. These partially tissue-specific promoters are used in the gonads, bone, brain, vascular tissue, adipose tissue, skin, fetal liver and placenta for physiologic estrogen biosynthesis. The most recently characterized promoter (1.7) was cloned by analyzing P450arom mRNA in breast cancer tissue. This TATA-less promoter accounts for the transcription of 29-54% of P450arom mRNAs in breast cancer tissues and contains endothelial-type cis-acting elements that interact with endothelial-type transcription factors, e.g. GATA-2. We hypothesize that this promoter may upregulate aromatase expression in vascular endothelial cells. The in vivo cellular distribution and physiologic roles of promoter 1.7 in healthy tissues, however, are not known. The gonads use the proximally located promoter II. The normal breast adipose tissue, on the other hand, maintains low levels of aromatase expression primarily via promoter 1.4 that lies 73 kb upstream of the common coding region. Promoters 1.3 and II are used only minimally in normal breast adipose tissue. Promoters II and 1.3 activities in the breast cancer, however, are strikingly increased. Additionally, the endothelial-type promoter 1.7 is also upregulated in breast cancer. Thus, it appears that the prototype estrogen-dependent malignancy breast cancer takes advantage of four promoters (II, 1.3, 1.7 and 1.4) for aromatase expression. The sum of P450arom mRNA species arising from these four promoters markedly increase total P450arom mRNA levels in breast cancer compared with the normal breast. (C) 2003 Published by Elsevier Ltd.
    BibTeX:
    @article{Bulun2003,
      author = {Bulun, SE and Sebastian, S and Takayama, K and Suzuki, T and Sasano, H and Shozu, M},
      title = {The human CYP19 (aromatase P450) gene: update on physiologic roles and genomic organization of promoters},
      journal = {JOURNAL OF STEROID BIOCHEMISTRY AND MOLECULAR BIOLOGY},
      year = {2003},
      volume = {86},
      number = {3-5},
      pages = {219-224},
      note = {6th International Aromatase Conference, KYOTO, JAPAN, OCT 26-30, 2002},
      doi = {{10.1016/S0960-0760(03)00359-5}}
    }
    
    Buyya, R. & Murshed, M. GridSim: a toolkit for the modeling and simulation of distributed resource management and scheduling for Grid computing {2002} CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE
    Vol. {14}({13-15}), pp. {1175-1220} 
    article DOI  
    Abstract: Clusters, Grids, and peer-to-peer (P2P) networks have emerged as popular paradigms for next generation parallel and distributed computing. They enable aggregation of distributed resources for solving large-scale problems in science, engineering, and commerce. In Grid and P2P computing environments, the resources are usually geographically distributed in multiple administrative domains, managed and owned by different organizations with different policies, and interconnected by wide-area networks or the Internet. This introduces a number of resource management and application scheduling challenges in the domain of security, resource and policy heterogeneity, fault tolerance, continuously changing resource conditions, and politics. The resource management and scheduling systems for Grid computing need to manage resources and application execution depending on either resource consumers' or owners' requirements, and continuously adapt to changes in resource availability. The management of resources and scheduling of applications in such large-scale distributed systems is a complex undertaking. In order to prove the effectiveness of resource brokers and associated scheduling algorithms, their performance needs to be evaluated under different scenarios such as varying number of resources and users with different requirements. In a Grid environment, it is hard and even impossible to perform scheduler performance evaluation in a repeatable and controllable manner as resources and users are distributed across multiple organizations with their own policies. To overcome this limitation, we have developed a Java-based discrete-event Grid simulation toolkit called GridSim. The toolkit supports modeling and simulation of heterogeneous Grid resources (both time- and space-shared), users and application models. It provides primitives for creation of application tasks, mapping of tasks to resources, and their management. To demonstrate suitability of the GridSim toolkit, we have simulated a Nimrod-G like Grid resource broker and evaluated the performance of deadline and budget constrained cost- and time-minimization scheduling algorithms. Copyright (C) 2002 John Wiley & Sons, Ltd.
    BibTeX:
    @article{Buyya2002,
      author = {Buyya, R and Murshed, M},
      title = {GridSim: a toolkit for the modeling and simulation of distributed resource management and scheduling for Grid computing},
      journal = {CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE},
      year = {2002},
      volume = {14},
      number = {13-15},
      pages = {1175-1220},
      doi = {{10.1002/cpe.710}}
    }
    
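    The scheduling experiments GridSim supports rest on a plain discrete-event core. As a minimal sketch of that idea (plain Python, not the GridSim Java API; the resource speeds, costs and greedy policy below are invented for illustration), a broker can assign each task to the cheapest simulated resource that still meets the deadline:

      import heapq

      # Toy discrete-event broker in the spirit of deadline- and
      # budget-constrained scheduling; all names and numbers are hypothetical.

      class Resource:
          def __init__(self, name, mips, cost_per_sec):
              self.name, self.mips, self.cost = name, mips, cost_per_sec
              self.free_at = 0.0      # simulated time when the resource is idle

      def schedule(tasks, resources, deadline, budget):
          """tasks: list of task lengths in MI (million instructions)."""
          events, spent = [], 0.0
          for length in sorted(tasks, reverse=True):
              # resources that can still finish this task before the deadline
              ok = [r for r in resources if r.free_at + length / r.mips <= deadline]
              if not ok:
                  raise RuntimeError("deadline cannot be met")
              r = min(ok, key=lambda r: (length / r.mips) * r.cost)  # cheapest
              r.free_at += length / r.mips
              spent += (length / r.mips) * r.cost
              if spent > budget:
                  raise RuntimeError("budget exhausted")
              heapq.heappush(events, (r.free_at, r.name, length))
          while events:               # replay completions in time order
              t, name, length = heapq.heappop(events)
              print(f"t={t:7.1f}s  {name} finished {length} MI task")

      schedule(tasks=[4000, 1000, 2500],
               resources=[Resource("R0", 100, 0.9), Resource("R1", 50, 0.3)],
               deadline=120.0, budget=60.0)
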
    Caldarelli, G., Capocci, A., De Los Rios, P. & Munoz, M. Scale-free networks from varying vertex intrinsic fitness {2002} PHYSICAL REVIEW LETTERS
    Vol. {89}({25}) 
    article DOI  
    Abstract: A new mechanism leading to scale-free networks is proposed in this Letter. It is shown that, in many cases of interest, the connectivity power-law behavior is neither related to dynamical properties nor to preferential attachment. Assigning a quenched fitness value x(i) to every vertex, and drawing links among vertices with a probability depending on the fitnesses of the two involved sites, gives rise to what we call a good-get-richer mechanism, in which sites with larger fitness are more likely to become hubs (i.e., to be highly connected).
    BibTeX:
    @article{Caldarelli2002,
      author = {Caldarelli, G and Capocci, A and De Los Rios, P and Munoz, MA},
      title = {Scale-free networks from varying vertex intrinsic fitness},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2002},
      volume = {89},
      number = {25},
      doi = {{10.1103/PhysRevLett.89.258702}}
    }
    
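    The mechanism in the Letter above is short enough to state in code. A sketch assuming its exponential-fitness case, with a link drawn whenever the two fitnesses together exceed a threshold z (the size and threshold below are illustrative):

      import random
      from itertools import combinations

      # "Good-get-richer" fitness model: draw a quenched fitness x_i for every
      # vertex and connect pairs with a probability depending only on
      # (x_i, x_j); here f(x_i, x_j) = theta(x_i + x_j - z).

      N, z = 2000, 9.0
      fitness = [random.expovariate(1.0) for _ in range(N)]
      degree = [0] * N
      for i, j in combinations(range(N), 2):
          if fitness[i] + fitness[j] >= z:
              degree[i] += 1
              degree[j] += 1

      # Vertices with the largest fitness become the hubs.
      top = max(range(N), key=fitness.__getitem__)
      print(f"max fitness {fitness[top]:.2f} -> degree {degree[top]}")
      print("mean degree", sum(degree) / N)
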
    Callaway, D., Hopcroft, J., Kleinberg, J., Newman, M. & Strogatz, S. Are randomly grown graphs really random? {2001} PHYSICAL REVIEW E
    Vol. {64}({4, Part 1}) 
    article  
    Abstract: We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta = 1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta = 1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph: older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
    BibTeX:
    @article{Callaway2001,
      author = {Callaway, DS and Hopcroft, JE and Kleinberg, JM and Newman, MEJ and Strogatz, SH},
      title = {Are randomly grown graphs really random?},
      journal = {PHYSICAL REVIEW E},
      year = {2001},
      volume = {64},
      number = {4, Part 1}
    }
    
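    The growth model is simple to simulate; a minimal union-find sketch (step count and seed are arbitrary) lets one watch the giant component appear near delta = 1/8:

      import random

      # Minimal growing network: each step adds a vertex, then with
      # probability delta joins two uniformly chosen existing vertices.

      def giant_fraction(delta, t=20000, seed=1):
          rng = random.Random(seed)
          parent = list(range(t))                  # union-find over t vertices

          def find(v):
              while parent[v] != v:
                  parent[v] = parent[parent[v]]    # path halving
                  v = parent[v]
              return v

          for n in range(1, t):                    # vertices 0..n exist at step n
              if rng.random() < delta:
                  a, b = rng.randrange(n + 1), rng.randrange(n + 1)
                  parent[find(a)] = find(b)

          sizes = {}
          for v in range(t):
              r = find(v)
              sizes[r] = sizes.get(r, 0) + 1
          return max(sizes.values()) / t

      for delta in (0.05, 0.125, 0.25, 0.5):
          print(delta, round(giant_fraction(delta), 3))
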
    Callaway, D., Newman, M., Strogatz, S. & Watts, D. Network robustness and fragility: Percolation on random graphs {2000} PHYSICAL REVIEW LETTERS
    Vol. {85}({25}), pp. {5468-5471} 
    article  
    Abstract: Recent work on the Internet, social networks, and the power grid has addressed the resilience of these networks to either random or targeted deletion of network nodes or links. Such deletions include, for example, the failure of Internet routers or power transmission lines. Percolation models on random graphs provide a simple representation of this process but have typically been limited to graphs with Poisson degree distribution at their vertices. Such graphs are quite unlike real-world networks, which often possess power-law or other highly skewed degree distributions. In this paper we study percolation on graphs with completely general degree distribution, giving exact solutions for a variety of cases, including site percolation, bond percolation, and models in which occupation probabilities depend on vertex degree. We discuss the application of our theory to the understanding of network resilience.
    BibTeX:
    @article{Callaway2000,
      author = {Callaway, DS and Newman, MEJ and Strogatz, SH and Watts, DJ},
      title = {Network robustness and fragility: Percolation on random graphs},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2000},
      volume = {85},
      number = {25},
      pages = {5468-5471}
    }
    
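    The paper's generating-function results can be checked numerically. A sketch using networkx (the degree-sequence parameters are chosen arbitrarily): build a configuration-model graph with a skewed degree sequence, occupy sites with probability q, and track the largest cluster:

      import random
      import networkx as nx

      # Site percolation on a configuration-model graph with a heavy-tailed
      # degree sequence: keep each site with probability q, measure the
      # largest connected cluster as a fraction of the whole graph.

      def largest_cluster_fraction(G, q, rng):
          kept = [v for v in G if rng.random() < q]
          H = G.subgraph(kept)
          if H.number_of_nodes() == 0:
              return 0.0
          return max(len(c) for c in nx.connected_components(H)) / G.number_of_nodes()

      rng = random.Random(0)
      # power-law-ish degrees, truncated; sum must be even for stub matching
      deg = [min(int(rng.paretovariate(1.5)), 100) for _ in range(10000)]
      if sum(deg) % 2:
          deg[0] += 1
      G = nx.Graph(nx.configuration_model(deg, seed=0))  # collapse multi-edges
      G.remove_edges_from(nx.selfloop_edges(G))

      for q in (0.1, 0.3, 0.5, 0.9):
          print(q, round(largest_cluster_fraction(G, q, rng), 3))
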
    Calvert, K., Doar, M. & Zegura, E. Modeling Internet topology {1997} IEEE COMMUNICATIONS MAGAZINE
    Vol. {35}({6}), pp. {160-163} 
    article  
    Abstract: The topology of a network, or a group of networks such as the Internet, has a strong bearing on many management and performance issues. Good models of the topological structure of a network are essential for developing and analyzing internetworking technology. This article discusses how graph-based models can be used to represent the topology of large networks, particularly aspects of locality and hierarchy present in the Internet. Two implementations that generate networks whose topology resembles that of typical internetworks are described, together with publicly available source code.
    BibTeX:
    @article{Calvert1997,
      author = {Calvert, KL and Doar, MB and Zegura, EW},
      title = {Modeling Internet topology},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {1997},
      volume = {35},
      number = {6},
      pages = {160-163}
    }
    
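    The flat random models this literature builds on are easy to reproduce; a sketch of a Waxman-style generator, the classic example of the family (the alpha and beta values are picked for illustration, not taken from the article's implementations):

      import math
      import random

      # Waxman-style flat random topology: scatter routers in the unit square
      # and connect u,v with probability beta * exp(-d(u,v) / (alpha * L)),
      # where L is the maximum possible distance.

      def waxman(n, alpha=0.15, beta=0.4, seed=42):
          rng = random.Random(seed)
          pos = [(rng.random(), rng.random()) for _ in range(n)]
          L = math.sqrt(2)                   # max distance in the unit square
          edges = []
          for u in range(n):
              for v in range(u + 1, n):
                  d = math.dist(pos[u], pos[v])
                  if rng.random() < beta * math.exp(-d / (alpha * L)):
                      edges.append((u, v))
          return pos, edges

      pos, edges = waxman(100)
      print(len(edges), "edges; mean degree", 2 * len(edges) / 100)
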
    Campbell, A., Gomez, J., Kim, S., Valko, A., Wan, C. & Turanyi, Z. Design, implementation, and evaluation of cellular IP {2000} IEEE PERSONAL COMMUNICATIONS
    Vol. {7}({4}), pp. {42-49} 
    article  
    Abstract: Wireless access to Internet services will become typical, rather than the exception as it is today. Such a vision presents great demands on mobile networks. Mobile IP represents a simple and scalable global mobility solution but lacks the support for fast handoff control and paging found in cellular telephony networks. In contrast, second- and third-generation cellular systems offer seamless mobility support but are built on complex and costly connection-oriented networking infrastructure that lacks the inherent flexibility, robustness, and scalability found in IP networks. In this article we present Cellular IP, a micro-mobility protocol that provides seamless mobility support in limited geographical areas. Cellular IP, which incorporates a number of important cellular system design principles such as paging in support of passive connectivity, is built on a foundation of IP forwarding, minimal signaling and soft-state location management. We discuss the design, implementation, and evaluation of a Cellular IP testbed developed at Columbia University over the past several years. Built on a simple, low-cost, plug-and-play systems paradigm, Cellular IP software enables the construction of arbitrary-sized access networks scaling from picocellular to metropolitan area networks. The source code for Cellular IP is freely available from the Web (comet.columbia.edu/cellularip).
    BibTeX:
    @article{Campbell2000,
      author = {Campbell, AT and Gomez, J and Kim, S and Valko, AG and Wan, CY and Turanyi, ZR},
      title = {Design, implementation, and evaluation of cellular IP},
      journal = {IEEE PERSONAL COMMUNICATIONS},
      year = {2000},
      volume = {7},
      number = {4},
      pages = {42-49}
    }
    
    Carter, R. & Crovella, M. Measuring bottleneck link speed in packet-switched networks {1996} PERFORMANCE EVALUATION
    Vol. {27-8}, pp. {297-318} 
    article  
    Abstract: The quality of available network connections can have a large impact on the performance of distributed applications. For example, for document transfer applications such as FTP, Gopher and the World Wide Web, document transfer time is often directly related to the available bandwidth of the connection. Available bandwidth depends on two things: 1) the underlying capacity of the path between client and server, which is limited by the bottleneck link; and 2) the amount of other traffic competing for links on the path. If measurements of these quantities were available to the application, the current utilization of paths could be calculated. Network utilization could then be used as a basis for selection from a set of alternative paths or servers, providing reduced response time. Such a dynamic server selection scheme would be especially useful in a mobile computing environment in which the set of available servers is frequently changing. In order to provide these measurements at the application level, we introduce two tools: BPROBE, which provides an estimate of the uncongested bandwidth of a path; and CPROBE, which gives an estimate of the current congestion along a path. These two measures may be used in combination to provide the application with an estimate of available bandwidth between server and client, thereby enabling application-level congestion avoidance. In this paper we discuss the design and implementation of the probe tools, specifically illustrating the techniques used to achieve accuracy and robustness. We present validation studies for both tools which demonstrate their accuracy in the face of actual Internet conditions. Finally, we give results of a survey of available bandwidth to a random set of WWW servers as a sample application of our probe technique.
    BibTeX:
    @article{Carter1996,
      author = {Carter, RL and Crovella, ME},
      title = {Measuring bottleneck link speed in packet-switched networks},
      journal = {PERFORMANCE EVALUATION},
      year = {1996},
      volume = {27-8},
      pages = {297-318},
      note = {18th International Symposium of the IFIP Working Group on Information Processing System Modeling, Measurement and Evaluation (PERFORMANCE 96), LAUSANNE, SWITZERLAND, OCT 07-12, 1996}
    }
    
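    The arithmetic behind this style of probing is the packet-pair observation: two back-to-back probes of b bytes leave a bottleneck of capacity C spaced b/C seconds apart, so the bottleneck speed can be read off the arrival gap. A sketch (the probe size and gap are invented; BPROBE's actual filtering and robustness machinery is more involved):

      # Packet-pair arithmetic behind bottleneck-capacity probing: two
      # back-to-back packets of b bytes leave a bottleneck of capacity C
      # spaced by b/C seconds, so C can be recovered from the arrival gap.

      def bottleneck_estimate(packet_bytes, arrival_gap_s):
          return 8 * packet_bytes / arrival_gap_s   # bits per second

      # e.g. 1500-byte probes arriving 1.2 ms apart imply a 10 Mb/s bottleneck
      print(bottleneck_estimate(1500, 0.0012))      # -> 10000000.0
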
    Carzaniga, A., Rosenblum, D. & Wolf, A. Design and evaluation of a wide-area event notification service {2001} ACM TRANSACTIONS ON COMPUTER SYSTEMS
    Vol. {19}({3}), pp. {332-383} 
    article  
    Abstract: The components of a loosely coupled system are typically designed to operate by generating and responding to asynchronous events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems, whereby generators of events publish event notifications to the infrastructure and consumers of events subscribe with the infrastructure to receive relevant notifications. The two primary services that should be provided to components by the infrastructure are notification selection (i.e., determining which notifications match which subscriptions) and notification delivery (i.e., routing matching notifications from publishers to subscribers). Numerous event notification services have been developed for local-area networks, generally based on a centralized server to select and deliver event notifications. Therefore, they suffer from an inherent inability to scale to wide-area networks, such as the Internet, where the number and physical distribution of the service's clients can quickly overwhelm a centralized solution. The critical challenge in the setting of a wide-area network is to maximize the expressiveness in the selection mechanism without sacrificing scalability in the delivery mechanism. This paper presents SIENA, an event notification service that we have designed and implemented to exhibit both expressiveness and scalability. We describe the service's interface to applications, the algorithms used by networks of servers to select and deliver event notifications, and the strategies used to optimize performance. We also present results of simulation studies that examine the scalability and performance of the service.
    BibTeX:
    @article{Carzaniga2001,
      author = {Carzaniga, A and Rosenblum, DS and Wolf, AL},
      title = {Design and evaluation of a wide-area event notification service},
      journal = {ACM TRANSACTIONS ON COMPUTER SYSTEMS},
      year = {2001},
      volume = {19},
      number = {3},
      pages = {332-383}
    }
    
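    The notification-selection half of such a service reduces to matching attribute-value notifications against conjunctive subscriptions. A toy matcher (the operator set and event schema are invented; SIENA's actual selection and delivery run distributed across networks of servers):

      import operator

      # Minimal content-based matcher: notifications are attribute dicts,
      # subscriptions are conjunctions of (attribute, operator, value) filters.

      OPS = {"=": operator.eq, "<": operator.lt, ">": operator.gt}

      def matches(notification, subscription):
          return all(
              attr in notification and OPS[op](notification[attr], value)
              for attr, op, value in subscription
          )

      sub = [("class", "=", "alert"), ("severity", ">", 3)]
      print(matches({"class": "alert", "severity": 5}, sub))   # True
      print(matches({"class": "alert", "severity": 2}, sub))   # False
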
    Castro, M., Druschel, P., Kermarrec, A. & Rowstron, A. Scribe: A large-scale and decentralized application-level multicast infrastructure {2002} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {20}({8}), pp. {1489-1499} 
    article DOI  
    Abstract: This paper presents Scribe, a scalable application-level multicast infrastructure. Scribe supports large numbers of groups, with a potentially large number of members per group. Scribe is built on top of Pastry, a generic peer-to-peer object location and routing substrate overlayed on the Internet, and leverages Pastry's reliability, self-organization, and locality properties. Pastry is used to create and manage groups and to build efficient multicast trees for the dissemination of messages to each group. Scribe provides best-effort reliability guarantees, and we outline how an application can extend Scribe to provide stronger reliability. Simulation results, based on a realistic network topology model, show that Scribe scales across a wide range of groups and group sizes. Also, it balances the load on the nodes while achieving acceptable delay and link stress when compared with Internet protocol multicast.
    BibTeX:
    @article{Castro2002,
      author = {Castro, M and Druschel, P and Kermarrec, AM and Rowstron, AIT},
      title = {Scribe: A large-scale and decentralized application-level multicast infrastructure},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {2002},
      volume = {20},
      number = {8},
      pages = {1489-1499},
      doi = {{10.1109/JSAC.2002.803069}}
    }
    
    Chainani-Wu, N. Safety and anti-inflammatory activity of curcumin: A component of tumeric (Curcuma longa) {2003} JOURNAL OF ALTERNATIVE AND COMPLEMENTARY MEDICINE
    Vol. {9}({1}), pp. {161-168} 
    article  
    Abstract: Introduction: Turmeric is a spice that comes from the root Curcuma longa, a member of the ginger family, Zingiberaceae. In Ayurveda (Indian traditional medicine), turmeric has been used for its medicinal properties for various indications and through different routes of administration, including topically, orally, and by inhalation. Curcuminoids are components of turmeric, which include mainly curcumin (diferuloyl methane), demethoxycurcumin, and bisdemethoxycurcumin. Objectives: The goal of this systematic review of the literature was to summarize the literature on the safety and anti-inflammatory activity of curcumin. Methods: A search of the computerized database MEDLINE(TM) (1966 to January 2002), a manual search of bibliographies of papers identified through MEDLINE, and an Internet search using multiple search engines for references on this topic was conducted. The PDR for Herbal Medicines, and four textbooks on herbal medicine and their bibliographies were also searched. Results: A large number of studies on curcumin were identified. These included studies on the antioxidant, anti-inflammatory, antiviral, and antifungal properties of curcuminoids. Studies on the toxicity and anti-inflammatory properties of curcumin have included in vitro, animal, and human studies. A phase 1 human trial with 25 subjects using up to 8000 mg of curcumin per day for 3 months found no toxicity from curcumin. Five other human trials using 1125-2500 mg of curcumin per day have also found it to be safe. These human studies have found some evidence of anti-inflammatory activity of curcumin. The laboratory studies have identified a number of different molecules involved in inflammation that are inhibited by curcumin including phospholipase, lipoxygenase, cyclooxygenase 2, leukotrienes, thromboxane, prostaglandins, nitric oxide, collagenase, elastase, hyaluronidase, monocyte chemoattractant protein-1 (MCP-1), interferon-inducible protein, tumor necrosis factor (TNF), and interleukin-12 (IL-12). Conclusions: Curcumin has been demonstrated to be safe in six human trials and has demonstrated anti-inflammatory activity. It may exert its anti-inflammatory activity by inhibition of a number of different molecules that play a role in inflammation.
    BibTeX:
    @article{Chainani-Wu2003,
      author = {Chainani-Wu, N},
      title = {Safety and anti-inflammatory activity of curcumin: A component of tumeric (Curcuma longa)},
      journal = {JOURNAL OF ALTERNATIVE AND COMPLEMENTARY MEDICINE},
      year = {2003},
      volume = {9},
      number = {1},
      pages = {161-168}
    }
    
    Chen, H., Houston, A., Sewell, R. & Schatz, B. Internet browsing and searching: User evaluations of category map and concept space techniques {1998} JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE
    Vol. {49}({7}), pp. {582-603} 
    article  
    Abstract: The Internet provides an exceptional testbed for developing algorithms that can improve browsing and searching large information spaces. Browsing and searching tasks are susceptible to problems of information overload and vocabulary differences. Much of the current research is aimed at the development and refinement of algorithms to improve browsing and searching by addressing these problems. Our research was focused on discovering whether two of the algorithms our research group has developed, a Kohonen algorithm category map for browsing, and an automatically generated concept space algorithm for searching, can help improve browsing and/or searching the Internet. Our results indicate that a Kohonen self-organizing map (SOM)-based algorithm can successfully categorize a large and eclectic Internet information space (the Entertainment subcategory of Yahoo!) into manageable sub-spaces that users can successfully navigate to locate a homepage of interest to them. The SOM algorithm worked best with browsing tasks that were very broad, and in which subjects skipped around between categories. Subjects especially liked the visual and graphical aspects of the map. Subjects who tried to do a directed search, and those that wanted to use the more familiar mental models (alphabetic or hierarchical organization) for browsing, found that the map did not work well. The results from the concept space experiment were especially encouraging. There were no significant differences among the precision measures for the set of documents identified by subject-suggested terms, thesaurus-suggested terms, and the combination of subject- and thesaurus-suggested terms. The recall measures indicated that the combination of subject- and thesaurus-suggested terms exhibited significantly better recall than subject-suggested terms alone. Furthermore, analysis of the homepages indicated that there was limited overlap between the homepages retrieved by the subject-suggested and thesaurus-suggested terms. Since the retrieved homepages for the most part were different, this suggests that a user can enhance a keyword-based search by using an automatically generated concept space. Subjects especially liked the level of control that they could exert over the search, and the fact that the terms suggested by the thesaurus were ``real'' (i.e., originating in the homepages) and therefore guaranteed to have retrieval success.
    BibTeX:
    @article{Chen1998,
      author = {Chen, HC and Houston, AL and Sewell, RR and Schatz, BR},
      title = {Internet browsing and searching: User evaluations of category map and concept space techniques},
      journal = {JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE},
      year = {1998},
      volume = {49},
      number = {7},
      pages = {582-603}
    }
    
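    The category map above rests on the standard Kohonen update: pull the best-matching map unit and its neighbors toward each input vector while the learning rate and neighborhood shrink. A self-contained sketch on toy term-frequency vectors (grid size, schedules and data are illustrative, not the authors' configuration):

      import math
      import random

      # Minimal Kohonen self-organizing map: each training step finds the
      # best-matching unit (BMU) for a random input and pulls the BMU and
      # its grid neighbors toward that input.

      def train_som(docs, rows=4, cols=4, iters=2000, seed=0):
          rng = random.Random(seed)
          dim = len(docs[0])
          w = [[rng.random() for _ in range(dim)] for _ in range(rows * cols)]
          for t in range(iters):
              x = rng.choice(docs)
              lr = 0.5 * (1 - t / iters)                      # decaying rate
              radius = max(1.0, (rows / 2) * (1 - t / iters)) # shrinking radius
              bmu = min(range(rows * cols),
                        key=lambda u: sum((w[u][k] - x[k]) ** 2 for k in range(dim)))
              br, bc = divmod(bmu, cols)
              for u in range(rows * cols):
                  r, c = divmod(u, cols)
                  d2 = (r - br) ** 2 + (c - bc) ** 2
                  h = math.exp(-d2 / (2 * radius ** 2))       # neighborhood
                  for k in range(dim):
                      w[u][k] += lr * h * (x[k] - w[u][k])
          return w

      docs = [[1, 0, 0], [0.9, 0.1, 0], [0, 1, 0], [0, 0, 1]]  # toy TF vectors
      som = train_som(docs)
      print(len(som), "units trained")
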
    Chen, H., Schuffels, C. & Orwig, R. Internet categorization and search: A self-organizing approach {1996} JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION
    Vol. {7}({1}), pp. {88-102} 
    article  
    Abstract: The problems of information overload and vocabulary differences have become more pressing with the emergence of increasingly popular Internet services. The main information retrieval mechanisms provided by the prevailing Internet WWW software are based on either keyword search (e.g., the Lycos server at CMU, the Yahoo server at Stanford) or hypertext browsing (e.g., Mosaic and Netscape). This research aims to provide an alternative concept-based categorization and search capability for WWW servers based on selected machine learning algorithms. Our proposed approach, which is grounded on automatic textual analysis of Internet documents (homepages), attempts to address the Internet search problem by first categorizing the content of Internet documents. We report results of our recent testing of a multilayered neural network clustering algorithm employing the Kohonen self-organizing feature map to categorize (classify) Internet homepages according to their content. The category hierarchies created could serve to partition the vast Internet services into subject-specific categories and databases and improve Internet keyword searching and/or browsing. (C) 1996 Academic Press, Inc.
    BibTeX:
    @article{Chen1996a,
      author = {Chen, HC and Schuffels, C and Orwig, R},
      title = {Internet categorization and search: A self-organizing approach},
      journal = {JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION},
      year = {1996},
      volume = {7},
      number = {1},
      pages = {88-102}
    }
    
    Chen, L., Gillenson, M. & Sherrell, D. Enticing online consumers: an extended technology acceptance perspective {2002} INFORMATION & MANAGEMENT
    Vol. {39}({8}), pp. {705-719} 
    article  
    Abstract: The business-to-consumer aspect of electronic commerce (EC) is the most visible business use of the World Wide Web (WWW). A virtual store allows companies to provide product information and offer direct sales to their customers through an electronic channel. The fundamental problem motivating this study is that: in order for a virtual store to compete effectively with both physical stores and other online retailers, there is an urgent need to understand the factors that entice consumers to use it. This research attempted to provide both theoretical and empirical analyses to explain consumers' use of a virtual store and its antecedents. By applying the technology acceptance model (TAM) and innovation diffusion theory (IDT), this research took an extended perspective to examine consumer behavior in the virtual store context. The data from a survey of online consumers was used empirically to test the proposed research model. Confirmatory factor analysis (CFA) was performed to examine the reliability and validity of the measurement model, and the structural equation modeling (SEM) technique was used to evaluate the causal model. The implication of the work to both researchers and practitioners is discussed. (C) 2002 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Chen2002,
      author = {Chen, LD and Gillenson, ML and Sherrell, DL},
      title = {Enticing online consumers: an extended technology acceptance perspective},
      journal = {INFORMATION & MANAGEMENT},
      year = {2002},
      volume = {39},
      number = {8},
      pages = {705-719}
    }
    
    Chen, M., Han, J. & Yu, P. Data mining: An overview from a database perspective {1996} IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
    Vol. {8}({6}), pp. {866-883} 
    article  
    Abstract: Mining information and knowledge from large databases has been recognized by many researchers as a key research topic in database systems and machine learning, and by many industrial companies as an important area with an opportunity of major revenues. Researchers in many different fields have shown great interest in data mining. Several emerging applications in information providing services, such as data warehousing and on-line services over the Internet, also call for various data mining techniques to better understand user behavior, to improve the service provided, and to increase the business opportunities. In response to such a demand, this article provides a survey, from a database researcher's point of view, of the data mining techniques developed recently. A classification of the available data mining techniques is provided, and a comparative study of such techniques is presented.
    BibTeX:
    @article{Chen1996,
      author = {Chen, MS and Han, JW and Yu, PS},
      title = {Data mining: An overview from a database perspective},
      journal = {IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING},
      year = {1996},
      volume = {8},
      number = {6},
      pages = {866-883}
    }
    
    Cherry, J., Adler, C., Ball, C., Chervitz, S., Dwight, S., Hester, E., Jia, Y., Juvik, G., Roe, T., Schroeder, M., Weng, S. & Botstein, D. SGD: Saccharomyces Genome Database {1998} NUCLEIC ACIDS RESEARCH
    Vol. {26}({1}), pp. {73-79} 
    article  
    Abstract: The Saccharomyces Genome Database (SGD) provides Internet access to the complete Saccharomyces cerevisiae genomic sequence, its genes and their products, the phenotypes of its mutants, and the literature supporting these data. The amount of information and the number of features provided by SGD have increased greatly following the release of the S.cerevisiae genomic sequence, which is currently the only complete sequence of a eukaryotic genome. SGD aids researchers by providing not only basic information, but also tools such as sequence similarity searching that lead to detailed information about features of the genome and relationships between genes. SGD presents information using a variety of user-friendly, dynamically created graphical displays illustrating physical, genetic and sequence feature maps. SGD can be accessed via the World Wide Web at http://genome-www.stanford.edu/Saccharomyces/
    BibTeX:
    @article{Cherry1998,
      author = {Cherry, JM and Adler, C and Ball, C and Chervitz, SA and Dwight, SS and Hester, ET and Jia, YK and Juvik, G and Roe, T and Schroeder, M and Weng, SA and Botstein, D},
      title = {SGD: Saccharomyces Genome Database},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {1998},
      volume = {26},
      number = {1},
      pages = {73-79}
    }
    
    Chiang, M. Balancing transport and physical layers in wireless multihop networks: Jointly optimal congestion control and power control {2005} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {23}({1}), pp. {104-116} 
    article DOI  
    Abstract: In a wireless network with multihop transmissions and interference-limited link rates, can we balance power control in the physical layer and congestion control in the transport layer to enhance the overall network performance while maintaining the architectural modularity between the layers? We answer this question by presenting a distributed power control algorithm that couples with existing transmission control protocols (TCPs) to increase end-to-end throughput and energy efficiency of the network. Under the rigorous framework of nonlinearly constrained utility maximization, we prove the convergence of this coupled algorithm to the global optimum of joint power control and congestion control, for both synchronized and asynchronous implementations. The rate of convergence is geometric and a desirable modularity between the transport and physical layers is maintained. In particular, when congestion control uses TCP Vegas, a simple utilization in the physical layer of the queueing delay information suffices to achieve the joint optimum. Analytic results and simulations illustrate other desirable properties of the proposed algorithm, including robustness to channel outage and to path loss estimation errors, and flexibility in trading off performance optimality for implementation simplicity. This paper presents a step toward a systematic understanding of ``layering'' as ``optimization decomposition,'' where the overall communication network is modeled by a generalized network utility maximization problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as the optimization variables coordinating the subproblems. In the case of the transport and physical layers, link congestion prices turn out to be the optimal ``layering prices.''
    BibTeX:
    @article{Chiang2005,
      author = {Chiang, M},
      title = {Balancing transport and physical layers in wireless multihop networks: Jointly optimal congestion control and power control},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {2005},
      volume = {23},
      number = {1},
      pages = {104-116},
      doi = {{10.1109/JSAC.2004.837347}}
    }
    
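    The optimization framework underneath can be stated compactly. A sketch of the basic network utility maximization problem and the price-based dual decomposition that makes it distributed (generic notation, not the paper's exact formulation; in the joint design the link capacities additionally depend on transmit powers):

      \begin{align*}
      \max_{x \ge 0}\;\; & \sum_{s} U_s(x_s)
        && \text{(concave utility of each source rate $x_s$)}\\
      \text{s.t.}\;\; & \sum_{s:\, l \in L(s)} x_s \le c_l \quad \forall\, l
        && \text{(capacity of every link $l$ on the paths $L(s)$)}
      \end{align*}

      Attaching a price $\lambda_l \ge 0$ to each capacity constraint separates the dual into per-source subproblems:

      \[
      g(\lambda) = \sum_{s} \max_{x_s \ge 0}
        \Big( U_s(x_s) - x_s \sum_{l \in L(s)} \lambda_l \Big)
        + \sum_{l} \lambda_l c_l ,
      \]

      so each source maximizes its own term given its end-to-end price, and each link updates $\lambda_l$ from purely local load; this is the sense in which link congestion prices act as the ``layering prices.''
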
    Chiang, M., Low, S.H., Calderbank, A.R. & Doyle, J.C. Layering as optimization decomposition: A mathematical theory of network architectures {2007} PROCEEDINGS OF THE IEEE
    Vol. {95}({1}), pp. {255-312} 
    article DOI  
    Abstract: Network protocols in layered architectures have historically been obtained on an ad hoc basis, and many of the recent cross-layer designs are also conducted through piecemeal approaches. Network protocol stacks may instead be holistically analyzed and systematically designed as distributed solutions to some global optimization problems. This paper presents a survey of the recent efforts towards a systematic understanding of ``layering'' as ``optimization decomposition,'' where the overall communication network is modeled by a generalized network utility maximization problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the subproblems. There can be many alternative decompositions, leading to a choice of different layering architectures. This paper surveys the current status of horizontal decomposition into distributed computation, and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and channel coding. Key messages and methods arising from many recent works are summarized, and open issues discussed. Through case studies, it is illustrated how ``Layering as Optimization Decomposition'' provides a common language to think about modularization in the face of complex, networked interactions, a unifying, top-down approach to design protocol stacks, and a mathematical theory of network architectures.
    BibTeX:
    @article{Chiang2007,
      author = {Chiang, Mung and Low, Steven H. and Calderbank, A. Robert and Doyle, John C.},
      title = {Layering as optimization decomposition: A mathematical theory of network architectures},
      journal = {PROCEEDINGS OF THE IEEE},
      year = {2007},
      volume = {95},
      number = {1},
      pages = {255-312},
      doi = {{10.1109/JPROC.2006.887322}}
    }
    
    Childers, T., Carr, C., Peck, J. & Carson, S. Hedonic and utilitarian motivations for online retail shopping behavior {2001} JOURNAL OF RETAILING
    Vol. {77}({4}), pp. {511-535} 
    article  
    Abstract: Motivations to engage in retail shopping include both utilitarian and hedonic dimensions. Business to consumer e-commerce conducted via the mechanism of web-shopping provides an expanded opportunity for companies to create a cognitively and esthetically rich shopping environment in ways not readily imitable in the nonelectronic shopping world. In this article an attitudinal model is developed and empirically tested integrating constructs from technology acceptance research and constructs derived from models of web behavior. Results of two studies from two distinct categories of the interactive shopping environment support the differential importance of immersive, hedonic aspects of the new media as well as the more traditional utilitarian motivations. In addition, navigation, convenience, and the substitutability of the electronic environment to personally examining products were found to be important predictors of online shopping attitudes. Results are discussed in terms of insights for the creation of the online shopping webmosphere through more effective design of interactive retail shopping environments. (C) 2001 by New York University. All rights reserved.
    BibTeX:
    @article{Childers2001,
      author = {Childers, TL and Carr, CL and Peck, J and Carson, S},
      title = {Hedonic and utilitarian motivations for online retail shopping behavior},
      journal = {JOURNAL OF RETAILING},
      year = {2001},
      volume = {77},
      number = {4},
      pages = {511-535}
    }
    
    Christensen, H., Griffiths, K. & Jorm, A. Delivering interventions for depression by using the internet: randomised controlled trial {2004} BRITISH MEDICAL JOURNAL
    Vol. {328}({7434}), pp. {265-268A} 
    article DOI  
    Abstract: Objective To evaluate the efficacy of two internet interventions for community-dwelling individuals with symptoms of depression: a psychoeducation website offering information about depression, and an interactive website offering cognitive behaviour therapy. Design Randomised controlled trial. Setting Internet users in the community, in Canberra, Australia. Participants 525 individuals with increased depressive symptoms recruited by survey and randomly allocated to a website offering information about depression (n = 166) or a cognitive behaviour therapy website (n = 182), or a control intervention using an attention placebo (n = 178). Main outcome measures Change in depression, dysfunctional thoughts; knowledge of medical, psychological, and lifestyle treatments; and knowledge of cognitive behaviour therapy. Results Intention to treat analyses indicated that information about depression and interventions that used cognitive behaviour therapy and were delivered via the internet were more effective than a credible control intervention in reducing symptoms of depression in a community sample. For the intervention that delivered cognitive behaviour therapy the reduction in score on the depression scale of the Center for Epidemiologic Studies was 3.2 (95% confidence interval 0.9 to 5.4). For the ``depression literacy'' site (BluePages), the reduction was 3.0 (95% confidence interval 0.6 to 5.2). Cognitive behaviour therapy (MoodGYM) reduced dysfunctional thinking and increased knowledge of cognitive behaviour therapy. Depression literacy (BluePages) significantly improved participants' understanding of effective evidence based treatments for depression (P < 0.05). Conclusions Both cognitive behaviour therapy and psychoeducation delivered via the internet are effective in reducing symptoms of depression.
    BibTeX:
    @article{Christensen2004,
      author = {Christensen, H and Griffiths, KM and Jorm, AF},
      title = {Delivering interventions for depression by using the internet: randomised controlled trial},
      journal = {BRITISH MEDICAL JOURNAL},
      year = {2004},
      volume = {328},
      number = {7434},
      pages = {265-268A},
      doi = {{10.1136/bmj.37945.566632.EE}}
    }
    
    Christopoulos, C., Skodras, A. & Ebrahimi, T. The JPEG2000 still image coding system: An overview {2000} IEEE TRANSACTIONS ON CONSUMER ELECTRONICS
    Vol. {46}({4}), pp. {1103-1127} 
    article  
    Abstract: With the increasing use of multimedia technologies, image compression requires higher performance as well as new features. To address this need in the specific area of still image encoding, a new standard is currently being developed, the JPEG2000. It is not only intended to provide rate-distortion and subjective image quality performance superior to existing standards, but also to provide features and functionalities that current standards can either not address efficiently or in many cases cannot address at all. Lossless and lossy compression, embedded lossy to lossless coding, progressive transmission by pixel accuracy and by resolution, robustness to the presence of bit-errors and region-of-interest coding, are some representative features. It is interesting to note that JPEG2000 is being designed to address the requirements of a diversity of applications, e.g. Internet, color facsimile, printing, scanning, digital photography, remote sensing, mobile applications, medical imagery, digital library and E-commerce.
    BibTeX:
    @article{Christopoulos2000,
      author = {Christopoulos, C and Skodras, A and Ebrahimi, T},
      title = {The JPEG2000 still image coding system: An overview},
      journal = {IEEE TRANSACTIONS ON CONSUMER ELECTRONICS},
      year = {2000},
      volume = {46},
      number = {4},
      pages = {1103-1127}
    }
    
    Chu, Y., Rao, S., Seshan, S. & Zhang, H. A case for end system multicast {2002} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {20}({8}), pp. {1456-1471} 
    article DOI  
    Abstract: The conventional wisdom has been that Internet protocol (IP) is the natural protocol layer for implementing multicast related functionality. However, more than a decade after its initial proposal, IP multicast is still plagued with concerns pertaining to scalability, network management, deployment, and support for higher layer functionality such as error, flow, and congestion control. In this paper, we explore an alternative architecture that we term End System Multicast, where end systems implement all multicast related functionality including membership management and packet replication. This shifting of multicast support from routers to end systems has the potential to address most problems associated with IP multicast. However, the key concern is the performance penalty associated with such a model. In particular, End System Multicast introduces duplicate packets on physical links and incurs larger end-to-end delays than IP multicast. In this paper, we study these performance concerns in the context of the Narada protocol. In Narada, end systems self-organize into an overlay structure using a fully distributed protocol. Further, end systems attempt to optimize the efficiency of the overlay by adapting to network dynamics and by considering application level performance. We present details of Narada and evaluate it using both simulation and Internet experiments. Our results indicate that the performance penalties are low both from the application and the network perspectives. We believe the potential benefits of transferring multicast functionality from routers to end systems significantly outweigh the performance penalty incurred.
    BibTeX:
    @article{Chu2002,
      author = {Chu, YH and Rao, SG and Seshan, S and Zhang, H},
      title = {A case for end system multicast},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {2002},
      volume = {20},
      number = {8},
      pages = {1456-1471},
      doi = {{10.1109/JSAC.2002.803066}}
    }
    
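    Narada's two-step structure, first a mesh of unicast links among the members, then a per-source shortest-path tree over that mesh, can be sketched centrally with networkx, though the real protocol computes this in a fully distributed way and the latencies below are made up:

      import networkx as nx

      # Two-step structure of a Narada-like protocol, sketched centrally:
      # members maintain a mesh of measured unicast links, then each source
      # multicasts along its shortest-path tree over the mesh.

      mesh = nx.Graph()
      # (member, member, measured unicast latency in ms) -- invented numbers
      mesh.add_weighted_edges_from([
          ("A", "B", 10), ("B", "C", 5), ("A", "C", 25),
          ("C", "D", 8), ("B", "D", 30),
      ])

      source = "A"
      paths = nx.shortest_path(mesh, source=source, weight="weight")
      tree_edges = {tuple(sorted(p[i:i + 2])) for p in paths.values()
                    for i in range(len(p) - 1)}
      print("data from", source, "flows over:", sorted(tree_edges))
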
    Chu, Y., Rao, S. & Zhang, H. A case for end system multicast {2000}
    Vol. {28}({1}), PERFORMANCE EVALUATION REVIEW, SPECIAL ISSUE, VOL 28 NO 1, JUNE 2000 - ACM SIGMETRICS `2000, PROCEEDINGS, pp. {1-12} 
    inproceedings  
    Abstract: The conventional wisdom has been that IP is the natural protocol layer for implementing multicast related functionality. However, ten years after its initial proposal, IP Multicast is still plagued with concerns pertaining to scalability, network management, deployment and support for higher layer functionality such as error, flow and congestion control. In this paper, we explore an alternative architecture for small and sparse groups, where end systems implement all multicast related functionality including membership management and packet replication. We call such a scheme End System Multicast. This shifting of multicast support from routers to end systems has the potential to address most problems associated with IP Multicast. However, the key concern is the performance penalty associated with such a model. In particular, End System Multicast introduces duplicate packets on physical links and incurs larger end-to-end delay than IP Multicast. In this paper, we study this question in the context of the Narada protocol. In Narada, end systems self-organize into an overlay structure using a fully distributed protocol. In addition, Narada attempts to optimize the efficiency of the overlay based on end-to-end measurements. We present details of Narada and evaluate it using both simulation and Internet experiments. Preliminary results are encouraging. In most simulations and Internet experiments, the delay and bandwidth penalty are low. We believe the potential benefits of repartitioning multicast functionality between end systems and routers significantly outweigh the performance penalty incurred.
    BibTeX:
    @inproceedings{Chu2000,
      author = {Chu, YH and Rao, SG and Zhang, H},
      title = {A case for end system multicast},
      booktitle = {PERFORMANCE EVALUATION REVIEW, SPECIAL ISSUE, VOL 28 NO 1, JUNE 2000 - ACM SIGMETRICS `2000, PROCEEDINGS },
      year = {2000},
      volume = {28},
      number = {1},
      pages = {1-12},
      note = {International Conference on Measurement and Modeling of Computer Systems (ACM SIGMETRICS 2000), SANTA CLARA, CA, JUN 17-21, 2000}
    }
    
    Chuang, J. & Sollenberger, N. Beyond 3G: Wideband wireless data access based on OFDM and dynamic packet assignment {2000} IEEE COMMUNICATIONS MAGAZINE
    Vol. {38}({7}), pp. {78-87} 
    article  
    Abstract: The rapid growth of wireless voice subscribers, the growth of the Internet, and the increasing use of portable computing devices suggest that wireless Internet access will rise rapidly over the next few years. Rapid progress in digital and RF technology is making possible highly compact and integrated terminal devices, and the introduction of sophisticated wireless data software is making wireless Internet access more user-friendly and providing more value. Transmission rates are currently only about 10 kb/s for large cell systems. Third-generation wireless access such as WCDMA and the evolution of second-generation systems such as TDMA IS-136+, EDGE, and CDMA IS-95 will provide nominal bit rates of 50-384 kb/s in macrocellular systems. [1] This article discusses packet data transmission rates of 2-5 Mb/s in macrocellular environments and up to 10 Mb/s in microcellular and indoor environments as a complementary service to evolving second- and third-generation wireless systems. Dynamic packet assignment for high-efficiency resource management and packet admission; OFDM at the physical layer with interference suppression, space-time coding, and frequency diversity; as well as smart antennas to obtain good power and spectral efficiency are discussed in this proposal. Flexible allocation of both large and small resources also permits provisioning of services for different delay and throughput requirements.
    BibTeX:
    @article{Chuang2000,
      author = {Chuang, J and Sollenberger, N},
      title = {Beyond 3G: Wideband wireless data access based on OFDM and dynamic packet assignment},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {2000},
      volume = {38},
      number = {7},
      pages = {78-87}
    }
    
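    The physical-layer building block here is standard OFDM: data symbols ride on orthogonal subcarriers via an IFFT, with a cyclic prefix absorbing delay spread. A bare-bones numpy sketch over an ideal channel (the subcarrier count and prefix length are illustrative, not the article's system parameters):

      import numpy as np

      # Bare-bones OFDM: map bits to QPSK symbols on N subcarriers, go to the
      # time domain with an IFFT, and prepend a cyclic prefix.

      N, cp = 64, 16                                 # subcarriers, prefix length
      rng = np.random.default_rng(0)
      bits = rng.integers(0, 2, size=2 * N)
      qpsk = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

      time_symbol = np.fft.ifft(qpsk) * np.sqrt(N)   # one OFDM symbol
      tx = np.concatenate([time_symbol[-cp:], time_symbol])

      # Receiver: drop the prefix, FFT back, and recover the QPSK symbols.
      rx = np.fft.fft(tx[cp:]) / np.sqrt(N)
      print(np.allclose(rx, qpsk))                   # True over an ideal channel
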
    Chuang, S., Goel, A., McKeown, N. & Prabhakar, B. Matching output queueing with a combined input/output-queued switch {1999} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {17}({6}), pp. {1030-1039} 
    article  
    Abstract: The Internet is facing two problems simultaneously: there is a need for a faster switching/routing infrastructure and a need to introduce guaranteed qualities-of-service (QoS). Each problem can be solved independently: switches and routers can be made faster by using input-queued crossbars instead of shared memory systems; QoS can be provided using weighted fair queueing (WFQ)-based packet scheduling. Until now, however, the two solutions have been mutually exclusive: all of the work on WFQ-based scheduling algorithms has required that switches/routers use output-queueing or centralized shared memory. This paper demonstrates that a combined input/output-queueing (CIOQ) switch running twice as fast as an input-queued switch can provide precise emulation of a broad class of packet-scheduling algorithms, including WFQ and strict priorities. More precisely, we show that for an N x N switch, a ``speedup'' of 2 - 1/N is necessary, and a speedup of two is sufficient for this exact emulation. Perhaps most interestingly, this result holds for all traffic arrival patterns. On its own, the result is primarily a theoretical observation; it shows that it is possible to emulate purely OQ switches with CIOQ switches running at approximately twice the line rate. To make the result more practical, we introduce several scheduling algorithms that with a speedup of two can emulate an OQ switch. We focus our attention on the simplest of these algorithms, critical cells first (CCF), and consider its running time and implementation complexity. We conclude that additional techniques are required to make the scheduling algorithms implementable at a high speed and propose two specific strategies.
    BibTeX:
    @article{Chuang1999,
      author = {Chuang, ST and Goel, A and McKeown, N and Prabhakar, B},
      title = {Matching output queueing with a combined input/output-queued switch},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {1999},
      volume = {17},
      number = {6},
      pages = {1030-1039},
      note = {Infocom 99 Meeting, NEW YORK, NEW YORK, 1999}
    }
    
    Cimino, J., Socratous, S. & Clayton, P. Internet as clinical information-system - application development using the World-Wide-Web {1995} JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION
    Vol. {2}({5}), pp. {273-284} 
    article  
    Abstract: Clinical computing application development at Columbia-Presbyterian Medical Center has been limited by the lack of a flexible programming environment that supports multiple client user platforms. The World Wide Web offers a potential solution, with its multifunction servers, multiplatform clients, and use of standard protocols for displaying information. The authors are now using the Web, coupled with their own local clinical data server and vocabulary server, to carry out rapid prototype development of clinical information systems. They have developed one such prototype system that can be run on most popular computing platforms from anywhere on the Internet. The Web paradigm allows easy integration of clinical information with other local and Internet-based information sources. The Web also simplifies many aspects of application design; for example, it includes facilities for the use of encryption to meet the authors' security and confidentiality requirements. The prototype currently runs on only the Web server in the Department of Medical Informatics at Columbia University, but it could be run on other Web servers that access the authors' clinical data and vocabulary servers. It could also be adapted to access clinical information from other systems with similar server capabilities. This approach may be adaptable for use in developing institution-independent standards for data and application sharing.
    BibTeX:
    @article{CIMINO1995,
      author = {Cimino, JJ and Socratous, SA and Clayton, PD},
      title = {Internet as clinical information-system - application development using the World-Wide-Web},
      journal = {JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION},
      year = {1995},
      volume = {2},
      number = {5},
      pages = {273-284},
      note = {1995 Spring Congress of the American-Medical-Informatics-Association, BOSTON, MA, 1995}
    }
    
    Claffy, K., Braun, H. & Polyzos, G. A parameterizable methodology for Internet traffic flow profiling {1995} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {13}({8}), pp. {1481-1494} 
    article  
    Abstract: We present a parameterizable methodology for profiling Internet traffic flows at a variety of granularities. Our methodology differs from many previous studies that have concentrated on end-point definitions of flows in terms of state derived from observing the explicit opening and closing of TCP connections. Instead, our model defines flows based on traffic satisfying various temporal and spatial locality conditions, as observed at internal points of the network. This approach to flow characterization helps address some central problems in networking based on the Internet model. Among them are route caching, resource reservation at multiple service levels, usage based accounting, and the integration of IP traffic over an ATM fabric. We first define the parameter space and then concentrate on metrics characterizing both individual flows as well as the aggregate flow profile. We consider various granularities of the definition of a flow, such as by destination network, host-pair, or host and port quadruple. We include some measurements based on case studies we undertook, which yield significant insights into some aspects of Internet traffic, including demonstrating i) the brevity of a significant fraction of IP flows at a variety of traffic aggregation granularities, ii) that the number of host-pair IP flows is not significantly larger than the number of destination network flows, and iii) that schemes for caching traffic information could significantly benefit from using application information.
    BibTeX:
    @article{CLAFFY1995,
      author = {Claffy, KC and Braun, HW and Polyzos, GC},
      title = {A parameterizable methodology for Internet traffic flow profiling},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {1995},
      volume = {13},
      number = {8},
      pages = {1481-1494}
    }
    
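    The flow model is easy to make concrete: packets aggregate into flows keyed at a chosen granularity, and a flow ends after a fixed period of inactivity. A sketch with a 64-second timeout and two of the paper's granularities (the packet records below are invented):

      # Timeout-based flow profiling: packets aggregate into flows keyed at a
      # chosen granularity; a flow ends after TIMEOUT seconds of inactivity
      # (one parameterization of the paper's locality conditions).

      TIMEOUT = 64.0

      def key(pkt, granularity):
          if granularity == "host-pair":
              return (pkt["src"], pkt["dst"])
          return (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"])  # quadruple

      def profile(packets, granularity="host-pair"):
          flows, done = {}, []
          for pkt in packets:                  # packets sorted by timestamp
              k = key(pkt, granularity)
              f = flows.get(k)
              if f and pkt["t"] - f["last"] > TIMEOUT:
                  done.append(flows.pop(k))    # inactivity: flow has ended
                  f = None
              if f is None:
                  f = flows[k] = {"first": pkt["t"], "last": pkt["t"], "pkts": 0}
              f["last"], f["pkts"] = pkt["t"], f["pkts"] + 1
          return done + list(flows.values())

      pkts = [{"t": 0.0, "src": "a", "sport": 1, "dst": "b", "dport": 80},
              {"t": 1.0, "src": "a", "sport": 1, "dst": "b", "dport": 80},
              {"t": 200.0, "src": "a", "sport": 2, "dst": "b", "dport": 80}]
      print(profile(pkts, "host-pair"))
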
    Clark, D. & Fang, W. Explicit allocation of best-effort packet delivery service {1998} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {6}({4}), pp. {362-373} 
    article  
    Abstract: This paper presents the ``allocated-capacity'' framework for providing different levels of best-effort service in times of network congestion. The ``allocated-capacity'' framework (extensions to the Internet protocols and algorithms) can allocate bandwidth to different users in a controlled and predictable way during network congestion. The framework supports two complementary ways of controlling the bandwidth allocation: sender-based and receiver-based. In today's heterogeneous and commercial Internet the framework can serve as a basis for charging for usage and for more efficiently utilizing the network resources. We focus on algorithms for essential components of the framework: a differential dropping algorithm for network routers and a tagging algorithm for profile meters at the edge of the network for bulk-data transfers. We present simulation results to illustrate the effectiveness of the combined algorithms in controlling transmission control protocol (TCP) traffic to achieve certain targeted sending rates.
    BibTeX:
    @article{Clark1998,
      author = {Clark, DD and Fang, WJ},
      title = {Explicit allocation of best-effort packet delivery service},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {1998},
      volume = {6},
      number = {4},
      pages = {362-373}
    }
    
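    The edge half of the framework is a profile meter; a token bucket is one natural realization (the rate and depth below are illustrative): packets within the sender's contracted profile are tagged ``in'', the rest ``out'', and a congested interior router preferentially drops ``out'' packets first:

      # Token-bucket profile meter sketch: tags packets "in" while the sender
      # is within its service profile, "out" otherwise; the interior
      # differential dropper would discard "out" packets first.

      class TokenBucket:
          def __init__(self, rate_bps, depth_bits):
              self.rate, self.depth = rate_bps, depth_bits
              self.tokens, self.t = depth_bits, 0.0

          def tag(self, t, size_bits):
              # refill tokens for the elapsed time, capped at the bucket depth
              self.tokens = min(self.depth, self.tokens + (t - self.t) * self.rate)
              self.t = t
              if self.tokens >= size_bits:
                  self.tokens -= size_bits
                  return "in"
              return "out"

      meter = TokenBucket(rate_bps=1_000_000, depth_bits=16_000)
      for t in (0.0, 0.001, 0.002, 0.003):
          print(t, meter.tag(t, 12_000))    # 1500-byte packets back-to-back
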
    Clauset, A., Newman, M. & Moore, C. Finding community structure in very large networks {2004} PHYSICAL REVIEW E
    Vol. {70}({6, Part 2}) 
    article DOI  
    Abstract: The discovery and analysis of community structure in networks is a topic of considerable recent interest within the physics community, but most methods proposed so far are unsuitable for very large networks because of their computational cost. Here we present a hierarchical agglomeration algorithm for detecting community structure which is faster than many competing algorithms: its running time on a network with n vertices and m edges is O(md log n) where d is the depth of the dendrogram describing the community structure. Many real-world networks are sparse and hierarchical, with msimilar ton and dsimilar tolog n, in which case our algorithm runs in essentially linear time, O(n log(2) n). As an example of the application of this algorithm we use it to analyze a network of items for sale on the web site of a large on-line retailer, items in the network being linked if they are frequently purchased by the same buyer. The network has more than 400 000 vertices and 2x10(6) edges. We show that our algorithm can extract meaningful communities from this network, revealing large-scale patterns present in the purchasing habits of customers.
    BibTeX:
    @article{Clauset2004,
      author = {Clauset, A and Newman, MEJ and Moore, C},
      title = {Finding community structure in very large networks},
      journal = {PHYSICAL REVIEW E},
      year = {2004},
      volume = {70},
      number = {6, Part 2},
      doi = {{10.1103/PhysRevE.70.066111}}
    }
    
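    This greedy agglomeration is what NetworkX ships as greedy_modularity_communities, so the algorithm can be exercised directly; on a toy two-clique graph it recovers the obvious split:

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      # Fast greedy modularity agglomeration (the Clauset-Newman-Moore
      # algorithm) on a small test graph: two triangles joined by one bridge.

      G = nx.Graph()
      G.add_edges_from([(0, 1), (0, 2), (1, 2),     # clique one
                        (3, 4), (3, 5), (4, 5),     # clique two
                        (2, 3)])                    # single bridge
      for community in greedy_modularity_communities(G):
          print(sorted(community))                  # -> [0, 1, 2] and [3, 4, 5]
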
    Cleeland, C., Mendoza, T., Wang, X., Chou, C., Harle, M., Morrissey, M. & Engstrom, M. Assessing symptom distress in cancer patients - The M. D. Anderson Symptom Inventory {2000} CANCER
    Vol. {89}({7}), pp. {1634-1646} 
    article  
    Abstract: BACKGROUND. The purpose of this project was to develop the M. D. Anderson Symptom Inventory (MDASI), a brief measure of the severity and impact of cancer-related symptoms. METHODS. A list of symptoms was generated from symptom inventories and by panels of clinicians. Twenty-six symptoms and 6 interference items were rated by a validation sample of 527 outpatients, a sample of 30 inpatients from the blood and bone marrow transplantation service, and a cross-validation sample of 113 outpatients. Clinical judgment and statistical techniques were used to reduce the number of symptoms. Reliability, validity, and sensitivity of the MDASI were examined. RESULTS. Cluster analysis, best subset analysis, and clinical judgment reduced the number of symptoms to a ``core'' list of 13 that accounted for 64% of the variance in symptom distress. Factor analysis demonstrated a similar pattern in both outpatient samples, and two symptom factors and the interference scale were reliable. Expected differences in symptom pattern and severity were found between patients with ``good'' versus ``poor'' performance status and between patients in active therapy and patients who were seen for follow-up. Patients rated fatigue-related symptoms as the most severe. Groups of patients classified by disease or treatment had severe symptoms that were not on the ``core'' list. CONCLUSIONS. The core items of the MDASI accounted for the majority of symptom distress reported by cancer patients in active treatment and those who were followed after treatment. The MDASI should prove useful for symptom surveys, clinical trials, and patient monitoring, and its format should allow Internet or telephone administration. Cancer 2000;89:1634-46. (C) 2000 American Cancer Society.
    BibTeX:
    @article{Cleeland2000,
      author = {Cleeland, CS and Mendoza, TR and Wang, XS and Chou, C and Harle, MT and Morrissey, M and Engstrom, MC},
      title = {Assessing symptom distress in cancer patients - The M. D. Anderson Symptom Inventory},
      journal = {CANCER},
      year = {2000},
      volume = {89},
      number = {7},
      pages = {1634-1646},
      note = {9th World Congress on Pain, VIENNA, AUSTRIA, AUG 22-27, 1999}
    }
    
    Cline, R. & Haynes, K. Consumer health information seeking on the Internet: the state of the art {2001} HEALTH EDUCATION RESEARCH
    Vol. {16}({6}), pp. {671-692} 
    article  
    Abstract: Increasingly, consumers engage in health information seeking via the Internet. Taking a communication perspective, this review argues why public health professionals should be concerned about the topic, considers potential benefits, synthesizes quality concerns, identifies criteria for evaluating online health information and critiques the literature. More than 70 000 websites disseminate health information; in excess of 50 million people seek health information online, with likely consequences for the health care system. The Internet offers widespread access to health information, and the advantages of interactivity, information tailoring and anonymity. However, access is inequitable and use is hindered further by navigational challenges due to numerous design features (e.g. disorganization, technical language and lack of permanence). Increasingly, critics question the quality of online health information; limited research indicates that much is inaccurate. Meager information-evaluation skills add to consumers' vulnerability, and reinforce the need for quality standards and widespread criteria for evaluating health information. Extant literature can be characterized as speculative, comprised of basic `how to' presentations, with little empirical research. Future research needs to address the Internet as part of the larger health communication system and take advantage of incorporating extant communication concepts. Not only should research focus on the `net-gap' and information quality, it also should address the inherently communicative and transactional quality of Internet use. Both interpersonal and mass communication concepts open avenues for investigation and understanding the influence of the Internet on health beliefs and behaviors, health care, medical outcomes, and the health care system.
    BibTeX:
    @article{Cline2001,
      author = {Cline, RJW and Haynes, KM},
      title = {Consumer health information seeking on the Internet: the state of the art},
      journal = {HEALTH EDUCATION RESEARCH},
      year = {2001},
      volume = {16},
      number = {6},
      pages = {671-692}
    }
    
    Cohen, R., Erez, K., ben Avraham, D. & Havlin, S. Breakdown of the internet under intentional attack {2001} PHYSICAL REVIEW LETTERS
    Vol. {86}({16}), pp. {3682-3685} 
    article  
    Abstract: We study the tolerance of random networks to intentional attack, whereby a fraction p of the most connected sites is removed. We focus on scale-free networks, having connectivity distribution P(k) ~ k^(-alpha), and use percolation theory to study analytically and numerically the critical fraction p_c needed for the disintegration of the network, as well as the size of the largest connected cluster. We find that even networks with alpha <= 3, known to be resilient to random removal of sites, are sensitive to intentional attack. We also argue that, near criticality, the average distance between sites in the spanning (largest) cluster scales with its mass, M, as M^(1/2) rather than as log_k M, as expected for random networks away from criticality.
    BibTeX:
    @article{Cohen2001,
      author = {Cohen, R and Erez, K and ben-Avraham, D and Havlin, S},
      title = {Breakdown of the internet under intentional attack},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2001},
      volume = {86},
      number = {16},
      pages = {3682-3685}
    }
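
    The attack scenario in this abstract is straightforward to reproduce numerically. Below is a minimal Python sketch, assuming networkx and a Barabasi-Albert graph as a stand-in scale-free network (the paper's analysis covers general scale-free ensembles, and every parameter here is illustrative): remove the most-connected fraction p of sites and track the surviving giant component.

      import networkx as nx

      def giant_fraction_after_attack(n=10000, m=2, p=0.05, seed=1):
          # Build a scale-free stand-in network, then delete the int(p*n) highest-degree sites.
          G = nx.barabasi_albert_graph(n, m, seed=seed)
          hubs_first = sorted(G.nodes, key=G.degree, reverse=True)
          G.remove_nodes_from(hubs_first[:int(p * n)])
          giant = max(nx.connected_components(G), key=len, default=set())
          return len(giant) / n

      for p in (0.0, 0.01, 0.05, 0.10, 0.25):
          print(f"p = {p:.2f}: giant component fraction = {giant_fraction_after_attack(p=p):.3f}")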
    
    Cohen, R., Erez, K., ben Avraham, D. & Havlin, S. Resilience of the Internet to random breakdowns {2000} PHYSICAL REVIEW LETTERS
    Vol. {85}({21}), pp. {4626-4628} 
    article  
    Abstract: A common property of many large networks, including the Internet, is that the connectivity of the various nodes follows a scale-free power-law distribution, P(k) = c k^(-alpha). We study the stability of such networks with respect to crashes, such as random removal of sites. Our approach, based on percolation theory, leads to a general condition for the critical fraction of nodes, p_c, that needs to be removed before the network disintegrates. We show analytically and numerically that for alpha <= 3 the transition never takes place, unless the network is finite. In the special case of the physical structure of the Internet (alpha ~ 2.5), we find that it is impressively robust, with p_c > 0.99.
    BibTeX:
    @article{Cohen2000,
      author = {Cohen, R and Erez, K and ben-Avraham, D and Havlin, S},
      title = {Resilience of the Internet to random breakdowns},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2000},
      volume = {85},
      number = {21},
      pages = {4626-4628}
    }
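
    The condition behind this result is the Molloy-Reed criterion: a giant component survives while kappa = <k^2>/<k> > 2, which gives a critical removal fraction p_c = 1 - 1/(kappa - 1). A small numeric sketch, with illustrative degree cutoffs that are assumptions rather than the paper's:

      import numpy as np

      def critical_fraction(alpha, k_min=1, k_max=2000):
          # Discrete power-law degree distribution P(k) ~ k^(-alpha) on [k_min, k_max].
          k = np.arange(k_min, k_max + 1, dtype=float)
          pk = k ** -alpha
          pk /= pk.sum()
          kappa = (pk * k ** 2).sum() / (pk * k).sum()   # <k^2>/<k>
          return 1.0 - 1.0 / (kappa - 1.0)               # Molloy-Reed threshold

      for alpha in (2.5, 3.0, 3.5):
          print(f"alpha = {alpha}: p_c ~ {critical_fraction(alpha):.4f}")

    At alpha = 2.5 the threshold climbs toward 1 as the upper cutoff grows, broadly in line with the abstract's p_c > 0.99 for the Internet.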
    
    Cohen, R. & Havlin, S. Scale-free networks are ultrasmall {2003} PHYSICAL REVIEW LETTERS
    Vol. {90}({5}) 
    article DOI  
    Abstract: We study the diameter, or the mean distance between sites, in a scale-free network having N sites and degree distribution p(k) ~ k^(-lambda), i.e., the probability of having k links outgoing from a site. In contrast to the diameter of regular random networks or small-world networks, which is known to be d ~ ln N, we show, using analytical arguments, that scale-free networks with 2 < lambda < 3 have a much smaller diameter, behaving as d ~ ln ln N. For lambda = 3 the diameter behaves as d ~ ln N / ln ln N, while for lambda > 3, d ~ ln N. We also show that, for any lambda > 2, one can construct a deterministic scale-free network with d ~ ln ln N, which is the lowest possible diameter.
    BibTeX:
    @article{Cohen2003,
      author = {Cohen, R and Havlin, S},
      title = {Scale-free networks are ultrasmall},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2003},
      volume = {90},
      number = {5},
      doi = {{10.1103/PhysRevLett.90.058701}}
    }
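
    A rough numerical illustration of the d ~ ln ln N regime is possible with a configuration-model graph at lambda = 2.5. Everything below is an assumption for the sketch (networkx, the n^(1/2) degree cutoff, the small sizes); the exact mean-distance computation is O(N*E) in pure Python, so N is kept modest and the numbers are only indicative:

      import math
      import random
      import networkx as nx

      def powerlaw_sequence(n, lam=2.5, k_min=2, seed=0):
          rng = random.Random(seed)
          ks = range(k_min, max(int(n ** 0.5), k_min + 1) + 1)
          seq = rng.choices(list(ks), weights=[k ** -lam for k in ks], k=n)
          if sum(seq) % 2:
              seq[0] += 1                      # configuration model needs an even degree sum
          return seq

      for n in (500, 2000, 8000):              # n = 8000 takes a while in pure Python
          G = nx.Graph(nx.configuration_model(powerlaw_sequence(n), seed=0))
          G.remove_edges_from(nx.selfloop_edges(G))
          giant = G.subgraph(max(nx.connected_components(G), key=len))
          d = nx.average_shortest_path_length(giant)
          print(f"N = {n}: <d> = {d:.2f}, ln ln N = {math.log(math.log(n)):.2f}")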
    
    Cohen, R., Havlin, S. & ben Avraham, D. Efficient immunization strategies for computer networks and populations {2003} PHYSICAL REVIEW LETTERS
    Vol. {91}({24}) 
    article DOI  
    Abstract: We present an effective immunization strategy for computer networks and populations with broad and, in particular, scale-free degree distributions. The proposed strategy, acquaintance immunization, calls for the immunization of random acquaintances of random nodes (individuals). Unlike targeted immunization strategies, it requires no knowledge of the node degrees or any other global knowledge. We study analytically the critical threshold for complete immunization. We also study the strategy with respect to the susceptible-infected-removed epidemiological model. We show that the immunization threshold is dramatically reduced with the suggested strategy, for all studied cases.
    BibTeX:
    @article{Cohen2003a,
      author = {Cohen, R and Havlin, S and ben-Avraham, D},
      title = {Efficient immunization strategies for computer networks and populations},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2003},
      volume = {91},
      number = {24},
      doi = {{10.1103/PhysRevLett.91.247901}}
    }
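
    The acquaintance-immunization rule itself fits in a few lines. A minimal sketch, assuming networkx and an illustrative Barabasi-Albert substrate; it checks only that the rule preferentially reaches well-connected nodes without using any global degree information:

      import random
      import networkx as nx

      def acquaintance_immunize(G, fraction=0.2, seed=0):
          # Repeatedly pick a random node, then immunize one random acquaintance of it.
          rng = random.Random(seed)
          nodes = list(G.nodes)
          immune = set()
          while len(immune) < fraction * len(nodes):
              neighbours = list(G.neighbors(rng.choice(nodes)))
              if neighbours:
                  immune.add(rng.choice(neighbours))
          return immune

      G = nx.barabasi_albert_graph(5000, 3, seed=1)
      immune = acquaintance_immunize(G)
      overall = sum(d for _, d in G.degree()) / G.number_of_nodes()
      reached = sum(G.degree(v) for v in immune) / len(immune)
      print(f"mean degree: {overall:.2f} overall vs {reached:.2f} among immunized")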
    
    Collins, A., Frezal, J., Teague, J. & Morton, N. A metric map of humans: 23,500 loci in 850 bands {1996} PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
    Vol. {93}({25}), pp. {14771-14775} 
    article  
    Abstract: High-resolution maps integrated with the enhanced location data base software (LDB+) give improved estimates of genetic parameters and reveal characteristics of cytogenetic bands. Chiasma interference is intermediate between Kosambi and Carter-Falconer levels, as in Drosophila and the mouse. The autosomal genetic map is 2832 and 4348 centimorgans in males and females, respectively. Telomeric T-bands are strikingly associated with male recombination and gene density. Position and centromeric heterochromatin have large effects, but nontelomeric R-bands are not significantly different from G-bands. Several possible reasons are discussed. These regularities validate the maps, despite their high resolution and inevitable local errors. No other approach has been demonstrated to integrate such a large number of loci, which are increasing at about 45% per year. The maps and the data and software from which they are constructed are available through the Internet (http://cedar.genetics.soton.ac.uk/publichtml). Successive versions of this location data base may also be accessed on CD-ROM.
    BibTeX:
    @article{Collins1996,
      author = {Collins, A and Frezal, J and Teague, J and Morton, NE},
      title = {A metric map of humans: 23,500 loci in 850 bands},
      journal = {PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA},
      year = {1996},
      volume = {93},
      number = {25},
      pages = {14771-14775}
    }
    
    Collins, D., Zijdenbos, A., Kollokian, V., Sled, J., Kabani, N., Holmes, C. & Evans, A. Design and construction of a realistic digital brain phantom {1998} IEEE TRANSACTIONS ON MEDICAL IMAGING
    Vol. {17}({3}), pp. {463-468} 
    article  
    Abstract: After conception and implementation of any new medical image processing algorithm, validation is an important step to ensure that the procedure fulfills all requirements set forth at the initial design stage. Although the algorithm must be evaluated on real data, a comprehensive validation requires the additional use of simulated data since it is impossible to establish ground truth with in vivo data. Experiments with simulated data permit controlled evaluation over a wide range of conditions (e.g., different levels of noise, contrast, intensity artefacts, or geometric distortion). Such considerations have become increasingly important with the rapid growth of neuroimaging, i.e., computational analysis of brain structure and function using brain scanning methods such as positron emission tomography and magnetic resonance imaging. Since simple objects such as ellipsoids or parallelepipeds do not reflect the complexity of natural brain anatomy, we present the design and creation of a realistic, high-resolution, digital, volumetric phantom of the human brain. This three-dimensional digital brain phantom is made up of ten volumetric data sets that define the spatial distribution for different tissues (e.g., grey matter, white matter, muscle, skin, etc.), where voxel intensity is proportional to the fraction of tissue within the voxel. The digital brain phantom can be used to simulate tomographic images of the head. Since the contribution of each tissue type to each voxel in the brain phantom is known, it can be used as the gold standard to test analysis algorithms such as classification procedures which seek to identify the tissue ``type'' of each image voxel. Furthermore, since the same anatomical phantom may be used to drive simulators for different modalities, it is the ideal tool to test intermodality registration algorithms. The brain phantom and simulated MR images have been made publicly available on the Internet (http://www.bic.mni.mcgill.ca/brainweb).
    BibTeX:
    @article{Collins1998,
      author = {Collins, DL and Zijdenbos, AP and Kollokian, V and Sled, JG and Kabani, NJ and Holmes, CJ and Evans, AC},
      title = {Design and construction of a realistic digital brain phantom},
      journal = {IEEE TRANSACTIONS ON MEDICAL IMAGING},
      year = {1998},
      volume = {17},
      number = {3},
      pages = {463-468}
    }
    
    Compton, W. & Volkow, N. Major increases in opioid analgesic abuse in the United States: Concerns and strategies {2006} DRUG AND ALCOHOL DEPENDENCE
    Vol. {81}({2}), pp. {103-107} 
    article DOI  
    Abstract: The problem of abuse of and addiction to opioid analgesics has emerged as a major issue for the United States in the past decade and has worsened over the past few years. The increases in abuse of these opioids appear to reflect, in part, changes in medication prescribing practices and changes in drug formulations, as well as relatively easy access via the internet. Though the use of opioid analgesics for the treatment of acute pain appears to be generally benign, long-term administration of opioids has been associated with clinically meaningful rates of abuse or addiction. Important areas of research to help with the problem of opioid analgesic abuse include the identification of clinical practices that minimize the risks of addiction, the development of guidelines for early detection and management of addiction, the development of opioid analgesics that minimize the risks for abuse, and the development of safe and effective non-opioid analgesics. With high rates of abuse of opiate analgesics among teenagers in the United States, a particularly urgent priority is the investigation of best practices for treating pain in adolescents as well as the development of prevention strategies to reduce diversion and abuse. (c) 2005 Elsevier Ireland Ltd. All rights reserved.
    BibTeX:
    @article{Compton2006,
      author = {Compton, WM and Volkow, ND},
      title = {Major increases in opioid analgesic abuse in the United States: Concerns and strategies},
      journal = {DRUG AND ALCOHOL DEPENDENCE},
      year = {2006},
      volume = {81},
      number = {2},
      pages = {103-107},
      doi = {{10.1016/j.drugalcdep.2005.05.009}}
    }
    
    Cook, C., Heath, F. & Thompson, R. A meta-analysis of response rates in Web- or internet-based surveys {2000} EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT
    Vol. {60}({6}), pp. {821-836} 
    article  
    Abstract: Response representativeness is more important than response rate in survey research. However, response rate is important if it bears on representativeness. The present meta-analysis explores factors associated with higher response rates in electronic surveys reported in both published and unpublished research. The number of contacts, personalized contacts, and precontacts are the factors most associated with higher response rates in the Web studies that are analyzed.
    BibTeX:
    @article{Cook2000,
      author = {Cook, C and Heath, F and Thompson, RL},
      title = {A meta-analysis of response rates in Web- or internet-based surveys},
      journal = {EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT},
      year = {2000},
      volume = {60},
      number = {6},
      pages = {821-836}
    }
    
    Corpet, D. & Pierre, F. Point: From animal models to prevention of colon cancer. Systematic review of chemoprevention in Min mice and choice of the model system {2003} CANCER EPIDEMIOLOGY BIOMARKERS & PREVENTION
    Vol. {12}({5}), pp. {391-400} 
    article  
    Abstract: The Apc(Min/+) mouse model and the azoxymethane (AOM) rat model are the main animal models used to study the effect of dietary agents on colorectal cancer. We reviewed recently the potency of chemopreventive agents in the AOM rat model (D. E. Corpet and S. Tache, Nutr. Cancer, 43: 1-21, 2002). Here we add the results of a systematic review of the effect of dietary and chemopreventive agents on the tumor yield in Min mice. The review is based on the results of 179 studies from 71 articles and is also displayed on the internet (http://corpet.net/min). We compared the efficacy of agents in the Min mouse model and the AOM rat model, and found that they were correlated (r = 0.66; P < 0.001), although some agents that afford strong protection in the AOM rat and the Min mouse small bowel increase the tumor yield in the large bowel of mutant mice. The agents included piroxicam, sulindac, celecoxib, difluoromethylornithine, and polyethylene glycol. The reason for this discrepancy is not known. We also compare the results of rodent studies with those of clinical intervention studies of polyp recurrence. We found that the effect of most of the agents tested was consistent across the animal and clinical models. Our point is thus: rodent models can provide guidance in the selection of prevention approaches to human colon cancer; in particular, they suggest that polyethylene glycol, hesperidin, protease inhibitor, sphingomyelin, physical exercise, epidermal growth factor receptor kinase inhibitor, (+)-catechin, resveratrol, fish oil, curcumin, caffeate, and thiosulfonate are likely important preventive agents.
    BibTeX:
    @article{Corpet2003,
      author = {Corpet, DE and Pierre, F},
      title = {Point: From animal models to prevention of colon cancer. Systematic review of chemoprevention in Min mice and choice of the model system},
      journal = {CANCER EPIDEMIOLOGY BIOMARKERS & PREVENTION},
      year = {2003},
      volume = {12},
      number = {5},
      pages = {391-400}
    }
    
    Costa, L.D.F., Rodrigues, F.A., Travieso, G. & Boas, P.R.V. Characterization of complex networks: A survey of measurements {2007} ADVANCES IN PHYSICS
    Vol. {56}({1}), pp. {167-242} 
    article DOI  
    Abstract: Each complex network (or class of networks) presents specific topological features which characterize its connectivity and highly influence the dynamics of processes executed on the network. The analysis, discrimination, and synthesis of complex networks therefore rely on the use of measurements capable of expressing the most relevant topological features. This article presents a survey of such measurements. It includes general considerations about complex network characterization, a brief review of the principal models, and the presentation of the main existing measurements. Important related issues covered in this work comprise the representation of the evolution of complex networks in terms of trajectories in several measurement spaces, the analysis of the correlations between some of the most traditional measurements, perturbation analysis, as well as the use of multivariate statistics for feature selection and network classification. Depending on the network and the analysis task one has in mind, a specific set of features may be chosen. It is hoped that the present survey will help the proper application and interpretation of measurements.
    BibTeX:
    @article{Costa2007,
      author = {Costa, L. Da F. and Rodrigues, F. A. and Travieso, G. and Boas, P. R. Villas},
      title = {Characterization of complex networks: A survey of measurements},
      journal = {ADVANCES IN PHYSICS},
      year = {2007},
      volume = {56},
      number = {1},
      pages = {167-242},
      doi = {{10.1080/00018730601170527}}
    }
    
    Cowan, C., Pu, C., Maier, D., Hinton, H., Walpole, J., Bakke, P., Beattie, S., Grier, A., Wagle, P. & Zhang, Q. StackGuard: Automatic adaptive detection and prevention of buffer-overflow attacks {1998} PROCEEDINGS OF THE SEVENTH USENIX SECURITY SYMPOSIUM, pp. {63-77}  inproceedings  
    Abstract: This paper presents a systematic solution to the persistent problem of buffer overflow attacks. Buffer overflow attacks gained notoriety in 1988 as part of the Morris Worm incident on the Internet. While it is fairly simple to fix individual buffer overflow vulnerabilities, buffer overflow attacks continue to this day. Hundreds of attacks have been discovered, and while most of the obvious vulnerabilities have now been patched, more sophisticated buffer overflow attacks continue to emerge. We describe StackGuard: a simple compiler technique that virtually eliminates buffer overflow vulnerabilities with only modest performance penalties. Privileged programs that are recompiled with the StackGuard compiler extension no longer yield control to the attacker, but rather enter a fail-safe state. These programs require no source code changes at all, and are binary-compatible with existing operating systems and libraries. We describe the compiler technique (a simple patch to gcc), as well as a set of variations on the technique that trade off between penetration resistance and performance. We present experimental results of both the penetration resistance and the performance impact of this technique.
    BibTeX:
    @inproceedings{Cowan1998,
      author = {Cowan, C and Pu, C and Maier, D and Hinton, H and Walpole, J and Bakke, P and Beattie, S and Grier, A and Wagle, P and Zhang, Q},
      title = {StackGuard: Automatic adaptive detection and prevention of buffer-overflow attacks},
      booktitle = {PROCEEDINGS OF THE SEVENTH USENIX SECURITY SYMPOSIUM},
      year = {1998},
      pages = {63-77},
      note = {7th USENIX Security Symposium, SAN ANTONIO, TX, JAN 26-29, 1998}
    }
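
    The canary idea is concrete enough to sketch, though only as a language-level analogy: StackGuard itself emits machine-level checks from the compiler, whereas the toy below (Python, for consistency with the other sketches; every name in it is hypothetical) just places a random word after a buffer and verifies it before "returning".

      import os

      CANARY = os.urandom(8)

      def make_frame(buf_size=16):
          # Layout analogy: [ buffer | canary | saved return address ]
          return bytearray(buf_size) + bytearray(CANARY) + bytearray(b"RETADDR!")

      def unsafe_write(frame, data):
          frame[:len(data)] = data            # no bounds check, like strcpy()

      def check_and_return(frame, buf_size=16):
          if bytes(frame[buf_size:buf_size + 8]) != CANARY:
              raise SystemExit("stack smashing detected: entering fail-safe state")
          return "normal return"

      frame = make_frame()
      unsafe_write(frame, b"A" * 16)          # fits: canary intact
      print(check_and_return(frame))
      unsafe_write(frame, b"A" * 32)          # overflows: canary clobbered
      print(check_and_return(frame))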
    
    Cramer, W., Kicklighter, D., Bondeau, A., Moore, B., Churkina, G., Nemry, B., Ruimy, A., Schloss, A. & Participants Potsdam NPP Model Intercomparison Comparing global models of terrestrial net primary productivity (NPP): overview and key results {1999} GLOBAL CHANGE BIOLOGY
    Vol. {5}({Suppl. 1}), pp. {1-15} 
    article  
    Abstract: Seventeen global models of terrestrial biogeochemistry were compared with respect to annual and seasonal fluxes of net primary productivity (NPP) for the land biosphere. The comparison, sponsored by IGBP-GAIM/DIS/GCTE, used standardized input variables wherever possible and was carried out through two international workshops and over the Internet. The models differed widely in complexity and original purpose, but could be grouped in three major categories: satellite-based models that use data from the NOAA/AVHRR sensor as their major input stream (CASA, GLO-PEM, SDBM, SIB2 and TURC), models that simulate carbon fluxes using a prescribed vegetation structure (BIOME-BGC, CARAIB 2.1, CENTURY 4.0, FBM 2.2, HRBM 3.0, KGBM, PLAI 0.2, SILVAN 2.2 and TEM 4.0), and models that simulate both vegetation structure and carbon fluxes (BIOME3, DOLY and HYBRID 3.0). The simulations resulted in a range of total NPP values (44.4-66.3 Pg C year(-1)), after removal of two outliers (which produced extreme results as artefacts due to the comparison). The broad global pattern of NPP and the relationship of annual NPP to the major climatic variables coincided in most areas. Differences could not be attributed to the fundamental modelling strategies, with the exception that nutrient constraints generally produced lower NPP. Regional and global NPP were sensitive to the simulation method for the water balance. Seasonal variation among models was high, both globally and locally, providing several indications for specific deficiencies in some models.
    BibTeX:
    @article{Cramer1999,
      author = {Cramer, W and Kicklighter, DW and Bondeau, A and Moore, B and Churkina, G and Nemry, B and Ruimy, A and Schloss, AL and Participants Potsdam NPP Model Intercomparison},
      title = {Comparing global models of terrestrial net primary productivity (NPP): overview and key results},
      journal = {GLOBAL CHANGE BIOLOGY},
      year = {1999},
      volume = {5},
      number = {Suppl. 1},
      pages = {1-15}
    }
    
    Crovella, M. & Bestavros, A. Self-similarity in World Wide Web traffic: Evidence and possible causes {1997} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {5}({6}), pp. {835-846} 
    article  
    Abstract: Recently, the notion of self-similarity has been shown to apply to wide-area and local-area network traffic. In this paper, we show evidence that the subset of network traffic that is due to World Wide Web (WWW) transfers can show characteristics that are consistent with self-similarity, and we present a hypothesized explanation for that self-similarity. Using a set of traces of actual user executions of NCSA Mosaic, we examine the dependence structure of WWW traffic. First, we show evidence that WWW traffic exhibits behavior that is consistent with self-similar traffic models. Then we show that the self-similarity in such traffic can be explained based on the underlying distributions of WWW document sizes, the effects of caching and user preference in file transfer, the effect of user ``think time,'' and the superimposition of many such transfers in a local-area network. To do this, we rely on empirically measured distributions both from client traces and from data independently collected at WWW servers.
    BibTeX:
    @article{Crovella1997,
      author = {Crovella, ME and Bestavros, A},
      title = {Self-similarity in World Wide Web traffic: Evidence and possible causes},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {1997},
      volume = {5},
      number = {6},
      pages = {835-846}
    }
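
    Self-similarity of the kind reported here is commonly checked with the aggregated-variance (variance-time) method: for a self-similar series, the variance of the m-aggregated series scales as m^(2H-2), so the log-log slope yields the Hurst parameter H. A sketch of that standard estimator, run on i.i.d. noise as a deliberately plain stand-in (an assumption chosen so the expected answer, H of about 0.5, is known; long-range-dependent traffic traces give H > 0.5):

      import numpy as np

      def hurst_aggregated_variance(x, block_sizes=(1, 2, 4, 8, 16, 32, 64)):
          logs_m, logs_v = [], []
          for m in block_sizes:
              k = len(x) // m
              blocks = x[:k * m].reshape(k, m).mean(axis=1)   # m-aggregated series
              logs_m.append(np.log(m))
              logs_v.append(np.log(blocks.var()))
          slope, _ = np.polyfit(logs_m, logs_v, 1)            # slope = 2H - 2
          return 1 + slope / 2

      x = np.random.default_rng(0).standard_normal(100000)
      print(f"H ~ {hurst_aggregated_variance(x):.2f}")        # expect about 0.5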
    
    Crucitti, P., Latora, V. & Marchiori, M. Model for cascading failures in complex networks {2004} PHYSICAL REVIEW E
    Vol. {69}({4, Part 2}) 
    article DOI  
    Abstract: Large but rare cascades triggered by small initial shocks are present in most of the infrastructure networks. Here we present a simple model for cascading failures based on the dynamical redistribution of the flow on the network. We show that the breakdown of a single node is sufficient to collapse the efficiency of the entire system if the node is among the ones with the largest load. This is particularly important for real-world networks with a highly heterogeneous distribution of loads, such as the Internet and electrical power grids.
    BibTeX:
    @article{Crucitti2004,
      author = {Crucitti, P and Latora, V and Marchiori, M},
      title = {Model for cascading failures in complex networks},
      journal = {PHYSICAL REVIEW E},
      year = {2004},
      volume = {69},
      number = {4, Part 2},
      doi = {{10.1103/PhysRevE.69.045104}}
    }
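
    The qualitative mechanism can be conveyed by a simplified load-capacity cascade. The sketch below is not the paper's efficiency-based redistribution model: it substitutes betweenness centrality as the load measure and a capacity of (1 + alpha) times the initial load, with networkx and illustrative parameters throughout.

      import networkx as nx

      def cascade(G, alpha=0.3):
          # Capacity is proportional to each node's initial (betweenness) load.
          load = nx.betweenness_centrality(G, normalized=False)
          capacity = {v: (1 + alpha) * load[v] for v in G}
          H = G.copy()
          H.remove_node(max(load, key=load.get))        # break the most loaded node
          while True:
              load = nx.betweenness_centrality(H, normalized=False)
              failed = [v for v in H if load[v] > capacity[v]]
              if not failed:
                  return H
              H.remove_nodes_from(failed)

      G = nx.barabasi_albert_graph(300, 2, seed=7)
      H = cascade(G)
      print(f"{H.number_of_nodes()} of {G.number_of_nodes()} nodes survive the cascade")

    With small alpha (little spare capacity) the failure of a single highly loaded node can propagate; raising alpha stops the cascade early.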
    
    Cuff, J., Clamp, M., Siddiqui, A., Finlay, M. & Barton, G. JPred: a consensus secondary structure prediction server {1998} BIOINFORMATICS
    Vol. {14}({10}), pp. {892-893} 
    article  
    Abstract: An interactive protein secondary structure prediction Internet server is presented. The server allows a single sequence or multiple alignment to be submitted, and returns predictions from six secondary structure prediction algorithms that exploit evolutionary information from multiple sequences. A consensus prediction is also returned which improves the average Q(3) accuracy of prediction by 1% to 72.9%. The server simplifies the use of current prediction algorithms and allows conservation patterns important to structure and function to be identified.
    BibTeX:
    @article{Cuff1998,
      author = {Cuff, JA and Clamp, ME and Siddiqui, AS and Finlay, M and Barton, GJ},
      title = {JPred: a consensus secondary structure prediction server},
      journal = {BIOINFORMATICS},
      year = {1998},
      volume = {14},
      number = {10},
      pages = {892-893}
    }
    
    CUNTO, W., MENDOZA, C., OCHSENBEIN, E. & ZEIPPEN, C. TOPBASE AT THE CDS {1993} ASTRONOMY AND ASTROPHYSICS
    Vol. {275}({1}), pp. {L5-L8} 
    article  
    Abstract: TOPbase, the Opacity Project atomic database, has been set up at the Centre de Donnees Astronomiques de Strasbourg (CDS), France, for general access via Internet. This database contains accurately calculated energy levels, f-values and photoionisation cross sections for astrophysically abundant ions. We briefly describe the physical model and computational method used, the atomic data specifications, the database management system and the network access arrangements. The new facility should be of value to the astronomical community.
    BibTeX:
    @article{CUNTO1993,
      author = {CUNTO, W and MENDOZA, C and OCHSENBEIN, E and ZEIPPEN, CJ},
      title = {TOPBASE AT THE CDS},
      journal = {ASTRONOMY AND ASTROPHYSICS},
      year = {1993},
      volume = {275},
      number = {1},
      pages = {L5-L8}
    }
    
    Davenport, T., De Long, D. & Beers, M. Successful knowledge management projects {1998} SLOAN MANAGEMENT REVIEW
    Vol. {39}({2}), pp. {43+} 
    article  
    Abstract: In a study of thirty-one knowledge management projects in twenty-four companies, the authors examine the differences and similarities of the projects, from which they develop a typology. All the projects had someone responsible for the initiative, a commitment of human and capital resources, and four similar kinds of objectives: (1) they created repositories by storing knowledge and making it easily available to users; (2) they provided access to knowledge and facilitated its transfer; (3) they established an environment that encourages the creation, transfer and use of knowledge; and (4) they managed knowledge as an asset on the balance sheet. The authors identify eight factors that seem to characterize a successful project: 1. The project involves money saved or earned, such as the Dow Chemical project that better managed company patents. 2. The project uses a broad infrastructure of both technology and organization. A technology infrastructure includes common technologies for desktop computing and communications. An organizational infrastructure establishes roles for people and groups to serve as resources for particular projects. 3. The project has a balanced structure that, while flexible and evolutionary, still makes knowledge easy to access. 4. Within the organization, people are positive about creating, using, and sharing knowledge. 5. The purpose of the project is clear, and the language that knowledge managers use in describing it is framed in terms common to the company's culture. 6. The project motivates people to create, share, and use knowledge (for example, giving awards to the top ``knowledge sharers''). 7. There are many ways to transfer knowledge, such as the Internet, Lotus Notes, and global communications systems, but also including face-to-face communication. 8. The project has senior managers' support and commitment. An organization's knowledge-oriented culture, senior managers committed to the ``knowledge business,'' a sense of how the customer will use the knowledge, and the human factors involved in creating knowledge are most important to effective knowledge management.
    BibTeX:
    @article{Davenport1998,
      author = {Davenport, TH and De Long, DW and Beers, MC},
      title = {Successful knowledge management projects},
      journal = {SLOAN MANAGEMENT REVIEW},
      year = {1998},
      volume = {39},
      number = {2},
      pages = {43+}
    }
    
    Degeratu, A., Rangaswamy, A. & Wu, J. Consumer choice behavior in online and traditional supermarkets: The effects of brand name, price, and other search attributes {2000} INTERNATIONAL JOURNAL OF RESEARCH IN MARKETING
    Vol. {17}({1}), pp. {55-78} 
    article  
    Abstract: Are brand names more valuable online or in traditional supermarkets? Does the increasing availability of comparative price information online make consumers more price-sensitive? We address these and related questions by first conceptualizing how different store environments (online and traditional stores) can differentially affect consumer choices. We use the liquid detergent, soft margarine spread, and paper towel categories to test our hypotheses. Our hypotheses and the empirical results from our choice models indicate that: (1) Brand names become more important online in some categories but not in others, depending on the extent of information available to consumers - brand names are more valuable when information on fewer attributes is available online. (2) Sensory search attributes, particularly visual cues about the product (e.g., paper towel design), have a lower impact on choices online, and factual information (i.e., non-sensory attributes, such as the fat content of margarine) has a higher impact on choices online. (3) Price sensitivity is higher online, but this is due to online promotions being stronger signals of price discounts. The combined effect of price and promotion on choice is weaker online than offline. (C) 2000 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Degeratu2000,
      author = {Degeratu, AM and Rangaswamy, A and Wu, JN},
      title = {Consumer choice behavior in online and traditional supermarkets: The effects of brand name, price, and other search attributes},
      journal = {INTERNATIONAL JOURNAL OF RESEARCH IN MARKETING},
      year = {2000},
      volume = {17},
      number = {1},
      pages = {55-78}
    }
    
    Dellarocas, C. The digitization of word of mouth: Promise and challenges of online feedback mechanisms {2003} MANAGEMENT SCIENCE
    Vol. {49}({10}), pp. {1407-1424} 
    article  
    Abstract: Online feedback mechanisms harness the bidirectional communication capabilities of the Internet to engineer large-scale, word-of-mouth networks. Best known so far as a technology for building trust and fostering cooperation in online marketplaces, such as eBay, these mechanisms are poised to have a much wider impact on organizations. Their growing popularity has potentially important implications for a wide range of management activities such as brand building, customer acquisition and retention, product development, and quality assurance. This paper surveys our progress in understanding the new possibilities and challenges that these mechanisms represent. It discusses some important dimensions in which Internet-based feedback mechanisms differ from traditional word-of-mouth networks and surveys the most important issues related to their design, evaluation, and use. It provides an overview of relevant work in game theory and economics on the topic of reputation. It discusses how this body of work is being extended and combined with insights from computer science, management science, sociology, and psychology to take into consideration the special properties of online environments. Finally, it identifies opportunities that this new area presents for operations research/management science (OR/MS) research.
    BibTeX:
    @article{Dellarocas2003,
      author = {Dellarocas, C},
      title = {The digitization of word of mouth: Promise and challenges of online feedback mechanisms},
      journal = {MANAGEMENT SCIENCE},
      year = {2003},
      volume = {49},
      number = {10},
      pages = {1407-1424}
    }
    
    Demirev, P., Ho, Y., Ryzhov, V. & Fenselau, C. Microorganism identification by mass spectrometry and protein database searches {1999} ANALYTICAL CHEMISTRY
    Vol. {71}({14}), pp. {2732-2738} 
    article  
    Abstract: A method for rapid identification of microorganisms is presented, which exploits the wealth of information contained in prokaryotic genome and protein sequence databases. The method is based on determining the masses of a set of ions by MALDI TOF mass spectrometry of intact or treated cells. Subsequent correlation of each ion in the set to a protein, along with the organismic source of the protein, is performed by searching an Internet-accessible protein database. Convoluting the lists for all ions and ranking the organisms corresponding to matched ions results in the identification of the microorganism. The method has been successfully demonstrated on B. subtilis and E. coli, two organisms with completely sequenced genomes. The method has also been tested for identification from mass spectra of mixtures of microorganisms, from spectra of an organism at different growth stages, and from spectra originating at other laboratories. The effects of experimental factors such as MALDI matrix preparation, spectral reproducibility, contaminants, mass range, and measurement accuracy on the database search procedure are also addressed. The proposed method has several advantages over other MS methods for microorganism identification.
    BibTeX:
    @article{Demirev1999,
      author = {Demirev, PA and Ho, YP and Ryzhov, V and Fenselau, C},
      title = {Microorganism identification by mass spectrometry and protein database searches},
      journal = {ANALYTICAL CHEMISTRY},
      year = {1999},
      volume = {71},
      number = {14},
      pages = {2732-2738}
    }
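
    The ranking step reduces to counting mass matches within a tolerance. In the toy sketch below, the mass table is hypothetical and hard-coded (the actual method queries an Internet-accessible protein sequence database), and the peak list is invented:

      # Hypothetical protein masses (Da) per organism; real use queries a sequence database.
      PROTEIN_MASSES = {
          "B. subtilis": [4306.0, 5247.2, 6294.9, 7080.1, 9884.0],
          "E. coli": [4365.1, 5096.8, 6255.4, 7273.5, 9740.0],
      }

      def rank_organisms(observed_peaks, tolerance=2.0):
          # Score each organism by how many observed ions match one of its proteins.
          scores = {
              org: sum(any(abs(m - obs) <= tolerance for m in masses)
                       for obs in observed_peaks)
              for org, masses in PROTEIN_MASSES.items()
          }
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      spectrum = [4306.4, 6294.1, 9883.5]        # hypothetical MALDI-TOF peak list
      print(rank_organisms(spectrum))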
    
    Dezso, Z. & Barabasi, A. Halting viruses in scale-free networks {2002} PHYSICAL REVIEW E
    Vol. {65}({5, Part 2}) 
    article DOI  
    Abstract: The vanishing epidemic threshold for viruses spreading on scale-free networks indicates that traditional methods, aiming to decrease a virus' spreading rate, cannot succeed in eradicating an epidemic. We demonstrate that policies that discriminate between the nodes, curing mostly the highly connected nodes, can restore a finite epidemic threshold and potentially eradicate a virus. We find that the more biased a policy is towards the hubs, the more chance it has to bring the epidemic threshold above the virus' spreading rate. Furthermore, such biased policies are more cost effective, requiring fewer cures to eradicate the virus.
    BibTeX:
    @article{Dezso2002,
      author = {Dezso, Z and Barabasi, AL},
      title = {Halting viruses in scale-free networks},
      journal = {PHYSICAL REVIEW E},
      year = {2002},
      volume = {65},
      number = {5, Part 2},
      doi = {{10.1103/PhysRevE.65.055103}}
    }
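
    The hub-biased policy can be sketched as a small SIS-style simulation, assuming networkx, a Barabasi-Albert substrate, and made-up rates (none of the numbers come from the paper); cures are handed out with probability proportional to degree^bias, so bias = 0 is the uniform policy:

      import random
      import networkx as nx

      def sis_prevalence(G, spread=0.05, cures_per_step=60, bias=0.0, steps=300, seed=0):
          rng = random.Random(seed)
          infected = set(rng.sample(list(G.nodes), 100))
          for _ in range(steps):
              newly = {w for v in infected for w in G.neighbors(v)
                       if w not in infected and rng.random() < spread}
              infected |= newly
              if infected:                                   # degree-biased curing
                  pool = list(infected)
                  weights = [G.degree(v) ** bias for v in pool]
                  cured = rng.choices(pool, weights=weights,
                                      k=min(cures_per_step, len(pool)))
                  infected.difference_update(cured)
          return len(infected) / G.number_of_nodes()

      G = nx.barabasi_albert_graph(5000, 3, seed=2)
      for bias in (0.0, 1.0, 2.0):
          print(f"bias {bias}: prevalence ~ {sis_prevalence(G, bias=bias):.3f}")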
    
    Diaz, J., Griffith, R., Ng, J., Reinert, S., Friedmann, P. & Moulton, A. Patients' use of the Internet for medical information {2002} JOURNAL OF GENERAL INTERNAL MEDICINE
    Vol. {17}({3}), pp. {180-185} 
    article  
    Abstract: Objectives: To determine the percentage of patients enrolled in a primary care practice who use the Internet for health information, to describe the types of information sought, to evaluate patients' perceptions of the quality of this information, and to determine if patients who use the Internet for health information discuss this with their doctors. Design: Self-administered mailed survey. Setting: Patients from a primary care internal medicine private practice. Participants: Randomly selected patients (N=1,000) were mailed a confidential survey between December 1999 and March 2000. The response rate was 56.2%. Measurements and main results: Of the 512 patients who returned the survey, 53.5% (274) stated that they used the Internet for medical information. Those using the Internet for medical information were more educated (P<.001) and had higher incomes (P<.001). Respondents used the Internet for information on a broad range of medical topics. Sixty percent felt that the information on the Internet was the ``same as'' or ``better than'' information from their doctors. Of those using the Internet for health information, 59% did not discuss this information with their doctor. Neither gender, education level, nor age less than 60 years was associated with patients sharing their Web searches with their physicians. However, patients who discussed this information with their doctors rated the quality of information higher than those who did not share this information with their providers. Conclusions: Primary care providers should recognize that patients are using the World Wide Web as a source of medical and health information and should be prepared to offer suggestions for Web-based health resources and to assist patients in evaluating the quality of medical information available on the Internet.
    BibTeX:
    @article{Diaz2002,
      author = {Diaz, JA and Griffith, RA and Ng, JJ and Reinert, SE and Friedmann, PD and Moulton, AW},
      title = {Patients' use of the Internet for medical information},
      journal = {JOURNAL OF GENERAL INTERNAL MEDICINE},
      year = {2002},
      volume = {17},
      number = {3},
      pages = {180-185},
      note = {23rd Annual Meeting of the Society-for-General-Internal-Medicine, BOSTON, MASSACHUSETTS, MAY 02-06, 2000}
    }
    
    DiMaggio, P., Hargittai, E., Neuman, W. & Robinson, J. Social implications of the Internet {2001} ANNUAL REVIEW OF SOCIOLOGY
    Vol. {27}, pp. {307-336} 
    article  
    Abstract: The Internet is a critically important research site for sociologists testing theories of technology diffusion and media effects, particularly because it is a medium uniquely capable of integrating modes of communication and forms of content. Current research tends to focus on the Internet's implications in five domains: 1) inequality (the ``digital divide''); 2) community and social capital; 3) political participation; 4) organizations and other economic institutions; and 5) cultural participation and cultural diversity. A recurrent theme across domains is that the Internet tends to complement rather than displace existing media and patterns of behavior. Thus in each domain, utopian claims and dystopic warnings based on extrapolations from technical possibilities have given way to more nuanced and circumscribed understandings of how Internet use adapts to existing patterns, permits certain innovations, and reinforces particular kinds of change. Moreover, in each domain the ultimate social implications of this new technology depend on economic, legal, and policy decisions that are shaping the Internet as it becomes institutionalized. Sociologists need to study the Internet more actively and, particularly, to synthesize research findings on individual user behavior with macroscopic analyses of institutional and political-economic factors that constrain that behavior.
    BibTeX:
    @article{DiMaggio2001,
      author = {DiMaggio, P and Hargittai, E and Neuman, WR and Robinson, JP},
      title = {Social implications of the Internet},
      journal = {ANNUAL REVIEW OF SOCIOLOGY},
      year = {2001},
      volume = {27},
      pages = {307-336}
    }
    
    Dingle, K., Colles, F., Wareing, D., Ure, R., Fox, A., Bolton, F., Bootsma, H., Willems, R., Urwin, R. & Maiden, M. Multilocus sequence typing system for Campylobacter jejuni {2001} JOURNAL OF CLINICAL MICROBIOLOGY
    Vol. {39}({1}), pp. {14-23} 
    article  
    Abstract: The gram-negative bacterium Campylobacter jejuni has extensive reservoirs in livestock and the environment and is a frequent cause of gastroenteritis in humans. To date, the lack of (i) methods suitable for population genetic analysis and (ii) a universally accepted nomenclature has hindered studies of the epidemiology and population biology of this organism. Here, a multilocus sequence typing (MLST) system for this organism is described, which exploits the genetic variation present in seven housekeeping loci to determine the genetic relationships among isolates. The MLST system was established using 194 C. jejuni isolates of diverse origins, from humans, animals, and the environment. The allelic profiles, or sequence types (STs), of these isolates were deposited on the Internet (http://mlst.zoo.ox.ac.uk), forming a virtual isolate collection which could be continually expanded. These data indicated that C. jejuni is genetically diverse, with a weakly clonal population structure, and that intra- and interspecies horizontal genetic exchange was common. Of the 155 STs observed, 51 (26% of the isolate collection) were unique, with the remainder of the collection being categorized into 11 lineages or clonal complexes of related STs with between 2 and 56 members. In some cases membership in a given lineage or ST correlated with the possession of a particular Penner HS serotype. Application of this approach to further isolate collections will enable an integrated global picture of C. jejuni epidemiology to be established and will permit more detailed studies of the population genetics of this organism.
    BibTeX:
    @article{Dingle2001,
      author = {Dingle, KE and Colles, FM and Wareing, DRA and Ure, R and Fox, AJ and Bolton, FE and Bootsma, HJ and Willems, RJL and Urwin, R and Maiden, MCJ},
      title = {Multilocus sequence typing system for Campylobacter jejuni},
      journal = {JOURNAL OF CLINICAL MICROBIOLOGY},
      year = {2001},
      volume = {39},
      number = {1},
      pages = {14-23}
    }
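
    The core bookkeeping of MLST is compact enough to sketch: a strain's allele numbers at the seven housekeeping loci form its allelic profile, and each distinct profile receives a sequence type (ST) number. The profiles and ST numbering below are invented for illustration; only the locus names follow the scheme described in the abstract.

      # Seven C. jejuni housekeeping loci used by the typing scheme.
      LOCI = ("aspA", "glnA", "gltA", "glyA", "pgm", "tkt", "uncA")

      st_registry = {}                           # allelic profile -> ST number

      def assign_st(profile):
          # A new profile gets the next free ST; a known profile keeps its ST.
          if profile not in st_registry:
              st_registry[profile] = len(st_registry) + 1
          return st_registry[profile]

      print(assign_st((2, 1, 1, 3, 2, 1, 5)))    # new profile  -> ST 1
      print(assign_st((2, 1, 1, 3, 2, 1, 5)))    # same profile -> same ST
      print(assign_st((1, 1, 1, 3, 2, 1, 5)))    # one allele differs -> new ST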
    
    Diot, C., Dabbous, W. & Crowcroft, J. Multipoint communication: A survey of protocols, functions, and mechanisms {1997} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {15}({3}), pp. {277-290} 
    article  
    Abstract: Group communication supports information transfer between a set of participants. It is becoming more and more relevant in distributed environments. For distributed or replicated data, it provides efficient communication without overloading the network. For some types of multimedia applications, it is the only way to control data transmission to group members. This paper surveys protocol functions and mechanisms for data transmission within a group, from multicast routing problems up to end-to-end multipoint transmission control. We provide a bibliography which is organized by topic. This paper is intended to introduce this special issue with the necessary background on recent and ongoing research.
    BibTeX:
    @article{Diot1997,
      author = {Diot, C and Dabbous, W and Crowcroft, J},
      title = {Multipoint communication: A survey of protocols, functions, and mechanisms},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {1997},
      volume = {15},
      number = {3},
      pages = {277-290}
    }
    
    Diot, C., Levine, B., Lyles, B., Kassem, H. & Balensiefen, D. Deployment issues for the IP multicast service and architecture {2000} IEEE NETWORK
    Vol. {14}({1}), pp. {78-88} 
    article  
    Abstract: IP multicast offers the scalable point-to-multipoint delivery necessary for using group communication applications on the Internet. However, the IP multicast service has seen slow commercial deployment by ISPs and carriers. The original service model was designed without a clear understanding of commercial requirements or a robust implementation strategy. The very limited number of applications and the complexity of the architectural design - which we believe is a consequence of the open service model - have deterred widespread deployment as well. We examine the issues that have limited the commercial deployment of IP multicast from the viewpoint of carriers. We analyze where the model fails and what it does not offer, and we discuss requirements for successful deployment of multicast services.
    BibTeX:
    @article{Diot2000,
      author = {Diot, C and Levine, BN and Lyles, B and Kassem, H and Balensiefen, D},
      title = {Deployment issues for the IP multicast service and architecture},
      journal = {IEEE NETWORK},
      year = {2000},
      volume = {14},
      number = {1},
      pages = {78-88}
    }
    
    Dittmann, L., Develder, C., Chiaroni, D., Neri, F., Callegati, F., Koerber, W., Stavdas, A., Renaud, M., Rafel, A., Sole-Pareta, J., Cerroni, W., Leligou, N., Dembeck, L., Mortensen, B., Pickavet, M., Le Sauze, N., Mahony, A., Berde, B. & Eilenberger, G. The European IST project DAVID: A viable approach toward optical packet switching {2003} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {21}({7}), pp. {1026-1040} 
    article DOI  
    Abstract: In this paper, promising technologies and a network architecture are presented for future optical packet switched networks. The overall network concept is presented and the major choices are highlighted and compared with alternative solutions. Both long and shorter term approaches are considered, as well as both the wide-area and metropolitan-area parts of the network. The results presented in this paper were developed in the frame of the DAVID (Data And Voice Integration over DWDM) research project, funded by the European Commission through the IST framework.
    BibTeX:
    @article{Dittmann2003,
      author = {Dittmann, L and Develder, C and Chiaroni, D and Neri, F and Callegati, F and Koerber, W and Stavdas, A and Renaud, M and Rafel, A and Sole-Pareta, J and Cerroni, W and Leligou, N and Dembeck, L and Mortensen, B and Pickavet, M and Le Sauze, N and Mahony, A and Berde, B and Eilenberger, G},
      title = {The European IST project DAVID: A viable approach toward optical packet switching},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {2003},
      volume = {21},
      number = {7},
      pages = {1026-1040},
      doi = {{10.1109/JSAC.2003.816388}}
    }
    
    Dodds, P., Muhamad, R. & Watts, D. An experimental study of search in global social networks {2003} SCIENCE
    Vol. {301}({5634}), pp. {827-829} 
    article  
    Abstract: We report on a global social-search experiment in which more than 60,000 e-mail users attempted to reach one of 18 target persons in 13 countries by forwarding messages to acquaintances. We find that successful social search is conducted primarily through intermediate to weak strength ties, does not require highly connected ``hubs'' to succeed, and, in contrast to unsuccessful social search, disproportionately relies on professional relationships. By accounting for the attrition of message chains, we estimate that social searches can reach their targets in a median of five to seven steps, depending on the separation of source and target, although small variations in chain lengths and participation rates generate large differences in target reachability. We conclude that although global social networks are, in principle, searchable, actual success depends sensitively on individual incentives.
    BibTeX:
    @article{Dodds2003,
      author = {Dodds, PS and Muhamad, R and Watts, DJ},
      title = {An experimental study of search in global social networks},
      journal = {SCIENCE},
      year = {2003},
      volume = {301},
      number = {5634},
      pages = {827-829}
    }
    
    Donthu, N. & Garcia, A. The Internet shopper {1999} JOURNAL OF ADVERTISING RESEARCH
    Vol. {39}({3}), pp. {52-58} 
    article  
    Abstract: Based on a telephone survey, the authors found that Internet shoppers are older and make more money than Internet non-shoppers. Internet shoppers are more convenience-seeking, innovative, impulsive, and variety-seeking, and less risk-averse, than Internet non-shoppers. They are also less brand- and price-conscious than non-shoppers, and they have a more positive attitude toward advertising and direct marketing. Implications of these findings are discussed.
    BibTeX:
    @article{Donthu1999,
      author = {Donthu, N and Garcia, A},
      title = {The Internet shopper},
      journal = {JOURNAL OF ADVERTISING RESEARCH},
      year = {1999},
      volume = {39},
      number = {3},
      pages = {52-58}
    }
    
    Dorogovtsev, S., Goltsev, A. & Mendes, J. Pseudofractal scale-free web {2002} PHYSICAL REVIEW E
    Vol. {65}({6, Part 2}) 
    article DOI  
    Abstract: We find that scale-free random networks are excellently modeled by simple deterministic graphs. Our graph has a discrete degree distribution (degree is the number of connections of a vertex), which is characterized by a power law with exponent gamma = 1 + ln 3/ln 2. Properties of this compact structure are surprisingly close to those of growing random scale-free networks with gamma in the most interesting region, between 2 and 3. We succeed in finding, exactly and numerically with high precision, all the main characteristics of the graph. In particular, we obtain the exact shortest-path-length distribution. For a large network (ln N >> 1) the distribution tends to a Gaussian of width ~ (ln N)^(1/2) centered at the mean shortest-path length l ~ ln N. We show that the eigenvalue spectrum of the adjacency matrix of the graph has a power-law tail with exponent 2 + gamma.
    BibTeX:
    @article{Dorogovtsev2002,
      author = {Dorogovtsev, SN and Goltsev, AV and Mendes, JFF},
      title = {Pseudofractal scale-free web},
      journal = {PHYSICAL REVIEW E},
      year = {2002},
      volume = {65},
      number = {6, Part 2},
      doi = {{10.1103/PhysRevE.65.066122}}
    }
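
    The deterministic graph is easy to construct: start from a triangle and, at each generation, attach one new vertex to the two endpoints of every existing edge. A short sketch (the generation count is arbitrary); the resulting degrees form the discrete 2, 4, 8, ... spectrum behind the gamma = 1 + ln 3/ln 2 power law:

      from collections import Counter

      def pseudofractal(generations):
          # Each generation, every existing edge spawns a new vertex joined to both ends.
          edges = [(0, 1), (1, 2), (0, 2)]
          n = 3
          for _ in range(generations):
              for u, v in list(edges):
                  edges += [(u, n), (v, n)]
                  n += 1
          return n, edges

      n, edges = pseudofractal(6)
      degree = Counter()
      for u, v in edges:
          degree[u] += 1
          degree[v] += 1
      print(f"{n} vertices, {len(edges)} edges")
      print(sorted(Counter(degree.values()).items()))   # counts per degree value 2^m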
    
    Dorogovtsev, S., Goltsev, A. & Mendes, J. Ising model on networks with an arbitrary distribution of connections {2002} PHYSICAL REVIEW E
    Vol. {66}({1, Part 2}) 
    article DOI  
    Abstract: We find the exact critical temperature T_c of the nearest-neighbor ferromagnetic Ising model on an ``equilibrium'' random graph with an arbitrary degree distribution P(k). We observe an anomalous behavior of the magnetization, magnetic susceptibility, and specific heat when P(k) is fat tailed, or, loosely speaking, when the fourth moment of the distribution diverges in infinite networks. When the second moment becomes divergent, T_c approaches infinity, the phase transition is of infinite order, and the size effect is anomalously strong.
    BibTeX:
    @article{Dorogovtsev2002a,
      author = {Dorogovtsev, SN and Goltsev, AV and Mendes, JFF},
      title = {Ising model on networks with an arbitrary distribution of connections},
      journal = {PHYSICAL REVIEW E},
      year = {2002},
      volume = {66},
      number = {1, Part 2},
      doi = {{10.1103/PhysRevE.66.016104}}
    }
    
    Dorogovtsev, S. & Mendes, J. Evolution of networks with aging of sites {2000} PHYSICAL REVIEW E
    Vol. {62}({2, Part A}), pp. {1842-1845} 
    article  
    Abstract: We study the growth of a network with aging of sites. Each new site of the network is connected to some old site with probability proportional (i) to the connectivity of the old site, as in the Barabasi-Albert model, and (ii) to tau^(-alpha), where tau is the age of the old site. We find both from simulation and analytically that the network shows scaling behavior only in the region alpha < 1. When alpha increases from -infinity to 0, the exponent gamma of the distribution of connectivities [P(k) ~ k^(-gamma) for large k] grows from 2 to the value for the network without aging. The ensuing increase of alpha to 1 causes gamma to grow to infinity. For alpha > 1, the distribution P(k) is exponential.
    BibTeX:
    @article{Dorogovtsev2000a,
      author = {Dorogovtsev, SN and Mendes, JFF},
      title = {Evolution of networks with aging of sites},
      journal = {PHYSICAL REVIEW E},
      year = {2000},
      volume = {62},
      number = {2, Part A},
      pages = {1842-1845}
    }
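
    A direct simulation of this growth rule is compact, though the naive version below is O(n^2), so the network size is kept small; n, alpha, and the two-site seed graph are illustrative choices, not the paper's setup:

      import random

      def grow_with_aging(n=4000, alpha=0.5, seed=0):
          # New site t attaches to old site s with probability ~ degree(s) * age^(-alpha).
          rng = random.Random(seed)
          degree = [1, 1]                       # two initial linked sites
          for t in range(2, n):
              weights = [degree[s] * (t - s) ** -alpha for s in range(t)]
              (s,) = rng.choices(range(t), weights=weights)
              degree[s] += 1
              degree.append(1)
          return degree

      degree = grow_with_aging()
      print("max degree:", max(degree))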
    
    Dorogovtsev, S., Mendes, J. & Samukhin, A. Structure of growing networks with preferential linking {2000} PHYSICAL REVIEW LETTERS
    Vol. {85}({21}), pp. {4633-4636} 
    article  
    Abstract: The model of growing networks with the preferential attachment of new links is generalized to include initial attractiveness of sites. We find the exact form of the stationary distribution of the number of incoming links of sites in the limit of long times, P(q), and the long-time limit of the average connectivity q(s, t) of a site s at time t (one site is added per unit of time). At long times, P(q) ~ q^(-gamma) as q --> infinity and q(s, t) ~ (s/t)^(-beta) as s/t --> 0, where the exponent gamma varies from 2 to infinity depending on the initial attractiveness of sites. We show that the relation beta(gamma - 1) = 1 between the exponents is universal.
    BibTeX:
    @article{Dorogovtsev2000,
      author = {Dorogovtsev, SN and Mendes, JFF and Samukhin, AN},
      title = {Structure of growing networks with preferential linking},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2000},
      volume = {85},
      number = {21},
      pages = {4633-4636}
    }
    
    Dovrolis, C. & Ramanathan, P. A case for relative differentiated services and the proportional differentiation model {1999} IEEE NETWORK
    Vol. {13}({5}), pp. {26-34} 
    article  
    Abstract: Internet applications and users have very diverse quality of service expectations, making the same-service-to-all model of the current Internet inadequate and limiting. There is a widespread consensus today that the Internet architecture has to be extended with service differentiation mechanisms so that certain users and applications can get better service than others at a higher cost. One approach, referred to as absolute differentiated services, is based on sophisticated admission control and resource reservation mechanisms in order to provide guarantees or statistical assurances for absolute performance measures, such as a minimum service rate or maximum end-to-end delay. Another approach, which is simpler in terms of implementation, deployment, and network manageability, is to offer relative differentiated services between a small number of service classes. These classes are ordered based on their packet forwarding quality, in terms of per-hop metrics for the queuing delays and packet losses, giving the assurance that higher classes are better than lower classes. The applications and users, in this context, can dynamically select the class that best meets their quality and pricing constraints, without a priori guarantees for the actual performance level of each class. The relative differentiation approach can be further refined and quantified using the proportional differentiation model. This model aims to provide the network operator with the ``tuning knobs'' for adjusting the quality spacing between classes, independent of the class loads. When this spacing is feasible over short timescales, it can lead to predictable and controllable class differentiation, two important features for any relative differentiation model. The proportional differentiation model can be approximated in practice with simple forwarding mechanisms (packet scheduling and buffer management) that we briefly describe here.
    BibTeX:
    @article{Dovrolis1999,
      author = {Dovrolis, C and Ramanathan, P},
      title = {A case for relative differentiated services and the proportional differentiation model},
      journal = {IEEE NETWORK},
      year = {1999},
      volume = {13},
      number = {5},
      pages = {26-34}
    }
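
    One simple packet-scheduling mechanism in this spirit is a waiting-time priority rule: serve the head-of-line packet with the largest waiting time normalized by its class parameter delta_i, so that under heavy load the mean delays come out roughly in the ratio of the deltas. A discrete-time sketch under assumed slotted service and Bernoulli arrivals (the deltas, arrival rate, and slot model are all illustrative, not the paper's exact mechanisms):

      import random
      from collections import deque

      def simulate(deltas=(1.0, 2.0, 4.0), arrival_p=0.3, slots=200000, seed=0):
          rng = random.Random(seed)
          queues = [deque() for _ in deltas]
          delays = [[] for _ in deltas]
          for now in range(slots):
              for i in range(len(deltas)):              # Bernoulli arrivals per class
                  if rng.random() < arrival_p:
                      queues[i].append(now)
              backlogged = [i for i, q in enumerate(queues) if q]
              if backlogged:                            # serve largest normalized wait
                  j = max(backlogged, key=lambda c: (now - queues[c][0]) / deltas[c])
                  delays[j].append(now - queues[j].popleft())
          return [sum(d) / len(d) for d in delays]

      print(["%.1f" % d for d in simulate()])           # mean delays, roughly 1:2:4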
    
    Dunn, A., Trivedi, M. & O'Neal, H. Physical activity dose-response effects on outcomes of depression and anxiety {2001} MEDICINE AND SCIENCE IN SPORTS AND EXERCISE
    Vol. {33}({6, Suppl. S}), pp. {S587-S597} 
    article  
    Abstract: Purpose: The purpose of this study was to examine the scientific evidence for a dose-response relation of physical activity with depressive and anxiety disorders. Methods: Computer database searches of MEDLINE, PsychLit, and Internet and personal retrieval systems to locate population studies, randomized controlled trials (RCTs), observational studies, and consensus panel judgments were conducted. Results: Observational studies demonstrate that greater amounts of occupational and leisure time physical activity are generally associated with reduced symptoms of depression. Quasi-experimental studies show that light-, moderate-, and vigorous-intensity exercise can reduce symptoms of depression. However, no RCTs have varied frequency or duration of exercise and controlled for total energy expenditure in studies of depression or anxiety. Quasi-experimental studies and RCTs demonstrate that both resistance training and aerobic exercise can reduce symptoms of depression. Finally, the relation of exercise dose to changes in cardiorespiratory fitness is equivocal, with some studies showing that fitness is associated with reduction of symptoms and others demonstrating reduction in symptoms without increases in fitness. Conclusion: All evidence for dose-response effects of physical activity and exercise comes from B and C levels of evidence. There is little evidence for dose-response effects, though this is largely because of a lack of studies rather than a lack of evidence. A dose-response relation does, however, remain plausible.
    BibTeX:
    @article{Dunn2001,
      author = {Dunn, AL and Trivedi, MH and O'Neal, HA},
      title = {Physical activity dose-response effects on outcomes of depression and anxiety},
      journal = {MEDICINE AND SCIENCE IN SPORTS AND EXERCISE},
      year = {2001},
      volume = {33},
      number = {6, Suppl. S},
      pages = {S587-S597},
      note = {Symposium on Dose-Response Issues Concerning Physical Activity and health, ONTARIO, CANADA, OCT 11-15, 2000}
    }
    
    Dunn, S. Hydrogen futures: toward a sustainable energy system {2002} INTERNATIONAL JOURNAL OF HYDROGEN ENERGY
    Vol. {27}({3}), pp. {235-264} 
    article  
    Abstract: Fueled by concerns about urban air pollution, energy security, and climate change, the notion of a ``hydrogen economy'' is moving beyond the realm of scientists and engineers and into the lexicon of political and business leaders. Interest in hydrogen, the simplest and most abundant element in the universe, is also rising due to technical advances in fuel cells - the potential successors to batteries in portable electronics, power plants, and the internal combustion engine. But where will the hydrogen come from? Government and industry, keeping one foot in the hydrocarbon economy, are pursuing an incremental route, using gasoline or methanol as the source of the hydrogen, with the fuel reformed on board vehicles. A cleaner path, deriving hydrogen from natural gas and renewable energy and using the fuel directly on board vehicles, has received significantly less support, in part because the cost of building a hydrogen infrastructure is widely viewed as prohibitively high. Yet a number of recent studies suggest that moving to the direct use of hydrogen may be much cleaner and far less expensive. Just as government played a catalytic role in the creation of the Internet, government will have an essential part in building a hydrogen economy. Research and development, incentives and regulations, and partnerships with industry have sparked isolated initiatives. But stronger public policies and educational efforts are needed to accelerate the process. Choices made today will likely determine which countries and companies seize the enormous political power and economic prizes associated with the hydrogen age now dawning. (C) 2002 International Association for Hydrogen Energy. Published by Elsevier Science Ltd. All rights reserved.
    BibTeX:
    @article{Dunn2002,
      author = {Dunn, S},
      title = {Hydrogen futures: toward a sustainable energy system},
      journal = {INTERNATIONAL JOURNAL OF HYDROGEN ENERGY},
      year = {2002},
      volume = {27},
      number = {3},
      pages = {235-264}
    }
    
    Dupont, W. & Plummer, W. Power and sample size calculations for studies involving linear regression {1998} CONTROLLED CLINICAL TRIALS
    Vol. {19}({6}), pp. {589-601} 
    article  
    Abstract: This article presents methods for sample size and power calculations for studies involving linear regression. These approaches are applicable to clinical trials designed to detect a regression slope of a given magnitude or to studies that test whether the slopes or intercepts of two independent regression lines differ by a given amount. The investigator may either specify the values of the independent (x) variable(s) of the regression line(s) or determine them observationally when the study is performed. In the latter case, the investigator must estimate the standard deviation(s) of the independent variable(s). This study gives examples using this method for both experimental and observational study designs. Cohen's method of power calculations for multiple linear regression models is also discussed and contrasted with the methods of this study. We have posted a computer program to perform these and other sample size calculations on the Internet (see http://www.mc.vanderbilt.edu/prevmed/psintro.htm). This program can determine the sample size needed to detect a specified alternative hypothesis with the required power, the power with which a specific alternative hypothesis can be detected with a given sample size, or the specific alternative hypotheses that can be detected with a given power and sample size. Context-specific help messages available on request make the use of this software largely self-explanatory. Controlled Clin Trials 1998;19:589-601 (C) Elsevier Science Inc. 1998.
    BibTeX:
    @article{Dupont1998,
      author = {Dupont, WD and Plummer, WD},
      title = {Power and sample size calculations for studies involving linear regression},
      journal = {CONTROLLED CLINICAL TRIALS},
      year = {1998},
      volume = {19},
      number = {6},
      pages = {589-601}
    }
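
    For readers who want the flavour of such a calculation, here is a hedged Python sketch using a normal approximation (the PS software itself uses more exact methods; all input values below are hypothetical):

      # Sketch (normal approximation): power to detect a slope beta in simple
      # linear regression, given residual SD sigma_e and predictor SD sigma_x.
      from math import sqrt
      from scipy.stats import norm

      def slope_power(beta, sigma_x, sigma_e, n, alpha=0.05):
          se = sigma_e / (sigma_x * sqrt(n))   # SE of the estimated slope
          return norm.cdf(abs(beta) / se - norm.ppf(1 - alpha / 2))

      # Hypothetical inputs: detect beta = 0.5 with n = 64 observations
      print(round(slope_power(0.5, sigma_x=2.0, sigma_e=4.0, n=64), 3))  # ~0.52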
    
    Dupont, W. & Plummer, W. PS power and sample size program available for free on the Internet {1997} CONTROLLED CLINICAL TRIALS
    Vol. {18}({3}), pp. {274} 
    article  
    BibTeX:
    @article{Dupont1997,
      author = {Dupont, WD and Plummer, WD},
      title = {PS power and sample size program available for free on the Internet},
      journal = {CONTROLLED CLINICAL TRIALS},
      year = {1997},
      volume = {18},
      number = {3},
      pages = {274}
    }
    
    Elford, J., Bolding, G. & Sherr, L. High-risk sexual behaviour increases among London gay men between 1998 and 2001: what is the role of HIV optimism? {2002} AIDS
    Vol. {16}({11}), pp. {1537-1544} 
    article  
    Abstract: Objective: To examine whether HIV optimism (i.e. optimism in the light of new HIV drug therapies) can account for the recent increase in high-risk sexual behaviour among London gay men. Methods: Gay men (n = 2938) using London gyms were surveyed annually between 1998 and 2001. Information was collected on HIV status, unprotected anal intercourse (UAI) in the previous 3 months, and agreement with two statements concerning the severity of and susceptibility to HIV infection. Those who agreed were classified as `optimistic'. Results: Between 1998 and 2001, the percentage of men reporting high-risk UAI (i.e. UAI with a casual partner of unknown or discordant HIV status) increased: HIV-positive men 15.3% to 38.8%; HIV-negative men 6.8% to 12.1%; never-tested men 2.1% to 7.7% (P < 0.01). Overall, less than a third were optimistic. In cross-sectional analysis, optimistic HIV-positive and -negative men were more likely to report high-risk UAI than other men (P < 0.05). However, the increase in high-risk UAI between 1998 and 2001 was seen both in those who were optimistic and in those who were not (P < 0.05). In multivariate analysis, the modelled increase in high-risk UAI over time remained significant after controlling for HIV optimism (P < 0.01), with no significant interaction between optimism and time. Conclusion: Among London gay men, no difference was detected between those who were optimistic and those who were not in the rate of increase in high-risk sexual behaviour between 1998 and 2001. Our findings suggest that HIV optimism is unlikely to explain the recent increase in high-risk sexual behaviour in these men. (C) 2002 Lippincott Williams & Wilkins.
    BibTeX:
    @article{Elford2002,
      author = {Elford, J and Bolding, G and Sherr, L},
      title = {High-risk sexual behaviour increases among London gay men between 1998 and 2001: what is the role of HIV optimism?},
      journal = {AIDS},
      year = {2002},
      volume = {16},
      number = {11},
      pages = {1537-1544}
    }
    
    Elwyn, G., O'Connor, A., Stacey, D., Volk, R., Edwards, A., Coulter, A. & IPDAS Collaboration Developing a quality criteria framework for patient decision aids: online international Delphi consensus process {2006} BRITISH MEDICAL JOURNAL
    Vol. {333}({7565}), pp. {417-419} 
    article DOI  
    Abstract: Objective To develop a set of quality criteria for patient decision support technologies (decision aids). Design and setting Two stage web based Delphi process using online rating process to enable international collaboration. Participants Individuals from four stakeholder groups (researchers, practitioners, patients, policy makers) representing 14 countries reviewed evidence summaries and rated the importance of 80 criteria in 12 quality domains on a 1 to 9 scale. Second round participants received feedback from the first round and repeated their assessment of the 80 criteria plus three new ones. Main outcome measure Aggregate ratings for each criterion calculated using medians weighted to compensate for different numbers in stakeholder groups; criteria rated between 7 and 9 were retained. Results 212 nominated people were invited to participate. Of those invited, 122 participated in the first round (77 researchers, 21 patients, 10 practitioners, 14 policy makers); 104/122 (85%) participated in the second round. 74 of 83 criteria were retained in the following domains: systematic development process (9/9 criteria); providing information about options (13/13); presenting probabilities (11/13); clarifying and expressing values (3/3); using patient stories (2/5); guiding/coaching (3/5); disclosing conflicts of interest (5/5); providing internet access (6/6); balanced presentation of options (3/3); using plain language (4/6); basing information on up to date evidence (7/7); and establishing effectiveness (8/8). Conclusions Criteria were given the highest ratings where evidence existed, and these were retained. Gaps in research were highlighted. Developers, users, and purchasers of patient decision aids now have a checklist for appraising quality. An instrument for measuring quality of decision aids is being developed.
    BibTeX:
    @article{Elwyn2006,
      author = {Elwyn, Glyn and O'Connor, Annette and Stacey, Dawn and Volk, Robert and Edwards, Adrian and Coulter, Angela and IPDAS Collaboration},
      title = {Developing a quality criteria framework for patient decision aids: online international Delphi consensus process},
      journal = {BRITISH MEDICAL JOURNAL},
      year = {2006},
      volume = {333},
      number = {7565},
      pages = {417-419},
      doi = {{10.1136/bmj.38926.629329.AE}}
    }
    
    Emanuelsson, O., Brunak, S., von Heijne, G. & Nielsen, H. Locating proteins in the cell using TargetP, SignalP and related tools {2007} NATURE PROTOCOLS
    Vol. {2}({4}), pp. {953-971} 
    article DOI  
    Abstract: Determining the subcellular localization of a protein is an important first step toward understanding its function. Here, we describe the properties of three well-known N-terminal sequence motifs directing proteins to the secretory pathway, mitochondria and chloroplasts, and sketch a brief history of methods to predict subcellular localization based on these sorting signals and other sequence properties. We then outline how to use a number of internet-accessible tools to arrive at a reliable subcellular localization prediction for eukaryotic and prokaryotic proteins. In particular, we provide detailed step-by-step instructions for the coupled use of the amino-acid sequence-based predictors TargetP, SignalP, ChloroP and TMHMM, which are all hosted at the Center for Biological Sequence Analysis, Technical University of Denmark. In addition, we describe and provide web references to other useful subcellular localization predictors. Finally, we discuss predictive performance measures in general and the performance of TargetP and SignalP in particular.
    BibTeX:
    @article{Emanuelsson2007,
      author = {Emanuelsson, Olof and Brunak, Soren and von Heijne, Gunnar and Nielsen, Henrik},
      title = {Locating proteins in the cell using TargetP, SignalP and related tools},
      journal = {NATURE PROTOCOLS},
      year = {2007},
      volume = {2},
      number = {4},
      pages = {953-971},
      doi = {{10.1038/nprot.2007.131}}
    }
    
    Eng, T., Maxfield, A., Patrick, K., Deering, M., Ratzan, S. & Gustafson, D. Access to health information and support - A public highway or a private road? {1998} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {280}({15}), pp. {1371-1375} 
    article  
    Abstract: Information and communication technologies may help reduce health disparities through their potential for promoting health, preventing disease, and supporting clinical care for all. Unfortunately, those who have preventable health problems and lack health insurance coverage are the least likely to have access to such technologies. Barriers to access include cost, geographic location, illiteracy, disability, and factors related to the capacity of people to use these technologies appropriately and effectively. A goal of universal access to health information and support is proposed to augment existing initiatives to improve the health of individuals and the public. Both public- and private-sector stakeholders, particularly government agencies and private corporations, will need to collaboratively reduce the gap between the health information ``haves'' and ``have-nots''. This will include supporting health information technology access in homes and public places, developing applications for the growing diversity of users, funding research on access-related issues, ensuring the quality of health information and support, enhancing literacy in health and technology, training health information intermediaries, and integrating the concept of universal access to health information and support into health planning processes.
    BibTeX:
    @article{Eng1998,
      author = {Eng, TR and Maxfield, A and Patrick, K and Deering, MJ and Ratzan, SC and Gustafson, DH},
      title = {Access to health information and support - A public highway or a private road?},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {1998},
      volume = {280},
      number = {15},
      pages = {1371-1375}
    }
    
    ENGELS, W. CONTRIBUTING SOFTWARE TO THE INTERNET - THE AMPLIFY PROGRAM {1993} TRENDS IN BIOCHEMICAL SCIENCES
    Vol. {18}({11}), pp. {448-450} 
    article  
    BibTeX:
    @article{ENGELS1993,
      author = {ENGELS, WR},
      title = {CONTRIBUTING SOFTWARE TO THE INTERNET - THE AMPLIFY PROGRAM},
      journal = {TRENDS IN BIOCHEMICAL SCIENCES},
      year = {1993},
      volume = {18},
      number = {11},
      pages = {448-450}
    }
    
    Enright, M., Day, N., Davies, C., Peacock, S. & Spratt, B. Multilocus sequence typing for characterization of methicillin-resistant and methicillin-susceptible clones of Staphylococcus aureus {2000} JOURNAL OF CLINICAL MICROBIOLOGY
    Vol. {38}({3}), pp. {1008-1015} 
    article  
    Abstract: A multilocus sequence typing (MLST) scheme has been developed for Staphylococcus aureus. The sequences of internal fragments of seven housekeeping genes were obtained for 155 S. aureus isolates from patients with community-acquired and hospital-acquired invasive disease in the Oxford, United Kingdom, area. Fifty-three different allelic profiles were identified, and 17 of these were represented by at least two isolates. The MLST scheme was highly discriminatory and was validated by showing that pairs of isolates with the same allelic profile produced very similar SmaI restriction fragment patterns by pulsed-field gel electrophoresis. All 22 isolates with the most prevalent allelic profile were methicillin-resistant S. aureus (MRSA) isolates and had allelic profiles identical to that of a reference strain of the epidemic MRSA clone 16 (EMRSA-16). Four MRSA isolates that were identical in allelic profile to the other major epidemic MRSA clone prevalent in British hospitals (clone EMRSA-15) were also identified. The majority of isolates (81%) were methicillin-susceptible S. aureus (MSSA) isolates, and seven MSSA clones included five or more isolates. Three of the MSSA clones included at least five isolates from patients with community-acquired invasive disease and may represent virulent clones with an increased ability to cause disease in otherwise healthy individuals. The most prevalent MSSA clone (17 isolates) was very closely related to EMRSA-16, and the success of the latter clone at causing disease in hospitals may be due to its emergence from a virulent MSSA clone that was already a major cause of invasive disease in both the community and hospital settings. MLST provides an unambiguous method for assigning MRSA and MSSA isolates to known clones or assigning them as novel clones via the Internet.
    BibTeX:
    @article{Enright2000,
      author = {Enright, MC and Day, NPJ and Davies, CE and Peacock, SJ and Spratt, BG},
      title = {Multilocus sequence typing for characterization of methicillin-resistant and methicillin-susceptible clones of Staphylococcus aureus},
      journal = {JOURNAL OF CLINICAL MICROBIOLOGY},
      year = {2000},
      volume = {38},
      number = {3},
      pages = {1008-1015}
    }
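
    The bookkeeping behind MLST is easy to illustrate. In the toy Python sketch below, every allele number, profile, and sequence-type label is hypothetical; the curated profile databases the abstract mentions are what is actually shared via the Internet:

      # Toy sketch of MLST profile matching: an isolate's allele numbers at
      # seven housekeeping loci form an allelic profile, and identical
      # profiles define a sequence type (all entries below are hypothetical).
      KNOWN_PROFILES = {
          (1, 4, 1, 4, 12, 1, 10): "ST-A (hypothetical epidemic clone)",
          (1, 1, 1, 1, 1, 1, 1): "ST-B",
      }

      def assign_sequence_type(profile):
          return KNOWN_PROFILES.get(tuple(profile), "novel allelic profile")

      print(assign_sequence_type([1, 1, 1, 1, 1, 1, 1]))  # ST-B
      print(assign_sequence_type([2, 5, 2, 2, 6, 3, 2]))  # novel allelic profile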
    
    Enright, M. & Spratt, B. Multilocus sequence typing {1999} TRENDS IN MICROBIOLOGY
    Vol. {7}({12}), pp. {482-487} 
    article  
    Abstract: Multilocus sequence typing (MLST) provides a new approach to molecular epidemiology that can identify and track the global spread of virulent or antibiotic-resistant isolates of bacterial pathogens using the Internet. MLST databases, together with interrogation software, are available for Neisseria meningitidis and Streptococcus pneumoniae, and databases for Streptococcus pyogenes and Staphylococcus aureus will be released shortly.
    BibTeX:
    @article{Enright1999,
      author = {Enright, MC and Spratt, BG},
      title = {Multilocus sequence typing},
      journal = {TRENDS IN MICROBIOLOGY},
      year = {1999},
      volume = {7},
      number = {12},
      pages = {482-487}
    }
    
    Enright, M., Spratt, B., Kalia, A., Cross, J. & Bessen, D. Multilocus sequence typing of Streptococcus pyogenes and the relationships between emm type and clone {2001} INFECTION AND IMMUNITY
    Vol. {69}({4}), pp. {2416-2427} 
    article  
    Abstract: Multilocus sequence typing (MLST) is a tool that can be used to study the molecular epidemiology and population genetic structure of microorganisms. A MLST scheme was developed for Streptococcus pyogenes and the nucleotide sequences of internal fragments of seven selected housekeeping loci were obtained for 212 isolates. A total of 100 unique combinations of housekeeping alleles (allelic profiles) were identified. The MLST scheme was highly concordant with several other typing methods. The emm type, corresponding to a locus that is subject to host immune selection, was determined for each isolate; of the > 150 distinct emm types identified to date, 78 are represented in this report. For a given emm type, the majority of isolates shared five or more of the seven housekeeping alleles. Stable associations between emm type and MLST were documented by comparing isolates obtained decades apart and/or from different continents. For the 33 emm types for which more than one isolate was examined, only five emm types were present on widely divergent backgrounds, differing at four or more of the housekeeping loci. The findings indicate that the majority of emm types examined define clones or clonal complexes. In addition, an MLST database is made accessible to investigators who seek to characterize other isolates of this species via the internet (http://www.mlst.net).
    BibTeX:
    @article{Enright2001,
      author = {Enright, MC and Spratt, BG and Kalia, A and Cross, JH and Bessen, DE},
      title = {Multilocus sequence typing of Streptococcus pyogenes and the relationships between emm type and clone},
      journal = {INFECTION AND IMMUNITY},
      year = {2001},
      volume = {69},
      number = {4},
      pages = {2416-2427}
    }
    
    ETZIONI, O. A SOFTBOT-BASED INTERFACE TO THE INTERNET {1994} COMMUNICATIONS OF THE ACM
    Vol. {37}({7}), pp. {72-76} 
    article  
    BibTeX:
    @article{ETZIONI1994,
      author = {ETZIONI, O},
      title = {A SOFTBOT-BASED INTERFACE TO THE INTERNET},
      journal = {COMMUNICATIONS OF THE ACM},
      year = {1994},
      volume = {37},
      number = {7},
      pages = {72-76}
    }
    
    ETZIONI, O. & WELD, D. INTELLIGENT AGENTS ON THE INTERNET - FACT, FICTION, AND FORECAST {1995} IEEE EXPERT-INTELLIGENT SYSTEMS & THEIR APPLICATIONS
    Vol. {10}({4}), pp. {44-49} 
    article  
    BibTeX:
    @article{ETZIONI1995,
      author = {ETZIONI, O and WELD, DS},
      title = {INTELLIGENT AGENTS ON THE INTERNET - FACT, FICTION, AND FORECAST},
      journal = {IEEE EXPERT-INTELLIGENT SYSTEMS & THEIR APPLICATIONS},
      year = {1995},
      volume = {10},
      number = {4},
      pages = {44-49}
    }
    
    Euzeby, J. List of bacterial names with standing in nomenclature: A folder available on the Internet {1997} INTERNATIONAL JOURNAL OF SYSTEMATIC BACTERIOLOGY
    Vol. {47}({2}), pp. {590-592} 
    article  
    Abstract: The List of Bacterial Names with Standing in Nomenclature includes, alphabetically and chronologically, the official names of bacteria as published or validated in the International Journal of Systematic Bacteriology. It encompasses 5,569 taxa (as of 31 December 1996) and is available on the Internet (URL: ftp://ftp.cict.fr/pub/bacterio/).
    BibTeX:
    @article{Euzeby1997,
      author = {Euzeby, JP},
      title = {List of bacterial names with standing in nomenclature: A folder available on the Internet},
      journal = {INTERNATIONAL JOURNAL OF SYSTEMATIC BACTERIOLOGY},
      year = {1997},
      volume = {47},
      number = {2},
      pages = {590-592}
    }
    
    Evans, W., De Vuyst, E. & Leybaert, L. The gap junction cellular internet: connexin hemichannels enter the signalling limelight {2006} BIOCHEMICAL JOURNAL
    Vol. {397}({Part 1}), pp. {1-14} 
    article DOI  
    Abstract: Cxs (connexins), the protein subunits forming gap junction intercellular communication channels, are transported to the plasma membrane after oligomerizing into hexameric assemblies called connexin hemichannels (CxHcs) or connexons, which dock head-to-head with partner hexameric channels positioned on neighbouring cells. The double membrane channel or gap junction generated directly couples the cytoplasms of interacting cells and underpins the integration and co-ordination of cellular metabolism, signalling and functions, such as secretion or contraction in cell assemblies. In contrast, CxHcs prior to forming gap junctions provide a pathway for the release from cells of ATP, glutamate, NAD(+) and prostaglandin E-2, which act as paracrine messengers. ATP activates purinergic receptors on neighbouring cells and forms the basis of intercellular Ca2+ signal propagation, complementing that occurring more directly via gap junctions. CxHcs open in response to various types of external changes, including mechanical, shear, ionic and ischaemic stress. In addition, CxHcs are influenced by intracellular signals, such as membrane potential, phosphorylation and redox status, which translate external stresses to CxHc responses. Also, recent studies demonstrate that cytoplasmic Ca2+ changes in the physiological range act to trigger CxHc opening, indicating their involvement under normal non-pathological conditions. CxHcs not only respond to cytoplasmic Ca2+, but also determine cytoplasmic Ca2+, as they are large conductance channels, suggesting a prominent role in cellular Ca2+ homoeostasis and signalling. The functions of gap-junction channels and CxHcs have been difficult to separate, but synthetic peptides that mimic short sequences in the Cx subunit are emerging as promising tools to determine the role of CxHcs in physiology and pathology.
    BibTeX:
    @article{Evans2006,
      author = {Evans, WH and De Vuyst, E and Leybaert, L},
      title = {The gap junction cellular internet: connexin hemichannels enter the signalling limelight},
      journal = {BIOCHEMICAL JOURNAL},
      year = {2006},
      volume = {397},
      number = {Part 1},
      pages = {1-14},
      doi = {{10.1042/BJ20060175}}
    }
    
    Eyrich, V., Marti-Renom, M., Przybylski, D., Madhusudhan, M., Fiser, A., Pazos, F., Valencia, A., Sali, A. & Rost, B. EVA: continuous automatic evaluation of protein structure prediction servers {2001} BIOINFORMATICS
    Vol. {17}({12}), pp. {1242-1243} 
    article  
    Abstract: Evaluation of protein structure prediction methods is difficult and time-consuming. Here, we describe EVA, a web server for assessing protein structure prediction methods, in an automated, continuous and large-scale fashion. Currently, EVA evaluates the performance of a variety of prediction methods available through the internet. Every week, the sequences of the latest experimentally determined protein structures are sent to prediction servers, results are collected, performance is evaluated, and a summary is published on the web. EVA has so far collected data for more than 3000 protein chains. These results may provide valuable insight to both developers and users of prediction methods.
    BibTeX:
    @article{Eyrich2001,
      author = {Eyrich, VA and Marti-Renom, MA and Przybylski, D and Madhusudhan, MS and Fiser, A and Pazos, F and Valencia, A and Sali, A and Rost, B},
      title = {EVA: continuous automatic evaluation of protein structure prediction servers},
      journal = {BIOINFORMATICS},
      year = {2001},
      volume = {17},
      number = {12},
      pages = {1242-1243}
    }
    
    Eysenbach, G. The Law of Attrition {2005} JOURNAL OF MEDICAL INTERNET RESEARCH
    Vol. {7}({1}) 
    article DOI  
    Abstract: In an ongoing effort of this Journal to develop and further the theories, models, and best practices around eHealth research, this paper argues for the need for a ``science of attrition'', that is, a need to develop models for discontinuation of eHealth applications and the related phenomenon of participants dropping out of eHealth trials. What I call ``law of attrition'' here is the observation that in any eHealth trial a substantial proportion of users drop out before completion or stop using the application. This feature of eHealth trials is a distinct characteristic compared to, for example, drug trials. The traditional clinical trial and evidence-based medicine paradigm stipulates that high dropout rates make trials less believable. Consequently eHealth researchers tend to gloss over high dropout rates, or not to publish their study results at all, as they see their studies as failures. However, for many eHealth trials, in particular those conducted on the Internet and in particular with self-help applications, high dropout rates may be a natural and typical feature. Usage metrics and determinants of attrition should be highlighted, measured, analyzed, and discussed. This also includes analyzing and reporting the characteristics of the subpopulation for which the application eventually ``works'', ie, those who stay in the trial and use it. For the question of what works and what does not, such attrition measures are as important to report as pure efficacy measures from intention-to-treat (ITT) analyses. In cases of high dropout rates efficacy measures underestimate the impact of an application on a population which continues to use it. Methods of analyzing attrition curves can be drawn from survival analysis methods, eg, the Kaplan-Meier analysis and proportional hazards regression analysis (Cox model). Measures to be reported include the relative risk of dropping out or of stopping the use of an application, as well as a ``usage half-life'', and models reporting demographic and other factors predicting usage discontinuation in a population. Differential dropout or usage rates between two interventions could be a standard metric for the ``usability efficacy'' of a system. A ``run-in and withdrawal'' trial design is suggested as a methodological innovation for Internet-based trials with a high number of initial dropouts/nonusers and a stable group of hardcore users.
    BibTeX:
    @article{Eysenbach2005,
      author = {Eysenbach, G},
      title = {The Law of Attrition},
      journal = {JOURNAL OF MEDICAL INTERNET RESEARCH},
      year = {2005},
      volume = {7},
      number = {1},
      doi = {{10.2196/jmir.7.1.e11}}
    }
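
    The survival-analysis machinery the paper proposes can be sketched briefly. The following Python example computes a Kaplan-Meier attrition curve and a ``usage half-life'' from made-up usage data (an editorial illustration, not code from the paper):

      # Minimal Kaplan-Meier sketch for attrition analysis (illustrative
      # data): event = 1 means the user stopped using the application;
      # event = 0 means still active at last follow-up (censored).
      from collections import Counter

      def kaplan_meier(times, events):
          drops = Counter(t for t, e in zip(times, events) if e)
          leaves = Counter(times)
          s, at_risk, curve = 1.0, len(times), []
          for t in sorted(leaves):
              if drops[t]:
                  s *= 1 - drops[t] / at_risk
                  curve.append((t, round(s, 3)))
              at_risk -= leaves[t]
          return curve

      curve = kaplan_meier([1, 2, 2, 3, 5, 8, 8, 12], [1, 1, 0, 1, 1, 1, 0, 0])
      half_life = next((t for t, s in curve if s <= 0.5), None)  # ``usage half-life''
      print(curve, half_life)  # survival steps; half-life = 5 here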
    
    Eysenbach, G. The impact of the Internet on cancer outcomes {2003} CA-A CANCER JOURNAL FOR CLINICIANS
    Vol. {53}({6}), pp. {356-371} 
    article  
    Abstract: Each day, more than 12.5 million health-related computer searches are conducted on the World Wide Web. Based on a meta-analysis of 24 published surveys, the author estimates that in the developed world, about 39% of persons with cancer are using the Internet, and approximately 2.3 million persons living with cancer worldwide are online. In addition, 15% to 20% of persons with cancer use the Internet ``indirectly'' through family and friends. Based on a comprehensive review of the literature, the available evidence on how persons with cancer are using the Internet and the effect of Internet use on persons with cancer is summarized. The author distinguishes four areas of Internet use: communication (electronic mail), community (virtual support groups), content (health information on the World Wide Web), and e-commerce. A conceptual framework summarizing the factors involved in a possible link between Internet use and cancer outcomes is presented, and future areas for research are highlighted. (C) American Cancer Society, 2003.
    BibTeX:
    @article{Eysenbach2003,
      author = {Eysenbach, G},
      title = {The impact of the Internet on cancer outcomes},
      journal = {CA-A CANCER JOURNAL FOR CLINICIANS},
      year = {2003},
      volume = {53},
      number = {6},
      pages = {356-371}
    }
    
    Eysenbach, G. Recent advances - Consumer health informatics {2000} BRITISH MEDICAL JOURNAL
    Vol. {320}({7251}), pp. {1713-1716} 
    article  
    BibTeX:
    @article{Eysenbach2000,
      author = {Eysenbach, G},
      title = {Recent advances - Consumer health informatics},
      journal = {BRITISH MEDICAL JOURNAL},
      year = {2000},
      volume = {320},
      number = {7251},
      pages = {1713-1716}
    }
    
    Eysenbach, G. & Diepgen, T. Towards quality management of medical information on the Internet: evaluation, labelling, and filtering of information {1998} BRITISH MEDICAL JOURNAL
    Vol. {317}({7171}), pp. {1496-1500} 
    article  
    BibTeX:
    @article{Eysenbach1998,
      author = {Eysenbach, G and Diepgen, TL},
      title = {Towards quality management of medical information on the Internet: evaluation, labelling, and filtering of information},
      journal = {BRITISH MEDICAL JOURNAL},
      year = {1998},
      volume = {317},
      number = {7171},
      pages = {1496-1500}
    }
    
    Eysenbach, G. & Kohler, C. How do consumers search for and appraise health information on the world wide web? Qualitative study using focus groups, usability tests, and in-depth interviews {2002} BRITISH MEDICAL JOURNAL
    Vol. {324}({7337}), pp. {573-577} 
    article  
    Abstract: Objectives To describe techniques for retrieval and appraisal used by consumers when they search for health information on the internet. Design Qualitative study using focus groups, naturalistic observation of consumers searching the world wide web in a usability laboratory, and in-depth interviews. Participants A total of 21 users of the internet participated in three focus group sessions. 17 participants were given a series of health questions and observed in a usability laboratory setting while retrieving health information from the web; this was followed by in-depth interviews. Setting Heidelberg, Germany. Results Although their search technique was often suboptimal, internet users successfully found health information to answer questions in an average of 5 minutes 42 seconds (median 4 minutes 18 seconds) per question. Participants in focus groups said that when assessing the credibility of a website they primarily looked for the source, a professional design, a scientific or official touch, language, and ease of use. However, in the observational study, no participants checked any ``about us'' sections of websites, disclaimers, or disclosure statements. In the post-search interviews, it emerged that very few participants had noticed and remembered which websites they had retrieved information from. Conclusions Further observational studies are needed to design and evaluate educational and technological innovations for guiding consumers to high quality health information on the web.
    BibTeX:
    @article{Eysenbach2002a,
      author = {Eysenbach, G and Kohler, C},
      title = {How do consumers search for and appraise health information on the world wide web? Qualitative study using focus groups, usability tests, and in-depth interviews},
      journal = {BRITISH MEDICAL JOURNAL},
      year = {2002},
      volume = {324},
      number = {7337},
      pages = {573-577}
    }
    
    Eysenbach, G., Powell, J., Englesakis, M., Rizo, C. & Stern, A. Health related virtual communities and electronic support groups: systematic review of the effects of online peer to peer interactions {2004} BRITISH MEDICAL JOURNAL
    Vol. {328}({7449}), pp. {1166-1170A} 
    article  
    Abstract: Objective To compile and evaluate the evidence on the effects on health and social outcomes of computer based peer to peer communities and electronic self support groups, used by people to discuss health related issues remotely. Design and data sources Analysis of studies identified from Medline, Embase, CINAHL, PsycINFO, Evidence Based Medicine Reviews, Electronics and Communications Abstracts, Computer and Information Systems Abstracts, ERIC, LISA, ProQuest Digital Dissertations, Web of Science. Selection of studies We searched for before and after studies, interrupted time series, cohort studies, or studies with control groups; evaluating health or social outcomes of virtual peer to peer communities, either as stand alone interventions or in the context of more complex systems with peer to peer components. Main outcome measures Peer to peer interventions and co-interventions studied, general characteristics of studies, outcome measures used, and study results. Results 45 publications describing 38 distinct studies met our inclusion criteria: 20 randomised trials, three meta-analyses of n of 1 trials, three non-randomised controlled trials, one cohort study, and 11 before and after studies. Only six of these evaluated ``pure'' peer to peer communities, and one had a factorial design with a ``peer to peer only'' arm, whereas 31 studies evaluated complex interventions, which often included psychoeducational programmes or one to one communication with healthcare professionals, making it impossible to attribute intervention effects to the peer to peer community component. The outcomes measured most often were depression and social support; most studies did not show an effect. We found no evidence to support concerns over virtual communities harming people. Conclusions No robust evidence exists on the effects of consumer led peer to peer communities, partly because most peer to peer communities have been evaluated only in conjunction with more complex interventions or involvement with health professionals. Given the abundance of unmoderated peer to peer groups on the internet, research is required to evaluate under which conditions and for whom electronic support groups are effective and how effectiveness in delivering social support electronically can be maximised.
    BibTeX:
    @article{Eysenbach2004,
      author = {Eysenbach, G and Powell, J and Englesakis, M and Rizo, C and Stern, A},
      title = {Health related virtual communities and electronic support groups: systematic review of the effects of online peer to peer interactions},
      journal = {BRITISH MEDICAL JOURNAL},
      year = {2004},
      volume = {328},
      number = {7449},
      pages = {1166-1170A}
    }
    
    Eysenbach, G., Powell, J., Kuss, O. & Sa, E. Empirical studies assessing the quality of health information for consumers on the World Wide Web - A systematic review {2002} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {287}({20}), pp. {2691-2700} 
    article  
    Abstract: Context The quality of consumer health information on the World Wide Web is an important issue for medicine, but to date no systematic and comprehensive synthesis of the methods and evidence has been performed. Objectives To establish a methodological framework on how quality on the Web is evaluated in practice, to determine the heterogeneity of the results and conclusions, and to compare the methodological rigor of these studies, to determine to what extent the conclusions depend on the methodology used, and to suggest future directions for research. Data Sources We searched MEDLINE and PREMEDLINE (1966 through September 2001), Science Citation Index (1997 through September 2001), Social Sciences Citation Index (1997 through September 2001), Arts and Humanities Citation Index (1997 through September 2001), LISA (1969 through July 2001), CINAHL (1982 through July 2001), PsychINFO (1988 through September 2001), EMBASE (1988 through June 2001), and SIGLE (1980 through June 2001). We also conducted hand searches, general Internet searches, and a personal bibliographic database search. Study Selection We included published and unpublished empirical studies in any language in which investigators searched the Web systematically for specific health information, evaluated the quality of Web sites or pages, and reported quantitative results. We screened 7830 citations and retrieved 170 potentially eligible full articles. A total of 79 distinct studies met the inclusion criteria, evaluating 5941 health Web sites and 1329 Web pages, and reporting 408 evaluation results for 86 different quality criteria. Data Extraction Two reviewers independently extracted study characteristics, medical domains, search strategies used, methods and criteria of quality assessment, results (percentage of sites or pages rated as inadequate pertaining to a quality criterion), and quality and rigor of study methods and reporting. Data Synthesis The most frequently used quality criteria include accuracy, completeness, readability, design, disclosures, and references provided. Fifty-five studies (70%) concluded that quality is a problem on the Web, 17 (22%) remained neutral, and 7 studies (9%) came to a positive conclusion. Positive studies scored significantly lower in search (P=.02) and evaluation (P=.04) methods. Conclusions Due to differences in study methods and rigor, quality criteria, study population, and topic chosen, study results and conclusions on health-related Web sites vary widely. Operational definitions of quality criteria are needed.
    BibTeX:
    @article{Eysenbach2002,
      author = {Eysenbach, G and Powell, J and Kuss, O and Sa, ER},
      title = {Empirical studies assessing the quality of health information for consumers on the World Wide Web - A systematic review},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {2002},
      volume = {287},
      number = {20},
      pages = {2691-2700}
    }
    
    Faloutsos, M., Faloutsos, P. & Faloutsos, C. On power-law relationships of the Internet topology {1999}
    Vol. {29}({4})ACM SIGCOMM'99 CONFERENCE: APPLICATIONS, TECHNOLOGIES, ARCHITECTURES, AND PROTOCOLS FOR COMPUTER COMMUNICATIONS, pp. {251-262} 
    inproceedings  
    Abstract: Despite the apparent randomness of the Internet, we discover some surprisingly simple power-laws of the Internet topology. These power-laws hold for three snapshots of the Internet, between November 1997 and December 1998, despite a 45% growth of its size during that period. We show that our power-laws fit the real data very well resulting in correlation coefficients of 96% or higher. Our observations provide a novel perspective of the structure of the Internet. The power-laws describe concisely skewed distributions of graph properties such as the node outdegree. In addition, these power-laws can be used to estimate important parameters such as the average neighborhood size, and facilitate the design and the performance analysis of protocols. Furthermore, we can use them to generate and select realistic topologies for simulation purposes.
    BibTeX:
    @inproceedings{Faloutsos1999,
      author = {Faloutsos, M and Faloutsos, P and Faloutsos, C},
      title = {On power-law relationships of the Internet topology},
      booktitle = {ACM SIGCOMM'99 CONFERENCE: APPLICATIONS, TECHNOLOGIES, ARCHITECTURES, AND PROTOCOLS FOR COMPUTER COMMUNICATIONS},
      year = {1999},
      volume = {29},
      number = {4},
      pages = {251-262},
      note = {ACM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (SIGCOMM 99), CAMBRIDGE, MA, AUG 30-SEP 03, 1999}
    }
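
    The paper's basic fitting technique is linear regression in log-log space. A short Python sketch on synthetic data (the exponent and constants below are illustrative, not the paper's measurements):

      # Fit y = c * x^a by least squares in log-log space and report the
      # correlation coefficient, as the paper does for its power-laws.
      import numpy as np

      def fit_power_law(x, y):
          slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
          r = np.corrcoef(np.log(x), np.log(y))[0, 1]
          return slope, np.exp(intercept), r

      rank = np.arange(1.0, 101.0)        # e.g. node out-degree versus rank
      degree = 300.0 * rank ** -0.8       # synthetic data, exponent -0.8
      print(fit_power_law(rank, degree))  # ~(-0.8, 300.0, -1.0)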
    
    Fan, L., Cao, P., Almeida, J. & Broder, A. Summary cache: A scalable wide-area Web cache sharing protocol {2000} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {8}({3}), pp. {281-293} 
    article  
    Abstract: The sharing of caches among Web proxies is an important technique to reduce Web traffic and alleviate network bottlenecks. Nevertheless it is not widely deployed due to the overhead of existing protocols. In this paper we demonstrate the benefits of cache sharing, measure the overhead of the existing protocols, and propose a new protocol called ``summary cache.'' In this new protocol, each proxy keeps a summary of the cache directory of each participating proxy, and checks these summaries for potential hits before sending any queries. Two factors contribute to our protocol's low overhead: the summaries are updated only periodically, and the directory representations are very economical, as low as 8 bits per entry. Using trace-driven simulations and a prototype implementation, we show that, compared to existing protocols such as the internet cache protocol (ICP), summary cache reduces the number of intercache protocol messages by a factor of 25 to 60, reduces the bandwidth consumption by over 50%, and eliminates 30% to 95% of the protocol CPU overhead, all while maintaining almost the same cache hit ratio as ICP. Hence summary cache scales to a large number of proxies. (This is a revision of [18]; we add more data and analysis in this version.)
    BibTeX:
    @article{Fan2000,
      author = {Fan, L and Cao, P and Almeida, J and Broder, AZ},
      title = {Summary cache: A scalable wide-area Web cache sharing protocol},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {2000},
      volume = {8},
      number = {3},
      pages = {281-293}
    }
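
    The compact directory summaries can be illustrated with a small Bloom filter, the data structure commonly associated with this protocol; the parameters below are illustrative rather than the protocol's own:

      # Minimal Bloom-filter ``summary'' sketch: a proxy publishes a compact
      # bit array over its cache directory; peers probe it before sending
      # inter-cache queries. False positives possible, false negatives not.
      import hashlib

      class BloomSummary:
          def __init__(self, m_bits=8192, k_hashes=4):
              self.m, self.k = m_bits, k_hashes
              self.bits = bytearray(m_bits // 8)

          def _positions(self, key):
              for i in range(self.k):
                  digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
                  yield int.from_bytes(digest[:8], "big") % self.m

          def add(self, url):
              for p in self._positions(url):
                  self.bits[p // 8] |= 1 << (p % 8)

          def might_contain(self, url):
              return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(url))

      s = BloomSummary()
      s.add("http://example.com/a")
      print(s.might_contain("http://example.com/a"))  # True
      print(s.might_contain("http://example.com/b"))  # almost surely False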
    
    Feldmann, A. & Whitt, W. Fitting mixtures of exponentials to long-tail distributions to analyze network performance models {1998} PERFORMANCE EVALUATION
    Vol. {31}({3-4}), pp. {245-279} 
    article  
    Abstract: Traffic measurements from communication networks have shown that many quantities characterizing network performance have long-tail probability distributions, i.e., with tails that decay more slowly than exponentially. File lengths, call holding times, scene lengths in MPEG video streams, and intervals between connection requests in Internet traffic all have been found to have long-tail distributions, being well described by distributions such as the Pareto and Weibull. It is known that long-tail distributions can have a dramatic effect upon performance, e.g., long-tail service-time distributions cause long-tail waiting-time distributions in queues, but it is often difficult to describe this effect in detail, because performance models with component long-tail distributions tend to be difficult to analyze. We address this problem by developing an algorithm for approximating a long-tail distribution by a hyperexponential distribution (a finite mixture of exponentials). We first prove that, in principle, it is possible to approximate distributions from a large class, including the Pareto and Weibull distributions, arbitrarily closely by hyperexponential distributions. Then we develop a specific fitting algorithm. Our fitting algorithm is recursive over time scales, starting with the largest time scale. At each stage, an exponential component is fit in the largest remaining time scale and then the fitted exponential component is subtracted from the distribution. Even though a mixture of exponentials has an exponential tail, it can match a long-tail distribution in the regions of primary interest when there are enough exponential components. When a good fit is achieved, the approximating hyperexponential distribution inherits many of the difficulties of the original long-tail distribution; e.g., it is still difficult to obtain reliable estimates from simulation experiments. However, some difficulties are avoided; e.g., it is possible to solve some queueing models that could not be solved before. We give examples showing that the fitting procedure is effective, both for directly matching a long-tail distribution and for predicting the performance in a queueing model with a long-tail service-time distribution. (C) 1998 Elsevier Science B.V.
    BibTeX:
    @article{Feldmann1998,
      author = {Feldmann, A and Whitt, W},
      title = {Fitting mixtures of exponentials to long-tail distributions to analyze network performance models},
      journal = {PERFORMANCE EVALUATION},
      year = {1998},
      volume = {31},
      number = {3-4},
      pages = {245-279}
    }
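
    The recursive-over-time-scales idea can be sketched, with the caveat that the following greedy Python simplification is an editorial illustration and not the paper's exact algorithm:

      # Greedy sketch: repeatedly fit one exponential component to the
      # largest remaining time scale of the complementary cdf, subtract it,
      # and recurse on the smaller scales.
      import numpy as np

      def fit_hyperexp(xs, ccdf, k=3):
          comps, resid = [], ccdf.astype(float).copy()
          for _ in range(k):
              tail = slice(int(0.7 * len(xs)), len(xs))   # current largest scale
              ok = resid[tail] > 0
              slope, intercept = np.polyfit(xs[tail][ok], np.log(resid[tail][ok]), 1)
              p, theta = np.exp(intercept), -1.0 / slope
              comps.append((p, theta))                    # weight, component mean
              resid = np.clip(resid - p * np.exp(-xs / theta), 0.0, None)
              cut = int(0.7 * len(xs))                    # shrink to smaller scales
              xs, resid = xs[:cut], resid[:cut]
          return comps

      xs = np.linspace(0.01, 50.0, 2000)
      print(fit_hyperexp(xs, np.exp(-np.sqrt(xs))))       # Weibull(shape 0.5) tail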
    
    Feng, W., Shin, K., Kandlur, D. & Saha, D. The blue active queue management algorithms {2002} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {10}({4}), pp. {513-528} 
    article DOI  
    Abstract: In order to stem the increasing packet loss rates caused by an exponential increase in network traffic, the IETF has been considering the deployment of active queue management techniques such as RED [14]. While active queue management can potentially reduce packet loss rates in the Internet, we show that current techniques are ineffective in preventing high loss rates. The inherent problem with these queue management algorithms is that they use queue lengths as the indicator of the severity of congestion. In light of this observation, a fundamentally different active queue management algorithm, called BLUE, is proposed, implemented, and evaluated. BLUE uses packet loss and link idle events to manage congestion. Using both simulation and controlled experiments, BLUE is shown to perform significantly better than RED, both in terms of packet loss rates and buffer size requirements in the network. As an extension to BLUE, a novel technique based on Bloom filters [2] is described for enforcing fairness among a large number of flows. In particular, we propose and evaluate Stochastic Fair BLUE (SFB), a queue management algorithm which can identify and rate-limit nonresponsive flows using a very small amount of state information.
    BibTeX:
    @article{Feng2002,
      author = {Feng, WC and Shin, KG and Kandlur, DD and Saha, D},
      title = {The blue active queue management algorithms},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {2002},
      volume = {10},
      number = {4},
      pages = {513-528},
      doi = {{10.1109/TNET.2002.801399}}
    }
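
    The abstract's description of BLUE translates almost directly into code: the marking probability rises on packet loss, falls on link idle, and updates are rate-limited by a freeze time. A minimal Python sketch (the constants are illustrative, not the paper's tuned values):

      # BLUE marking-probability update sketch, driven by loss and idle
      # events rather than queue length.
      import time

      class Blue:
          def __init__(self, d1=0.02, d2=0.002, freeze_time=0.1):
              self.p, self.d1, self.d2 = 0.0, d1, d2
              self.freeze, self.last = freeze_time, 0.0

          def _ready(self):
              now = time.monotonic()
              if now - self.last >= self.freeze:  # rate-limit updates
                  self.last = now
                  return True
              return False

          def on_packet_loss(self):     # queue overflow: mark more aggressively
              if self._ready():
                  self.p = min(1.0, self.p + self.d1)

          def on_link_idle(self):       # link under-utilised: back off
              if self._ready():
                  self.p = max(0.0, self.p - self.d2)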
    
    Fiegler, H., Carr, P., Douglas, E., Burford, D., Hunt, S., Smith, J., Vetrie, D., Gorman, P., Tomlinson, I. & Carter, N. DNA microarrays for comparative genomic hybridization based on DOP-PCR amplification of BAC and PAC clones {2003} GENES CHROMOSOMES & CANCER
    Vol. {36}({4}), pp. {361-374} 
    article DOI  
    Abstract: We have designed DOP-PCR primers specifically for the amplification of large insert clones for use in the construction of DNA microarrays. A bioinformatic approach was used to construct primers that were efficient in the general amplification of human DNA but were poor at amplifying E. coli DNA, a common contaminant of DNA preparations from large insert clones. We chose the three most selective primers for use in printing DNA microarrays. DNA combined from the amplification of large insert clones by use of these three primers and spotted onto glass slides showed more than a sixfold increase in the human to E. coli hybridization ratio when compared to the standard DOP-PCR primer, 6MW. The microarrays reproducibly delineated previously characterized gains and deletions in a cancer cell line and identified a small gain not detected by use of conventional CGH. We also describe a method for the bulk testing of the hybridization characteristics of chromosome-specific clones spotted on microarrays by use of DNA amplified from flow-sorted chromosomes. Finally, we describe a set of clones selected from the publicly available Golden Path of the human genome at 1-Mb intervals and a view in the Ensembl genome browser from which data required for the use of these clones in array CGH and other experiments can be downloaded across the Internet. (C) 2003 Wiley-Liss, Inc.
    BibTeX:
    @article{Fiegler2003,
      author = {Fiegler, H and Carr, P and Douglas, EJ and Burford, DC and Hunt, S and Smith, J and Vetrie, D and Gorman, P and Tomlinson, IPM and Carter, NP},
      title = {DNA microarrays for comparative genomic hybridization based on DOP-PCR amplification of BAC and PAC clones},
      journal = {GENES CHROMOSOMES & CANCER},
      year = {2003},
      volume = {36},
      number = {4},
      pages = {361-374},
      doi = {{10.1002/gcc.10155}}
    }
    
    Fink, A., Campbell, D., Mentzer, R., Henderson, W., Daley, J., Bannister, J., Hur, K. & Khuri, S. The National Surgical Quality Improvement Program in non-veterans administration hospitals - Initial demonstration of feasibility {2002} ANNALS OF SURGERY
    Vol. {236}({3}), pp. {344-354} 
    article DOI  
    Abstract: Objective To assess the feasibility of implementing the National Surgical Quality Improvement Program (NSQIP) methodology in non-VA hospitals. Summary Background Data Using data adjusted for patient preoperative risk, the NSQIP compares the performance of all VA hospitals performing major surgery and anonymously compares these hospitals using the ratio of observed to expected adverse events. These results are provided to each hospital and used to identify areas for improvement. Since the NSQIP's inception in 1994, the VA has reported consistent improvements in all surgery performance measures. Given the success of the NSQIP within the VA, as well as the lack of a comparable system in non-VA hospitals, this pilot study was undertaken to test the applicability of the NSQIP models and methodology in the nonfederal sector. Methods Beginning in 1999, three academic medical centers (Emory University, Atlanta, GA; University of Michigan, Ann Arbor, MI; University of Kentucky, Lexington, KY) volunteered the time of a dedicated surgical nurse reviewer who was trained in NSQIP methodology. At each academic center, these nurse reviewers used NSQIP protocols to abstract clinical data from general surgery and vascular surgery patients. Data were manually collected and then transmitted via the Internet to a secure web site developed by the NSQIP. These data were compared to the data for general and vascular surgery patients collected during a concurrent time period (10/99 to 9/00) within the VA by the NSQIP. Logistic regression models were developed for both non-VA and VA hospital data. To assess the models' predictive values, C-indices (0.5 = no prediction; 1.0 = perfect prediction) were calculated after applying the models to the non-VA as well as the VA databases. Results Data from 2,747 (general surgery 2,251; vascular surgery 496) non-VA hospital cases were compared to data from 41,360 (general surgery 31,393; vascular surgery 9,967) VA cases. The bivariate relationships between individual risk factors and 30-day mortality or morbidity were similar in the non-VA and VA patient populations for over 66% of the risk variables. C-indices of 0.942 (general surgery), 0.915 (vascular surgery), and 0.934 (general plus vascular surgery) were obtained following application of the VA NSQIP mortality model to the non-VA patient data. Lower C-indices (0.778, general surgery; 0.638, vascular surgery; 0.760, general plus vascular surgery) were obtained following application of the VA NSQIP morbidity model to the non-VA patient data. Although the non-VA sample size was smaller than the VA, preliminary analysis suggested no differences in risk-adjusted mortality between the non-VA and VA cohorts. Conclusions With some adjustments, the NSQIP methodology can be implemented and generates reasonable predictive models within non-VA hospitals.
    BibTeX:
    @article{Fink2002,
      author = {Fink, AS and Campbell, DA and Mentzer, RM and Henderson, WG and Daley, J and Bannister, J and Hur, K and Khuri, SF},
      title = {The National Surgical Quality Improvement Program in non-veterans administration hospitals - Initial demonstration of feasibility},
      journal = {ANNALS OF SURGERY},
      year = {2002},
      volume = {236},
      number = {3},
      pages = {344-354},
      note = {122nd Annual Meeting of the American-Surgical-Association, HOT SPRINGS, VIRGINIA, APR 24-27, 2002},
      doi = {{10.1097/01.SLA.0000027082.79556.55}}
    }
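
    The C-index used to assess these models is simple to compute. In the small Python sketch below (illustrative data), it is the fraction of (event, non-event) pairs in which the event case received the higher predicted risk, with ties counting one half:

      # Minimal C-index (concordance) sketch for a binary-outcome risk model.
      def c_index(risks, outcomes):
          pairs = concordant = 0.0
          for i, (ri, oi) in enumerate(zip(risks, outcomes)):
              for rj, oj in zip(risks[i + 1:], outcomes[i + 1:]):
                  if oi == oj:
                      continue              # only event/non-event pairs count
                  pairs += 1
                  hi, lo = (ri, rj) if oi else (rj, ri)
                  concordant += 1.0 if hi > lo else 0.5 if hi == lo else 0.0
          return concordant / pairs

      print(c_index([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))  # 0.75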
    
    Floyd, S. & Fall, K. Promoting the use of end-to-end congestion control in the Internet {1999} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {7}({4}), pp. {458-472} 
    article  
    Abstract: This paper considers the potentially negative impacts of an increasing deployment of non-congestion-controlled best-effort traffic on the Internet. These negative impacts range from extreme unfairness against competing TCP traffic to the potential for congestion collapse. To promote the inclusion of end-to-end congestion control in the design of future protocols using best-effort traffic, we argue that router mechanisms are needed to identify and restrict the bandwidth of selected high-bandwidth best-effort flows in times of congestion. The paper discusses several general approaches for identifying those flows suitable for bandwidth regulation. These approaches are to identify a high-bandwidth flow in times of congestion as unresponsive, ``not TCP-friendly,'' or simply using disproportionate bandwidth. A flow that is not ``TCP-friendly'' is one whose long-term arrival rate exceeds that of any conformant TCP in the same circumstances. An unresponsive flow is one failing to reduce its offered load at a router in response to an increased packet drop rate, and a disproportionate-bandwidth flow is one that uses considerably more bandwidth than other flows in a time of congestion.
    BibTeX:
    @article{Floyd1999,
      author = {Floyd, S and Fall, K},
      title = {Promoting the use of end-to-end congestion control in the Internet},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {1999},
      volume = {7},
      number = {4},
      pages = {458-472}
    }
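
    A flow's ``TCP-friendliness'' can be checked against the widely cited steady-state TCP throughput bound of roughly 1.22*B/(R*sqrt(p)) for packet size B, round-trip time R, and drop rate p (the exact constant depends on the derivation used). A Python sketch with hypothetical numbers:

      # Simplified steady-state TCP-friendly rate bound (bytes/second).
      from math import sqrt

      def tcp_friendly_rate(packet_bytes, rtt_s, loss_rate):
          return 1.22 * packet_bytes / (rtt_s * sqrt(loss_rate))

      # 1500-byte packets, 100 ms RTT, 1% loss -> ~183 kB/s; a best-effort
      # flow sending persistently faster than this under the same conditions
      # would not be considered TCP-friendly.
      print(round(tcp_friendly_rate(1500, 0.1, 0.01)))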
    
    FLOYD, S. & JACOBSON, V. LINK-SHARING AND RESOURCE-MANAGEMENT MODELS FOR PACKET NETWORKS {1995} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {3}({4}), pp. {365-386} 
    article  
    Abstract: This paper discusses the use of link-sharing mechanisms in packet networks and presents algorithms for hierarchical link-sharing. Hierarchical link-sharing allows multiple agencies, protocol families, or traffic types to share the bandwidth on a link in a controlled fashion. Link-sharing and real-time services both require resource management mechanisms at the gateway. Rather than requiring a gateway to implement separate mechanisms for link-sharing and real-time services, the approach in this paper is to view link-sharing and real-time service requirements as simultaneous, and in some respects complementary, constraints at a gateway that can be implemented with a unified set of mechanisms. While it is not possible to completely predict the requirements that might evolve in the Internet over the next decade, we argue that controlled link-sharing is an essential component that can provide gateways with the flexibility to accommodate emerging applications and network protocols.
    BibTeX:
    @article{FLOYD1995,
      author = {FLOYD, S and JACOBSON, V},
      title = {LINK-SHARING AND RESOURCE-MANAGEMENT MODELS FOR PACKET NETWORKS},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {1995},
      volume = {3},
      number = {4},
      pages = {365-386}
    }
    
    Floyd, S. & Paxson, V. Difficulties in simulating the Internet {2001} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {9}({4}), pp. {392-403} 
    article  
    Abstract: Simulating how the global Internet behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, the ``mix'' of different applications used at a site, and the levels of congestion seen on different links. We discuss two key strategies for developing meaningful simulations in the face of these difficulties: searching for invariants and judiciously exploring the simulation parameter space. We finish with a brief look at a collaborative effort within the research community to develop a common network simulator.
    BibTeX:
    @article{Floyd2001,
      author = {Floyd, S and Paxson, V},
      title = {Difficulties in simulating the Internet},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {2001},
      volume = {9},
      number = {4},
      pages = {392-403},
      note = {1997 Winter Simulation Conference, ATLANTA, GEORGIA, 1997}
    }
    
    Flynn, M., McNeil, D., Maloff, B., Mutasingwa, D., Wu, M., Ford, C. & Tough, S. Reducing obesity and related chronic disease risk in children and youth: a synthesis of evidence with `best practice' recommendations {2006} OBESITY REVIEWS
    Vol. {7}({Suppl. 1}), pp. {7-66} 
    article  
    Abstract: Childhood obesity is a global epidemic and rising trends in overweight and obesity are apparent in both developed and developing countries. Available estimates for the period between the 1980s and 1990s show the prevalence of overweight and obesity in children increased by a magnitude of two to five times in developed countries (e.g. from 11% to over 30% in boys in Canada), and up to almost four times in developing countries (e.g. from 4% to 14% in Brazil). The goal of this synthesis research study was to develop best practice recommendations based on a systematic approach to finding, selecting and critically appraising programmes addressing prevention and treatment of childhood obesity and related risk of chronic diseases. An international panel of experts in areas of relevance to obesity provided guidance for the study. This synthesis research encompassed a comprehensive search of medical/academic and grey literature and the Internet covering the years 1982-2003. The appraisal approach developed to identify best practice was unique, in that it considered not only methodological rigour, but also population health, immigrant health and programme development/evaluation perspectives in the assessment. Scores were generated based on pre-determined criteria with programmes scoring in the top tertile of the scoring range in any one of the four appraisal categories included for further examination. The synthesis process included identification of gaps and an analysis and summary of programme development and programme effectiveness to enable conclusions to be drawn and recommendations to be made. The results from the library database searches (13 158 hits), the Internet search and key informant surveys were reduced to a review of 982 reports of which 500 were selected for critical appraisal. In total 158 articles, representing 147 programmes, were included for further analysis. The majority of reports were included based on high appraisal scores in programme development and evaluation with limited numbers eligible based on scores in other categories of appraisal. While no single programme emerged as a model of best practice, synthesis of included programmes provided rich information on elements that represent innovative rather than best practice under particular circumstances that are dynamic (changing according to population subgroups, age, ethnicity, setting, leadership, etc.). Thus the findings of this synthesis review identify areas for action, opportunities for programme development and research priorities to inform the development of best practice recommendations that will reduce obesity and chronic disease risk in children and youth. A lack of programming to address the particular needs of subgroups of children and youth emerged in this review. Although immigrants new to developed countries may be more vulnerable to the obesogenic environment, no programmes were identified that specifically targeted their potentially specialized needs (e.g. different food supply in a new country). Children 0-6 years of age and males represented other population subgroups where obesity prevention programmes and evidence of effectiveness were limited.
These gaps are of concern because (i) the pre-school years may be a critical period for obesity prevention as indicated by the association of the adiposity rebound and obesity in later years; and (ii) although the growing prevalence of obesity affects males and females equally, males may be more vulnerable to associated health risks such as cardiovascular disease. Other gaps in knowledge identified during synthesis include a limited number of interventions in home and community settings and a lack of upstream population-based interventions. The shortage of programmes in community and home settings limits our understanding of the effectiveness of interventions in these environments, while the lack of upstream investment indicates an opportunity to develop more upstream and population-focused interventions to balance and extend the current emphasis on individual-based programmes. The evidence reviewed indicates that current programmes lead to short-term improvements in outcomes relating to obesity and chronic disease prevention with no adverse effects noted. This supports the continuation and further development of programmes currently directed at children and youth, as further evidence for best practice accumulates. In this synthesis, schools were found to be a critical setting for programming where health status indicators, such as body composition, chronic disease risk factors and fitness, can all be positively impacted. Engagement in physical activity emerged as a critical intervention in obesity prevention and reduction programmes. While many programmes in the review had the potential to integrate chronic disease prevention, few did; therefore efforts could be directed towards better integration of chronic disease prevention programmes to minimize duplication and optimize resources. Programmes require sustained long-term resources to facilitate comprehensive evaluation that will ascertain if long-term impact such as sustained normal weight is maintained. Furthermore, involving stakeholders in programme design, implementation and evaluation could be crucial to the success of interventions, helping to ensure that needs are met. A number of methodological issues related to the assessment of obesity intervention and prevention programmes were identified and offer insight into how research protocols can be enhanced to strengthen evidence for obesity interventions. Further research is required to understand the merits of the various forms in which interventions (singly and in combination) are delivered and in which circumstances they are effective. There is a critical need for the development of consistent indicators to ensure that comparisons of programme outcomes can be made to better inform best practice.
    BibTeX:
    @article{Flynn2006,
      author = {Flynn, MAT and McNeil, DA and Maloff, B and Mutasingwa, D and Wu, M and Ford, C and Tough, SC},
      title = {Reducing obesity and related chronic disease risk in children and youth: a synthesis of evidence with `best practice' recommendations},
      journal = {OBESITY REVIEWS},
      year = {2006},
      volume = {7},
      number = {Suppl. 1},
      pages = {7-66}
    }
    
    Foulds, J., Ramstrom, L., Burke, M. & Fagerstrom, K. Effect of smokeless tobacco (snus) on smoking and public health in Sweden {2003} TOBACCO CONTROL
    Vol. {12}({4}), pp. {349-359} 
    article  
    Abstract: Objective: To review the evidence on the effects of moist smokeless tobacco (snus) on smoking and ill health in Sweden. Method: Narrative review of published papers and other data sources (for example, conference abstracts and internet based information) on snus use, use of other tobacco products, and changes in health status in Sweden. Results: Snus is manufactured and stored in a manner that causes it to deliver lower concentrations of some harmful chemicals than other tobacco products, although it can deliver high doses of nicotine. It is dependence forming, but does not appear to cause cancer or respiratory diseases. It may cause a slight increase in cardiovascular risks and is likely to be harmful to the unborn fetus, although these risks are lower than those caused by smoking. There has been a larger drop in male daily smoking (from 40% in 1976 to 15% in 2002) than female daily smoking (34% in 1976 to 20% in 2002) in Sweden, with a substantial proportion (around 30%) of male ex-smokers using snus when quitting smoking. Over the same time period, rates of lung cancer and myocardial infarction have dropped significantly faster among Swedish men than women and remain at low levels as compared with other developed countries with a long history of tobacco use. Conclusions: Snus availability in Sweden appears to have contributed to the unusually low rates of smoking among Swedish men by helping them transfer to a notably less harmful form of nicotine dependence.
    BibTeX:
    @article{Foulds2003,
      author = {Foulds, J and Ramstrom, L and Burke, M and Fagerstrom, K},
      title = {Effect of smokeless tobacco (snus) on smoking and public health in Sweden},
      journal = {TOBACCO CONTROL},
      year = {2003},
      volume = {12},
      number = {4},
      pages = {349-359}
    }
    
    Fraleigh, C., Moon, S., Lyles, B., Cotton, C., Khan, M., Moll, D., Rockell, R., Seely, T. & Diot, C. Packet-level traffic measurements from the Sprint IP backbone {2003} IEEE NETWORK
    Vol. {17}({6}), pp. {6-16} 
    article  
    Abstract: Network traffic measurements provide essential data for networking research and network management. In this article we describe a passive monitoring system designed to capture GPS synchronized packet-level traffic measurements on OC-3, OC-12, and OC-48 links. Our system is deployed in four POPs in the Sprint IP backbone. Measurement data is stored on a 10 Tbyte storage area network and analyzed on a computing cluster. We present a set of results to both demonstrate the strength of the system and identify recent changes in Internet traffic characteristics. The results include traffic workload analyses of TCP flow round-trip times, out-of-sequence packet rates, and packet delay. We also show that some links no longer carry Web traffic as their dominant component to the benefit of file sharing and media streaming. On most links we monitored, TCP flows exhibit low out-of-sequence packet rates, and backbone delays are dominated by the speed of light.
    BibTeX:
    @article{Fraleigh2003,
      author = {Fraleigh, C and Moon, S and Lyles, B and Cotton, C and Khan, M and Moll, D and Rockell, R and Seely, T and Diot, C},
      title = {Packet-level traffic measurements from the Sprint IP backbone},
      journal = {IEEE NETWORK},
      year = {2003},
      volume = {17},
      number = {6},
      pages = {6-16}
    }
    
    Fu, C. & Liew, S. TCP Veno: TCP enhancement for transmission over wireless access networks {2003} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {21}({2}), pp. {216-228} 
    article DOI  
    Abstract: Wireless access networks in the form of wireless local area networks, home networks, and cellular networks are becoming an integral part of the Internet. Unlike wired networks, random packet loss due to bit errors is not negligible in wireless networks, and this causes significant performance degradation of transmission control protocol (TCP). We propose and study a novel end-to-end congestion control mechanism called TCP Veno that is simple and effective for dealing with random packet loss. A key ingredient of Veno is that it monitors the network congestion level and uses that information to decide whether packet losses are likely to be due to congestion or random bit errors. Specifically: 1) it refines the multiplicative decrease algorithm of TCP Reno-the most widely deployed TCP version in practice-by adjusting the slow-start threshold according to the perceived network congestion level rather than a fixed drop factor and 2) it refines the linear increase algorithm so that the connection can stay longer in an operating region in which the network bandwidth is fully utilized. Based on extensive network testbed experiments and live Internet measurements, we show that Veno can achieve significant throughput improvements without adversely affecting other concurrent TCP connections, including other concurrent Reno connections. In typical wireless access networks with 1% random packet loss rate, throughput improvement of up to 80% can be demonstrated. A salient feature of Veno is that it modifies only the sender-side protocol of Reno without changing the receiver-side protocol stack.
    BibTeX:
    @article{Fu2003,
      author = {Fu, CP and Liew, SC},
      title = {TCP Veno: TCP enhancement for transmission over wireless access networks},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {2003},
      volume = {21},
      number = {2},
      pages = {216-228},
      doi = {{10.1109/JSAC.2002.807336}}
    }
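
    Veno's central step is the loss-type heuristic: a Vegas-style backlog estimate N = (Expected - Actual) * BaseRTT decides whether a loss looks random (N below a threshold beta) or congestive, and the window cut is chosen accordingly. The sketch below reflects one reading of the paper's rules in isolation; integration into a real TCP stack is, of course, far more involved:

        BETA = 3  # backlog threshold in packets (the paper's suggested value)

        def backlog(cwnd, base_rtt, rtt):
            # N = (Expected - Actual) * BaseRTT, measured in packets
            return (cwnd / base_rtt - cwnd / rtt) * base_rtt

        def ssthresh_after_loss(cwnd, base_rtt, rtt):
            if backlog(cwnd, base_rtt, rtt) < BETA:
                return cwnd * 4 // 5   # loss judged random: mild reduction
            return cwnd // 2           # loss judged congestive: Reno-style halving

        print(ssthresh_after_loss(40, 0.100, 0.105))  # small backlog => 32, not 20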
    
    Fuggetta, A., Picco, G. & Vigna, G. Understanding code mobility {1998} IEEE TRANSACTIONS ON SOFTWARE ENGINEERING
    Vol. {24}({5}), pp. {342-361} 
    article  
    Abstract: The technologies, architectures, and methodologies traditionally used to develop distributed applications exhibit a variety of limitations and drawbacks when applied to large scale distributed settings (e.g., the Internet). In particular, they fail in providing the desired degree of configurability, scalability, and customizability. To address these issues, researchers are investigating a variety of innovative approaches. The most promising and intriguing ones are those based on the ability of moving code across the nodes of a network, exploiting the notion of mobile code. As an emerging research field, code mobility is generating a growing body of scientific literature and industrial developments. Nevertheless, the field is still characterized by the lack of a sound and comprehensive body of concepts and terms. As a consequence, it is rather difficult to understand, assess, and compare the existing approaches. In turn, this limits our ability to fully exploit them in practice, and to further promote the research work on mobile code. Indeed, a significant symptom of this situation is the lack of a commonly accepted and sound definition of the term ``mobile code'' itself. This paper presents a conceptual framework for understanding code mobility. The framework is centered around a classification that introduces three dimensions: technologies, design paradigms, and applications. The contribution of the paper is two-fold. First, it provides a set of terms and concepts to understand and compare the approaches based on the notion of mobile code. Second, it introduces criteria and guidelines that support the developer in the identification of the classes of applications that can leverage off of mobile code, in the design of these applications, and, finally, in the selection of the most appropriate implementation technologies. The presentation of the classification is intertwined with a review of state-of-the-art in the field. Finally, the use of the classification is exemplified in a case study.
    BibTeX:
    @article{Fuggetta1998,
      author = {Fuggetta, A and Picco, GP and Vigna, G},
      title = {Understanding code mobility},
      journal = {IEEE TRANSACTIONS ON SOFTWARE ENGINEERING},
      year = {1998},
      volume = {24},
      number = {5},
      pages = {342-361}
    }
    
    Fung, A.E., Rosenfeld, P.J. & Reichel, E. The International Intravitreal Bevacizumab Safety Survey: using the Internet to assess drug safety worldwide {2006} BRITISH JOURNAL OF OPHTHALMOLOGY
    Vol. {90}({11}), pp. {1344-1349} 
    article DOI  
    Abstract: Aim: Off-label intravitreal injections of bevacizumab (Avastin) have been given for the treatment of neovascular and exudative ocular diseases since May 2005. Since then, the use of intravitreal bevacizumab has spread worldwide, but the drug-related adverse events associated with its use have been reported only in a few retrospective reviews. The International Intravitreal Bevacizumab Safety Survey was initiated to gather timely information regarding adverse events from doctors around the world via the internet. Methods: An internet-based survey was designed to identify adverse events associated with intravitreal bevacizumab treatment. The survey web address was disseminated to the international vitreoretinal community via email. Rates of adverse events were calculated from participant responses. Results: 70 centres from 12 countries reported on 7113 injections given to 5228 patients. Doctor-reported adverse events included corneal abrasion, lens injury, endophthalmitis, retinal detachment, inflammation or uveitis, cataract progression, acute vision loss, central retinal artery occlusion, subretinal haemorrhage, retinal pigment epithelium tears, blood pressure elevation, transient ischaemic attack, cerebrovascular accident and death. None of the adverse event rates exceeded 0.21%. Conclusion: Intravitreal bevacizumab is being used globally for ocular diseases. Self-reporting of adverse events after intravitreal bevacizumab injections did not show an increased rate of potential drug-related ocular or systemic events. These short-term results suggest that intravitreal bevacizumab seems to be safe.
    BibTeX:
    @article{Fung2006,
      author = {Fung, A. E. and Rosenfeld, P. J. and Reichel, E.},
      title = {The International Intravitreal Bevacizumab Safety Survey: using the Internet to assess drug safety worldwide},
      journal = {BRITISH JOURNAL OF OPHTHALMOLOGY},
      year = {2006},
      volume = {90},
      number = {11},
      pages = {1344-1349},
      note = {Annual Meeting of the Association-for-Research-in-Vision-and-Ophthalmology, Ft Lauderdale, FL, MAY, 2006},
      doi = {{10.1136/bjo.2006.099598}}
    }
    
    Gagliardi, A. & Jadad, A. Examination of instruments used to rate quality of health information on the internet: chronicle of a voyage with an unclear destination {2002} BRITISH MEDICAL JOURNAL
    Vol. {324}({7337}), pp. {569-573} 
    article  
    Abstract: Objective This study updates work published in 1998, which found that of 47 rating instruments appearing on websites offering health information, 14 described how they were developed, five provided instructions for use, and none reported the interobserver reliability and construct validity of the measurements. Design All rating instrument sites noted in the original study were visited to ascertain whether they were still operating. New rating instruments were identified by duplicating and enhancing the comprehensive search of the internet and the medical and information science literature used in the previous study. Eligible instruments were evaluated as in the original study. Results 98 instruments used to assess the quality of websites in the past five years were identified. Many of the rating instruments identified in the original study were no longer available. Of 51 newly identified rating instruments, only five provided some information by which they could be evaluated. As with the six sites identified in the original study that remained available, none of these five instruments seemed to have been validated. Conclusions Many incompletely developed rating instruments continue to appear on websites providing health information, even when the organisations that gave rise to those instruments no longer exist. Many researchers, organisations, and website developers are exploring alternative ways of helping people to find and use high quality information available on the internet. Whether they are needed or sustainable and whether they make a difference remain to be shown.
    BibTeX:
    @article{Gagliardi2002,
      author = {Gagliardi, A and Jadad, AR},
      title = {Examination of instruments used to rate quality of health information on the internet: chronicle of a voyage with an unclear destination},
      journal = {BRITISH MEDICAL JOURNAL},
      year = {2002},
      volume = {324},
      number = {7337},
      pages = {569-573}
    }
    
    Gambini, P., Renaud, M., Guillemot, C., Callegati, F., Andonovic, I., Bostica, B., Chiaroni, D., Corazza, G., Danielsen, S., Gravey, P., Hansen, P., Henry, M., Janz, C., Kloch, A., Krahenbuhl, R., Raffaelli, C., Schilling, M., Talneau, A. & Zucchelli, L. Transparent optical packet switching: Network architecture and demonstrators in the KEOPS project {1998} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {16}({7}), pp. {1245-1259} 
    article  
    Abstract: This paper reviews the work carried out in the ACTS KEOPS (Keys to Optical Packet Switching) project, describing the results obtained to date. The main objective of the project is the definition, development, and assessment of optical packet switching and routing networks, capable of providing transparency to the payload bit rate, using optical packets of fixed duration and low bit rate headers in order to enable easier processing at the network/node interfaces. The feasibility of the KEOPS concept is assessed by modeling, laboratory experiments, and testbed implementation of optical packet switching nodes and network/node interfacing blocks, including a fully equipped demonstrator. The demonstration relies on advanced optoelectronic components, developed within the project, which are briefly described.
    BibTeX:
    @article{Gambini1998,
      author = {Gambini, P and Renaud, M and Guillemot, C and Callegati, F and Andonovic, I and Bostica, B and Chiaroni, D and Corazza, G and Danielsen, SL and Gravey, P and Hansen, PB and Henry, M and Janz, C and Kloch, A and Krahenbuhl, R and Raffaelli, C and Schilling, M and Talneau, A and Zucchelli, L},
      title = {Transparent optical packet switching: Network architecture and demonstrators in the KEOPS project},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {1998},
      volume = {16},
      number = {7},
      pages = {1245-1259}
    }
    
    Gao, L. On inferring autonomous system relationships in the Internet {2001} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {9}({6}), pp. {733-745} 
    article  
    Abstract: The Internet consists of a rapidly increasing number of hosts interconnected by constantly evolving networks of links and routers. Interdomain routing in the Internet is coordinated by the Border Gateway Protocol (BGP). BGP allows each autonomous system (AS) to choose its own administrative policy in selecting routes and propagating reachability information to others. These routing policies are constrained by the contractual commercial agreements between administrative domains. For example, an AS sets its policy so that it does not provide transit services between its providers. Such policies imply that AS relationships are an important aspect of Internet structure. We propose an augmented AS graph representation that classifies AS relationships into customer-provider, peering, and sibling relationships. We classify the types of routes that can appear in BGP routing tables based on the relationships between the ASs in the path and present heuristic algorithms that infer AS relationships from BGP routing tables. The algorithms are tested on publicly available BGP routing tables. We verify our inference results with AT&T internal information on its relationship with neighboring ASs. As much as 99.1% of our inference results are confirmed by the AT&T internal information. We also verify our inferred sibling relationships with the information acquired from the WHOIS lookup service. More than half of our inferred sibling-to-sibling relationships are confirmed by the WHOIS lookup service. To the best of our knowledge, there has been no publicly available information about AS relationships and this is the first attempt in understanding and inferring AS relationships in the Internet. We show evidence that some routing table entries stem from router misconfigurations.
    BibTeX:
    @article{Gao2001,
      author = {Gao, LX},
      title = {On inferring autonomous system relationships in the Internet},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {2001},
      volume = {9},
      number = {6},
      pages = {733-745}
    }
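
    The core of the inference is degree-based: on each BGP AS path the highest-degree AS is taken as the top provider, edges walking up to it are inferred as customer-to-provider, and edges after it as provider-to-customer. The sketch below shows only this basic phase on toy paths; the paper's full algorithm adds separate passes for peering and sibling edges:

        from collections import defaultdict

        def infer_relationships(as_paths):
            degree = defaultdict(set)              # neighbor sets give AS degree
            for path in as_paths:
                for a, b in zip(path, path[1:]):
                    degree[a].add(b)
                    degree[b].add(a)
            rel = {}
            for path in as_paths:
                top = max(range(len(path)), key=lambda i: len(degree[path[i]]))
                for i, (a, b) in enumerate(zip(path, path[1:])):
                    rel[(a, b)] = "customer->provider" if i < top else "provider->customer"
            return rel

        paths = [[7018, 1239, 3356], [3356, 1239, 701]]   # toy AS paths
        print(infer_relationships(paths))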
    
    Garcia-Luna-Aceves, J. & Madruga, E. The core-assisted mesh protocol {1999} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {17}({8}), pp. {1380-1394} 
    article  
    Abstract: The core-assisted mesh protocol (CAMP) is introduced for multicast routing in ad hoc networks. CAMP generalizes the notion of core-based trees introduced for internet multicasting into multicast meshes that have much richer connectivity than trees. A shared multicast mesh is defined for each multicast group; the main goal of using such meshes is to maintain the connectivity of multicast groups even while network routers move frequently. CAMP consists of the maintenance of multicast meshes and loop-free packet forwarding over such meshes. Within the multicast mesh of a group, packets from any source in the group are forwarded along the reverse shortest path to the source, just as in traditional multicast protocols based on source-based trees. CAMP guarantees that within a finite time, every receiver of a multicast group has a reverse shortest path to each source of the multicast group. Multicast packets for a group are forwarded along the shortest paths from sources to receivers defined within the group's mesh. CAMP uses cores only to limit the traffic needed for a router to join a multicast group; the failure of cores does not stop packet forwarding or the process of maintaining the multicast meshes.
    BibTeX:
    @article{Garcia-Luna-Aceves1999,
      author = {Garcia-Luna-Aceves, JJ and Madruga, EL},
      title = {The core-assisted mesh protocol},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {1999},
      volume = {17},
      number = {8},
      pages = {1380-1394}
    }
    
    Garlaschelli, D., Caldarelli, G. & Pietronero, L. Universal scaling relations in food webs {2003} NATURE
    Vol. {423}({6936}), pp. {165-168} 
    article DOI  
    Abstract: The structure of ecological communities is usually represented by food webs(1-3). In these webs, we describe species by means of vertices connected by links representing the predations. We can therefore study different webs by considering the shape (topology) of these networks(4,5). Comparing food webs by searching for regularities is of fundamental importance, because universal patterns would reveal common principles underlying the organization of different ecosystems. However, features observed in small food webs(1-3,6) are different from those found in large ones(7-15). Furthermore, food webs (except in isolated cases(16,17)) do not share(18,19) general features with other types of network (including the Internet, the World Wide Web and biological webs). These features are a small-world character(4,5) and a scale-free (power-law) distribution of the degree(4,5) (the number of links per vertex). Here we propose to describe food webs as transportation networks(20) by extending to them the concept of allometric scaling(20-22) (how branching properties change with network size). We then decompose food webs into spanning trees and loop-forming links. We show that, whereas the number of loops varies significantly across real webs, spanning trees are characterized by universal scaling relations.
    BibTeX:
    @article{Garlaschelli2003,
      author = {Garlaschelli, D and Caldarelli, G and Pietronero, L},
      title = {Universal scaling relations in food webs},
      journal = {NATURE},
      year = {2003},
      volume = {423},
      number = {6936},
      pages = {165-168},
      doi = {{10.1038/nature01604}}
    }
    
    Ge, A., Callegati, F. & Tamil, L. On optical burst switching and self-similar traffic {2000} IEEE COMMUNICATIONS LETTERS
    Vol. {4}({3}), pp. {98-100} 
    article  
    Abstract: In this letter we consider burst switching for very high speed routing in the next generation Internet backbone. In this scenario, Internet Protocol (IP) packets to a given destination are collected in bursts at the network edges. We propose a burst assembly mechanism that can reduce the traffic autocorrelation or degree of self-similarity, and at the same time keep the delay due to burst formation limited at the network edges.
    BibTeX:
    @article{Ge2000,
      author = {Ge, A and Callegati, F and Tamil, LS},
      title = {On optical burst switching and self-similar traffic},
      journal = {IEEE COMMUNICATIONS LETTERS},
      year = {2000},
      volume = {4},
      number = {3},
      pages = {98-100}
    }
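
    A burst assembler of this kind buffers IP packets per destination and releases a burst when either a size threshold or an assembly timer fires, which is what bounds the edge delay. A toy sketch (the thresholds are illustrative, and the timer is checked only on packet arrivals rather than run as a real timeout):

        class BurstAssembler:
            def __init__(self, max_bytes=64000, max_delay_s=0.005):
                self.max_bytes, self.max_delay_s = max_bytes, max_delay_s
                self.buf, self.size, self.first_arrival = [], 0, None

            def add(self, packet_bytes, now_s):
                # Queue one packet; return a finished burst or None.
                if self.first_arrival is None:
                    self.first_arrival = now_s
                self.buf.append(packet_bytes)
                self.size += packet_bytes
                if (self.size >= self.max_bytes
                        or now_s - self.first_arrival >= self.max_delay_s):
                    return self.flush()
                return None

            def flush(self):
                burst, self.buf = self.buf, []
                self.size, self.first_arrival = 0, None
                return burst

        asm = BurstAssembler()
        for t in range(100):                       # 100 x 1500-byte packets
            burst = asm.add(1500, t * 0.0001)
            if burst:
                print(len(burst), "packets in burst")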
    
    Gefen, D. E-commerce: the role of familiarity and trust {2000} OMEGA-INTERNATIONAL JOURNAL OF MANAGEMENT SCIENCE
    Vol. {28}({6}), pp. {725-737} 
    article  
    Abstract: Familiarity is a precondition for trust, claims Luhmann [28. Luhmann N. Trust and power. Chichester, UK: Wiley, 1979 (translation from German)], and trust is a prerequisite of social behavior, especially regarding important decisions. This study examines this intriguing idea in the context of the E-commerce involved in inquiring about and purchasing books on the Internet. Survey data from 217 potential users support and extend this hypothesis. The data show that both familiarity with an Internet vendor and its processes and trust in the vendor influenced the respondents' intentions to inquire about books and their intentions to purchase them. Additionally, the data show that while familiarity indeed builds trust, it is primarily people's disposition to trust that affected their trust in the vendor. Implications for research and practice are discussed. (C) 2000 Elsevier Science Ltd. All rights reserved.
    BibTeX:
    @article{Gefen2000,
      author = {Gefen, D},
      title = {E-commerce: the role of familiarity and trust},
      journal = {OMEGA-INTERNATIONAL JOURNAL OF MANAGEMENT SCIENCE},
      year = {2000},
      volume = {28},
      number = {6},
      pages = {725-737}
    }
    
    Ghani, N., Dixit, S. & Wang, T. On IP-over-WDM integration {2000} IEEE COMMUNICATIONS MAGAZINE
    Vol. {38}({3}), pp. {72-84} 
    article  
    Abstract: Expanding Internet-based services are driving the need for evermore bandwidth in the network backbone. These needs will grow further as new real-time multimedia applications become more feasible and pervasive. Currently, there is no other technology on the horizon that can effectively meet such a demand for bandwidth in the transport infrastructure other than WDM technology. This technology enables incremental and quick provisioning up to and beyond two orders of magnitude of today's fiber bandwidth levels. This precludes the need to deploy additional cabling and having to contend with right-of-way issues, a key advantage. Hence, it is only natural that over time optical/WDM technology will migrate closer to the end users, from core to regional, metropolitan, and ultimately access networks. At present, WDM deployment is mostly point-to-point and uses SONET/SDH as the standard layer for interfacing to the higher layers of the protocol stack. However, large-scale efforts are underway to develop standards and products that will eliminate one or more of these intermediate layers (e.g., SONET/SDH, ATM) and run IP directly over the WDM layer. IP over WDM has been envisioned as the winning combination due to the ability of IP to be the common revenue-generating convergence sublayer and WDM as a bandwidth-rich transport sublayer. Various important concerns still need to be addressed regarding IP-WDM integration. These include lightpath routing coupled with tighter interworkings with IP routing and resource management protocols, survivability provisioning, framing/monitoring solutions, and others.
    BibTeX:
    @article{Ghani2000,
      author = {Ghani, N and Dixit, S and Wang, TS},
      title = {On IP-over-WDM integration},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {2000},
      volume = {38},
      number = {3},
      pages = {72-84}
    }
    
    Gibbens, R. & Kelly, F. Resource pricing and the evolution of congestion control {1999} AUTOMATICA
    Vol. {35}({12}), pp. {1969-1985} 
    article  
    Abstract: We describe ways in which the transmission control protocol of the Internet may evolve to support heterogeneous applications. We show that by appropriately marking packets at overloaded resources and by charging a fixed small amount for each mark received, end-nodes are provided with the necessary information and the correct incentive to use the network efficiently. (C) 1999 Elsevier Science Ltd. All rights reserved.
    BibTeX:
    @article{Gibbens1999,
      author = {Gibbens, RJ and Kelly, FP},
      title = {Resource pricing and the evolution of congestion control},
      journal = {AUTOMATICA},
      year = {1999},
      volume = {35},
      number = {12},
      pages = {1969-1985}
    }
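
    The pricing mechanism can be simulated in a few lines: each user adjusts its rate x by dx/dt = kappa*(w - x*p), where w is its willingness to pay per unit time and p the fraction of packets marked at the overloaded resource. The marking function and constants below are toy assumptions; the point is only that rates settle where spending on marks balances willingness to pay:

        KAPPA, W, C, DT = 0.5, 2.0, 10.0, 0.01   # gain, willingness to pay, capacity, step
        rates = [1.0, 5.0, 8.0]
        for _ in range(20000):
            y = sum(rates)
            p = max(0.0, 1.0 - C / y)            # toy marking probability at load y
            rates = [x + KAPPA * (W - x * p) * DT for x in rates]
        print([round(x, 2) for x in rates])      # users with equal w converge to equal rates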
    
    Giles, J. Internet encyclopaedias go head to head {2005} NATURE
    Vol. {438}({7070}), pp. {900-901} 
    article DOI  
    Abstract: The journal Nature performed a comparative analysis of scientific data entry accuracy between Wikipedia, a free online encyclopedia that anyone can edit, and Encyclopedia Britannica. The average science entry in Wikipedia contains around 4 inaccuracies; Britannica, about 3.
    BibTeX:
    @article{Giles2005,
      author = {Giles, J},
      title = {Internet encyclopaedias go head to head},
      journal = {NATURE},
      year = {2005},
      volume = {438},
      number = {7070},
      pages = {900-901},
      doi = {{10.1038/438900a}}
    }
    
    Girvan, M. & Newman, M. Community structure in social and biological networks {2002} PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
    Vol. {99}({12}), pp. {7821-7826} 
    article DOI  
    Abstract: A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known-a collaboration network and a food web-and find that it detects significant and informative community divisions in both cases.
    BibTeX:
    @article{Girvan2002,
      author = {Girvan, M and Newman, MEJ},
      title = {Community structure in social and biological networks},
      journal = {PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA},
      year = {2002},
      volume = {99},
      number = {12},
      pages = {7821-7826},
      doi = {{10.1073/pnas.122653799}}
    }
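
    The method itself is compact: repeatedly recompute edge betweenness and remove the highest-scoring edge, and communities appear as the connected components that split off. A sketch of a single split level using networkx (assumed available; the library also ships its own implementation of the full algorithm):

        import networkx as nx

        def girvan_newman_split(G):
            # Remove highest edge-betweenness edges until the graph splits once.
            g = G.copy()
            start = nx.number_connected_components(g)
            while nx.number_connected_components(g) == start:
                eb = nx.edge_betweenness_centrality(g)
                g.remove_edge(*max(eb, key=eb.get))
            return list(nx.connected_components(g))

        print(girvan_newman_split(nx.karate_club_graph()))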
    
    Godes, D. & Mayzlin, D. Using online conversations to study word-of-mouth communication {2004} MARKETING SCIENCE
    Vol. {23}({4}), pp. {545-560} 
    article DOI  
    Abstract: Managers are very interested in word-of-mouth communication because they believe that I product's success. is related to the word of mouth that it generates. However, there are at least three significant challenges associated with measuring word of mouth. First, how does one gather the data? Because the information is exchanged in private conversations, direct observation traditionally has been difficult. Second, what aspect of these conversations should one measure? The third challenge comes from the fact that word of mouth is not exogenous. While the mapping from word of mouth to future sales is of great interest to the firm, we must also recognize that word of mouth is an outcome of past sales. Our primary objective is to address these challenges. As a context for our study, we have chosen new television (TV) shows during the 1999-2000 seasons. Our source of word-of-mouth conversations is Usenet, a collection of thousands of newsgroups with diverse topics. We find that online conversations may offer an easy and cost-effective opportunity to measure word of mouth. We show that a measure of the dispersion of conversations across communities has explanatory power in a dynamic model of TV ratings.
    BibTeX:
    @article{Godes2004,
      author = {Godes, D and Mayzlin, D},
      title = {Using online conversations to study word-of-mouth communication},
      journal = {MARKETING SCIENCE},
      year = {2004},
      volume = {23},
      number = {4},
      pages = {545-560},
      doi = {{10.1287/mksc.1040.0071}}
    }
    
    Goh, K., Kahng, B. & Kim, D. Universal behavior of load distribution in scale-free networks {2001} PHYSICAL REVIEW LETTERS
    Vol. {87}({27}) 
    article DOI  
    Abstract: We study a problem of data packet transport in scale-free networks whose degree distribution follows a power law with the exponent gamma. Load, or ``betweenness centrality,'' of a vertex is the accumulated total number of data packets passing through that vertex when every pair of vertices sends and receives a data packet along the shortest path connecting the pair. It is found that the load distribution follows a power law with the exponent delta approximate to 2.2(1), insensitive to different values of gamma in the range 2 < gamma <= 3 and to different mean degrees, which is valid for both undirected and directed cases. Thus, we conjecture that the load exponent is a universal quantity to characterize scale-free networks.
    BibTeX:
    @article{Goh2001,
      author = {Goh, KI and Kahng, B and Kim, D},
      title = {Universal behavior of load distribution in scale-free networks},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2001},
      volume = {87},
      number = {27},
      doi = {{10.1103/PhysRevLett.87.278701}}
    }
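
    The quantity studied here is easy to reproduce approximately on a model graph: shortest-path betweenness serves as a stand-in for the packet-count ``load,'' and its distribution can be tallied in logarithmic bins to eyeball the power-law tail. The graph size and binning below are arbitrary choices, and betweenness only approximates the paper's one-packet-per-pair counting:

        import networkx as nx
        from collections import Counter

        G = nx.barabasi_albert_graph(n=1000, m=2, seed=1)   # scale-free test graph
        load = nx.betweenness_centrality(G, normalized=False)

        # Log-binned tally; Goh et al. report P(l) ~ l^(-delta), delta ~ 2.2.
        bins = Counter(int(v).bit_length() for v in load.values() if v >= 1)
        for b in sorted(bins):
            print(f"load in [2^{b-1}, 2^{b}): {bins[b]} vertices")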
    
    Goh, K., Oh, E., Jeong, H., Kahng, B. & Kim, D. Classification of scale-free networks {2002} PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
    Vol. {99}({20}), pp. {12583-12588} 
    article DOI  
    Abstract: While the emergence of a power-law degree distribution in complex networks is intriguing, the degree exponent is not universal. Here we show that the betweenness centrality displays a power-law distribution with an exponent eta, which is robust, and use it to classify the scale-free networks. We have observed two universality classes with eta approximate to 2.2(1) and 2.0, respectively. Real-world networks for the former are the protein-interaction networks, the metabolic networks for eukaryotes and bacteria, and the coauthorship network, and those for the latter one are the Internet, the World Wide Web, and the metabolic networks for Archaea. Distinct features of the mass-distance relation, generic topology of geodesics, and resilience under attack of the two classes are identified. Various model networks also belong to either of the two classes, while their degree exponents are tunable.
    BibTeX:
    @article{Goh2002,
      author = {Goh, KI and Oh, E and Jeong, H and Kahng, B and Kim, D},
      title = {Classification of scale-free networks},
      journal = {PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA},
      year = {2002},
      volume = {99},
      number = {20},
      pages = {12583-12588},
      doi = {{10.1073/pnas.202301299}}
    }
    
    Goldberg, L., Johnson, J., Eber, H., Hogan, R., Ashton, M., Cloninger, C. & Gough, H. The international personality item pool and the future of public-domain personality measures {2006} JOURNAL OF RESEARCH IN PERSONALITY
    Vol. {40}({1}), pp. {84-96} 
    article DOI  
    Abstract: Seven experts on personality measurement here discuss the viability of public-domain personality measures, focusing on the International Personality Item Pool (IPIP) as a prototype. Since its inception in 1996, the use of items and scales from the IPIP has increased dramatically. Items from the IPIP have been translated from English into more than 25 other languages. Currently over 80 publications using IPIP scales are listed at the IPIP Web site (http://ipip.ori.org), and the rate of IPIP-related publications has been increasing rapidly. The growing popularity of the IPIP can be attributed to five factors: (1) It is cost free; (2) its items can be obtained instantaneously via the Internet; (3) it includes over 2000 items, all easily available for inspection; (4) scoring keys for IPIP scales are provided; and (5) its items can be presented in any order, interspersed with other items, reworded, translated into other languages, and administered on the World Wide Web without asking permission of anyone. The unrestricted availability of the IPIP raises concerns about possible misuse by unqualified persons, and the freedom of researchers to use the IPIP in idiosyncratic ways raises the possibility of fragmentation rather than scientific unification in personality research. (c) 2005 Elsevier Inc. All rights reserved.
    BibTeX:
    @article{Goldberg2006,
      author = {Goldberg, LR and Johnson, JA and Eber, HW and Hogan, R and Ashton, MC and Cloninger, CR and Gough, HG},
      title = {The international personality item pool and the future of public-domain personality measures},
      journal = {JOURNAL OF RESEARCH IN PERSONALITY},
      year = {2006},
      volume = {40},
      number = {1},
      pages = {84-96},
      note = {5th Annual Conference of the Association-of-Research-in-Personality, New Orleans, LA, JAN, 2005},
      doi = {{10.1016/j.jrp.2005.08.007}}
    }
    
    Goldschlag, D., Reed, M. & Syverson, P. Onion Routing for anonymous and private Internet connections {1999} COMMUNICATIONS OF THE ACM
    Vol. {42}({2}), pp. {39-41} 
    article  
    BibTeX:
    @article{Goldschlag1999,
      author = {Goldschlag, D and Reed, M and Syverson, P},
      title = {Onion Routing for anonymous and private Internet connections},
      journal = {COMMUNICATIONS OF THE ACM},
      year = {1999},
      volume = {42},
      number = {2},
      pages = {39-41}
    }
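
    The idea behind Onion Routing is layered encryption: the sender wraps the payload once per relay, and each relay strips exactly one layer, so no single relay sees both endpoints and the cleartext. A toy illustration of the wrap/peel structure using the third-party cryptography package (real onion routing negotiates per-hop circuit keys and carries next-hop routing data, none of which is modeled here):

        from cryptography.fernet import Fernet

        route = ["relay-a", "relay-b", "relay-c"]           # hypothetical relays
        keys = {r: Fernet.generate_key() for r in route}    # one key per relay

        def wrap(payload: bytes) -> bytes:
            # Apply the exit relay's layer first so the entry relay's is outermost.
            for relay in reversed(route):
                payload = Fernet(keys[relay]).encrypt(payload)
            return payload

        onion = wrap(b"GET /")
        for relay in route:                 # each hop peels exactly one layer
            onion = Fernet(keys[relay]).decrypt(onion)
        print(onion)                        # b'GET /'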
    
    Gosling, S., Vazire, S., Srivastava, S. & John, O. Should we trust web-based studies? A comparative analysis of six preconceptions about Internet questionnaires {2004} AMERICAN PSYCHOLOGIST
    Vol. {59}({2}), pp. {93-104} 
    article DOI  
    Abstract: The rapid growth of the Internet provides a wealth of new research opportunities for psychologists. Internet data collection methods, with a focus on self-report questionnaires from self-selected samples; are evaluated and compared with traditional paper-and-pencil methods. Six preconceptions about Internet samples and data quality are evaluated by comparing a new large Internet sample (N = 361,703) with a set of 510 published traditional samples. Internet samples are shown to be relatively diverse with respect to gender, socioeconomic status, geographic region, and age. Moreover, Internet findings generalize across presentation formats, are not adversely affected by nonserious or repeat responders, and are consistent with findings from traditional methods. It is concluded that Internet methods can contribute to many areas of psychology.
    BibTeX:
    @article{Gosling2004,
      author = {Gosling, SD and Vazire, S and Srivastava, S and John, OP},
      title = {Should we trust web-based studies? A comparative analysis of six preconceptions about Internet questionnaires},
      journal = {AMERICAN PSYCHOLOGIST},
      year = {2004},
      volume = {59},
      number = {2},
      pages = {93-104},
      doi = {{10.1037/0003-066X.59.2.93}}
    }
    
    Gottlieb, B., Beitel, L., Wu, J. & Trifiro, M. The androgen receptor gene mutations database (ARDB): 2004 update {2004} HUMAN MUTATION
    Vol. {23}({6}), pp. {527-533} 
    article DOI  
    Abstract: The current version of the Androgen receptor (AR) gene mutations database is described. The total number of reported mutations has risen from 374 to 605, and the number of AR-interacting proteins described has increased from 23 to 70, both over the past 3 years. A 3D model of the AR ligand-binding domain (AR LBD) has been added to give a better understanding of gene structure-function relationships. In addition, silent mutations have now been reported in both androgen insensitivity syndrome (AIS) and prostate cancer (CAP) cases. The database also now incorporates information on the exon 1 CAG repeat expansion disease, spinobulbar muscular atrophy (SBMA), as well as CAG repeat length variations associated with risk for female breast, uterine endometrial, colorectal, and prostate cancer, as well as for male infertility. The possible implications of somatic mutations, as opposed to germline mutations, in the development of future locus-specific mutation databases (LSDBs) is discussed. The database is available on the Internet (www.mcgill.ca/androgendb/). (C) 2004 Wiley-Liss, Inc.
    BibTeX:
    @article{Gottlieb2004,
      author = {Gottlieb, B and Beitel, LK and Wu, JH and Trifiro, M},
      title = {The androgen receptor gene mutations database (ARDB): 2004 update},
      journal = {HUMAN MUTATION},
      year = {2004},
      volume = {23},
      number = {6},
      pages = {527-533},
      doi = {{10.1002/humu.20044}}
    }
    
    Gottlieb, B., Lehvaslaiho, H., Beitel, L., Lumbroso, R., Pinsky, L. & Trifiro, M. The Androgen Receptor Gene Mutations Database {1998} NUCLEIC ACIDS RESEARCH
    Vol. {26}({1}), pp. {234-238} 
    article  
    Abstract: The current version of the androgen receptor (AR) gene mutations database is described. The total number of reported mutations has risen from 272 to 309 in the past year. We have expanded the database: (i) by giving each entry an accession number; (ii) by adding information on the length of polymorphic polyglutamine (polyGln) and polyglycine (polyGly) tracts in exon 1; (iii) by adding information on large gene deletions; (iv) by providing a direct link with a completely searchable database (courtesy EMBL-European Bioinformatics Institute). The addition of the exon 1 polymorphisms is discussed in light of their possible relevance as markers for predisposition to prostate or breast cancer. The database is also available on the internet (http://www.mcgill.ca/androgendb/), from EMBL-European Bioinformatics Institute (ftp.ebi.ac.uk/pub/databases/androgen), or as a Macintosh FilemakerPro or Word file (MC33@musica.mcgill.ca).
    BibTeX:
    @article{Gottlieb1998,
      author = {Gottlieb, B and Lehvaslaiho, H and Beitel, LK and Lumbroso, R and Pinsky, L and Trifiro, M},
      title = {The Androgen Receptor Gene Mutations Database},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {1998},
      volume = {26},
      number = {1},
      pages = {234-238}
    }
    
    Goyal, V., Kovacevic, J. & Kelner, J. Quantized frame expansions with erasures {2001} APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS
    Vol. {10}({3}), pp. {203-233} 
    article  
    Abstract: Frames have been used to capture significant signal characteristics, provide numerical stability of reconstruction, and enhance resilience to additive noise. This paper places frames in a new setting, where some of the elements are deleted. Since proper subsets of frames are sometimes themselves frames, a quantized frame expansion can be a useful representation even when some transform coefficients are lost in transmission. This yields robustness to losses in packet networks such as the Internet. With a simple model for quantization error, it is shown that a normalized frame minimizes mean-squared error if and only if it is tight. With one coefficient erased, a tight frame is again optimal among normalized frames, both in average and worst-case scenarios. For more erasures, a general analysis indicates some optimal designs. Being left with a tight frame after erasures minimizes distortion, but considering also the transmission rate and possible erasure events complicates optimizations greatly. (C) 2001 Academic Press.
    BibTeX:
    @article{Goyal2001,
      author = {Goyal, VK and Kovacevic, J and Kelner, JA},
      title = {Quantized frame expansions with erasures},
      journal = {APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS},
      year = {2001},
      volume = {10},
      number = {3},
      pages = {203-233},
      note = {IEEE Data Compression Conference, SNOWBIRD, UTAH, MAR, 1999}
    }
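
    The robustness property is easy to see numerically: a frame expansion y = Fx is redundant, so if some coefficients of y are erased, x can still be recovered by least squares as long as the surviving rows of F still form a frame (span the space). A sketch without the quantization analysis (with quantized coefficients the recovery becomes approximate, and the paper shows tight frames minimize the error):

        import numpy as np

        rng = np.random.default_rng(0)
        F = rng.standard_normal((5, 3))   # 5 frame vectors for R^3: redundancy 5/3
        x = rng.standard_normal(3)
        y = F @ x                         # frame coefficients ("packets")

        keep = [0, 2, 4]                  # two coefficients erased in transit
        x_hat = np.linalg.pinv(F[keep]) @ y[keep]   # least-squares reconstruction

        print(np.allclose(x, x_hat))      # True: surviving rows still span R^3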
    
    Graham, C., Ferrier, S., Huettman, F., Moritz, C. & Peterson, A. New developments in museum-based informatics and applications in biodiversity analysis {2004} TRENDS IN ECOLOGY & EVOLUTION
    Vol. {19}({9}), pp. {497-503} 
    article DOI  
    Abstract: Information from natural history collections (NHCs) about the diversity, taxonomy and historical distributions of species worldwide is becoming increasingly available over the Internet. In light of this relatively new and rapidly increasing resource, we critically review its utility and limitations for addressing a diverse array of applications. When integrated with spatial environmental data, NHC data can be used to study a broad range of topics, from aspects of ecological and evolutionary theory, to applications in conservation, agriculture and human health. There are challenges inherent to using NHC data, such as taxonomic inaccuracies and biases in the spatial coverage of data, which require consideration. Promising research frontiers include the integration of NHC data with information from comparative genomics and phylogenetics, and stronger connections between the environmental analysis of NHC data and experimental and field-based tests of hypotheses.
    BibTeX:
    @article{Graham2004,
      author = {Graham, CH and Ferrier, S and Huettman, F and Moritz, C and Peterson, AT},
      title = {New developments in museum-based informatics and applications in biodiversity analysis},
      journal = {TRENDS IN ECOLOGY & EVOLUTION},
      year = {2004},
      volume = {19},
      number = {9},
      pages = {497-503},
      doi = {{10.1016/j.tree.2004.07.006}}
    }
    
    Graves, L. & Swaminathan, B. PulseNet standardized protocol for subtyping Listeria monocytogenes by macrorestriction and pulsed-field gel electrophoresis {2001} INTERNATIONAL JOURNAL OF FOOD MICROBIOLOGY
    Vol. {65}({1-2}), pp. {55-62} 
    article  
    Abstract: PulseNet is a national network of public health and food regulatory laboratories established in the US to detect clusters of foodborne disease and respond quickly to foodborne outbreak investigations. PulseNet laboratories currently subtype Escherichia coli O157:H7, non-typhoidal Salmonella, and Shigella isolates by a highly standardized 1-day pulsed-field gel electrophoresis (PFGE), and exchange normalized DNA ``fingerprint'' patterns via the Internet. We describe a standardized molecular subtyping protocol for subtyping Listeria monocytogenes that was recently added to PulseNet. The subtyping can be completed within 30 h from the time a pure culture of the bacteria is obtained. (C) 2001 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Graves2001,
      author = {Graves, LM and Swaminathan, B},
      title = {PulseNet standardized protocol for subtyping Listeria monocytogenes by macrorestriction and pulsed-field gel electrophoresis},
      journal = {INTERNATIONAL JOURNAL OF FOOD MICROBIOLOGY},
      year = {2001},
      volume = {65},
      number = {1-2},
      pages = {55-62}
    }
    
    Green, P., Stavropoulos, S., Panagi, S., Goldstein, S., McMahon, D., Absan, H. & Neugut, A. Characteristics of adult celiac disease in the USA: Results of a national survey {2001} AMERICAN JOURNAL OF GASTROENTEROLOGY
    Vol. {96}({1}), pp. {126-131} 
    article  
    Abstract: OBJECTIVE: The clinical spectrum of adults with celiac disease in the United States, where the disease is considered rare, is not known. We sought this information by distributing a survey. METHODS: A questionnaire was distributed by way of a celiac newsletter, directly to celiac support groups, and through the Internet. RESULTS: Respondents (1612) were from all US states except one. Seventy-five percent (1138) were biopsy proven. Women predominated (2.9:1). The majority of respondents were diagnosed in their fourth to sixth decades. Symptoms were present a mean of 11 yr before diagnosis. Diarrhea was present in 85%. Diagnosis was considered prompt by only 52% and 31% had consulted two or more gastroenterologists. Improved quality of life after diagnosis was reported by 77%. Those diagnosed at age greater than or equal to 60 yr also reported improved quality of life. Five respondents had small intestinal malignancies (carcinoma 2, lymphoma 3) accounting for a relative risk of 300 (60-876) for the development of lymphoma and 67 (7-240) for adenocarcinoma. CONCLUSIONS: Patients with celiac disease in the United States have a long duration of symptoms and consider their diagnosis delayed. Improved quality of life after diagnosis is common. An increased risk of developing small intestine malignancies is present. (Am J Gastroenterol 2001;96: 126-131. (C) 2001 by Am. Coll. of Gastroenterology).
    BibTeX:
    @article{Green2001,
      author = {Green, PHR and Stavropoulos, SN and Panagi, SG and Goldstein, SL and McMahon, DJ and Absan, H and Neugut, AI},
      title = {Characteristics of adult celiac disease in the USA: Results of a national survey},
      journal = {AMERICAN JOURNAL OF GASTROENTEROLOGY},
      year = {2001},
      volume = {96},
      number = {1},
      pages = {126-131}
    }
    
    Greenwald, A., Nosek, B. & Banaji, M. Understanding and using the implicit association test: I. An improved scoring algorithm {2003} JOURNAL OF PERSONALITY AND SOCIAL PSYCHOLOGY
    Vol. {85}({2}), pp. {197-216} 
    article DOI  
    Abstract: In reporting Implicit Association Test (IAT) results, researchers have most often used scoring conventions described in the first publication of the IAT (A. G. Greenwald, D. E. McGhee, & J. L. K. Schwartz, 1998). Demonstration IATs available on the Internet have produced large data sets that were used in the current article to evaluate alternative scoring procedures. Candidate new algorithms were examined in terms of their (a) correlations with parallel self-report measures, (b) resistance to an artifact associated with speed of responding, (c) internal consistency, (d) sensitivity to known influences on IAT measures, and (e) resistance to known procedural influences. The best-performing measure incorporates data from the IAT's practice trials, uses a metric that is calibrated by each respondent's latency variability, and includes a latency penalty for errors. This new algorithm strongly outperforms the earlier (conventional) procedure.
    BibTeX:
    @article{Greenwald2003,
      author = {Greenwald, AG and Nosek, BA and Banaji, MR},
      title = {Understanding and using the implicit association test: I. An improved scoring algorithm},
      journal = {JOURNAL OF PERSONALITY AND SOCIAL PSYCHOLOGY},
      year = {2003},
      volume = {85},
      number = {2},
      pages = {197-216},
      doi = {{10.1037/0022-3514.85.2.197}}
    }
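
    The improved measure is essentially a latency effect size: after discarding very slow trials and penalizing errors, it divides the difference between the two critical blocks' mean latencies by their pooled standard deviation. The sketch below follows that outline only; the published algorithm has further steps (separate practice-block scores, specific deletion thresholds) that are omitted here, so treat the details as assumptions:

        from statistics import mean, stdev

        def d_score(compat, incompat, penalty_ms=600):
            """compat/incompat: lists of (latency_ms, is_error) per block."""
            def clean(block):
                block = [(t, err) for t, err in block if t < 10000]  # drop >10 s trials
                ok = mean(t for t, err in block if not err)
                return [ok + penalty_ms if err else t for t, err in block]
            a, b = clean(compat), clean(incompat)
            return (mean(b) - mean(a)) / stdev(a + b)   # pooled-SD calibration

        print(round(d_score([(600, False), (650, False), (700, True)],
                            [(900, False), (950, False), (1100, True)]), 2))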
    
    Griffiths, K. & Christensen, H. Quality of web based information on treatment of depression: cross sectional survey {2000} BRITISH MEDICAL JOURNAL
    Vol. {321}({7275}), pp. {1511-1515} 
    article  
    Abstract: Objectives To evaluate quality of web based information on treatment of depression, to identify potential indicators of content quality, and to establish if accountability criteria are indicators of quality. Design Cross sectional survey. Data sources 21 frequently accessed websites about depression. Main outcome measures (i) Site characteristics; (ii) quality of content-concordance with evidence based depression guidelines (guideline score), appropriateness of other relevant site information (issues score), and subjective rating of site quality (global score); and (iii) accountability-conformity with core accountability standards (Silberg score) and quality of evidence cited in support of conclusions (level of evidence score). Results Although the sites contained useful information, their overall quality was poor: the mean guideline, issues, and global scores were only 4.7 (range 0-13) out of 43, 9.8 (6-14) out of 17, and 3 (0.5-7.5) out of 10 respectively. Sites typically did not cite scientific evidence in support of their conclusions. The guideline score correlated with the two other quality of content measures, but none of the content measures correlated with the Silberg accountability score. Content quality was superior for sites owned by organisations and sites with an editorial board. Conclusions There is a need for better evidence based information about depression on the web, and a need to reconsider the role of accountability criteria as indicators of site quality and to develop simple valid indicators of quality. Ownership by an organisation and the involvement of a professional editorial board may be useful indicators. The study methodology may be useful for exploring these issues in other health related subjects.
    BibTeX:
    @article{Griffiths2000,
      author = {Griffiths, KM and Christensen, H},
      title = {Quality of web based information on treatment of depression: cross sectional survey},
      journal = {BRITISH MEDICAL JOURNAL},
      year = {2000},
      volume = {321},
      number = {7275},
      pages = {1511-1515}
    }
    
    Guimera, R., Diaz-Guilera, A., Vega-Redondo, F., Cabrales, A. & Arenas, A. Optimal network topologies for local search with congestion {2002} PHYSICAL REVIEW LETTERS
    Vol. {89}({24}) 
    article DOI  
    Abstract: The problem of searchability in decentralized complex networks is of great importance in computer science, economics, and sociology. We present a formalism that is able to cope simultaneously with the problem of search and the congestion effects that arise when parallel searches are performed, and we obtain expressions for the average search cost both in the presence and the absence of congestion. This formalism is used to obtain optimal network structures for a system using a local search algorithm. It is found that only two classes of networks can be optimal: starlike configurations, when the number of parallel searches is small, and homogeneous-isotropic configurations, when it is large.
    BibTeX:
    @article{Guimera2002,
      author = {Guimera, R and Diaz-Guilera, A and Vega-Redondo, F and Cabrales, A and Arenas, A},
      title = {Optimal network topologies for local search with congestion},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2002},
      volume = {89},
      number = {24},
      doi = {{10.1103/PhysRevLett.89.248701}}
    }
    
    Gupta, P. & McKeown, N. Algorithms for packet classification {2001} IEEE NETWORK
    Vol. {15}({2}), pp. {24-32} 
    article  
    Abstract: The process of categorizing packets into ``flows'' in an Internet router is called packet classification. All packets belonging to the same flow obey a predefined rule and are processed in a similar manner by the router. For example, all packets with the same source and destination IP addresses may be defined to form a flow. Packet classification is needed for non-best-effort services, such as firewalls and quality of service: services that require the capability to distinguish and isolate traffic in different flows for suitable processing. In general, packet classification on multiple fields is a difficult problem. Hence, researchers have proposed a variety of algorithms which, broadly speaking, can be categorized as basic search algorithms, geometric algorithms, heuristic algorithms, or hardware-specific search algorithms. In this tutorial we describe algorithms that are representative of each category, and discuss which type of algorithm might be suitable for different applications.
    BibTeX:
    @article{Gupta2001,
      author = {Gupta, P and McKeown, N},
      title = {Algorithms for packet classification},
      journal = {IEEE NETWORK},
      year = {2001},
      volume = {15},
      number = {2},
      pages = {24-32}
    }
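    The simplest of the basic search algorithms the tutorial covers is a priority-ordered linear scan of the rule list. A minimal sketch (the rule fields and example rules here are invented for illustration, not taken from the paper):

      from ipaddress import ip_address, ip_network

      # Rules in priority order; the first match wins.
      RULES = [
          {"src": "10.0.0.0/8", "dst": "192.168.1.0/24", "proto": 6, "action": "drop"},
          {"src": "0.0.0.0/0", "dst": "192.168.0.0/16", "proto": 17, "action": "low-latency"},
          {"src": "0.0.0.0/0", "dst": "0.0.0.0/0", "proto": None, "action": "best-effort"},
      ]

      def classify(src, dst, proto):
          # O(N) per packet; the geometric, heuristic, and hardware-specific
          # schemes surveyed in the tutorial exist to beat this baseline.
          for rule in RULES:
              if (ip_address(src) in ip_network(rule["src"])
                      and ip_address(dst) in ip_network(rule["dst"])
                      and rule["proto"] in (None, proto)):
                  return rule["action"]
          return "no-match"

      print(classify("10.1.2.3", "192.168.1.7", 6))  # -> drop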
    
    Gustafsson, E. & Jonsson, A. Always best connected {2003} IEEE WIRELESS COMMUNICATIONS
    Vol. {10}({1}), pp. {49-55} 
    article  
    Abstract: Over the last few years, we have experienced a variety of access technologies being deployed. While 2G cellular systems evolve into 3G systems such as UMTS or cdma2000, providing worldwide coverage, wireless LAN solutions have been extensively deployed to provide hot-spot high-bandwidth Internet access in airports, hotels, and conference centers. At the same time, fixed access such as DSL and cable modems, tied to wireless LANs, appears in home and office environments. The Always Best Connected (ABC) concept allows a person to connect to applications using the devices and access technologies that best suit his or her needs, thereby combining the features of access technologies such as DSL, Bluetooth, and WLAN with cellular systems to provide an enhanced user experience for 2.5G, 3G, and beyond. An always best connected scenario, where a person is allowed to choose the best available access networks and devices at any point in time, generates great complexity and a number of requirements, not only for the technical solutions, but also in terms of business relationships between operators and service providers, and in subscription handling. This article describes the concept of being always best connected, discusses the user experience and business relationships in an ABC environment, and outlines the different aspects of an ABC solution that will broaden the technology and business base of 3G.
    BibTeX:
    @article{Gustafsson2003,
      author = {Gustafsson, E and Jonsson, A},
      title = {Always best connected},
      journal = {IEEE WIRELESS COMMUNICATIONS},
      year = {2003},
      volume = {10},
      number = {1},
      pages = {49-55}
    }
    
    Gutmann, D., Aylsworth, A., Carey, J., Korf, B., Marks, J., Pyeritz, R., Rubenstein, A. & Viskochil, D. The diagnostic evaluation and multidisciplinary management of neurofibromatosis 1 and neurofibromatosis 2 {1997} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {278}({1}), pp. {51-57} 
    article  
    Abstract: Objective.-Neurofibromatosis 1 and neurofibromatosis 2 are autosomal dominant genetic disorders in which affected individuals develop both benign and malignant tumors at an increased frequency. Since the original National Institutes of Health Consensus Development Conference in 1987, there has been significant progress toward a more complete understanding of the molecular bases for neurofibromatosis 1 and neurofibromatosis 2. Our objective was to determine the diagnostic criteria for neurofibromatosis 1 and neurofibromatosis 2, recommendations for the care of patients and their families at diagnosis and during routine follow-up, and the role of DNA diagnostic testing in the evaluation of these disorders. Data Sources.-Published reports from 1966 through 1996 obtained by MEDLINE search and studies presented at national and international meetings. Study Selection.-All studies were reviewed and analyzed by consensus from multiple authors. Data Extraction.-Peer-reviewed published data were critically evaluated by independent extraction by multiple authors. Data Synthesis.-The main results of the review were qualitative and were reviewed by neurofibromatosis clinical directors worldwide through an Internet Web site. Conclusions.-On the basis of the information presented in this review, we propose a comprehensive approach to the diagnosis and treatment of individuals with neurofibromatosis 1 and neurofibromatosis 2.
    BibTeX:
    @article{Gutmann1997,
      author = {Gutmann, DH and Aylsworth, A and Carey, JC and Korf, B and Marks, J and Pyeritz, RE and Rubenstein, A and Viskochil, D},
      title = {The diagnostic evaluation and multidisciplinary management of neurofibromatosis 1 and neurofibromatosis 2},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {1997},
      volume = {278},
      number = {1},
      pages = {51-57}
    }
    
    Guttman, R., Moukas, A. & Maes, P. Agent-mediated electronic commerce: a survey {1998} KNOWLEDGE ENGINEERING REVIEW
    Vol. {13}({2}), pp. {147-159} 
    article  
    Abstract: Software agents help automate a variety of tasks including those involved in buying and selling products over the Internet. This paper surveys several of these agent-mediated electronic commerce systems by describing their roles in the context of a Consumer Buying Behavior (CBB) model. The CBB model we present augments traditional marketing models with concepts from Software Agents research to accommodate electronic markets. We then discuss the variety of Artificial Intelligence techniques that support agent mediation and conclude with future directions of agent-mediated electronic commerce research.
    BibTeX:
    @article{Guttman1998,
      author = {Guttman, RH and Moukas, AG and Maes, P},
      title = {Agent-mediated electronic commerce: a survey},
      journal = {KNOWLEDGE ENGINEERING REVIEW},
      year = {1998},
      volume = {13},
      number = {2},
      pages = {147-159},
      note = {International Conference and Exhibition on the Practical Application of Intelligent Agents and Multi-Agent Technology (PAAM), LONDON, ENGLAND, APR, 1997}
    }
    
    HANSEN, J., LUND, O., ENGELBRECHT, J., BOHR, H., NIELSEN, J., HANSEN, J. & BRUNAK, S. PREDICTION OF O-GLYCOSYLATION OF MAMMALIAN PROTEINS - SPECIFICITY PATTERNS OF UDP-GALNAC-POLYPEPTIDE N-ACETYLGALACTOSAMINYLTRANSFERASE {1995} BIOCHEMICAL JOURNAL
    Vol. {308}({Part 3}), pp. {801-813} 
    article  
    Abstract: The specificity of the enzyme(s) catalysing the covalent link between the hydroxyl side chains of serine or threonine and the sugar moiety N-acetylgalactosamine (GalNAc) is unknown. Pattern recognition by artificial neural networks and weight matrix algorithms was performed to determine the exact position of in vivo O-linked GalNAc-glycosylated serine and threonine residues from the primary sequence exclusively. The acceptor sequence context for O-glycosylation of serine was found to differ from that of threonine and the two types were therefore treated separately. The context of the sites showed a high abundance of proline, serine and threonine extending far beyond the previously reported region covering positions -4 through +4 relative to the glycosylated residue. The O-glycosylation sites were found to cluster and to have a high abundance in the N-terminal part of the protein. The sites were also found to have an increased preference for three different classes of beta-turns. No simple consensus-like rule could be deduced for the complex glycosylation sequence acceptor patterns. The neural networks were trained on the largest data set to date, consisting of 48 carefully examined mammalian glycoproteins comprising 264 O-glycosylation sites. For detection, neural network algorithms were much more reliable than weight matrices. The networks correctly found 60-95% of the O-glycosylated serine/threonine residues and 88-97% of the non-glycosylated residues in two independent test sets of known glycoproteins. A computer server using E-mail for prediction of O-glycosylation sites has been implemented and made publicly available. The Internet address is NetOglyc@cbs.dtu.dk.
    BibTeX:
    @article{HANSEN1995,
      author = {HANSEN, JE and LUND, O and ENGELBRECHT, J and BOHR, H and NIELSEN, JO and HANSEN, JES and BRUNAK, S},
      title = {PREDICTION OF O-GLYCOSYLATION OF MAMMALIAN PROTEINS - SPECIFICITY PATTERNS OF UDP-GALNAC-POLYPEPTIDE N-ACETYLGALACTOSAMINYLTRANSFERASE},
      journal = {BIOCHEMICAL JOURNAL},
      year = {1995},
      volume = {308},
      number = {Part 3},
      pages = {801-813}
    }
    
    Hardey, M. Doctor in the house: the Internet as a source of lay health knowledge and the challenge to expertise {1999} SOCIOLOGY OF HEALTH & ILLNESS
    Vol. {21}({6}), pp. {820-835} 
    article  
    Abstract: This paper investigates the new and unique medium of the Internet as a source of information about health. The Internet is an inherently interactive environment that transcends established national boundaries, regulations and distinctions between professions and expertise. The paper reports findings from a qualitative study of households who routinely used the Internet to access health information and examines how it affected their health beliefs and behaviours. The public use of previously obscure and inaccessible medical information is placed in the context of the debate about deprofessionalisation. It is shown that it is the users of Internet information, rather than authors or professional experts, who decide what material is accessed and how it is used. It is concluded that the Internet forms the site of a new struggle over expertise in health that will transform the relationship between the health professions and their clients.
    BibTeX:
    @article{Hardey1999,
      author = {Hardey, M},
      title = {Doctor in the house: the Internet as a source of lay health knowledge and the challenge to expertise},
      journal = {SOCIOLOGY OF HEALTH & ILLNESS},
      year = {1999},
      volume = {21},
      number = {6},
      pages = {820-835}
    }
    
    Harmsen, D., Claus, H., Witte, W., Rothganger, J., Claus, H., Turnwald, D. & Vogel, U. Typing of methicillin-resistant Staphylococcus aureus in a university hospital setting by using novel software for spa repeat determination and database management {2003} JOURNAL OF CLINICAL MICROBIOLOGY
    Vol. {41}({12}), pp. {5442-5448} 
    article DOI  
    Abstract: The spa gene of Staphylococcus aureus encodes protein A and is used for typing of methicillin-resistant Staphylococcus aureus (MRSA). We used sequence typing of the spa gene repeat region to study the epidemiology of MRSA at a German university hospital. One hundred seven and 84 strains were studied during two periods of 10 and 4 months, respectively. Repeats and spa types were determined by Ridom StaphType, a novel software tool allowing rapid repeat determination, data management and retrieval, and Internet-based assignment of new spa types following automatic quality control of DNA sequence chromatograms. Isolates representative of the most abundant spa types were subjected to multilocus sequence typing and pulsed-field gel electrophoresis. One of two predominant spa types was replaced by a clonally related variant in the second study period. Ten unique spa types, which were equally distributed in both study periods, were recovered. The data show a rapid dynamics of clone circulation in a university hospital setting. spa typing was valuable for tracking of epidemic isolates. The data show that disproval of epidemiologically suggested transmissions of MRSA is one of the main objectives of spa typing in departments with a high incidence of MRSA.
    BibTeX:
    @article{Harmsen2003,
      author = {Harmsen, D and Claus, H and Witte, W and Rothganger, J and Claus, H and Turnwald, D and Vogel, U},
      title = {Typing of methicillin-resistant Staphylococcus aureus in a university hospital setting by using novel software for spa repeat determination and database management},
      journal = {JOURNAL OF CLINICAL MICROBIOLOGY},
      year = {2003},
      volume = {41},
      number = {12},
      pages = {5442-5448},
      doi = {{10.1128/JCM.41.12.5442-5448.2003}}
    }
    
    Hay, S., Rogers, D., Toomer, J. & Snow, R. Annual Plasmodium falciparum entomological inoculation rates (EIR) across Africa: literature survey, internet access and review {2000} TRANSACTIONS OF THE ROYAL SOCIETY OF TROPICAL MEDICINE AND HYGIENE
    Vol. {94}({2}), pp. {113-127} 
    article  
    Abstract: This paper presents the results of an extensive search of the formal and informal literature on annual Plasmodium falciparum entomological inoculation rates (EIR) across Africa from 1980 onwards. It first describes how the annual EIR data were collated, summarized, geo-referenced and staged for public access on the internet. Problems of data standardization, reporting accuracy and the subsequent publishing of information on the internet follow. The review was conducted primarily to investigate the spatial heterogeneity of malaria exposure in Africa and supports the idea of highly heterogeneous risk at the continental, regional and country levels. The implications for malaria control of the significant spatial (and seasonal) variation in exposure to infected mosquito bites are discussed.
    BibTeX:
    @article{Hay2000,
      author = {Hay, SI and Rogers, DJ and Toomer, JF and Snow, RW},
      title = {Annual Plasmodium falciparum entomological inoculation rates (EIR) across Africa: literature survey, internet access and review},
      journal = {TRANSACTIONS OF THE ROYAL SOCIETY OF TROPICAL MEDICINE AND HYGIENE},
      year = {2000},
      volume = {94},
      number = {2},
      pages = {113-127}
    }
    
    Hellens, R., Edwards, E., Leyland, N., Bean, S. & Mullineaux, P. pGreen: a versatile and flexible binary Ti vector for Agrobacterium-mediated plant transformation {2000} PLANT MOLECULAR BIOLOGY
    Vol. {42}({6}), pp. {819-832} 
    article  
    Abstract: Binary Ti vectors are the plasmid vectors of choice in Agrobacterium-mediated plant transformation protocols. The pGreen series of binary Ti vectors are configured for ease-of-use and to meet the demands of a wide range of transformation procedures for many plant species. This plasmid system allows any arrangement of selectable marker and reporter gene at the right and left T-DNA borders without compromising the choice of restriction sites for cloning, since the pGreen cloning sites are based on the well-known pBluescript general vector plasmids. Its size and copy number in Escherichia coli offers increased efficiencies in routine in vitro recombination procedures. pGreen can replicate in Agrobacterium only if another plasmid, pSoup, is co-resident in the same strain. pSoup provides replication functions in trans for pGreen. The removal of RepA and Mob functions has enabled the size of pGreen to be kept to a minimum. Versions of pGreen have been used to transform several plant species with the same efficiencies as other binary Ti vectors. Information on the pGreen plasmid system is supplemented by an Internet site (http://www.pgreen.ac.uk) through which comprehensive information, protocols, order forms and lists of different pGreen marker gene permutations can be found.
    BibTeX:
    @article{Hellens2000,
      author = {Hellens, RP and Edwards, EA and Leyland, NR and Bean, S and Mullineaux, PM},
      title = {pGreen: a versatile and flexible binary Ti vector for Agrobacterium-mediated plant transformation},
      journal = {PLANT MOLECULAR BIOLOGY},
      year = {2000},
      volume = {42},
      number = {6},
      pages = {819-832}
    }
    
    HERBST, W., HERBST, D., GROSSMAN, E. & WEINSTEIN, D. CATALOG OF UBVRI PHOTOMETRY OF T-TAURI STARS AND ANALYSIS OF THE CAUSES OF THEIR VARIABILITY {1994} ASTRONOMICAL JOURNAL
    Vol. {108}({5}), pp. {1906-1923} 
    article  
    Abstract: A computer-based catalogue of UBVRI photoelectric photometry of T Tauri stars and their earlier type analogs has been compiled. It presently includes over 10 000 entries on 80 stars and will be updated on a regular basis; it is available on the Internet. The catalogue is used to analyze the sometimes bizarre light variations of pre-main-sequence stars on time scales of days to months in an attempt to illuminate the nature and causes of the phenomenon. It is useful in discussing their light variations to divide the stars into three groups according to their spectra. These are: weak T Tauri stars (WTTS; spectral class later than K0 and W-H alpha < 10 Angstrom), classical T Tauri stars (CTTS; spectral class later than K0 and W-H alpha > 10 Angstrom), and early type T Tauri stars (ETTS; spectral class of K0 or earlier). Three distinct types of variability are displayed by stars in the catalogue. Type I variations are periodic in VRI and undoubtedly caused by rotational modulation of a star with an asymmetric distribution of cool spots on its surface. Irregular flare activity is sometimes seen on such stars in U and B. Type I variations are easiest to see on WTTS but are clearly present on CTTS and ETTS as well. Type II variations are caused by hot ``spots'' or zones and, it is argued, result from changes in the excess or ``veiling'' continuum commonly attributed to an accretion boundary layer or impact zone of a magnetically channeled accretion flow. This type of variation is seen predominantly or solely in CTTS. A subcategory, designated Type IIp, consists of stars which display periodic variations caused by hot spots. Whereas cool spots may last for hundreds or thousands of rotations, hot spots appear to come and go on a much shorter time scale. This suggests that both unsteady accretion and rotation of the star contribute to Type II variations. It is shown that a third type of variation exists among ETTS, including stars as early as A type. UX Ori is a typical example and we call these Type III variables or UXors. Their distinguishing characteristic is that they can display very large amplitudes (exceeding 2.8 mag in V) while showing little or no evidence for a veiling continuum or any substantial change in their photospheric spectra. If Type III variations are caused by changes in accretion luminosity, then boundary layers or impact zones in ETTS must be much different from those in CTTS, which, of course, is possible since mass accretion rates are probably much higher. However, the leading hypothesis for explaining Type III variations is variable obscuration by circumstellar dust. It is argued that the putative dust clumps causing such variations cannot be confined to a disk; otherwise UXors would be rare. Perhaps magnetic effects are involved in levitating accreting dust out of the plane, as has been suggested for CTTS, or perhaps we are witnessing continuing infall of clumps from placental clouds. A third possibility is that dust may be condensing in an outflow.
    BibTeX:
    @article{HERBST1994,
      author = {HERBST, W and HERBST, DK and GROSSMAN, EJ and WEINSTEIN, D},
      title = {CATALOG OF UBVRI PHOTOMETRY OF T-TAURI STARS AND ANALYSIS OF THE CAUSES OF THEIR VARIABILITY},
      journal = {ASTRONOMICAL JOURNAL},
      year = {1994},
      volume = {108},
      number = {5},
      pages = {1906-1923}
    }
    
    Hertel, G., Niedner, S. & Herrmann, S. Motivation of software developers in Open Source projects: an Internet-based survey of contributors to the Linux kernel {2003} RESEARCH POLICY
    Vol. {32}({7}), pp. {1159-1177} 
    article DOI  
    Abstract: The motives of 141 contributors to a large Open Source Software (OSS) project (the Linux kernel) were explored with an Internet-based questionnaire study. Measured factors were derived both from discussions within the Linux community and from models in the social sciences. Participants' engagement was particularly determined by their identification as a Linux developer, by pragmatic motives to improve their own software, and by their tolerance of time investments. Moreover, some of the software development was accomplished by teams. Activities in these teams were particularly determined by participants' evaluation of the team goals as well as by their perceived indispensability and self-efficacy. (C) 2003 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Hertel2003,
      author = {Hertel, G and Niedner, S and Herrmann, S},
      title = {Motivation of software developers in Open Source projects: an Internet-based survey of contributors to the Linux kernel},
      journal = {RESEARCH POLICY},
      year = {2003},
      volume = {32},
      number = {7},
      pages = {1159-1177},
      doi = {{10.1016/S0048-7333(03)00047-7}}
    }
    
    Herzenberg, L., Parks, D., Sahaf, B., Perez, O., Roederer, M. & Herzenberg, L. The history and future of the fluorescence activated cell sorter and flow cytometry: A view from Stanford {2002} CLINICAL CHEMISTRY
    Vol. {48}({10}), pp. {1819-1827} 
    article  
    Abstract: The Fluorescence Activated Cell Sorter (FACS) was invented in the late 1960s by Bonner, Sweet, Hulett, Herzenberg, and others to do flow cytometry and cell sorting of viable cells. Becton Dickinson Immunocytometry Systems introduced the commercial machines in the early 1970s, using the Stanford patent and expertise supplied by the Herzenberg Laboratory and a Becton Dickinson engineering group under Bernie Shoor. Over the years, we have increased the number of measured FACS dimensions (parameters) and the speed of sorting to where we now simultaneously measure 12 fluorescent colors plus 2 scatter parameters. In this history, I illustrate the great utility of this state-of-the-art instrument, which allows us to simultaneously stain, analyze, and then sort cells from small samples of human blood cells from AIDS patients, infants, stem cell transplant patients, and others. I also illustrate analysis and sorting of multiple subpopulations of lymphocytes by use of 8-12 colors. In addition, I review single cell sorting used to clone and analyze hybridomas and discuss other applications of FACS developed over the past 30 years, as well as give our ideas on the future of FACS. These ideas are currently being implemented in new programs using the internet for data storage and analysis as well as developing new fluorochromes, e.g., green fluorescent protein and tandem dyes, with applications in such areas as apoptosis, gene expression, cytokine expression, cell biochemistry, redox regulation, and AIDS. Finally, I describe new FACS methods for measuring activated kinases and phosphatases and redox active enzymes in individual cells simultaneously with cell surface phenotyping. Thus, key functions can be studied in various subsets of cells without the need for prior sorting. (C) 2002 American Association for Clinical Chemistry.
    BibTeX:
    @article{Herzenberg2002,
      author = {Herzenberg, LA and Parks, D and Sahaf, B and Perez, O and Roederer, M and Herzenberg, LA},
      title = {The history and future of the fluorescence activated cell sorter and flow cytometry: A view from Stanford},
      journal = {CLINICAL CHEMISTRY},
      year = {2002},
      volume = {48},
      number = {10},
      pages = {1819-1827},
      note = {34th Annual Oak Ridge Conference, LA JOLLA, CALIFORNIA, APR 25-26, 2002}
    }
    
    Hesse, B., Nelson, D., Kreps, G., Croyle, R., Arora, N., Rimer, B. & Viswanath, K. Trust and sources of health information - The impact of the Internet and its implications for health care providers: Findings from the first Health Information National Trends Survey {2005} ARCHIVES OF INTERNAL MEDICINE
    Vol. {165}({22}), pp. {2618-2624} 
    article  
    Abstract: Background: The context in which patients consume health information has changed dramatically with diffusion of the Internet, advances in telemedicine, and changes in media health coverage. The objective of this study was to provide nationally representative estimates for health-related uses of the Internet, level of trust in health information sources, and preferences for cancer information sources. Methods: Data from the Health Information National Trends Survey were used. A total of 6369 persons 18 years or older were studied. The main outcome measures were online health activities, levels of trust, and source preference. Results: Analyses indicated that 63.0% (95% confidence interval [CI], 61.7%-64.3%) of the US adult population in 2003 reported ever going online, with 63.7% (95% CI, 61.7%-65.8%) of the online population having looked for health information for themselves or others at least once in the previous 12 months. Despite newly available communication channels, physicians remained the most highly trusted information source to patients, with 62.4% (95% CI, 60.8%-64.0%) of adults expressing a lot of trust in their physicians. When asked where they preferred going for specific health information, 49.5% (95% CI, 48.1%-50.8%) reported wanting to go to their physicians first. When asked where they actually went, 48.6% (95% CI, 46.1%-51.0%) reported going online first, with only 10.9% (95% CI, 9.5%-12.3%) going to their physicians first. Conclusion: The Health Information National Trends Survey data portray a tectonic shift in the ways in which patients consume health and medical information, with more patients looking for information online before talking with their physicians.
    BibTeX:
    @article{Hesse2005,
      author = {Hesse, BW and Nelson, DE and Kreps, GL and Croyle, RT and Arora, NK and Rimer, BK and Viswanath, K},
      title = {Trust and sources of health information - The impact of the Internet and its implications for health care providers: Findings from the first Health Information National Trends Survey},
      journal = {ARCHIVES OF INTERNAL MEDICINE},
      year = {2005},
      volume = {165},
      number = {22},
      pages = {2618-2624}
    }
    
    HITCHCOCK, A. & MANCINI, D. BIBLIOGRAPHY AND DATABASE OF INNER-SHELL EXCITATION-SPECTRA OF GAS-PHASE ATOMS AND MOLECULES {1994} JOURNAL OF ELECTRON SPECTROSCOPY AND RELATED PHENOMENA
    Vol. {67}({1}), pp. {1-132} 
    article  
    Abstract: An annotated bibliography of articles consisting of theoretical or experimental studies of inner shell (core) excitation of atoms or molecules is provided. A computer database consisting of 374 spectra of 191 molecules has been assembled. All of these spectra were recorded by electron energy loss spectroscopy (EELS) in the electric dipole regime. The database spectra are presented as both raw data and after conversion of the isolated core excitation signal to an absolute oscillator strength (f-value) scale. An improved scheme for deriving absolute oscillator strengths is described and tested. Instructions are provided for accessing the database with the file transfer protocol (FTP) over the Internet. Such spectra may be useful for fingerprint comparisons, comparisons to calculations, etc. In the future the database will be extended to include data from other sources, including gas phase photoabsorption spectra recorded by direct optical means, as well as critical commentary on data base entries.
    BibTeX:
    @article{HITCHCOCK1994,
      author = {HITCHCOCK, AP and MANCINI, DC},
      title = {BIBLIOGRAPHY AND DATABASE OF INNER-SHELL EXCITATION-SPECTRA OF GAS-PHASE ATOMS AND MOLECULES},
      journal = {JOURNAL OF ELECTRON SPECTROSCOPY AND RELATED PHENOMENA},
      year = {1994},
      volume = {67},
      number = {1},
      pages = {1-132}
    }
    
    Hoffman, D. & Novak, T. Bridging the racial divide on the Internet {1998} SCIENCE
    Vol. {280}({5362}), pp. {390-391} 
    article  
    BibTeX:
    @article{Hoffman1998,
      author = {Hoffman, DL and Novak, TP},
      title = {Bridging the racial divide on the Internet},
      journal = {SCIENCE},
      year = {1998},
      volume = {280},
      number = {5362},
      pages = {390-391}
    }
    
    Hoffman, D. & Novak, T. Marketing in hypermedia computer-mediated environments: Conceptual foundations {1996} JOURNAL OF MARKETING
    Vol. {60}({3}), pp. {50-68} 
    article  
    Abstract: The authors address the role of marketing in hypermedia computer-mediated environments (CMEs). Their approach considers hypermedia CMEs to be large-scale (i.e., national or global) networked environments, of which the World Wide Web on the Internet is the first and current global implementation. They introduce marketers to this revolutionary new medium, propose a structural model of consumer navigation behavior in a CME that incorporates the notion of flow, and examine a series of research issues and marketing implications that follow from the model.
    BibTeX:
    @article{Hoffman1996,
      author = {Hoffman, DL and Novak, TP},
      title = {Marketing in hypermedia computer-mediated environments: Conceptual foundations},
      journal = {JOURNAL OF MARKETING},
      year = {1996},
      volume = {60},
      number = {3},
      pages = {50-68}
    }
    
    Holm, L., Kaariainen, S., Rosenstrom, P. & Schenkel, A. Searching protein structure databases with DaliLite v.3 {2008} BIOINFORMATICS
    Vol. {24}({23}), pp. {2780-2781} 
    article DOI  
    Abstract: The Red Queen said, `It takes all the running you can do, to keep in the same place.' (Lewis Carroll). Motivation: Newly solved protein structures are routinely scanned against structures already in the Protein Data Bank (PDB) using Internet servers. In favourable cases, comparing 3D structures may reveal biologically interesting similarities that are not detectable by comparing sequences. The number of known structures continues to grow exponentially. Sensitive (thorough but slow) search algorithms are challenged to deliver results in a reasonable time, as there are now more structures in the PDB than seconds in a day. The brute-force solution would be to distribute the individual comparisons on a massively parallel computer. A frugal solution, as implemented in the Dali server, is to reduce the total computational cost by pruning search space using prior knowledge about the distribution of structures in fold space. This note reports paradigm revisions that enable maintaining such a knowledge base up-to-date on a PC.
    BibTeX:
    @article{Holm2008,
      author = {Holm, L. and Kaariainen, S. and Rosenstrom, P. and Schenkel, A.},
      title = {Searching protein structure databases with DaliLite v.3},
      journal = {BIOINFORMATICS},
      year = {2008},
      volume = {24},
      number = {23},
      pages = {2780-2781},
      doi = {{10.1093/bioinformatics/btn507}}
    }
    
    HOLM, L., OUZOUNIS, C., SANDER, C., TUPAREV, G. & VRIEND, G. A DATABASE OF PROTEIN-STRUCTURE FAMILIES WITH COMMON FOLDING MOTIFS {1992} PROTEIN SCIENCE
    Vol. {1}({12}), pp. {1691-1698} 
    article  
    Abstract: The availability of fast and robust algorithms for protein structure comparison provides an opportunity to produce a database of three-dimensional comparisons, called families of structurally similar proteins (FSSP). The database currently contains an extended structural family for each of 154 representative (below 30% sequence identity) protein chains. Each data set contains: the search structure; all its relatives with 70-30% sequence identity, aligned structurally; and all other proteins from the representative set that contain substructures significantly similar to the search structure. Very close relatives (above 70% sequence identity) rarely have significant structural differences and are excluded. The alignments of remote relatives are the result of pairwise all-against-all structural comparisons in the set of 154 representative protein chains. The comparisons were carried out with each of three novel automatic algorithms that cover different aspects of protein structure similarity. The user of the database has the choice between strict rigid-body comparisons and comparisons that take into account interdomain motion or geometrical distortions; and, between comparisons that require strictly sequential ordering of segments and comparisons which allow altered topology of loop connections or chain reversals. The data sets report the structurally equivalent residues in the form of a multiple alignment and as a list of matching fragments to facilitate inspection by three-dimensional graphics. If substructures are ignored, the result is a database of structure alignments of full-length proteins, including those in the twilight zone of sequence similarity. The database makes explicitly visible architectural similarities in the known part of the universe of protein folds and may be useful for understanding protein folding and for extracting structural modules for protein design. The data sets are available via Internet.
    BibTeX:
    @article{HOLM1992,
      author = {HOLM, L and OUZOUNIS, C and SANDER, C and TUPAREV, G and VRIEND, G},
      title = {A DATABASE OF PROTEIN-STRUCTURE FAMILIES WITH COMMON FOLDING MOTIFS},
      journal = {PROTEIN SCIENCE},
      year = {1992},
      volume = {1},
      number = {12},
      pages = {1691-1698}
    }
    
    Holme, P., Kim, B., Yoon, C. & Han, S. Attack vulnerability of complex networks {2002} PHYSICAL REVIEW E
    Vol. {65}({5, Part 2}) 
    article DOI  
    Abstract: We study the response of complex networks subject to attacks on vertices and edges. Several existing complex network models as well as real-world networks of scientific collaborations and Internet traffic are numerically investigated, and the network performance is quantitatively measured by the average inverse geodesic length and the size of the largest connected subgraph. For each case of attacks on vertices and edges, four different attacking strategies are used: removals by the descending order of the degree and the betweenness centrality, calculated for either the initial network or the current network during the removal procedure. It is found that the removals by the recalculated degrees and betweenness centralities are often more harmful than the attack strategies based on the initial network, suggesting that the network structure changes as important vertices or edges are removed. Furthermore, the correlation between the betweenness centrality and the degree in complex networks is studied.
    BibTeX:
    @article{Holme2002,
      author = {Holme, P and Kim, BJ and Yoon, CN and Han, SK},
      title = {Attack vulnerability of complex networks},
      journal = {PHYSICAL REVIEW E},
      year = {2002},
      volume = {65},
      number = {5, Part 2},
      doi = {{10.1103/PhysRevE.65.056109}}
    }
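    Both performance measures used in the paper, the average inverse geodesic length and the size of the largest connected subgraph, together with the most harmful strategy it identifies (removal by recalculated degree), are straightforward to reproduce. A rough sketch using networkx, with a Barabasi-Albert graph standing in for the model and real-world networks actually studied:

      import networkx as nx

      def avg_inverse_geodesic(G):
          # Mean of 1/d(u, v) over ordered vertex pairs; unreachable pairs add 0.
          n = G.number_of_nodes()
          total = sum(1.0 / d
                      for u, dists in nx.shortest_path_length(G)
                      for v, d in dists.items() if v != u)
          return total / (n * (n - 1))

      def attack_by_recalculated_degree(G, steps):
          # Remove the currently highest-degree vertex, recalculating degrees
          # after each removal (more harmful than using the initial degrees).
          G = G.copy()
          history = []
          for _ in range(steps):
              v = max(G.degree, key=lambda nd: nd[1])[0]
              G.remove_node(v)
              giant = max(nx.connected_components(G), key=len)
              history.append((avg_inverse_geodesic(G), len(giant)))
          return history

      G = nx.barabasi_albert_graph(200, 2, seed=1)
      for linv, size in attack_by_recalculated_degree(G, 5):
          print(f"avg inverse geodesic {linv:.4f}, giant component {size}")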
    
    Hsu, C. & Lu, H. Why do people play on-line games? An extended TAM with social influences and flow experience {2004} INFORMATION & MANAGEMENT
    Vol. {41}({7}), pp. {853-868} 
    article DOI  
    Abstract: On-line games have been a highly profitable e-commerce application in recent years. The market value of on-line games is increasing markedly and number of players is rapidly growing. The reasons that people play on-line games is an important area of research. This study views on-line games as entertainment technology. However, while most past studies have focused on task-oriented technology, predictors of entertainment-oriented technology adoption have seldom been addressed. This study applies the technology acceptance model (TAM) that incorporates social influences and flow experience as belief-related constructs to predict users' acceptance of on-line games. The proposed model was empirically evaluated using survey data collected from 233 users about their perceptions of on-line games. Overall, the results reveal that social norms, attitude, and flow experience explain about 80% of game playing. The implications of this study are discussed. (C) 2003 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Hsu2004,
      author = {Hsu, CL and Lu, HP},
      title = {Why do people play on-line games? An extended TAM with social influences and flow experience},
      journal = {INFORMATION & MANAGEMENT},
      year = {2004},
      volume = {41},
      number = {7},
      pages = {853-868},
      doi = {{10.1016/j.im.2003.08.014}}
    }
    
    Hu, N. & Steenkiste, P. Evaluation and characterization of available bandwidth probing techniques {2003} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {21}({6}), pp. {879-894} 
    article DOI  
    Abstract: The packet pair mechanism has been shown to be a reliable method to measure the bottleneck link capacity on a network path, but its use for measuring available bandwidth is more challenging. In this paper, we use modeling, measurements, and simulations to better characterize the interaction between probing packets and the competing network traffic. We first construct a simple model to understand how competing traffic changes the probing packet gap for a single-hop network. The gap model shows that the initial probing gap is a critical parameter when using packet pairs to estimate available bandwidth. Based on this insight, we present two available bandwidth measurement techniques, the initial gap increasing (IGI) method and the packet transmission rate (PTR) method. We use extensive Internet measurements to show that these techniques estimate available bandwidth faster than existing techniques such as Pathload, with comparable accuracy. Finally, using both Internet measurements and ns simulations, we explore how the measurement accuracy of active probing is affected by factors such as the probing packet size, the length of the probing packet train, and the competing traffic on links other than the tight link.
    BibTeX:
    @article{Hu2003,
      author = {Hu, NN and Steenkiste, P},
      title = {Evaluation and characterization of available bandwidth probing techniques},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {2003},
      volume = {21},
      number = {6},
      pages = {879-894},
      doi = {{10.1109/JSAC.2003.814505}}
    }
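    The single-hop gap model and the PTR estimate lend themselves to a compact sketch. The formula below paraphrases the paper's fluid-model intuition, all numbers are invented, and this shows a single probing iteration; the real tools adjust the initial gap until the probing rate converges to the available bandwidth:

      def output_gap(g_in, pkt_bits, capacity_bps, competing_bps):
          # While the bottleneck queue stays busy, competing traffic arriving
          # during the input gap is sent ahead of the second probe packet:
          # g_out = (pkt_bits + competing_bps * g_in) / capacity_bps.
          # If the queue drains, the input gap passes through unchanged.
          busy_gap = (pkt_bits + competing_bps * g_in) / capacity_bps
          return max(g_in, busy_gap)

      def ptr_estimate(pkt_bits, out_gaps):
          # Packet transmission rate: probe bits divided by the time the
          # train occupied the path, read as an available-bandwidth estimate.
          return len(out_gaps) * pkt_bits / sum(out_gaps)

      # 100 Mbit/s tight link, 40 Mbit/s competing load, 1500-byte probes
      # sent with a 150 microsecond initial gap.
      gaps = [output_gap(150e-6, 12000, 100e6, 40e6)] * 59
      print(f"PTR estimate: {ptr_estimate(12000, gaps) / 1e6:.1f} Mbit/s")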
    
    Huberman, B. & Adamic, L. Internet - Growth dynamics of the World-Wide Web {1999} NATURE
    Vol. {401}({6749}), pp. {131} 
    article  
    BibTeX:
    @article{Huberman1999,
      author = {Huberman, BA and Adamic, LA},
      title = {Internet - Growth dynamics of the World-Wide Web},
      journal = {NATURE},
      year = {1999},
      volume = {401},
      number = {6749},
      pages = {131}
    }
    
    Huizingh, E. The content and design of Web sites: an empirical study {2000} INFORMATION & MANAGEMENT
    Vol. {37}({3}), pp. {123-134} 
    article  
    Abstract: To support the emergence of a solid knowledge base for analyzing Web activity, we have developed a framework to analyze and categorize the capabilities of Web sites. This distinguishes content from design. Content refers to the information, features, or services that are offered in the Web site, design to the way the content is made available for Web visitors. Both concepts have been operationalized by means of objective and subjective measures to capture features as well as perceptions. This framework has been applied to study how different groups of companies are using the Web for commercial purposes. We have compared Web sites based on their source, industry, and size. On average, larger Web sites seem to be `richer' and more advanced. (C) 2000 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Huizingh2000,
      author = {Huizingh, EKRE},
      title = {The content and design of Web sites: an empirical study},
      journal = {INFORMATION & MANAGEMENT},
      year = {2000},
      volume = {37},
      number = {3},
      pages = {123-134}
    }
    
    Hunter, D. & Andonovic, I. Approaches to optical Internet packet switching {2000} IEEE COMMUNICATIONS MAGAZINE
    Vol. {38}({9}), pp. {116-122} 
    article  
    Abstract: Wavelength-division multiplexing is currently being deployed in telecommunications networks in order to satisfy the increased demand for capacity brought about by the explosion in Internet use. The most widely accepted network evolution prediction is via an extension of these initial predominantly point-to-point deployments, with limited system functionalities, into highly interconnected networks supporting circuit-switched paths. While current applications of WDM focus on relatively static usage of individual wavelength channels, optical switching technologies enable fast dynamic allocation of WDM channels. The challenge involves combining the advantages of these relatively coarse-grained WDM techniques with emerging optical switching capabilities to yield a high-throughput optical platform directly underpinning next-generation networks. One alternative longer-term strategy for network evolution employs optical packet switching, providing greater flexibility, functionality, and granularity. This article reviews progress on the definition of optical packet switching and routing networks capable of providing end-to-end optical paths and/or connectionless transport. To date the approaches proposed predominantly use fixed-duration optical packets with lower-bit-rate headers to facilitate processing at the network-node interfaces. Thus, the major advances toward the goal of developing an extensive optical packet-switched layer employing fixed-length packets will be summarized, but initial concepts on the support of variable-length IP-like optical packets will also be introduced. Particular strategies implementing the crucial optical buffering function at the switching nodes will be described, motivated by the network functionalities required within the optical packet layer.
    BibTeX:
    @article{Hunter2000,
      author = {Hunter, DK and Andonovic, I},
      title = {Approaches to optical Internet packet switching},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {2000},
      volume = {38},
      number = {9},
      pages = {116-122}
    }
    
    Impicciatore, P., Pandolfini, C., Casella, N. & Bonati, M. Reliability of health information for the public on the world wide web: Systematic survey of advice on managing fever in children at home {1997} BRITISH MEDICAL JOURNAL
    Vol. {314}({7098}), pp. {1875-1879} 
    article  
    Abstract: Objective: To assess the reliability of healthcare information on the world wide web and therefore how it may help lay people cope with common health problems. Methods: Systematic search by means of two search engines, Yahoo and Excite, of parent oriented web pages relating to home management of feverish children. Reliability of information on the web sites was checked by comparison with published guidelines. Main outcome measures: Minimum temperature of child that should be considered as fever, optimal sites for measuring temperature, pharmacological and physical treatment of fever, conditions that may warrant a doctor's visit. Results: 41 web pages were retrieved and considered. 28 web pages gave a temperature above which a child is feverish; 26 pages indicated the optimal site for taking temperature, most recommending rectal measurement; 31 of the 34 pages that mentioned drug treatment recommended paracetamol as an antipyretic; 38 pages recommended non-drug measures, most commonly tepid sponging, dressing lightly, and increasing fluid intake; and 36 pages gave some indication of when a doctor should be called. Only four web pages adhered closely to the main recommendations in the guidelines. The largest deviations were in sponging procedures and how to take a child's temperature, whereas there was a general agreement in the use of paracetamol. Conclusions: Only a few web sites provided complete and accurate information for this common and widely discussed condition. This suggests an urgent need to check public oriented healthcare information on the internet for accuracy, completeness, and consistency.
    BibTeX:
    @article{Impicciatore1997,
      author = {Impicciatore, P and Pandolfini, C and Casella, N and Bonati, M},
      title = {Reliability of health information for the public on the world wide web: Systematic survey of advice on managing fever in children at home},
      journal = {BRITISH MEDICAL JOURNAL},
      year = {1997},
      volume = {314},
      number = {7098},
      pages = {1875-1879}
    }
    
    Ishikawa, J. & Hotta, K. FramePlot: a new implementation of the Frame analysis for predicting protein-coding regions in bacterial DNA with a high G plus C content {1999} FEMS MICROBIOLOGY LETTERS
    Vol. {174}({2}), pp. {251-253} 
    article  
    Abstract: FramePlot is a web-based tool for predicting protein-coding regions in bacterial DNA with a high G+C content, such as Streptomyces. The graphical output provides for easy distinction of protein-coding regions from non-coding regions. The plot is a clickable map. Clicking on an ORF provides not only the nucleotide sequence but also its deduced amino acid sequence. These sequences can then be compared to the NCBI sequence database over the Internet. The program is freely available for academic purposes at http://www.nih.go.jp/-jun/cgi-bin/frameplot.pl. (C) 1999 Federation of European Microbiological Societies. Published by Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Ishikawa1999,
      author = {Ishikawa, J and Hotta, K},
      title = {FramePlot: a new implementation of the Frame analysis for predicting protein-coding regions in bacterial DNA with a high G plus C content},
      journal = {FEMS MICROBIOLOGY LETTERS},
      year = {1999},
      volume = {174},
      number = {2},
      pages = {251-253}
    }
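    Frame analysis exploits the strong third-position G+C bias of high-G+C coding sequences: in the correct reading frame, the G+C fraction at every third base rises well above that of the other two frames. A minimal sketch of that underlying computation (the window and step sizes are invented; FramePlot itself adds ORF detection and the clickable plot):

      def frame_gc(seq, frame, window=120, step=30):
          # G+C fraction at every third position of seq for reading frame
          # 0, 1, or 2, computed over sliding windows along the sequence.
          seq = seq.upper()
          points = []
          for start in range(frame, len(seq) - window, step):
              third = seq[start:start + window:3]  # every 3rd base
              points.append((start, sum(b in "GC" for b in third) / len(third)))
          return points

      # In a high-G+C genome such as Streptomyces, the curve for the coding
      # frame sits well above the other two, marking likely ORFs.
      for f in range(3):
          gc = [round(v, 2) for _, v in frame_gc("ATGGCCGGCACCGTCGGC" * 20, f)]
          print(f, gc[:5])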
    
    Issenberg, S., McGaghie, W., Petrusa, E., Gordon, D. & Scalese, R. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review {2005} MEDICAL TEACHER
    Vol. {27}({1}), pp. {10-28} 
    article DOI  
    Abstract: Review date: 1969 to 2003, 34 years. Background and context: Simulations are now in widespread use in medical education and medical personnel evaluation. Outcomes research on the use and effectiveness of simulation technology in medical education is scattered, inconsistent and varies widely in methodological rigor and substantive focus. Objectives: Review and synthesize existing evidence in educational science that addresses the question, `What are the features and uses of high-fidelity medical simulations that lead to most effective learning?'. Search strategy: The search covered five literature databases (ERIC, MEDLINE, PsycINFO, Web of Science and Timelit) and employed 91 single search terms and concepts and their Boolean combinations. Hand searching, Internet searches and attention to the `grey literature' were also used. The aim was to perform the most thorough literature search possible of peer-reviewed publications and reports in the unpublished literature that have been judged for academic quality. Inclusion and exclusion criteria: Four screening criteria were used to reduce the initial pool of 670 journal articles to a focused set of 109 studies: (a) elimination of review articles in favor of empirical studies; (b) use of a simulator as an educational assessment or intervention with learner outcomes measured quantitatively; (c) comparative research, either experimental or quasi-experimental; and (d) research that involves simulation as an educational intervention. Data extraction: Data were extracted systematically from the 109 eligible journal articles by independent coders. Each coder used a standardized data extraction protocol. Data synthesis: Qualitative data synthesis and tabular presentation of research methods and outcomes were used. Heterogeneity of research designs, educational interventions, outcome measures and timeframe precluded data synthesis using meta-analysis. Headline results: Coding accuracy for features of the journal articles is high. The extant quality of the published research is generally weak. The weight of the best available evidence suggests that high-fidelity medical simulations facilitate learning under the right conditions. These include the following: providing feedback - 51 (47%) journal articles reported that educational feedback is the most important feature of simulation-based medical education; repetitive practice - 43 (39%) journal articles identified repetitive practice as a key feature involving the use of high-fidelity simulations in medical education; curriculum integration - 27 (25%) journal articles cited integration of simulation-based exercises into the standard medical school or postgraduate educational curriculum as an essential feature of their effective use; range of difficulty level - 15 (14%) journal articles addressed the importance of the range of task difficulty level as an important variable in simulation-based medical education; multiple learning strategies - 11 (10%) journal articles identified the adaptability of high-fidelity simulations to multiple learning strategies as an important factor in their educational effectiveness; capture clinical variation - 11 (10%) journal articles cited simulators that capture a wide variety of clinical conditions as more useful than those with a narrow range; controlled environment - 10 (9%) journal articles emphasized the importance of using high-fidelity simulations in a controlled environment where learners can make, detect and correct errors without adverse consequences; individualized learning - 10 (9%) journal articles highlighted the importance of having reproducible, standardized educational experiences where learners are active participants, not passive bystanders; defined outcomes - seven (6%) journal articles cited the importance of having clearly stated goals with tangible outcome measures that will more likely lead to learners mastering skills; simulator validity - four (3%) journal articles provided evidence for the direct correlation of simulation validity with effective learning. Conclusions: While research in this field needs improvement in terms of rigor and quality, high-fidelity medical simulations are educationally effective and simulation-based education complements medical education in patient care settings.
    BibTeX:
    @article{Issenberg2005,
      author = {Issenberg, SB and McGaghie, WC and Petrusa, ER and Gordon, DL and Scalese, RJ},
      title = {Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review},
      journal = {MEDICAL TEACHER},
      year = {2005},
      volume = {27},
      number = {1},
      pages = {10-28},
      doi = {{10.1080/01421590500046924}}
    }
    
    Jackson, L., Ervin, K., Gardner, P. & Schmitt, N. Gender and the Internet: Women communicating and men searching {2001} SEX ROLES
    Vol. {44}({5-6}), pp. {363-379} 
    article  
    Abstract: This research examined gender differences in Internet use and factors responsible for these differences. A sample of 630 Anglo American undergraduates completed the Student Computer and Internet Survey that contained questions about e-mail and Web use, and about potential affective and cognitive mediators of use. Based on a general model of Internet use, we predicted and found that females used e-mail more than did males, males used the Web more than did females, and females reported more computer anxiety, less computer self-efficacy, and less favorable and less stereotypic computer attitudes. Path analysis to identify mediators of gender differences in Internet use revealed that computer self-efficacy, loneliness, and depression accounted in part for gender differences, but that gender continued to have a direct effect on use after these factors were considered. Implications for realizing the democratizing potential and benefits of Internet use are discussed.
    BibTeX:
    @article{Jackson2001,
      author = {Jackson, LA and Ervin, KS and Gardner, PD and Schmitt, N},
      title = {Gender and the Internet: Women communicating and men searching},
      journal = {SEX ROLES},
      year = {2001},
      volume = {44},
      number = {5-6},
      pages = {363-379}
    }
    
    Jadad, A. & Gagliardi, A. Rating health information on the Internet - Navigating to knowledge or to Babel? {1998} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {279}({8}), pp. {611-614} 
    article  
    Abstract: Context.-The rapid growth of the Internet has triggered an information revolution of unprecedented magnitude. Despite its obvious benefits, the increase in the availability of information could also result in many potentially harmful effects on both consumers and health professionals who do not use it appropriately. Objectives.-To identify instruments used to rate Web sites providing health information on the Internet, rate criteria used by them, establish the degree of validation of the instruments, and provide future directions for research in this area. Data Sources.-MEDLINE (1966-1997), CINAHL (1982-1997), HEALTH (1975-1997), Information Science Abstracts (1966 to September 1995), Library and Information Science Abstracts (1969-1995), and Library Literature (1984-1996); the search engines Lycos, Excite, Open Text, Yahoo, HotBot, Infoseek, and Magellan; Internet discussion lists; meeting proceedings; multiple Web pages; and reference lists. Instrument Selection.-Instruments used at least once to rate the quality of Web sites providing health information with their rating criteria available on the Internet. Data Extraction.-The name of the developing organization, Internet address, rating criteria, information on the development of the instrument, number and background of people generating the assessments, and data on the validity and reliability of the measurements. Data Synthesis.-A total of 47 rating instruments were identified. Fourteen provided a description of the criteria used to produce the ratings, and 5 of these provided instructions for their use. None of the instruments identified provided information on the interobserver reliability and construct validity of the measurements. Conclusions.-Many incompletely developed instruments to evaluate health information exist on the Internet. It is unclear, however, whether they should exist in the first place, whether they measure what they claim to measure, or whether they lead to more good than harm.
    BibTeX:
    @article{Jadad1998,
      author = {Jadad, AR and Gagliardi, A},
      title = {Rating health information on the Internet - Navigating to knowledge or to Babel?},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {1998},
      volume = {279},
      number = {8},
      pages = {611-614}
    }
    
    Jain, M. & Dovrolis, C. End-to-end available bandwidth: Measurement methodology, dynamics, and Relation with TCP throughput {2003} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {11}({4}), pp. {537-549} 
    article DOI  
    Abstract: The available bandwidth (avail-bw) in a network path is of major importance in congestion control, streaming applications, quality-of-service verification, server selection, and overlay networks. We describe an end-to-end methodology, called self-loading periodic streams (SLoPS), for measuring avail-bw. The basic idea in SLoPS is that the one-way delays of a periodic packet stream show an increasing trend when the stream's rate is higher than the avail-bw. We implemented SLoPS in a tool called pathload. The accuracy of the tool has been evaluated with both simulations and experiments over real-world Internet paths. Pathload is nonintrusive, meaning that it does not cause significant increases in the network utilization, delays, or losses. We used pathload to evaluate the variability ("dynamics") of the avail-bw in Internet paths. The avail-bw becomes significantly more variable in heavily utilized paths, as well as in paths with limited capacity (probably due to a lower degree of statistical multiplexing). We finally examine the relation between avail-bw and TCP throughput. A persistent TCP connection can be used to roughly measure the avail-bw in a path, but TCP saturates the path and increases significantly the path delays and jitter.
    BibTeX:
    @article{Jain2003,
      author = {Jain, M and Dovrolis, C},
      title = {End-to-end available bandwidth: Measurement methodology, dynamics, and Relation with TCP throughput},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {2003},
      volume = {11},
      number = {4},
      pages = {537-549},
      note = {ACM SIGCOMM 2002 Conference, PITTSBURGH, PENNSYLVANIA, AUG 19-23, 2002},
      doi = {{10.1109/TNET.2003.815304}}
    }
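
    The SLoPS idea above lends itself to a compact sketch: probe at a given rate, test the one-way delays for an increasing trend, and binary-search the rate. A minimal Python illustration; send_stream, the rate bounds, and the trend test are assumptions for illustration, not the pathload implementation:

        # Sketch of the SLoPS idea (illustrative only, not pathload itself).
        # send_stream(rate) is a hypothetical probe returning the one-way
        # delays (in seconds) of a periodic packet stream sent at `rate` bps.

        def increasing_trend(delays, frac=0.6):
            """Crude trend test: fraction of consecutive delay increases."""
            ups = sum(1 for a, b in zip(delays, delays[1:]) if b > a)
            return ups >= frac * (len(delays) - 1)

        def estimate_avail_bw(send_stream, lo=1e6, hi=100e6, tol=1e6):
            """Binary search: rising delays mean rate > avail-bw, so go lower."""
            while hi - lo > tol:
                rate = (lo + hi) / 2
                if increasing_trend(send_stream(rate)):
                    hi = rate      # stream overloaded the path
                else:
                    lo = rate      # path absorbed the stream
            return (lo + hi) / 2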
    
    Jansen, B. & Pooch, U. A review of Web searching studies and a framework for future research {2001} JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY
    Vol. {52}({3}), pp. {235-246} 
    article  
    Abstract: Research on Web searching is at an incipient stage. This aspect provides a unique opportunity to review the current state of research in the field, identify common trends, develop a methodological framework, and define terminology for future Web searching studies. In this article, the results from published studies of Web searching are reviewed to present the current state of research. The analysis of the limited Web searching studies available indicates that research methods and terminology are already diverging. A framework is proposed for future studies that will facilitate comparison of results. The advantages of such a framework are presented, and the implications for the design of Web information retrieval systems studies are discussed. Additionally, the searching characteristics of Web users are compared and contrasted with users of traditional information retrieval and online public access systems to discover if there is a need for more studies that focus predominantly or exclusively on Web searching. The comparison indicates that Web searching differs from searching in other environments.
    BibTeX:
    @article{Jansen2001,
      author = {Jansen, BJ and Pooch, U},
      title = {A review of Web searching studies and a framework for future research},
      journal = {JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY},
      year = {2001},
      volume = {52},
      number = {3},
      pages = {235-246}
    }
    
    Jansen, B., Spink, A. & Saracevic, T. Real life, real users, and real needs: a study and analysis of user queries on the web {2000} INFORMATION PROCESSING & MANAGEMENT
    Vol. {36}({2}), pp. {207-227} 
    article  
    Abstract: We analyzed transaction logs containing 51,473 queries posed by 18,113 users of Excite, a major Internet search service. We provide data on: (i) sessions - changes in queries during a session, number of pages viewed, and use of relevance feedback; (ii) queries - the number of search terms, and the use of logic and modifiers; and (iii) terms - their rank/frequency distribution and the most highly used search terms. We then shift the focus of analysis from the query to the user to gain insight into the characteristics of the Web user. With these characteristics as a basis, we then conducted a failure analysis, identifying trends among user mistakes. We conclude with a summary of findings and a discussion of the implications of these findings. (C) 2000 Elsevier Science Ltd. All rights reserved.
    BibTeX:
    @article{Jansen2000,
      author = {Jansen, BJ and Spink, A and Saracevic, T},
      title = {Real life, real users, and real needs: a study and analysis of user queries on the web},
      journal = {INFORMATION PROCESSING & MANAGEMENT},
      year = {2000},
      volume = {36},
      number = {2},
      pages = {207-227}
    }
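
    The term-level analysis described above (rank/frequency distributions of search terms) can be illustrated with a toy log; the one-query-per-element format below is an assumption for illustration, not Excite's actual log schema:

        from collections import Counter

        # Toy rank/frequency analysis of query terms; each element of `log`
        # is assumed to be one query string (not Excite's real log format).
        log = ["star trek", "weather sydney", "star wars", "weather"]

        terms = Counter(t for q in log for t in q.lower().split())
        for rank, (term, freq) in enumerate(terms.most_common(), start=1):
            print(rank, term, freq)   # rank, term, frequency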
    
    Jeffrey, S., Carter, J., Moodie, K. & Beswick, A. Using spatial interpolation to construct a comprehensive archive of Australian climate data {2001} ENVIRONMENTAL MODELLING & SOFTWARE
    Vol. {16}({4}), pp. {309-330} 
    article  
    Abstract: A comprehensive archive of Australian rainfall and climate data has been constructed from ground-based observational data. Continuous, daily time step records have been constructed using spatial interpolation algorithms to estimate missing data. Datasets have been constructed for daily rainfall, maximum and minimum temperatures, evaporation, solar radiation and vapour pressure. Datasets are available for approximately 4600 locations across Australia, commencing in 1890 for rainfall and 1957 for climate variables. The datasets can be accessed on the Internet at http://www.dnr.qld.gov.au/silo. Interpolated surfaces have been computed on a regular 0.05° grid extending from latitude 10°S to 44°S and longitude 112°E to 154°E. A thin plate smoothing spline was used to interpolate daily climate variables, and ordinary kriging was used to interpolate daily and monthly rainfall. Independent cross validation has been used to analyse the temporal and spatial error of the interpolated data. An Internet based facility has been developed which allows database clients to interrogate the gridded surfaces at any desired location. (C) 2001 Elsevier Science Ltd. All rights reserved.
    BibTeX:
    @article{Jeffrey2001,
      author = {Jeffrey, SJ and Carter, JO and Moodie, KB and Beswick, AR},
      title = {Using spatial interpolation to construct a comprehensive archive of Australian climate data},
      journal = {ENVIRONMENTAL MODELLING & SOFTWARE},
      year = {2001},
      volume = {16},
      number = {4},
      pages = {309-330}
    }
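
    The gridding step can be sketched with an off-the-shelf thin plate spline on synthetic station data (assumes SciPy >= 1.7; an illustration only, not the SILO pipeline, and the grid is coarsened):

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # Synthetic daily-maximum-temperature observations at random
        # "stations" over the paper's lon/lat domain (made-up data).
        rng = np.random.default_rng(0)
        stations = rng.uniform([112, -44], [154, -10], size=(500, 2))  # lon, lat
        tmax = 20 + 0.3 * (stations[:, 1] + 44) + rng.normal(0, 0.5, 500)

        # Thin plate smoothing spline fit to the scattered observations.
        spline = RBFInterpolator(stations, tmax,
                                 kernel='thin_plate_spline', smoothing=1.0)

        # Evaluate on a regular grid (0.5 degrees here; the paper uses 0.05).
        lon, lat = np.meshgrid(np.arange(112, 154, 0.5), np.arange(-44, -10, 0.5))
        grid = np.column_stack([lon.ravel(), lat.ravel()])
        tmax_grid = spline(grid).reshape(lon.shape)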
    
    Johari, R. & Tan, D. End-to-end congestion control for the Internet: Delays and stability {2001} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {9}({6}), pp. {818-832} 
    article  
    Abstract: Under the assumption that queueing delays will eventually become small relative to propagation delays, we derive stability results for a fluid flow model of end-to-end Internet congestion control. The theoretical results of the paper are intended to be decentralized and locally implemented: each end system needs knowledge only of its own round-trip delay. Criteria for local stability and rate of convergence are completely characterized for a single resource, single user system. Stability criteria are also described for networks where all users share the same round-trip delay. Numerical experiments investigate extensions to more general networks. Through simulations, we are able to evaluate the relative importance of queueing delays and propagation delays on network stability. Finally, we suggest how these results may be used to design network resources.
    BibTeX:
    @article{Johari2001,
      author = {Johari, R and Tan, DKH},
      title = {End-to-end congestion control for the Internet: Delays and stability},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {2001},
      volume = {9},
      number = {6},
      pages = {818-832}
    }
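
    The flavour of such delay-aware stability criteria is visible in the classical scalar benchmark (a standard delay-differential fact, not the paper's general theorem). The linearised single-user, single-resource loop reduces to

        \dot{x}(t) = -\alpha \, x(t - T), \qquad \alpha, T > 0,

    which is asymptotically stable if and only if \alpha T < \pi/2. The gain \alpha must therefore shrink as the round-trip delay T grows, which is exactly the kind of locally implementable condition (each end system knowing only its own round-trip delay) the abstract describes.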
    
    Joinson, A. Self-disclosure in computer-mediated communication: The role of self-awareness and visual anonymity {2001} EUROPEAN JOURNAL OF SOCIAL PSYCHOLOGY
    Vol. {31}({2}), pp. {177-192} 
    article  
    Abstract: Three studies examined the notion that computer-mediated communication (CMC) can be characterised by high levels of self-disclosure. In Study One, significantly higher levels of spontaneous self-disclosure were found in computer-mediated compared to face-to-face discussions. Study Two examined the role of visual anonymity in encouraging self-disclosure during CMC. Visually anonymous participants disclosed significantly more information about themselves than non-visually anonymous participants. In Study Three, private and public self-awareness were independently manipulated, using video-conferencing cameras and accountability cues, to create a 2 x 2 design (public self-awareness (high and low) x private self-awareness (high and low)). It was found that heightened private self-awareness, when combined with reduced public self-awareness, was associated with significantly higher levels of spontaneous self-disclosure during computer-mediated communication. Copyright (C) 2001 John Wiley & Sons, Ltd.
    BibTeX:
    @article{Joinson2001,
      author = {Joinson, AN},
      title = {Self-disclosure in computer-mediated communication: The role of self-awareness and visual anonymity},
      journal = {EUROPEAN JOURNAL OF SOCIAL PSYCHOLOGY},
      year = {2001},
      volume = {31},
      number = {2},
      pages = {177-192}
    }
    
    Joinson, A. Social desirability, anonymity, and Internet-based questionnaires {1999} BEHAVIOR RESEARCH METHODS INSTRUMENTS & COMPUTERS
    Vol. {31}({3}), pp. {433-438} 
    article  
    Abstract: It has been argued that behavior on the Internet differs from similar behavior in the "real world" (Joinson, 1998a). In the present study, participants completed measures of self-consciousness, social anxiety, self-esteem, and social desirability, using either the World-Wide Web (WWW) or pen and paper, and were assigned to either an anonymous or a nonanonymous condition. It was found that people reported lower social anxiety and social desirability and higher self-esteem when they were anonymous than when they were nonanonymous. Furthermore, participants also reported lower social anxiety and social desirability when they were using the Internet than when they were using paper-based methods. Contrast analyses supported the prediction that participants using the WWW anonymously would show the lowest levels of social desirability, whereas participants answering with pen and paper nonanonymously would score highest on the same measure. Implications for the use of the Internet for the collection of psychological data are discussed.
    BibTeX:
    @article{Joinson1999,
      author = {Joinson, A},
      title = {Social desirability, anonymity, and Internet-based questionnaires},
      journal = {BEHAVIOR RESEARCH METHODS INSTRUMENTS & COMPUTERS},
      year = {1999},
      volume = {31},
      number = {3},
      pages = {433-438}
    }
    
    Jolley, K., Chan, M. & Maiden, M. mlstdbNet - distributed multi-locus sequence typing (MLST) databases {2004} BMC BIOINFORMATICS
    Vol. {5} 
    article DOI  
    Abstract: Background: Multi-locus sequence typing (MLST) is a method of typing that facilitates the discrimination of microbial isolates by comparing the sequences of housekeeping gene fragments. The mlstdbNet software enables the implementation of distributed web-accessible MLST databases that can be linked widely over the Internet. Results: The software enables multiple isolate databases to query a single profiles database that contains allelic profile and sequence definitions. This separation enables isolate databases to be established by individual laboratories, each customised to the needs of the particular project and with appropriate access restrictions, while maintaining the benefits of a single definitive source of profile and sequence information. Databases are described by an XML file that is parsed by a Perl CGI script. The software offers a large number of ways to query the databases and to further break down and export the results generated. Additional features can be enabled by installing third-party (freely available) tools. Conclusion: Development of a distributed structure for MLST databases offers scalability and flexibility, allowing participating centres to maintain ownership of their own data, without introducing duplication and data integrity issues.
    BibTeX:
    @article{Jolley2004,
      author = {Jolley, KA and Chan, MS and Maiden, MCJ},
      title = {mlstdbNet - distributed multi-locus sequence typing (MLST) databases},
      journal = {BMC BIOINFORMATICS},
      year = {2004},
      volume = {5},
      doi = {{10.1186/1471-2105-5-86}}
    }
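
    The typing step itself, matching an isolate's allelic profile against the profiles database, reduces to a lookup. A minimal sketch with invented profiles (mlstdbNet itself is Perl CGI driven by an XML description, per the abstract):

        # Hypothetical profiles database: sequence type (ST) -> allele
        # numbers at seven housekeeping loci (invented values).
        PROFILES = {
            1: (1, 3, 1, 1, 1, 1, 3),
            4: (1, 3, 3, 2, 4, 1, 3),
        }

        def sequence_type(allelic_profile):
            """Return the ST whose alleles match exactly, else None (novel)."""
            for st, profile in PROFILES.items():
                if profile == tuple(allelic_profile):
                    return st
            return None

        print(sequence_type([1, 3, 3, 2, 4, 1, 3]))  # -> 4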
    
    Josang, A., Ismail, R. & Boyd, C. A survey of trust and reputation systems for online service provision {2007} DECISION SUPPORT SYSTEMS
    Vol. {43}({2}), pp. {618-644} 
    article DOI  
    Abstract: Trust and reputation systems represent a significant trend in decision support for Internet mediated service provision. The basic idea is to let parties rate each other, for example after the completion of a transaction, and use the aggregated ratings about a given party to derive a trust or reputation score, which can assist other parties in deciding whether or not to transact with that party in the future. A natural side effect is that it also provides an incentive for good behaviour, and therefore tends to have a positive effect on market quality. Reputation systems can be called collaborative sanctioning systems to reflect their collaborative nature, and are related to collaborative filtering systems. Reputation systems are already being used in successful commercial online applications. There is also a rapidly growing literature around trust and reputation systems, but unfortunately this activity is not very coherent. The purpose of this article is to give an overview of existing and proposed systems that can be used to derive measures of trust and reputation for Internet transactions, to analyse the current trends and developments in this area, and to propose a research agenda for trust and reputation systems. (c) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Josang2007,
      author = {Josang, Audun and Ismail, Roslan and Boyd, Colin},
      title = {A survey of trust and reputation systems for online service provision},
      journal = {DECISION SUPPORT SYSTEMS},
      year = {2007},
      volume = {43},
      number = {2},
      pages = {618-644},
      doi = {{10.1016/j.dss.2005.05.019}}
    }
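
    One common aggregate in this literature is a beta-style score derived from binary ratings; a generic sketch (the uniform prior and the function name are assumptions for illustration, not the survey's specification):

        # Reputation as the expected value of a Beta(positive+1, negative+1)
        # belief over the probability that the next transaction is good.
        def reputation(positive, negative):
            return (positive + 1) / (positive + negative + 2)

        print(reputation(90, 10))   # ~0.89: mostly good past transactions
        print(reputation(0, 0))     # 0.5: no evidence, neutral prior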
    
    Jun, J. & Sichitiu, M. The nominal capacity of wireless mesh networks {2003} IEEE WIRELESS COMMUNICATIONS
    Vol. {10}({5}), pp. {8-14} 
    article  
    Abstract: Wireless mesh networks (WMNs) are an alternative technology for last-mile broadband Internet access. In WMNs, similar to ad hoc networks, each user node operates not only as a host but also as a router; user packets are forwarded to and from an Internet-connected gateway in multihop fashion. The meshed topology provides good reliability, market coverage, and scalability, as well as low upfront investment. Despite the recent startup surge in WMNs, much research remains to be done before WMNs realize their full potential. This article tackles the problem of determining the exact capacity of a WMN. The key concept we introduce to enable this calculation is the bottleneck collision domain, defined as the geographical area of the network that bounds from above the amount of data that can be transmitted in the network. We show that for WMNs the throughput of each node decreases as O(1/n), where n is the total number of nodes in the network. In contrast with most existing work on ad hoc network capacity, we do not limit our study to the asymptotic case. In particular, for a given topology and the set of active nodes, we provide exact upper bounds on the throughput of any node. The calculation can be used to provision the network. The theoretical results are validated by detailed simulations.
    BibTeX:
    @article{Jun2003,
      author = {Jun, JG and Sichitiu, ML},
      title = {The nominal capacity of wireless mesh networks},
      journal = {IEEE WIRELESS COMMUNICATIONS},
      year = {2003},
      volume = {10},
      number = {5},
      pages = {8-14}
    }
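
    The O(1/n) behaviour can be made concrete for a chain topology: if all n nodes send at rate g to the gateway, the collision domain next to the gateway hears each flow over roughly a constant number c of hops, so c * n * g cannot exceed the nominal MAC capacity B. A toy calculation (the chain assumption and the constant c are illustrative, not the paper's general construction):

        # Per-node throughput bound for a chain WMN feeding one gateway.
        def per_node_bound(B, n, c=3):
            """B: nominal MAC capacity (bps); n: number of nodes; c: hops of
            each flow heard inside the bottleneck collision domain (assumed)."""
            return B / (c * n)

        for n in (4, 8, 16, 32):
            print(n, per_node_bound(54e6, n))   # decays as O(1/n)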
    
    Kalnay, E., Kanamitsu, M., Kistler, R., Collins, W., Deaven, D., Gandin, L., Iredell, M., Saha, S., White, G., Woollen, J., Zhu, Y., Chelliah, M., Ebisuzaki, W., Higgins, W., Janowiak, J., Mo, K., Ropelewski, C., Wang, J., Leetmaa, A., Reynolds, R., Jenne, R. & Joseph, D. The NCEP/NCAR 40-year reanalysis project {1996} BULLETIN OF THE AMERICAN METEOROLOGICAL SOCIETY
    Vol. {77}({3}), pp. {437-471} 
    article  
    Abstract: The NCEP and NCAR are cooperating in a project (denoted "reanalysis") to produce a 40-year record of global analyses of atmospheric fields in support of the needs of the research and climate monitoring communities. This effort involves the recovery of land surface, ship, rawinsonde, pibal, aircraft, satellite, and other data; quality controlling and assimilating these data with a data assimilation system that is kept unchanged over the reanalysis period 1957-96. This eliminates perceived climate jumps associated with changes in the data assimilation system. The NCEP/NCAR 40-yr reanalysis uses a frozen state-of-the-art global data assimilation system and a database as complete as possible. The data assimilation and the model used are identical to the global system implemented operationally at the NCEP on 11 January 1995, except that the horizontal resolution is T62 (about 210 km). The database has been enhanced with many sources of observations not available in real time for operations, provided by different countries and organizations. The system has been designed with advanced quality control and monitoring components, and can produce 1 month of reanalysis per day on a Cray YMP/8 supercomputer. Different types of output archives are being created to satisfy different user needs, including a "quick look" CD-ROM (one per year) with six tropospheric and stratospheric fields available twice daily, as well as surface, top-of-the-atmosphere, and isentropic fields. Reanalysis information and selected output is also available on-line via the Internet (http://nic.fb4.noaa.gov:8000). A special CD-ROM, containing 13 years of selected observed, daily, monthly, and climatological data from the NCEP/NCAR Reanalysis, is included with this issue. Output variables are classified into four classes, depending on the degree to which they are influenced by the observations and/or the model. For example, "C" variables (such as precipitation and surface fluxes) are completely determined by the model during the data assimilation and should be used with caution. Nevertheless, a comparison of these variables with observations and with several climatologies shows that they generally contain considerable useful information. Eight-day forecasts, produced every 5 days, should be useful for predictability studies and for monitoring the quality of the observing systems. The 40 years of reanalysis (1957-96) should be completed in early 1997. A continuation into the future through an identical Climate Data Assimilation System will allow researchers to reliably compare recent anomalies with those in earlier decades. Since changes in the observing systems will inevitably produce perceived changes in the climate, parallel reanalyses (at least 1 year long) will be generated for the periods immediately after the introduction of new observing systems, such as new types of satellite data. NCEP plans currently call for an updated reanalysis using a state-of-the-art system every five years or so. The successive reanalyses will be greatly facilitated by the generation of the comprehensive database in the present reanalysis.
    BibTeX:
    @article{Kalnay1996,
      author = {Kalnay, E and Kanamitsu, M and Kistler, R and Collins, W and Deaven, D and Gandin, L and Iredell, M and Saha, S and White, G and Woollen, J and Zhu, Y and Chelliah, M and Ebisuzaki, W and Higgins, W and Janowiak, J and Mo, KC and Ropelewski, C and Wang, J and Leetmaa, A and Reynolds, R and Jenne, R and Joseph, D},
      title = {The NCEP/NCAR 40-year reanalysis project},
      journal = {BULLETIN OF THE AMERICAN METEOROLOGICAL SOCIETY},
      year = {1996},
      volume = {77},
      number = {3},
      pages = {437-471}
    }
    
    KASPI, V., TAYLOR, J. & RYBA, M. HIGH-PRECISION TIMING OF MILLISECOND PULSARS .3. LONG-TERM MONITORING OF PSRS B1855+09 AND B1937+21 {1994} ASTROPHYSICAL JOURNAL
    Vol. {428}({2, Part 1}), pp. {713-728} 
    article  
    Abstract: Biweekly timing observations of PSRs B1855+09 and B1937+21 have been made at the Arecibo Observatory for more than 7 and 8 yr, respectively, with uniform procedures and only a few modest gaps. On each observing date we measure an equivalent pulse arrival time for PSR B1855+09 at 1.4 GHz, with typical accuracies of about 0.8 μs, and for PSR B1937+21 at both 1.4 and 2.4 GHz, with accuracies around 0.2 μs. The pulse arrival times are fitted to a simple model for each pulsar, yielding high-precision astrometric, rotational, and orbital parameters, and a diverse range of conclusions. The celestial coordinates and proper motions of the two pulsars are determined with uncertainties ≤ 0.12 mas and ≤ 0.06 mas yr^-1 in the reference frame of the DE200 planetary ephemeris. The annual parallaxes are found to be pi = 1.1 +/- 0.3 mas and pi < 0.28 mas for PSRs B1855+09 and B1937+21, respectively. The general relativistic Shapiro delay is measured in the PSR B1855+09 system and used to obtain masses m1 = 1.50(+0.26/-0.14) M_sun and m2 = 0.258(+0.028/-0.016) M_sun for the pulsar and its orbiting companion. The extremely stable orbital period of this system provides a phenomenological limit on the secular change of Newton's gravitational constant, (dG/dt)/G = (-9 +/- 18) x 10^-12 yr^-1. Variations in the dispersion measure of PSR B1937+21 indicate that the spectrum of electron-density fluctuations in the interstellar medium has a power-law index beta = 3.874 +/- 0.011, slightly steeper than the Kolmogorov value of 11/3, and we find no strong evidence for an "inner scale" greater than about 2 x 10^9 cm. In the residual pulse arrival times for PSR B1937+21 we have observed small systematic trends not explained by our deterministic timing model. We discuss a number of possible causes; although the results are not yet conclusive, the most straightforward interpretation is that the unmodeled noise (a few microseconds over 8 yr, or Δt/T ≈ 10^-14) is inherent to the pulsar itself. In the present data set, PSR B1855+09 exhibits no discernible timing noise. With conventional assumptions we derive a limit Ω_g h^2 < 6 x 10^-8 (95% confidence) for the energy density, per logarithmic frequency interval, in a cosmic background of stochastic gravitational waves. We discuss the feasibility of establishing a pulsar-based timescale that might be used to test the stabilities of the best available atomic clocks. In an Appendix, we propose guidelines for the archiving of pulsar timing observations. Instructions are provided for obtaining copies of our own archival data, via the Internet.
    BibTeX:
    @article{KASPI1994,
      author = {KASPI, VM and TAYLOR, JH and RYBA, MF},
      title = {HIGH-PRECISION TIMING OF MILLISECOND PULSARS .3. LONG-TERM MONITORING OF PSRS B1855+09 AND B1937+21},
      journal = {ASTROPHYSICAL JOURNAL},
      year = {1994},
      volume = {428},
      number = {2, Part 1},
      pages = {713-728}
    }
    
    Kastrati, A., Dibra, A., Eberle, S., Mehilli, J., de Lezo, J., Goy, J., Ulm, K. & Schomig, A. Sirolimus-eluting stents vs paclitaxel-eluting stents in patients with coronary artery disease - Meta-analysis of randomized trials {2005} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {294}({7}), pp. {819-825} 
    article  
    Abstract: Context Placement of sirolimus-eluting stents or paclitaxel-eluting stents has emerged as the predominant percutaneous treatment strategy in patients with coronary artery disease (CAD). Whether there are any differences in efficacy and safety between these 2 drug-eluting stents is unclear. Objective To compare outcomes of sirolimus-eluting and paclitaxel-eluting coronary stents on the basis of data generated by randomized head-to-head clinical trials. Data Sources PubMed and the Cochrane Central Register of Controlled Trials, conference proceedings from major cardiology meetings, and Internet-based sources of information on clinical trials in cardiology from January 2003 to April 2005. Study Selection Randomized trials comparing the sirolimus-eluting stent with the paclitaxel-eluting stent in patients with CAD reporting the outcomes of interest (target lesion revascularization, angiographic restenosis, stent thrombosis, myocardial infarction [MI], death, and the composite of death or MI) during a follow-up of at least 6 months. Data Extraction Two reviewers independently identified studies and abstracted data on sample size, baseline characteristics, and outcomes of interest. Data Synthesis Six trials, including 3669 patients, met the selection criteria. No significant heterogeneity was found across trials. Target lesion revascularization, the primary outcome of interest, was less frequently performed in patients who were treated with the sirolimus-eluting stent (5.1%) vs the paclitaxel-eluting stent (7.8%) (odds ratio [OR], 0.64; 95% confidence interval [CI], 0.49-0.84; P=.001). Similarly, angiographic restenosis was less frequently observed among patients assigned to the sirolimus-eluting stent (9.3%) vs the paclitaxel-eluting stent (13.1%) (OR, 0.68; 95% CI, 0.55-0.86; P=.001). Event rates for sirolimus-eluting vs paclitaxel-eluting stents were 0.9% and 1.1%, respectively, for stent thrombosis (P=.62); 1.4% and 1.6%, respectively, for death (P=.56); and 4.9% and 5.8%, respectively, for the composite of death or MI (P=.23). Conclusions Patients receiving sirolimus-eluting stents had a significantly lower risk of restenosis and target vessel revascularization compared with those receiving paclitaxel-eluting stents. Rates of death, death or MI, and stent thrombosis were similar.
    BibTeX:
    @article{Kastrati2005,
      author = {Kastrati, A and Dibra, A and Eberle, S and Mehilli, J and de Lezo, JS and Goy, JJ and Ulm, K and Schomig, A},
      title = {Sirolimus-eluting stents vs paclitaxel-eluting stents in patients with coronary artery disease - Meta-analysis of randomized trials},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {2005},
      volume = {294},
      number = {7},
      pages = {819-825}
    }
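
    For readers who want to reproduce the arithmetic behind such comparisons, a generic two-by-two odds-ratio sketch (the counts below are invented, not trial data):

        import math

        def odds_ratio(a, b, c, d):
            """OR and 95% CI from a 2x2 table: a/b = events/non-events in
            group 1, c/d = events/non-events in group 2."""
            or_ = (a * d) / (b * c)
            se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log(OR)
            lo, hi = (math.exp(math.log(or_) + z * se) for z in (-1.96, 1.96))
            return or_, lo, hi

        print(odds_ratio(30, 570, 45, 555))   # hypothetical counts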
    
    Keefe, F., Rumble, M., Scipio, C., Giordano, L. & Perri, L. Psychological aspects of persistent pain: Current state of the science {2004} JOURNAL OF PAIN
    Vol. {5}({4}), pp. {195-211} 
    article DOI  
    Abstract: This article provides an overview of current research on psychological aspects of persistent pain. It is divided into 3 sections. In section 1, recent studies are reviewed that provide evidence that psychological factors are related to adjustment to persistent pain. This section addresses research on factors associated with increased pain and poorer adjustment to pain (ie, pain catastrophizing, pain-related anxiety and fear of pain, and helplessness) and factors associated with decreased pain and improved adjustment to pain (ie, self-efficacy, pain coping strategies, readiness to change, and acceptance). In section 2, we review recent research on behavioral and psychosocial interventions for patients with persistent pain. Topics addressed include early intervention, tailoring treatment, telephone/Internet-based treatment, caregiver-assisted treatment, and exposure-based protocols. In section 3, we conclude with a general discussion that highlights steps needed to advance this area of research, including developing more comprehensive and integrative conceptual models, increasing attention to the social context of pain, examining the link of psychological factors to pain-related brain activation patterns, and investigating the mechanisms underlying the efficacy of psychological treatments for pain. Perspective: This is one of several invited commentaries to appear in The Journal of Pain in recognition of The Decade of Pain Research. This article provides an overview of current research on psychological aspects of persistent pain, and highlights steps needed to advance this area of research. (C) 2004 by the American Pain Society.
    BibTeX:
    @article{Keefe2004,
      author = {Keefe, FJ and Rumble, ME and Scipio, CD and Giordano, LA and Perri, LM},
      title = {Psychological aspects of persistent pain: Current state of the science},
      journal = {JOURNAL OF PAIN},
      year = {2004},
      volume = {5},
      number = {4},
      pages = {195-211},
      doi = {{10.1016/j.jpain.2004.02.576}}
    }
    
    Keeney, R. The value of Internet commerce to the customer {1999} MANAGEMENT SCIENCE
    Vol. {45}({4}), pp. {533-542} 
    article  
    Abstract: Internet commerce has the potential to offer customers a better deal compared to purchases by conventional methods in many situations. To make this potential a reality, businesses must focus on the values of their customers. We interviewed over one hundred individuals about all the pros and cons of using Internet commerce that they experienced or envisioned. The results were organized into twenty-five categories of objectives that were influenced by Internet purchases. These categories were separated into means objectives and fundamental objectives used to describe the bottom line consequences of concern to customers. These results are applicable to designing an Internet commerce system for a business, creating and redesigning products, and increasing value to customers. The set of fundamental objectives also provides the foundation for developing a quantitative model of customer values.
    BibTeX:
    @article{Keeney1999,
      author = {Keeney, RL},
      title = {The value of Internet commerce to the customer},
      journal = {MANAGEMENT SCIENCE},
      year = {1999},
      volume = {45},
      number = {4},
      pages = {533-542}
    }
    
    Kelly, F., Maulloo, A. & Tan, D. Rate control for communication networks: shadow prices, proportional fairness and stability {1998} JOURNAL OF THE OPERATIONAL RESEARCH SOCIETY
    Vol. {49}({3}), pp. {237-252} 
    article  
    Abstract: This paper analyses the stability and fairness of two classes of rate control algorithm for communication networks. The algorithms provide natural generalisations to large-scale networks of simple additive increase/multiplicative decrease schemes, and are shown to be stable about a system optimum characterised by a proportional fairness criterion. Stability is established by showing that, with an appropriate formulation of the overall optimisation problem, the network's implicit objective function provides a Lyapunov function for the dynamical system defined by the rate control algorithm. The network's optimisation problem may be cast in primal or dual form: this leads naturally to two classes of algorithm, which may be interpreted in terms of either congestion indication feedback signals or explicit rates based on shadow prices. Both classes of algorithm may be generalised to include routing control, and provide natural implementations of proportionally fair pricing.
    BibTeX:
    @article{Kelly1998,
      author = {Kelly, FP and Maulloo, AK and Tan, DKH},
      title = {Rate control for communication networks: shadow prices, proportional fairness and stability},
      journal = {JOURNAL OF THE OPERATIONAL RESEARCH SOCIETY},
      year = {1998},
      volume = {49},
      number = {3},
      pages = {237-252}
    }
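
    For reference, the optimisation problem and primal dynamics analysed in this paper are usually written as follows (standard statement: x_r is the rate on route r, w_r its weight, C_j the capacity and p_j the price function at resource j):

        \max_{x \ge 0} \sum_r w_r \log x_r
        \quad \text{subject to} \quad
        \sum_{r :\, j \in r} x_r \le C_j \quad \forall j

        \frac{d}{dt} x_r(t)
          = \kappa \Big( w_r - x_r(t) \sum_{j \in r} \mu_j(t) \Big),
        \qquad
        \mu_j(t) = p_j \Big( \sum_{s :\, j \in s} x_s(t) \Big)

    The Lyapunov function mentioned in the abstract is U(x) = \sum_r w_r \log x_r - \sum_j \int_0^{\sum_{s: j \in s} x_s} p_j(y) \, dy, which increases along trajectories of the primal system.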
    
    Kim, P., Eng, T., Deering, M. & Maxfield, A. Published criteria for evaluating health related web sites: review {1999} BRITISH MEDICAL JOURNAL
    Vol. {318}({7184}), pp. {647-649} 
    article  
    Abstract: Objective To review published criteria for specifically evaluating health related information on the world wide web, and to identify areas of consensus. Design Search of world wide web sites and peer reviewed medical journals for explicit criteria for evaluating health related information on the web, using Medline and Lexis-Nexis databases, and the following internet search engines: Yahoo!, Excite, Altavista, Webcrawler, HotBot, Infoseek, Magellan Internet Guide, and Lycos. Criteria were extracted and grouped into categories. Results 29 published rating tools and journal articles were identified that had explicit criteria for assessing health related web sites. Of the 165 criteria extracted from these tools and articles, 132 (80%) were grouped under one of 12 specific categories and 33 (20%) were grouped as miscellaneous because they lacked specificity or were unique. The most frequently cited criteria were those dealing with content, design and aesthetics of site, disclosure of authors, sponsors, or developers, currency of information (includes frequency of update, freshness, maintenance of site), authority of source, ease of use, and accessibility and availability. Conclusions Results suggest that many authors agree on key criteria for evaluating health related web sites, and that efforts to develop consensus criteria may be helpful. The next step is to identify and assess a clear, simple set of consensus criteria that the general public can understand and use.
    BibTeX:
    @article{Kim1999,
      author = {Kim, P and Eng, TR and Deering, MJ and Maxfield, A},
      title = {Published criteria for evaluating health related web sites: review},
      journal = {BRITISH MEDICAL JOURNAL},
      year = {1999},
      volume = {318},
      number = {7184},
      pages = {647-649}
    }
    
    Kim, Y., Demarque, P., Yi, S. & Alexander, D. The Y-2 isochrones for alpha-element enhanced mixtures {2002} ASTROPHYSICAL JOURNAL SUPPLEMENT SERIES
    Vol. {143}({2}), pp. {499-511} 
    article  
    Abstract: We present a new set of isochrones in which the effect of the alpha-element enhancement is fully incorporated. These isochrones are an extension of the already published set of Y-2 Isochrones (our Paper I), constructed for the scaled-solar mixture. As in Paper I, helium diffusion and convective core overshoot have been taken into account. The range of chemical compositions covered is 0.00001 ≤ Z ≤ 0.08. The models were evolved from the pre-main-sequence stellar birthline to the onset of helium burning in the core. The age range of the full isochrone set is 0.1-20 Gyr, while younger isochrones of age 1-80 Myr are also presented up to the main-sequence turn-off. Combining this set with that of Paper I for scaled-solar mixture isochrones, we provide a consistent set of isochrones that can be used to investigate populations of any value of alpha-enhancement. We confirm the earlier results of Paper I that inclusion of alpha-enhancement effects further reduces the age estimates of globular clusters by approximately 8% if [alpha/Fe] = +0.3. It is important to note the metallicity dependence of the change in age estimates (larger age reductions in lower metallicities). This reduces the age gap between the oldest metal-rich and metal-poor Galactic stellar populations and between the halo and the disk populations. We also investigate whether the effects of alpha-enhancement can be mimicked by increasing the total metal abundance in the manner proposed by Salaris and collaborators. We find such simple scaling formulae are valid at low metallicities but not at all at high metallicities near and above solar. Thus, it is essential to use the isochrones rigorously computed for alpha-enhancement when modeling metal-rich populations, such as bright galaxies. The isochrone tables, together with interpolation routines, have been made available via the Internet.
    BibTeX:
    @article{Kim2002,
      author = {Kim, YC and Demarque, P and Yi, SYK and Alexander, DR},
      title = {The Y-2 isochrones for alpha-element enhanced mixtures},
      journal = {ASTROPHYSICAL JOURNAL SUPPLEMENT SERIES},
      year = {2002},
      volume = {143},
      number = {2},
      pages = {499-511}
    }
    
    Kimble, H.J. The quantum internet {2008} NATURE
    Vol. {453}({7198}), pp. {1023-1030} 
    article DOI  
    Abstract: Quantum networks provide opportunities and challenges across a range of intellectual and technical frontiers, including quantum computation, communication and metrology. The realization of quantum networks composed of many nodes and channels requires new scientific capabilities for generating and characterizing quantum coherence and entanglement. Fundamental to this endeavour are quantum interconnects, which convert quantum states from one physical system to those of another in a reversible manner. Such quantum connectivity in networks can be achieved by the optical interactions of single photons and atoms, allowing the distribution of entanglement across the network and the teleportation of quantum states between nodes.
    BibTeX:
    @article{Kimble2008,
      author = {Kimble, H. J.},
      title = {The quantum internet},
      journal = {NATURE},
      year = {2008},
      volume = {453},
      number = {7198},
      pages = {1023-1030},
      doi = {{10.1038/nature07127}}
    }
    
    Kitayama, K. & Wada, N. Photonic IP routing {1999} IEEE PHOTONICS TECHNOLOGY LETTERS
    Vol. {11}({12}), pp. {1689-1691} 
    article  
    Abstract: A photonic internet protocol (IP) routing is proposed in which the IP address, mapped onto an optical code, is recognized by performing optical correlation in the time domain in a parallel manner. A preliminary experiment shows that it can process 6.5 x 10^9 packets per second. It will help overcome the bottleneck in current electrical IP routers; i.e., the time it takes to look up addresses in the routing table.
    BibTeX:
    @article{Kitayama1999,
      author = {Kitayama, K and Wada, N},
      title = {Photonic IP routing},
      journal = {IEEE PHOTONICS TECHNOLOGY LETTERS},
      year = {1999},
      volume = {11},
      number = {12},
      pages = {1689-1691}
    }
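
    The recognition step, correlating the incoming label against each stored code and routing on the best match, can be mimicked digitally (an illustrative stand-in; the paper performs the correlation optically in the time domain, and the codes below are invented):

        import numpy as np

        # Digital stand-in for optical code correlation: route the packet
        # to the output port whose stored code correlates best.
        codes = {0: np.array([1, 0, 1, 1, 0, 1, 0, 0]),   # port -> code
                 1: np.array([0, 1, 1, 0, 1, 0, 1, 0])}

        def route(header_code):
            scores = {port: int(np.dot(header_code, c)) for port, c in codes.items()}
            return max(scores, key=scores.get)

        print(route(np.array([1, 0, 1, 1, 0, 1, 0, 0])))  # -> port 0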
    
    Kitayama, K., Wada, N. & Sotobayashi, H. Architectural considerations for photonic IP router based upon optical code correlation {2000} JOURNAL OF LIGHTWAVE TECHNOLOGY
    Vol. {18}({12}), pp. {1834-1844} 
    article  
    Abstract: A photonic label switching router (PLSR), in which the photonic label processing is based upon optical code correlation, is investigated. To resolve the electronic router's bottleneck in current Internet protocol (IP) over wavelength division multiplexing (WDM) networks, we envision IP over photonic networks in which PLSRs totally replace the electronic routers. The architectures of the PLSR, including the photonic label processing, the photonic label swapping, and the optical switching, and their optical implementations are studied. Results of proof-of-concept experiments for the photonic label processing and photonic label swapping confirm the feasibility of attaining the target performance: a throughput of at least 100 Tb/s, a processing speed around 10 Gpacket/s, and a number of label entries up to 10 k.
    BibTeX:
    @article{Kitayama2000,
      author = {Kitayama, K and Wada, N and Sotobayashi, H},
      title = {Architectural considerations for photonic IP router based upon optical code correlation},
      journal = {JOURNAL OF LIGHTWAVE TECHNOLOGY},
      year = {2000},
      volume = {18},
      number = {12},
      pages = {1834-1844}
    }
    
    Klappenbach, J., Saxman, P., Cole, J. & Schmidt, T. rrndb: the Ribosomal RNA Operon Copy Number Database {2001} NUCLEIC ACIDS RESEARCH
    Vol. {29}({1}), pp. {181-184} 
    article  
    Abstract: The Ribosomal RNA Operon Copy Number Database (rrndb) is an Internet-accessible database containing annotated information on rRNA operon copy number among prokaryotes. Gene redundancy is uncommon in prokaryotic genomes, yet the rRNA genes can vary from one to as many as 15 copies. Despite the widespread use of 16S rRNA gene sequences for identification of prokaryotes, information on the number and sequence of individual rRNA genes in a genome is not readily accessible. In an attempt to understand the evolutionary implications of rRNA operon redundancy, we have created a phylogenetically arranged report on rRNA gene copy number for a diverse collection of prokaryotic microorganisms. Each entry (organism) in the rrndb contains detailed information linked directly to external websites including the Ribosomal Database Project, GenBank, PubMed and several culture collections. Data contained in the rrndb will be valuable to researchers investigating microbial ecology and evolution using 16S rRNA gene sequences. The rrndb web site is directly accessible on the WWW at http://rrndb.cme.msu.edu.
    BibTeX:
    @article{Klappenbach2001,
      author = {Klappenbach, JA and Saxman, PR and Cole, JR and Schmidt, TM},
      title = {rrndb: the Ribosomal RNA Operon Copy Number Database},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {2001},
      volume = {29},
      number = {1},
      pages = {181-184}
    }
    
    Klausner, J., Wolf, W., Fischer-Ponce, L., Zolt, I. & Katz, M. Tracing a syphilis outbreak through cyberspace {2000} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {284}({4}), pp. {447+} 
    article  
    Abstract: Context A recent outbreak of syphilis among users of an Internet chat room challenged traditional methods of partner notification and community education because locating information on sexual partners was limited to screen names and privacy concerns precluded identifying sexual partners through the Internet service provider. Objectives To determine the association of Internet use and acquisition of syphilis and to describe innovative methods of partner notification in cyberspace. Design, Setting, and Patients Outbreak investigation conducted at the San Francisco (Calif) Department of Public Health (SFDPH) in June-August 1999 of 7 cases of early syphilis among gay men linked to an online chat room; case-control study of 6 gay men with syphilis reported to SFDPH in July-August 1999 (cases) and 32 gay men without syphilis who presented to a city clinic in April-July 1999 (controls). Main Outcome Measures Association of syphilis infection with Internet use, Internet use among cases vs controls, and partner notification methods and partner evaluation indexes. Results During the outbreak, cases were significantly more likely than controls to have met their sexual partners through use of the Internet (67% vs 19%; odds ratio, 8.7; P=.03). We notified and confirmed testing for 42% of named partners; the mean number of sexual partners medically evaluated per index case was 5.9. Conclusions In this study, meeting sexual partners through the Internet was associated with acquisition of syphilis among gay men. Public health efforts must continually adapt disease control procedures to new venues, carefully weighing the rights to privacy vs the need to protect public health.
    BibTeX:
    @article{Klausner2000,
      author = {Klausner, JD and Wolf, W and Fischer-Ponce, L and Zolt, I and Katz, MH},
      title = {Tracing a syphilis outbreak through cyberspace},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {2000},
      volume = {284},
      number = {4},
      pages = {447+}
    }
    
    Klein, L. Evaluating the potential of interactive media through new lens: Search versus experience goods {1998} JOURNAL OF BUSINESS RESEARCH
    Vol. {41}({3}), pp. {195-203} 
    article  
    Abstract: The burgeoning growth of interactive media, and more specifically the Internet, as communication vehicles has inspired a flurry of market research that attempts to measure the impact of advertising in the new media, utilizing traditional advertising measurement methods. However, the full impact of these new media will not be realized unless we engage in more thorough research into how to evaluate their potential in terms of their influence on information search behavior. This article seeks to provide direction for such exploration by proposing a new model of consumer information search that integrates the principles of information economics and a goods classification model based on the search/experience/credence paradigm. This model will facilitate a greater understanding by marketers and academics of how a medium can influence consumer information search through its impact on the critical information consumers have access to prior to product usage. (C) 1998 Elsevier Science Inc.
    BibTeX:
    @article{Klein1998,
      author = {Klein, LR},
      title = {Evaluating the potential of interactive media through new lens: Search versus experience goods},
      journal = {JOURNAL OF BUSINESS RESEARCH},
      year = {1998},
      volume = {41},
      number = {3},
      pages = {195-203}
    }
    
    Kobayashi, M. & Takeda, K. Information retrieval on the Web {2000} ACM COMPUTING SURVEYS
    Vol. {32}({2}), pp. {144-173} 
    article  
    Abstract: In this paper we review studies of the growth of the Internet and technologies that are useful for information search and retrieval on the Web. We present data on the Internet from several different sources, e.g., current as well as projected number of users, hosts, and Web sites. Although numerical figures vary, overall trends cited by the sources are consistent and point to exponential growth in the past and in the coming decade. Hence it is not surprising that about 85% of Internet users surveyed claim to use search engines and search services to find specific information. The same surveys show, however, that users are not satisfied with the performance of the current generation of search engines; the slow retrieval speed, communication delays, and poor quality of retrieved results (e.g., noise and broken links) are commonly cited problems. We discuss the development of new techniques targeted to resolve some of the problems associated with Web-based information retrieval, and speculate on future trends.
    BibTeX:
    @article{Kobayashi2000,
      author = {Kobayashi, M and Takeda, K},
      title = {Information retrieval on the Web},
      journal = {ACM COMPUTING SURVEYS},
      year = {2000},
      volume = {32},
      number = {2},
      pages = {144-173}
    }
    
    Konca, K., Lankoff, A., Banasik, A., Lisowska, H., Kuszewski, T., Gozdz, S., Koza, Z. & Wojcik, A. A cross-platform public domain PC image-analysis program for the comet assay {2003} MUTATION RESEARCH-GENETIC TOXICOLOGY AND ENVIRONMENTAL MUTAGENESIS
    Vol. {534}({1-2}), pp. {15-20} 
    article  
    Abstract: The single-cell gel electrophoresis, also known as the comet assay, has gained wide-spread popularity as a simple and reliable method to measure genotoxic and cytotoxic effects of physical and chemical agents as well as kinetics of DNA repair. Cells are generally stained with fluorescent dyes. The analysis of comets (damaged cells, which form a typical comet-shaped pattern) is greatly facilitated by the use of a computer image-analysis program. Although several image-analysis programs are available commercially, they are expensive and their source codes are not provided. For Macintosh computers a cost-free public domain macro is available on the Internet. No ready-to-use, cost-free program exists for the PC platform. We have, therefore, developed such a public domain program under the GNU license for PC computers. The program is called CASP and can be run on a variety of hardware and software platforms. Its practical merit was tested on human lymphocytes exposed to gamma-rays and found to yield reproducible results. The binaries for Windows 95 and Linux, together with the source code, can be obtained from: http://www.casp.of.pl. (C) 2002 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Konca2003,
      author = {Konca, K and Lankoff, A and Banasik, A and Lisowska, H and Kuszewski, T and Gozdz, S and Koza, Z and Wojcik, A},
      title = {A cross-platform public domain PC image-analysis program for the comet assay},
      journal = {MUTATION RESEARCH-GENETIC TOXICOLOGY AND ENVIRONMENTAL MUTAGENESIS},
      year = {2003},
      volume = {534},
      number = {1-2},
      pages = {15-20}
    }
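
    The metrics such programs typically report can be illustrated on a one-dimensional intensity profile (a sketch; the head/tail split and the tail-moment definition below are common conventions used for illustration, not CASP's actual algorithm):

        import numpy as np

        # Toy comet metrics from a 1-D DNA-intensity profile along the
        # comet axis; `split` is the assumed head/tail boundary (pixels).
        profile = np.array([5, 40, 90, 60, 20, 12, 8, 4], dtype=float)
        split = 4

        tail = profile[split:]
        tail_dna = tail.sum() / profile.sum()   # fraction of DNA in the tail
        tail_len = len(tail)                    # tail length in pixels
        print(tail_dna, tail_len, tail_dna * tail_len)   # last: tail moment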
    
    Korgaonkar, P. & Wolin, L. A multivariate analysis of Web usage {1999} JOURNAL OF ADVERTISING RESEARCH
    Vol. {39}({2}), pp. {53-68} 
    article  
    Abstract: Applying the uses and gratification theory to improve the understanding of Web usage, the authors explore Web users' motivations and concerns. These motivations and concerns, as well as demographic factors, were studied in three usage contexts: (1) the number of hours per day spent on the Web, (2) the percentage of time spent for business versus personal purposes, and (3) the purchases made from a Web business and, if purchases were made, the approximate number of times purchasers placed orders on the Web. Multivariate factor analysis suggests the presence of seven motivations and concerns regarding Web use. Additionally, the results suggest that these seven factors, along with age, income, gender, and education levels, are significantly correlated with the three usage contexts.
    BibTeX:
    @article{Korgaonkar1999,
      author = {Korgaonkar, PK and Wolin, LD},
      title = {A multivariate analysis of Web usage},
      journal = {JOURNAL OF ADVERTISING RESEARCH},
      year = {1999},
      volume = {39},
      number = {2},
      pages = {53-68}
    }
    
    Koufaris, M. Applying the technology acceptance model and flow theory to online consumer behavior {2002} INFORMATION SYSTEMS RESEARCH
    Vol. {13}({2}), pp. {205-223} 
    article  
    Abstract: In this study, we consider the online consumer as both a shopper and a computer user. We test constructs from information systems (Technology Acceptance Model), marketing (Consumer Behavior), and psychology (Flow and Environmental Psychology) in an integrated theoretical framework of online consumer behavior. Specifically, we examine how emotional and cognitive responses to visiting a Web-based store for the first time can influence online consumers' intention to return and their likelihood to make unplanned purchases. The instrumentation shows reasonably good measurement properties and the constructs are validated as a nomological network. A questionnaire-based empirical study is used to test this nomological network. Results confirm the double identity of the online consumer as a shopper and a computer user because both shopping enjoyment and perceived usefulness of the site strongly predict intention to return. Our results on unplanned purchases are not conclusive. We also test some individual and Web site factors that can affect the consumer's emotional and cognitive responses. Product involvement, Web skills, challenges, and use of value-added search mechanisms all have a significant impact on the Web consumer. The study provides a more rounded, albeit partial, view of the online consumer and is a significant step towards a better understanding of consumer behavior on the Web. The validated metrics should be of use to researchers and practitioners alike.
    BibTeX:
    @article{Koufaris2002,
      author = {Koufaris, M},
      title = {Applying the technology acceptance model and flow theory to online consumer behavior},
      journal = {INFORMATION SYSTEMS RESEARCH},
      year = {2002},
      volume = {13},
      number = {2},
      pages = {205-223}
    }
    
    Koynova, R. & Caffrey, M. Phases and phase transitions of the phosphatidylcholines {1998} BIOCHIMICA ET BIOPHYSICA ACTA-REVIEWS ON BIOMEMBRANES
    Vol. {1376}({1}), pp. {91-145} 
    article  
    Abstract: LIPIDAT (http://www.lipidat.chemistry.ohio-state.edu) is an Internet accessible, computerized relational database providing access to the wealth of information scattered throughout the literature concerning synthetic and biologically derived polar lipid polymorphic and mesomorphic phase behavior and molecular structures. Here, a review of the data subset referring to phosphatidylcholines is presented together with an analysis of these data. This subset represents ca. 60% of all LIPIDAT records. It includes data collected over a 43-year period and consists of 12,208 records obtained from 1573 articles in 106 different journals. An analysis of the data in the subset identifies trends in phosphatidylcholine phase behavior reflecting changes in lipid chain length, unsaturation (number, isomeric type and position of double bonds), asymmetry and branching, type of chain-glycerol linkage (ester, ether, amide), position of chain attachment to the glycerol backbone (1,2- vs. 1,3-) and head group modification. Also included is a summary of the data concerning the effect of pressure, pH, stereochemical purity, and different additives such as salts, saccharides, amino acids and alcohols, on phosphatidylcholine phase behavior. Information on the phase behavior of biologically derived phosphatidylcholines is also presented. This review includes 651 references. (C) 1998 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Koynova1998,
      author = {Koynova, R and Caffrey, M},
      title = {Phases and phase transitions of the phosphatidylcholines},
      journal = {BIOCHIMICA ET BIOPHYSICA ACTA-REVIEWS ON BIOMEMBRANES},
      year = {1998},
      volume = {1376},
      number = {1},
      pages = {91-145}
    }
    
    Kozinets, R. Utopian enterprise: Articulating the meanings of Star Trek's culture of consumption {2001} JOURNAL OF CONSUMER RESEARCH
    Vol. {28}({1}), pp. {67-88} 
    article  
    Abstract: In this article, I examine the cultural and subcultural construction of consumption meanings and practices as they are negotiated from mass media images and objects. Field notes and artifacts from 20 months of fieldwork at Star Trek fan clubs, at conventions, and in Internet groups, and 67 interviews with Star Trek fans are used as data. Star Trek's subculture of consumption is found to be constructed as a powerful utopian refuge. Stigma, social situation, and the need for legitimacy shape the diverse subculture's consumption meanings and practices. Legitimizing articulations of Star Trek as a religion or myth underscore fans' heavy investment of self in the text. These sacralizing articulations are used to distance the text from its superficial status as a commercial product. The findings emphasize and describe how consumption often fulfills the contemporary hunger for a conceptual space in which to construct a sense of self and what matters in life. They also reveal broader cultural tensions between the affective investments people make in consumption objects and the encroachment of commercialization.
    BibTeX:
    @article{Kozinets2001,
      author = {Kozinets, RV},
      title = {Utopian enterprise: Articulating the meanings of Star Trek's culture of consumption},
      journal = {JOURNAL OF CONSUMER RESEARCH},
      year = {2001},
      volume = {28},
      number = {1},
      pages = {67-88}
    }
    
    Kramer, G., Mukherjee, B. & Pesavento, G. Ethernet PON (ePON): Design and analysis of an optical access network {2001} PHOTONIC NETWORK COMMUNICATIONS
    Vol. {3}({3}), pp. {307-319} 
    article  
    Abstract: With the expansion of services offered over the Internet, the "last mile" bottleneck problems continue to worsen. A passive optical network (PON) is a technology viewed by many as an attractive solution to this problem. In this study, we propose the design and analysis of a PON architecture which has an excellent performance-to-cost ratio. This architecture uses the time-division multiplexing (TDM) approach to deliver data encapsulated in Ethernet packets from a collection of optical network units (ONUs) to a central optical line terminal (OLT) over the PON access network. The OLT, in turn, is connected to the rest of the Internet. A simulation model is used to analyze the system's performance, such as bounds on packet delay and queue occupancy. Then, we discuss the possibility of improving the bandwidth utilization by means of timeslot size adjustment, and by packet scheduling.
    BibTeX:
    @article{Kramer2001,
      author = {Kramer, G and Mukherjee, B and Pesavento, G},
      title = {Ethernet PON (ePON): Design and analysis of an optical access network},
      journal = {PHOTONIC NETWORK COMMUNICATIONS},
      year = {2001},
      volume = {3},
      number = {3},
      pages = {307-319}
    }
    
    Krapivsky, P. & Redner, S. Organization of growing random networks {2001} PHYSICAL REVIEW E
    Vol. {63}({6, Part 2}) 
    article  
    Abstract: The organizational development of growing random networks is investigated. These growing networks are built by adding nodes successively, and linking each to an earlier node of degree k with an attachment probability A(k). When A(k) grows more slowly than linearly with k, the number of nodes with k links, N_k(t), decays faster than a power law in k, while for A(k) growing faster than linearly in k, a single node emerges which connects to nearly all other nodes. When A(k) is asymptotically linear, N_k(t) ~ t k^(-nu), with nu dependent on details of the attachment probability, but in the range 2 < nu < infinity. The combined age and degree distribution of nodes shows that old nodes typically have a large degree. There is also a significant correlation in the degrees of neighboring nodes, so that nodes of similar degree are more likely to be connected. The size distributions of the in and out components of the network with respect to a given node, namely its ``descendants'' and ``ancestors'', are also determined. The in component exhibits a robust s^(-2) power-law tail, where s is the component size. The out component has a typical size of order ln t, and it provides basic insights into the genealogy of the network.
    BibTeX:
    @article{Krapivsky2001,
      author = {Krapivsky, PL and Redner, S},
      title = {Organization of growing random networks},
      journal = {PHYSICAL REVIEW E},
      year = {2001},
      volume = {63},
      number = {6, Part 2}
    }
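
    A minimal simulation sketch of the growing-network model described in the abstract: each new node attaches to one earlier node with probability proportional to A(k) = k^gamma. The network size, seed, and use of Python's random.choices are illustrative choices, not the authors' code.

        import random
        from collections import Counter

        def grow_network(n, gamma=1.0, seed=0):
            # Attach each new node to one earlier node chosen with
            # probability proportional to A(k) = k**gamma.
            rng = random.Random(seed)
            degree = [1, 1]  # start from a single link between nodes 0 and 1
            for _ in range(2, n):
                weights = [k ** gamma for k in degree]
                target = rng.choices(range(len(degree)), weights=weights)[0]
                degree[target] += 1
                degree.append(1)
            return Counter(degree)  # N_k: number of nodes with degree k

        hist = grow_network(5000, gamma=1.0)
        print(sorted(hist.items())[:8])  # for gamma = 1, roughly N_k ~ k^(-3)

    Varying gamma should reproduce the regimes in the abstract: sublinear kernels (gamma < 1) give faster-than-power-law decay, while superlinear kernels produce a single dominant hub.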
    
    Kraut, R., Kiesler, S., Boneva, B., Cummings, J., Helgeson, V. & Crawford, A. Internet paradox revisited {2002} JOURNAL OF SOCIAL ISSUES
    Vol. {58}({1}), pp. {49-74} 
    article  
    Abstract: Kraut et al. (1998) reported negative effects of using the Internet on social involvement and psychological well-being among new Internet users in 1995-96. We called the effects a ``paradox'' because participants used the Internet heavily for communication, which generally has positive effects. A 3-year follow-up of 208 of these respondents found that negative effects dissipated. We also report findings from a longitudinal survey in 1998-99 of 406 new computer and television purchasers. This sample generally experienced positive effects of using the Internet on communication, social involvement, and well-being. However, consistent with a ``rich get richer'' model, using the Internet predicted better outcomes for extraverts and those with more social support but worse outcomes for introverts and those with less support.
    BibTeX:
    @article{Kraut2002,
      author = {Kraut, R and Kiesler, S and Boneva, B and Cummings, J and Helgeson, V and Crawford, A},
      title = {Internet paradox revisited},
      journal = {JOURNAL OF SOCIAL ISSUES},
      year = {2002},
      volume = {58},
      number = {1},
      pages = {49-74}
    }
    
    Kraut, R., Olson, J., Banaji, M., Bruckman, A., Cohen, J. & Couper, M. Psychological research online - Report of board of scientific affairs' advisory group on the conduct of research on the Internet {2004} AMERICAN PSYCHOLOGIST
    Vol. {59}({2}), pp. {105-117} 
    article DOI  
    Abstract: As the Internet has changed communication, commerce, and the distribution of information, so too it is changing psychological research. Psychologists can observe new or rare phenomena online and can do research on traditional psychological topics more efficiently, enabling them to expand the scale and scope of their research. Yet these opportunities entail risk both to research quality and to human subjects. Internet research is inherently no more risky than traditional observational, survey, or experimental methods. Yet the risks and safeguards against them will differ from those characterizing traditional research and will themselves change over time. This article describes some benefits and challenges of conducting psychological research via the Internet and offers recommendations to both researchers and institutional review boards for dealing with them.
    BibTeX:
    @article{Kraut2004,
      author = {Kraut, R and Olson, J and Banaji, M and Bruckman, A and Cohen, J and Couper, M},
      title = {Psychological research online - Report of board of scientific affairs' advisory group on the conduct of research on the Internet},
      journal = {AMERICAN PSYCHOLOGIST},
      year = {2004},
      volume = {59},
      number = {2},
      pages = {105-117},
      doi = {{10.1037/0003-066X.59.2.105}}
    }
    
    Kraut, R., Patterson, M., Lundmark, V., Kiesler, S., Mukopadhyay, T. & Scherlis, W. Internet paradox - A social technology that reduces social involvement and psychological well-being? {1998} AMERICAN PSYCHOLOGIST
    Vol. {53}({9}), pp. {1017-1031} 
    article  
    Abstract: The Internet could change the lives of average citizens as much as did the telephone in the early part of the 20th century and television in the 1950s and 1960s. Researchers and social critics are debating whether the Internet is improving or harming participation in community life and social relationships. This research examined the social and psychological impact of the Internet on 169 people in 73 households during their first 1 to 2 years on-line. We used longitudinal data to examine the effects of the Internet on social involvement and psychological well-being. In this sample, the Internet was used extensively for communication. Nonetheless, greater use of the Internet was associated with declines in participants' communication with family members in the household, declines in the size of their social circle, and increases in their depression and loneliness. These findings have implications for research, for public policy, and for the design of technology.
    BibTeX:
    @article{Kraut1998,
      author = {Kraut, R and Patterson, M and Lundmark, V and Kiesler, S and Mukopadhyay, T and Scherlis, W},
      title = {Internet paradox - A social technology that reduces social involvement and psychological well-being?},
      journal = {AMERICAN PSYCHOLOGIST},
      year = {1998},
      volume = {53},
      number = {9},
      pages = {1017-1031}
    }
    
    Krishnan, P., Raz, D. & Shavitt, Y. The cache location problem {2000} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {8}({5}), pp. {568-582} 
    article  
    Abstract: This paper studies the problem of where to place network caches. Emphasis is given to caches that are transparent to the clients since they are easier to manage and they require no cooperation from the clients. Our goal is to minimize the overall flow or the average delay by placing a given number of caches in the network. We formulate these location problems both for general caches and for transparent en-route caches (TERCs), and identify that, in general, they are intractable. We give optimal algorithms for line and ring networks, and present closed-form formulae for some special cases. We also present a computationally efficient dynamic programming algorithm for the single server case. This last case is of particular practical interest; it models a network that wishes to minimize the average access delay for a single web server. We experimentally study the effects of our algorithm using real web server data. We observe that a small number of TERCs are sufficient to reduce the network traffic significantly. Furthermore, there is a surprising consistency over time in the relative amount of web traffic from the server along a path, lending a stability to our TERC location solution. Our techniques can be used by network providers to reduce traffic load in their network.
    BibTeX:
    @article{Krishnan2000,
      author = {Krishnan, P and Raz, D and Shavitt, Y},
      title = {The cache location problem},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {2000},
      volume = {8},
      number = {5},
      pages = {568-582}
    }
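
    The line-network case lends itself to dynamic programming because an optimal solution serves contiguous groups of clients from one cache. A toy sketch under that assumption, minimizing demand-weighted distance with at most k caches; this is a simplified k-median-style illustration, not the paper's TERC algorithm:

        from functools import lru_cache

        def place_caches(positions, demands, k):
            # Minimum total demand-weighted distance from each client on a
            # line to its nearest cache, using at most k caches.
            n = len(positions)

            def cost(i, j):  # serve clients i..j from their best single cache
                return min(sum(demands[t] * abs(positions[t] - positions[c])
                               for t in range(i, j + 1))
                           for c in range(i, j + 1))

            @lru_cache(maxsize=None)
            def best(i, m):  # optimal cost to serve clients i..n-1 with m caches
                if i >= n:
                    return 0
                if m == 0:
                    return float("inf")
                return min(cost(i, j) + best(j + 1, m - 1) for j in range(i, n))

            return best(0, k)

        print(place_caches([1, 2, 4, 8, 9], [1, 1, 1, 1, 1], 2))  # -> 4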
    
    Kumar, A. Comparative performance analysis of versions of TCP in a local network with a lossy link {1998} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {6}({4}), pp. {485-498} 
    article  
    Abstract: We use a stochastic model to study the throughput performance of various versions of the transmission control protocol (TCP) (Tahoe (including its older version, which we call OldTahoe), Reno, and NewReno) in the presence of random losses on a wireless link in a local network. We model the cyclic evolution of TCP, each cycle starting at the epoch at which recovery starts from the losses in the previous cycle; TCP throughput is computed as the reward rate in a certain Markov renewal-reward process. Our model allows us to study the performance implications of various protocol features, such as fast retransmit and fast recovery, and we show the impact of coarse timeouts. In the local network environment the key issue is to avoid a coarse timeout after a loss occurs. We show the effect of reducing the number of duplicate acknowledgements (ACKs) required for triggering a fast retransmit. A large coarse timeout granularity seriously affects the performance of TCP, and the various protocol versions differ in their ability to avoid a coarse timeout when random loss occurs; we quantify these differences. As observed in simulations by other researchers, we show that, for large packet-loss probabilities, TCP-Reno performs no better, or even worse, than TCP-Tahoe; that TCP-NewReno is a considerable improvement over TCP-Tahoe; and that reducing the fast-retransmit threshold from three to one yields a large gain in throughput; this is similar to one of the modifications in the recent TCP-Vegas proposal. We explain some of these observations in terms of the variation of fast-recovery probabilities with packet-loss probability. Finally, we show that the results of our analysis compare well with a simulation that uses actual TCP code.
    BibTeX:
    @article{Kumar1998,
      author = {Kumar, A},
      title = {Comparative performance analysis of versions of TCP in a local network with a lossy link},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {1998},
      volume = {6},
      number = {4},
      pages = {485-498}
    }
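
    A back-of-the-envelope illustration of why the fast-retransmit threshold matters: treat a loss as recoverable without a coarse timeout only if enough of the remaining packets in the window arrive to generate the required duplicate ACKs. This binomial toy model is in the spirit of the paper's fast-recovery probabilities, not its Markov renewal-reward analysis; the window size and loss rate below are arbitrary:

        from math import comb

        def p_fast_retransmit(w, p, k=3):
            # Probability that at least k of the w-1 packets following a
            # lost packet arrive (each lost independently with prob. p),
            # so that k duplicate ACKs trigger a fast retransmit.
            q = 1 - p
            return sum(comb(w - 1, i) * q**i * p**(w - 1 - i)
                       for i in range(k, w))

        for k in (3, 1):
            print(k, round(p_fast_retransmit(4, 0.3, k), 3))
        # threshold 3 -> 0.343, threshold 1 -> 0.973: lowering the
        # threshold greatly improves the chance of avoiding a timeout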
    
    Kunniyur, S. & Srikant, R. End-to-end congestion control schemes: Utility functions, random losses. and ECN marks {2003} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {11}({5}), pp. {689-702} 
    article DOI  
    Abstract: We present a framework for designing end-to-end congestion control schemes in a network where each user may have a different utility function and may experience non-congestion-related losses. We first show that there exists an additive-increase-multiplicative-decrease scheme using only end-to-end measurable losses such that a socially optimal solution can be reached. We incorporate round-trip delay in this model, and show that one can generalize observations regarding TCP-type congestion avoidance to more general window flow control schemes. We then consider explicit congestion notification (ECN) as an alternate mechanism (instead of losses) for signaling congestion and show that ECN marking levels can be designed to nearly eliminate losses in the network by choosing the marking level independently for each node in the network. While the ECN marking level at each node may depend on the number of flows through the node, the appropriate marking level can be estimated using only aggregate flow measurements, i.e., per-flow measurements are not required.
    BibTeX:
    @article{Kunniyur2003,
      author = {Kunniyur, S and Srikant, R},
      title = {End-to-end congestion control schemes: Utility functions, random losses. and ECN marks},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {2003},
      volume = {11},
      number = {5},
      pages = {689-702},
      note = {IEEE Conference on Computer Communications (INFOCOM), TEL AVIV, ISRAEL, MAR, 2000},
      doi = {{10.1109/TNET.2003.818183}}
    }
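
    A minimal sketch of the additive-increase-multiplicative-decrease behavior the paper analyzes, with ECN marks standing in for losses. The marking probability, step count, and halving rule are illustrative assumptions, not values from the paper:

        import random

        def aimd_avg_window(steps=20000, mark_prob=0.02, seed=1):
            # Grow the window by one packet per RTT; halve it whenever an
            # ECN mark is received (independently with prob. mark_prob).
            rng = random.Random(seed)
            w, total = 1.0, 0.0
            for _ in range(steps):
                w = max(w / 2, 1.0) if rng.random() < mark_prob else w + 1
                total += w
            return total / steps  # average window, i.e. throughput per RTT

        print(round(aimd_avg_window(), 1))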
    
    Kushmerick, N. Wrapper induction: Efficiency and expressiveness {2000} ARTIFICIAL INTELLIGENCE
    Vol. {118}({1-2}), pp. {15-68} 
    article  
    Abstract: The Internet presents numerous sources of useful information-telephone directories, product catalogs, stock quotes, event listings, etc. Recently, many systems have been built that automatically gather and manipulate such information on a user's behalf. However, these resources are usually formatted for use by people (e.g., the relevant content is embedded in HTML pages), so extracting their content is difficult. Most systems use customized wrapper procedures to perform this extraction task. Unfortunately, writing wrappers is tedious and error-prone. As an alternative, we advocate wrapper induction, a technique for automatically constructing wrappers. In this article, we describe six wrapper classes, and use a combination of empirical and analytical techniques to evaluate the computational tradeoffs among them. We first consider expressiveness: how well the classes can handle actual Internet resources, and the extent to which wrappers in one class can mimic those in another. We then turn to efficiency: we measure the number of examples and time required to learn wrappers in each class, and we compare these results to PAC models of our task and asymptotic complexity analyses of our algorithms. Summarizing our results, we find that most of our wrapper classes are reasonably useful (70% of surveyed sites can be handled in total), yet can be rapidly learned (learning usually requires just a handful of examples and a fraction of a CPU second per example). (C) 2000 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Kushmerick2000,
      author = {Kushmerick, N},
      title = {Wrapper induction: Efficiency and expressiveness},
      journal = {ARTIFICIAL INTELLIGENCE},
      year = {2000},
      volume = {118},
      number = {1-2},
      pages = {15-68}
    }
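
    One of the simpler wrapper classes in the paper's taxonomy is LR, where each attribute is bracketed by a left and a right delimiter. A sketch of executing an LR wrapper on a toy page (the page and delimiters here are illustrative, in the style of the paper's country-code example):

        def lr_extract(page, delims):
            # Repeatedly scan for each attribute's left delimiter and read
            # until its right delimiter, yielding one tuple per record.
            tuples, pos = [], 0
            while True:
                row = []
                for left, right in delims:
                    start = page.find(left, pos)
                    if start == -1:
                        return tuples  # no more records
                    start += len(left)
                    end = page.find(right, start)
                    row.append(page[start:end])
                    pos = end + len(right)
                tuples.append(tuple(row))

        page = "<b>Congo</b><i>242</i><b>Egypt</b><i>20</i>"
        print(lr_extract(page, [("<b>", "</b>"), ("<i>", "</i>")]))
        # [('Congo', '242'), ('Egypt', '20')]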
    
    Labovitz, C., Malan, G. & Jahanian, F. Internet routing instability {1998} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {6}({5}), pp. {515-528} 
    article  
    Abstract: This paper examines the network interdomain routing information exchanged between backbone service providers at the major U.S. public Internet exchange points. Internet routing instability, or the rapid fluctuation of network reachability information, is an important problem currently facing the Internet engineering community. High levels of network instability can lead to packet loss, increased network latency and time to convergence. At the extreme, high levels of routing instability have led to the loss of internal connectivity in wide-area, national networks. In this paper, we describe several unexpected trends in routing instability, and examine a number of anomalies and pathologies observed in the exchange of inter-domain routing information. The analysis in this paper is based on data collected from BGP routing messages generated by border routers at five of the Internet core's public exchange points during a nine-month period. We show that the volume of these routing updates is several orders of magnitude more than expected and that the majority of this routing information is redundant, or pathological. Furthermore, our analysis reveals several unexpected trends and ill-behaved systematic properties in Internet routing. We finally posit a number of explanations for these anomalies and evaluate their potential impact on the Internet infrastructure.
    BibTeX:
    @article{Labovitz1998,
      author = {Labovitz, C and Malan, GR and Jahanian, F},
      title = {Internet routing instability},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {1998},
      volume = {6},
      number = {5},
      pages = {515-528}
    }
    
    Lakshman, T. & Madhow, U. The performance of TCP/IP for networks with high bandwidth-delay products and random loss {1997} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {5}({3}), pp. {336-350} 
    article  
    Abstract: This paper examines the performance of TCP/IP, the Internet data transport protocol, over wide-area networks (WANs) in which data traffic could coexist with real-time traffic such as voice and video. Specifically, we attempt to develop a basic understanding, using analysis and simulation, of the properties of TCP/IP in a regime where: 1) the bandwidth-delay product of the network is high compared to the buffering in the network and 2) packets may incur random loss (e.g., due to transient congestion caused by fluctuations in real-time traffic, or wireless links in the path of the connection). The following key results are obtained. First, random loss leads to significant throughput deterioration when the product of the loss probability and the square of the bandwidth-delay product is larger than one. Second, for multiple connections sharing a bottleneck link, TCP is grossly unfair toward connections with higher round-trip delays. This means that a simple first in first out (FIFO) queueing discipline might not suffice for data traffic in WANs. Finally, while the recent Reno version of TCP produces less bursty traffic than the original Tahoe version, it is less robust than the latter when successive losses are closely spaced. We conclude by indicating modifications that may be required both at the transport and network layers to provide good end-to-end performance over high-speed WANs.
    BibTeX:
    @article{Lakshman1997,
      author = {Lakshman, TV and Madhow, U},
      title = {The performance of TCP/IP for networks with high bandwidth-delay products and random loss},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {1997},
      volume = {5},
      number = {3},
      pages = {336-350}
    }
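
    The first result above gives a usable rule of thumb: random loss dominates once loss probability times the square of the bandwidth-delay product (in packets) exceeds one. A worked example; the link parameters are chosen purely for illustration:

        def loss_severity(bw_pkts_per_s, rtt_s, loss_prob):
            # loss_prob * (bandwidth*RTT)^2, with bandwidth*RTT in packets;
            # values well above 1 indicate severe throughput deterioration.
            bdp = bw_pkts_per_s * rtt_s
            return loss_prob * bdp ** 2

        # ~10 Mb/s of 1500-byte packets (about 833 pkt/s), 100 ms RTT:
        print(round(loss_severity(833, 0.1, 1e-4), 2))  # 0.69 -> tolerable
        print(round(loss_severity(833, 0.1, 1e-2), 1))  # 69.4 -> severe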
    
    Lebowitz, J., Lewis, M. & Schuck, P. Modern analytical ultracentrifugation in protein science: A tutorial review {2002} PROTEIN SCIENCE
    Vol. {11}({9}), pp. {2067-2079} 
    article DOI  
    Abstract: Analytical ultracentrifugation (AU) is reemerging as a versatile tool for the study of proteins. Monitoring the sedimentation of macromolecules in the centrifugal field allows their hydrodynamic and thermodynamic characterization in solution, without interaction with any matrix or surface. The combination of new instrumentation and powerful computational software for data analysis has led to major advances in the characterization of proteins and protein complexes. The pace of new advancements makes it difficult for protein scientists to gain sufficient expertise to apply modern AU to their research problems. To address this problem, this review builds from the basic concepts to advanced approaches for the characterization of protein systems, and key computational and internet resources are provided. We will first explore the characterization of proteins by sedimentation velocity (SV). Determination of sedimentation coefficients allows for the modeling of the hydrodynamic shape of proteins and protein complexes. The computational treatment of SV data to resolve sedimenting components has been achieved. Hence, SV can be very useful in the identification of the oligomeric state and the stoichiometry of heterogeneous interactions. The second major part of the review covers sedimentation equilibrium (SE) of proteins, including membrane proteins and glycoproteins. This is the method of choice for molar mass determinations and the study of self-association and heterogeneous interactions, such as protein-protein, protein-nucleic acid, and protein-small molecule binding.
    BibTeX:
    @article{Lebowitz2002,
      author = {Lebowitz, J and Lewis, MS and Schuck, P},
      title = {Modern analytical ultracentrifugation in protein science: A tutorial review},
      journal = {PROTEIN SCIENCE},
      year = {2002},
      volume = {11},
      number = {9},
      pages = {2067-2079},
      doi = {{10.1110/ps.0207702}}
    }
    
    Lederer, A., Maupin, D., Sena, M. & Zhuang, Y. The technology acceptance model and the World Wide Web {2000} DECISION SUPPORT SYSTEMS
    Vol. {29}({3}), pp. {269-282} 
    article  
    Abstract: The technology acceptance model (TAM) proposes that ease of use and usefulness predict applications usage. The current research investigated TAM for work-related tasks with the World Wide Web as the application. One hundred and sixty-three subjects responded to an e-mail survey about a Web site they access often in their jobs. The results support TAM. They also demonstrate that (1) ease of understanding and ease of finding predict ease of use, and that (2) information quality predicts usefulness for revisited sites. In effect, the investigation applies TAM to help Web researchers, developers, and managers understand antecedents to users' decisions to revisit sites relevant to their jobs. (C) 2000 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Lederer2000,
      author = {Lederer, AL and Maupin, DJ and Sena, MP and Zhuang, YL},
      title = {The technology acceptance model and the World Wide Web},
      journal = {DECISION SUPPORT SYSTEMS},
      year = {2000},
      volume = {29},
      number = {3},
      pages = {269-282}
    }
    
    Lee, M. & Turban, E. A trust model for consumer Internet shopping {2001} INTERNATIONAL JOURNAL OF ELECTRONIC COMMERCE
    Vol. {6}({1}), pp. {75-91} 
    article  
    Abstract: E-commerce success, especially in the business-to-consumer area, is determined in part by whether consumers trust sellers and products they cannot see or touch, and electronic systems with which they have no previous experience. This paper describes a theoretical model for investigating the four main antecedent influences on consumer trust in Internet shopping, a major form of business-to-consumer e-commerce: trustworthiness of the Internet merchant, trustworthiness of the Internet as a shopping medium, infrastructural (contextual) factors (e.g., security, third-party certification), and other factors (e.g., company size, demographic variables). The antecedent variables are moderated by the individual consumer's degree of trust propensity, which reflects personality traits, culture, and experience. Based on the research model, a comprehensive set of hypotheses is formulated and a methodology for testing them is outlined. Some of the hypotheses are tested empirically to demonstrate the applicability of the theoretical model. The findings indicate that merchant integrity is a major positive determinant of consumer trust in Internet shopping, and that its effect is moderated by the individual consumer's trust propensity.
    BibTeX:
    @article{Lee2001,
      author = {Lee, MKO and Turban, E},
      title = {A trust model for consumer Internet shopping},
      journal = {INTERNATIONAL JOURNAL OF ELECTRONIC COMMERCE},
      year = {2001},
      volume = {6},
      number = {1},
      pages = {75-91},
      note = {Meeting of the International Conference on Electronic Commerce 2000 (ICEC2000), SEOUL, SOUTH KOREA, AUG 24, 2000}
    }
    
    Lemley, M. & McGowan, D. Legal implications of network economic effects {1998} CALIFORNIA LAW REVIEW
    Vol. {86}({3}), pp. {479-611} 
    article  
    Abstract: Economic scholarship has recently focused a great deal of attention on the phenomenon of network externalities, or network effects: markets in which the value that consumers place on a good increases as others use the good. Though the economic theory of network effects is of recent origin and is still not thoroughly understood, network effects increasingly play a role in legal argument. Judges, litigators, and scholars have suggested that antitrust law, intellectual property law, telecommunications law, Internet law, corporate law, and contract law need to be modified to take account of network effects. Their arguments reflect a wide range of views about what network effects are and how courts should react to them. In this Article, we explore the application of network economic theory in each of these contexts. We suggest ways in which particular legal rules should-and should not-be modified to take account of network effects. We also attempt to draw some general conclusions about the role of network economic theory in the legal enterprise and about the way in which courts should revise legal doctrines in response to theories from fields outside the law.
    BibTeX:
    @article{Lemley1998,
      author = {Lemley, MA and McGowan, D},
      title = {Legal implications of network economic effects},
      journal = {CALIFORNIA LAW REVIEW},
      year = {1998},
      volume = {86},
      number = {3},
      pages = {479-611}
    }
    
    Levine, R., Wadleigh, M., Cools, J., Ebert, B., Wernig, G., Huntly, B., Boggon, T., Wlodarska, L., Clark, J., Moore, S., Adelsperger, J., Koo, S., Lee, J., Gabriel, S., Mercher, T., D'Andrea, A., Frohling, S., Dohner, K., Marynen, P., Vandenberghe, P., Mesa, R., Tefferi, A., Griffin, J., Eck, M., Sellers, W., Meyerson, M., Golub, T., Lee, S. & Gilliland, D. Activating mutation in the tyrosine kinase JAK2 in polycythemia vera, essential thrombocythemia, and myeloid metaplasia with myelofibrosis {2005} CANCER CELL
    Vol. {7}({4}), pp. {387-397} 
    article DOI  
    Abstract: Polycythemia vera (PV), essential thrombocythemia (ET), and myeloid metaplasia with myelofibrosis (MMM) are clonal disorders arising from hematopoietic progenitors. An internet-based protocol was used to collect clinical information and biological specimens from patients with these diseases. High-throughput DNA resequencing identified a recurrent somatic missense mutation JAK2V617F in granulocyte DNA samples of 121 of 164 PV patients, of which 41 had homozygous and 80 had heterozygous mutations. Molecular and cytogenetic analyses demonstrated that homozygous mutations were due to duplication of the mutant allele. JAK2V617F was also identified in granulocyte DNA samples from 37 of 115 ET and 16 of 46 MMM patients, but was not observed in 269 normal individuals. In vitro analysis demonstrated that JAK2V617F is a constitutively active tyrosine kinase.
    BibTeX:
    @article{Levine2005,
      author = {Levine, RL and Wadleigh, M and Cools, J and Ebert, BL and Wernig, G and Huntly, BJP and Boggon, TJ and Wlodarska, L and Clark, JJ and Moore, S and Adelsperger, J and Koo, S and Lee, JC and Gabriel, S and Mercher, T and D'Andrea, A and Frohling, S and Dohner, K and Marynen, P and Vandenberghe, P and Mesa, RA and Tefferi, A and Griffin, JD and Eck, MJ and Sellers, WR and Meyerson, M and Golub, TR and Lee, SJ and Gilliland, DG},
      title = {Activating mutation in the tyrosine kinase JAK2 in polycythemia vera, essential thrombocythemia, and myeloid metaplasia with myelofibrosis},
      journal = {CANCER CELL},
      year = {2005},
      volume = {7},
      number = {4},
      pages = {387-397},
      doi = {{10.1016/j.ccr.2005.03.023}}
    }
    
    Li, W. Overview of fine granularity scalability in MPEG-4 video standard {2001} IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
    Vol. {11}({3}), pp. {301-317} 
    article  
    Abstract: Streaming Video Profile is the subject of an Amendment of MPEG-4, developed in response to the growing need for a video-coding standard for streaming video over the Internet. It provides the capability to distribute single-layered frame-based video over a wide range of bit rates with high coding efficiency. It also provides fine granularity scalability (FGS), and its combination with temporal scalability, to address a variety of challenging problems in delivering video over the Internet. This paper provides an overview of the FGS video coding technique in this Amendment of MPEG-4.
    BibTeX:
    @article{Li2001,
      author = {Li, WP},
      title = {Overview of fine granularity scalability in MPEG-4 video standard},
      journal = {IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY},
      year = {2001},
      volume = {11},
      number = {3},
      pages = {301-317}
    }
    
    Li, Y., Chuang, J. & Sollenberger, N. Transmitter diversity for OFDM systems and its impact on high-rate data wireless networks {1999} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {17}({7}), pp. {1233-1243} 
    article  
    Abstract: Transmitter diversity and down-link beamforming can be used in high-rate data wireless networks with orthogonal frequency division multiplexing (OFDM) for capacity improvement. In this paper, we compare the performance of delay, permutation and space-time coding transmitter diversity for high-rate packet data wireless networks using OFDM modulation. For these systems, relatively high block error rates, such as 10%, are acceptable assuming the use of effective automatic retransmission request (ARQ). As an alternative, we also consider using the same number of transmitter antennas for down-link beamforming as we consider for transmitter diversity. Our investigation indicates that delay transmitter diversity with quaternary phase-shift keying (QPSK) modulation and adaptive antenna arrays provides good quality of service (QoS) with low retransmission probability, while space-time coding transmitter diversity provides high peak data rates. Down-link beamforming together with adaptive antenna arrays, however, provides higher capacity than transmitter diversity for typical mobile environments.
    BibTeX:
    @article{Li1999,
      author = {Li, YG and Chuang, JC and Sollenberger, NR},
      title = {Transmitter diversity for OFDM systems and its impact on high-rate data wireless networks},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {1999},
      volume = {17},
      number = {7},
      pages = {1233-1243}
    }
    
    Liang, T. & Huang, J. An empirical study on consumer acceptance of products in electronic markets: a transaction cost model {1998} DECISION SUPPORT SYSTEMS
    Vol. {24}({1}), pp. {29-43} 
    article  
    Abstract: Electronic commerce is gaining much attention from researchers and practitioners. Although increasing numbers of products are being marketed on the web, little effort has been spent on studying what product is more suitable for marketing electronically and why. In this research, a model based on the transaction cost theory is developed to tackle the problem. It is assumed that customers will go with a channel that has lower transactional costs. In other words, whether a customer would buy a product electronically is determined by the transaction cost of the channel. The transaction cost of a product on the web is determined by the uncertainty and asset specificity. An empirical study involving eighty-six Internet users was conducted to test the model. Five products with different characteristics (book, shoes, toothpaste, microwave oven, and flower) were used in the study. The results indicate that (1) different products do have different customer acceptance on the electronic market, (2) the customer acceptance is determined by the transaction cost, which is in turn determined by the uncertainty and asset specificity, and (3) experienced shoppers are concerned more about the uncertainty in electronic shopping, whereas inexperienced shoppers are concerned with both. (C) 1998 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Liang1998,
      author = {Liang, TP and Huang, JS},
      title = {An empirical study on consumer acceptance of products in electronic markets: a transaction cost model},
      journal = {DECISION SUPPORT SYSTEMS},
      year = {1998},
      volume = {24},
      number = {1},
      pages = {29-43}
    }
    
    Liberati, N., Urbach, J., Miyata, S., Lee, D., Drenkard, E., Wu, G., Villanueva, J., Wei, T. & Ausubel, F. An ordered, nonredundant library of Pseudomonas aeruginosa strain PA14 transposon insertion mutants {2006} PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
    Vol. {103}({8}), pp. {2833-2838} 
    article DOI  
    Abstract: Random transposon insertion libraries have proven invaluable in studying bacterial genomes. Libraries that approach saturation must be large, with multiple insertions per gene, making comprehensive genome-wide scanning difficult. To facilitate genome-scale study of the opportunistic human pathogen Pseudomonas aeruginosa strain PA14, we constructed a nonredundant library of PA14 transposon mutants (the PA14NR Set) in which nonessential PA14 genes are represented by a single transposon insertion chosen from a comprehensive library of insertion mutants. The parental library of PA14 transposon insertion mutants was generated by using MAR2xT7, a transposon compatible with transposon-site hybridization and based on mariner. The transposon-site hybridization genetic footprinting feature broadens the utility of the library by allowing pooled MAR2xT7 mutants to be individually tracked under different experimental conditions. A public, internet-accessible database (the PA14 Transposon Insertion Mutant Database, http://ausubellab.mgh.harvard.edu/cgi-bin/pa14/home.cgi) was developed to facilitate construction, distribution, and use of the PA14NR Set. The usefulness of the PA14NR Set in genome-wide scanning for phenotypic mutants was validated in a screen for attachment to abiotic surfaces. Comparison of the genes disrupted in the PA14 transposon insertion library with an independently constructed insertion library in P. aeruginosa strain PAO1 provides an estimate of the number of P. aeruginosa essential genes.
    BibTeX:
    @article{Liberati2006,
      author = {Liberati, NT and Urbach, JM and Miyata, S and Lee, DG and Drenkard, E and Wu, G and Villanueva, J and Wei, T and Ausubel, FM},
      title = {An ordered, nonredundant library of Pseudomonas aeruginosa strain PA14 transposon insertion mutants},
      journal = {PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA},
      year = {2006},
      volume = {103},
      number = {8},
      pages = {2833-2838},
      doi = {{10.1073/pnas.0511100103}}
    }
    
    Lin, C. & Liu, J. QoS routing in ad hoc wireless networks {1999} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {17}({8}), pp. {1426-1438} 
    article  
    Abstract: The emergence of nomadic applications has recently generated much interest in wireless network infrastructures that support real-time communications. In this paper, we propose a bandwidth routing protocol for quality-of-service (QoS) support in a multihop mobile network. The QoS routing feature is important for a mobile network to interconnect wired networks with QoS support (e.g., ATM, Internet, etc.). The QoS routing protocol can also work in a stand-alone multihop mobile network for real-time applications. Our QoS routing protocol contains end-to-end bandwidth calculation and bandwidth allocation. Under such a routing protocol, the source (or the ATM gateway) is informed of the bandwidth and QoS available to any destination in the mobile network. This knowledge enables the establishment of QoS connections within the mobile network and the efficient support of real-time applications. In addition, it enables more efficient call admission control. In the case of ATM interconnection, the bandwidth information can be used to carry out intelligent handoff between ATM gateways and/or to extend the ATM virtual circuit (VC) service to the mobile network with possible renegotiation of QoS parameters at the gateway. We examine the system performance in various QoS traffic flows and mobility environments via simulation. Simulation results suggest distinct performance advantages of our protocol, which calculates the bandwidth information. It is particularly useful in call admission control. Furthermore, ``standby'' routing enhances the performance in the mobile environment. Simulation experiments show this improvement.
    BibTeX:
    @article{Lin1999,
      author = {Lin, CHR and Liu, JS},
      title = {QoS routing in ad hoc wireless networks},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {1999},
      volume = {17},
      number = {8},
      pages = {1426-1438}
    }
    
    Lloyd, A. & May, R. Epidemiology - How viruses spread among computers and people {2001} SCIENCE
    Vol. {292}({5520}), pp. {1316-1317} 
    article  
    BibTeX:
    @article{Lloyd2001,
      author = {Lloyd, AL and May, RM},
      title = {Epidemiology - How viruses spread among computers and people},
      journal = {SCIENCE},
      year = {2001},
      volume = {292},
      number = {5520},
      pages = {1316-1317}
    }
    
    Lopez, M., Berggren, K., Chernokalskaya, E., Lazarev, A., Robinson, M. & Patton, W. A comparison of silver stain and SYPRO Ruby Protein Gel Stain with respect to protein detection in two-dimensional gels and identification by peptide mass profiling {2000} ELECTROPHORESIS
    Vol. {21}({17}), pp. {3673-3683} 
    article  
    Abstract: Proteomic projects are often focused on the discovery of differentially expressed proteins between control and experimental samples. Most laboratories choose the approach of running two-dimensional (2-D) gels, analyzing them and identifying the differentially expressed proteins by in-gel digestion and mass spectrometry. To date, the available stains for visualizing proteins on 2-D gels have been less than ideal for these projects because of poor detection sensitivity (Coomassie blue stain) or poor peptide recovery from in-gel digests and mass spectrometry (silver stain), unless extra destaining and washing steps are included in the protocol. In addition, the limited dynamic range of these stains has made it difficult to rigorously and reliably determine subtle differences in protein quantities. SYPRO Ruby Protein Gel Stain is a novel, ruthenium-based fluorescent dye for the detection of proteins in sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) gels that has properties making it well suited to high-throughput proteomics projects. The advantages of SYPRO Ruby Protein Gel Stain relative to silver stain demonstrated in this study include a broad linear dynamic range and enhanced recovery of peptides from in-gel digests for matrix assisted laser desorption/ionization-time of flight (MALDI-TOF) mass spectrometry.
    BibTeX:
    @article{Lopez2000,
      author = {Lopez, MF and Berggren, K and Chernokalskaya, E and Lazarev, A and Robinson, M and Patton, WF},
      title = {A comparison of silver stain and SYPRO Ruby Protein Gel Stain with respect to protein detection in two-dimensional gels and identification by peptide mass profiling},
      journal = {ELECTROPHORESIS},
      year = {2000},
      volume = {21},
      number = {17},
      pages = {3673-3683}
    }
    
    Los, S., Collatz, G., Sellers, P., Malmstrom, C., Pollack, N., DeFries, R., Bounoua, L., Parris, M., Tucker, C. & Dazlich, D. A global 9-yr biophysical land surface dataset from NOAA AVHRR data {2000} JOURNAL OF HYDROMETEOROLOGY
    Vol. {1}({2}), pp. {183-199} 
    article  
    Abstract: Global, monthly, 1 degrees by 1 degrees biophysical land surface datasets for 1982-90 were derived from data collected by the Advanced Very High Resolution Radiometer (AVHRR) on board the NOAA-7, -9, and -11 satellites. The AVHRR data are adjusted for sensor degradation, volcanic aerosol effects, cloud contamination, short-term atmospheric effects (e.g., water vapor and aerosol effects less than or equal to 2 months), solar zenith angle variations, and missing data. Interannual variation in the data is more realistic as a result. The following biophysical parameters are estimated: fraction of photosynthetically active radiation absorbed by vegetation, vegetation cover fraction, leaf area index, and fraction of green leaves. Biophysical retrieval algorithms are tested and updated with data from intensive remote sensing experiments. The multiyear vegetation datasets are consistent spatially and temporally and are useful for studying spatial, seasonal, and interannual variability in the biosphere related to the hydrological cycle, the energy balance, and biogeochemical cycles. The biophysical data are distributed via the Internet by the Goddard Distributed Active Archive Center as a precursor to the International Satellite Land Surface Climatology Project (ISLSCP) Initiative II. Release of more extensive, higher-resolution datasets (0.25 degrees by 0.25 degrees) over longer time periods (1982-97/98) is planned for ISLSCP Initiative II.
    BibTeX:
    @article{Los2000,
      author = {Los, SO and Collatz, GJ and Sellers, PJ and Malmstrom, CM and Pollack, NH and DeFries, RS and Bounoua, L and Parris, MT and Tucker, CJ and Dazlich, DA},
      title = {A global 9-yr biophysical land surface dataset from NOAA AVHRR data},
      journal = {JOURNAL OF HYDROMETEOROLOGY},
      year = {2000},
      volume = {1},
      number = {2},
      pages = {183-199}
    }
    
    Loughran, T. & Ritter, J. Why has IPO underpricing changed over time? {2004} FINANCIAL MANAGEMENT
    Vol. {33}({3}), pp. {5-37} 
    article  
    Abstract: In the 1980s, the average first-day return on initial public offerings (IPOs) was 7%. The average first-day return doubled to almost 15% during 1990-1998, before jumping to 65% during the internet bubble years of 1999-2000 and then reverting to 12% during 2001-2003. We attribute much of the higher underpricing during the bubble period to a changing issuer objective function. We argue that in the later periods there was less focus on maximizing IPO proceeds due to an increased emphasis on research coverage. Furthermore, allocations of hot IPOs to the personal brokerage accounts of issuing firm executives created an incentive to seek rather than avoid underwriters with a reputation for severe underpricing.
    BibTeX:
    @article{Loughran2004,
      author = {Loughran, T and Ritter, J},
      title = {Why has IPO underpricing changed over time?},
      journal = {FINANCIAL MANAGEMENT},
      year = {2004},
      volume = {33},
      number = {3},
      pages = {5-37}
    }
    
    Low, S. A duality model of TCP and queue management algorithms {2003} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {11}({4}), pp. {525-536} 
    article DOI  
    Abstract: We propose a duality model of end-to-end congestion control and apply it to understand the equilibrium properties of TCP and active queue management schemes. The basic idea is to regard source rates as primal variables and congestion measures as dual variables, and congestion control as a distributed primal-dual algorithm over the Internet to maximize aggregate utility subject to capacity constraints. The primal iteration is carried out by TCP algorithms such as Reno or Vegas, and the dual iteration is carried out by queue management algorithms such as DropTail, RED or REM. We present these algorithms and their generalizations, derive their utility functions, and study their interaction.
    BibTeX:
    @article{Low2003,
      author = {Low, SH},
      title = {A duality model of TCP and queue management algorithms},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {2003},
      volume = {11},
      number = {4},
      pages = {525-536},
      doi = {{10.1109/TNET.2003.815297}}
    }
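
    A minimal primal-dual sketch of the duality model: each source picks the rate maximizing U_i(x_i) - p*x_i for a log utility (the Vegas-like case), while the link adjusts its congestion price with excess demand. The capacity, step size, and utility choice are illustrative assumptions, not the paper's general treatment:

        def primal_dual(capacity=1.0, n_sources=3, steps=5000, gamma=0.01):
            # Source i solves max_x log(x) - p*x  =>  x_i = 1/p  (primal);
            # the link moves its price p by the excess demand (dual step).
            p = 1.0
            for _ in range(steps):
                x = [1.0 / p] * n_sources
                p = max(p + gamma * (sum(x) - capacity), 1e-6)
            return x, p

        x, p = primal_dual()
        print([round(xi, 3) for xi in x], round(p, 3))
        # rates converge to capacity/n (~0.333 each) at price ~n/capacity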
    
    Low, S., Paganini, F. & Doyle, J. Internet congestion control {2002} IEEE CONTROL SYSTEMS MAGAZINE
    Vol. {22}({1}), pp. {28-43} 
    article  
    BibTeX:
    @article{Low2002,
      author = {Low, SH and Paganini, F and Doyle, JC},
      title = {Internet congestion control},
      journal = {IEEE CONTROL SYSTEMS MAGAZINE},
      year = {2002},
      volume = {22},
      number = {1},
      pages = {28-43}
    }
    
    Loy, A., Horn, M. & Wagner, M. probeBase: an online resource for rRNA-targeted oligonucleotide probes {2003} NUCLEIC ACIDS RESEARCH
    Vol. {31}({1}), pp. {514-516} 
    article DOI  
    Abstract: Ribosomal RNA (rRNA)-targeted oligonucleotide probes are widely used for culture-independent identification of microorganisms in environmental and clinical samples. ProbeBase is a comprehensive database containing more than 700 published rRNA-targeted oligonucleotide probe sequences (status August 2002) with supporting bibliographic and biological annotation that can be accessed through the internet at http://www.probebase.net. Each oligonucleotide probe entry contains information on target organisms, target molecule (small- or large-subunit rRNA) and position, G + C content, predicted melting temperature, molecular weight, necessity of competitor probes, and the reference that originally described the oligonucleotide probe, including a link to the respective abstract at PubMed. In addition, probes successfully used for fluorescence in situ hybridization (FISH) are highlighted and the recommended hybridization conditions are listed. ProbeBase also offers difference alignments for 16S rRNA-targeted probes by using the probe match tool of the ARB software and the latest small-subunit rRNA ARB database (release June 2002). The option to directly submit probe sequences to the probe match tool of the Ribosomal Database Project II (RDP-II) further allows one to extract supplementary information on probe specificities. The two main features of probeBase, search probeBase and find probe set, help researchers to find suitable, published oligonucleotide probes for microorganisms of interest or for rRNA gene sequences submitted by the user. Furthermore, the search target site option provides guidance for the development of new FISH probes.
    BibTeX:
    @article{Loy2003,
      author = {Loy, A and Horn, M and Wagner, M},
      title = {probeBase: an online resource for rRNA-targeted oligonucleotide probes},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {2003},
      volume = {31},
      number = {1},
      pages = {514-516},
      doi = {{10.1093/nar/gkg015}}
    }
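
    Two of the per-probe annotations listed above, G+C content and predicted melting temperature, are simple to compute. A sketch using the common Wallace rule, Tm = 2(A+T) + 4(G+C); probeBase's own Tm formula may differ, and the example sequence is assumed to be the published EUB338 probe:

        def probe_stats(seq):
            # G+C percentage and Wallace-rule melting temperature (deg C)
            # for a short DNA oligonucleotide probe.
            seq = seq.upper()
            gc = sum(seq.count(b) for b in "GC")
            at = sum(seq.count(b) for b in "AT")
            return 100.0 * gc / len(seq), 2 * at + 4 * gc

        gc_percent, tm = probe_stats("GCTGCCTCCCGTAGGAGT")  # EUB338
        print(round(gc_percent, 1), tm)  # 66.7 60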
    
    Lu, G. TOP: a new method for protein structure comparisons and similarity searches {2000} JOURNAL OF APPLIED CRYSTALLOGRAPHY
    Vol. {33}({Part 1}), pp. {176-183} 
    article  
    Abstract: In order to facilitate the three-dimensional structure comparison of proteins, software for making comparisons and searching for similarities to protein structures in databases has been developed. The program identifies the residues that share similar positions of both main-chain and side-chain atoms between two proteins. The unique functions of the software also include database processing via Internet- and Web-based servers for different types of users. The developed method and its friendly user interface cope with many of the problems that frequently occur in protein structure comparisons, such as detecting structurally equivalent residues, misalignment caused by coincident match of C-alpha atoms, circular sequence permutations, tedious repetition of access, maintenance of the most recent database, and inconvenience of user interface. The program is also designed to cooperate with other tools in structural bioinformatics, such as the 3DB Browser software [Prilusky (1998), Protein Data Bank Q. Newslett. 54, 3-4] and the SCOP database [Murzin, Brenner, Hubbard & Chothia (1995). J. Mol. Biol. 247, 536-540], for convenient molecular modelling and protein structure analysis. A similarity ranking score of `structure diversity' is proposed in order to estimate the evolutionary distance between proteins based on the comparisons of their three-dimensional structures. The function of the program has been utilized as a part of an automated program for multiple protein structure alignment. In this paper, the algorithm of the program and results of systematic tests are presented and discussed.
    BibTeX:
    @article{Lu2000,
      author = {Lu, GG},
      title = {TOP: a new method for protein structure comparisons and similarity searches},
      journal = {JOURNAL OF APPLIED CRYSTALLOGRAPHY},
      year = {2000},
      volume = {33},
      number = {Part 1},
      pages = {176-183}
    }
    
    Lucking-Reiley, D. Auctions on the Internet: What's being auctioned, and how? {2000} JOURNAL OF INDUSTRIAL ECONOMICS
    Vol. {48}({3}), pp. {227-252} 
    article  
    Abstract: This paper is an economist's guide to auctions on the Internet. It traces the development of online auctions since 1993, and presents data from a comprehensive study of 142 different Internet auction sites. The results describe the transaction volumes, the types of auction mechanisms used, the types of goods auctioned, and the business models employed at the various sites. These new electronic-commerce institutions raise interesting questions for the economic theory of auctions, such as predicting the types of goods to be sold at auction, examining the incentive effects of varying auctioneer fee structures, and identifying the optimal auction formats for online sellers.
    BibTeX:
    @article{Lucking-Reiley2000,
      author = {Lucking-Reiley, D},
      title = {Auctions on the Internet: What's being auctioned, and how?},
      journal = {JOURNAL OF INDUSTRIAL ECONOMICS},
      year = {2000},
      volume = {48},
      number = {3},
      pages = {227-252}
    }
    
    Lynch, J. & Ariely, D. Wine online: Search costs affect competition on price, quality, and distribution {2000} MARKETING SCIENCE
    Vol. {19}({1}), pp. {83-103} 
    article  
    Abstract: A fundamental dilemma confronts retailers with stand-alone sites on the World Wide Web and those attempting to build electronic malls for delivery via the Internet, online services, or interactive television (Alba et al. 1997). For consumers, the main potential advantage of electronic shopping over other channels is a reduction in search costs for products and product-related information. Retailers, however, fear that such lowering of consumers' search costs will intensify competition and lower margins by expanding the scope of competition from local to national and international. Some retailers' electronic offerings have been constructed to thwart comparison shopping and to ward off price competition, dimming the appeal of many initial electronic shopping services. Ceteris paribus, if electronic shopping lowers the cost of acquiring price information, it should increase price sensitivity, just as is the case for price advertising. In a similar vein, though, electronic shopping can lower the cost of search for quality information. Most analyses ignore the offsetting potential of the latter effect to lower price sensitivity in the current period. They also ignore the potential of maximally transparent shopping systems to produce welfare gains that give consumers a long-term reason to give repeat business to electronic merchants (cf. Alba et al. 1997, Bakos 1997). We test conditions under which lowered search costs should increase or decrease price sensitivity. We conducted an experiment in which we varied independently three different search costs via electronic shopping: search cost for price information, search cost for quality information within a given store, and search cost for comparing across two competing electronic wine stores. Consumers spent their own money purchasing wines from two competing electronic merchants selling some overlapping and some unique wines. We show four primary empirical results. First, for differentiated products like wines, lowering the cost of search for quality information reduced price sensitivity. Second, price sensitivity for wines common to both stores increased when cross-store comparison was made easy, as many analysts have assumed. However, easy cross-store comparison had no effect on price sensitivity for unique wines. Third, making information environments more transparent by lowering all three search costs produced welfare gains for consumers. They liked the shopping experience more, selected wines they liked more in subsequent tasting, and their retention probability was higher when they were contacted two months later and invited to continue using the electronic shopping service from home. Fourth, we examined the implications of these results for manufacturers and examined how market shares of wines sold by two stores or one were affected by search costs. When store comparison was difficult, results showed that the market share of common wines was proportional to share of distribution; but when store comparison was made easy, the market share returns to distribution decreased significantly. All these results suggest incentives for retailers carrying differentiated goods to make information environments maximally transparent, but to avoid price competition by carrying more unique merchandise.
    BibTeX:
    @article{Lynch2000,
      author = {Lynch, JG and Ariely, D},
      title = {Wine online: Search costs affect competition on price, quality, and distribution},
      journal = {MARKETING SCIENCE},
      year = {2000},
      volume = {19},
      number = {1},
      pages = {83-103}
    }
    
    Mackinnon, J. Numerical distribution functions for unit root and cointegration tests {1996} JOURNAL OF APPLIED ECONOMETRICS
    Vol. {11}({6}), pp. {601-618} 
    article  
    Abstract: This paper employs response surface regressions based on simulation experiments to calculate distribution functions for some well-known unit root and cointegration test statistics. The principal contributions of the paper are a set of data files that contain estimated response surface coefficients and a computer program for utilizing them. This program, which is freely available via the Internet, can easily be used to calculate both asymptotic and finite-sample critical values and P-values for any of the tests. Graphs of some of the tabulated distribution functions are provided. An empirical example deals with interest rates and inflation rates in Canada.
    BibTeX:
    @article{Mackinnon1996,
      author = {Mackinnon, JG},
      title = {Numerical distribution functions for unit root and cointegration tests},
      journal = {JOURNAL OF APPLIED ECONOMETRICS},
      year = {1996},
      volume = {11},
      number = {6},
      pages = {601-618}
    }
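
    The response surfaces take a simple functional form: a critical value is approximated as CV(T) = b_inf + b_1/T + b_2/T^2 in the sample size T, with coefficients estimated from the simulation experiments. A sketch using the widely reproduced 5% Dickey-Fuller (constant, no trend) coefficients from MacKinnon's earlier tables; treat the exact numbers as illustrative rather than authoritative:

        def critical_value(T, b_inf, b1, b2):
            # Finite-sample critical value from a response surface in T.
            return b_inf + b1 / T + b2 / T**2

        # 5% Dickey-Fuller tau_c (constant, no trend) coefficients:
        print(round(critical_value(100, -2.8621, -2.738, -8.36), 4))
        # about -2.8903 for T = 100, versus -2.8621 asymptotically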
    
    Mackinnon, J., Haug, A. & Michelis, L. Numerical distribution functions of likelihood ratio tests for cointegration {1999} JOURNAL OF APPLIED ECONOMETRICS
    Vol. {14}({5}), pp. {563-577} 
    article  
    Abstract: This paper employs response surface regressions based on simulation experiments to calculate asymptotic distribution functions for the Johansen-type likelihood ratio tests for cointegration. These are carried out in the context of the models recently proposed by Pesaran, Shin, and Smith (1997) that allow for the possibility of exogenous variables integrated of order one. The paper calculates critical values that are very much more accurate than those available previously. The principal contributions of the paper are a set of data files that contain estimated asymptotic quantiles obtained from response surface estimation and a computer program for utilizing them. This program, which is freely available via the Internet, can be used to calculate both asymptotic critical values and P-values. Copyright (C) 1999 John Wiley & Sons, Ltd.
    BibTeX:
    @article{Mackinnon1999,
      author = {Mackinnon, JG and Haug, AA and Michelis, L},
      title = {Numerical distribution functions of likelihood ratio tests for cointegration},
      journal = {JOURNAL OF APPLIED ECONOMETRICS},
      year = {1999},
      volume = {14},
      number = {5},
      pages = {563-577}
    }
    
    Maiden, M., Bygraves, J., Feil, E., Morelli, G., Russell, J., Urwin, R., Zhang, Q., Zhou, J., Zurth, K., Caugant, D., Feavers, I., Achtman, M. & Spratt, B. Multilocus sequence typing: A portable approach to the identification of clones within populations of pathogenic microorganisms {1998} PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
    Vol. {95}({6}), pp. {3140-3145} 
    article  
    Abstract: Traditional and molecular typing schemes for the characterization of pathogenic microorganisms are poorly portable because they index variation that is difficult to compare among laboratories. To overcome these problems, we propose multilocus sequence typing (MLST), which exploits the unambiguous nature and electronic portability of nucleotide sequence data for the characterization of microorganisms. To evaluate MLST, we determined the sequences of approximately 470-bp fragments from 11 housekeeping genes in a reference set of 107 isolates of Neisseria meningitidis from invasive disease and healthy carriers. For each locus, alleles were assigned arbitrary numbers and dendrograms were constructed from the pairwise differences in multilocus allelic profiles by cluster analysis. The strain associations obtained were consistent with clonal groupings previously determined by multilocus enzyme electrophoresis. A subset of six gene fragments was chosen that retained the resolution and congruence achieved by using all 11 loci. Most isolates from hyper-virulent lineages of serogroups A, B, and C meningococci were identical for all loci or differed from the majority type at only a single locus. MLST using six loci therefore reliably identified the major meningococcal lineages associated with invasive disease. MLST can be applied to almost all bacterial species and other haploid organisms, including those that are difficult to cultivate. The overwhelming advantage of MLST over other molecular typing methods is that sequence data are truly portable between laboratories, permitting one expanding global database per species to be placed on a World-Wide Web site, thus enabling exchange of molecular typing data for global epidemiology via the Internet.
    BibTeX:
    @article{Maiden1998,
      author = {Maiden, MCJ and Bygraves, JA and Feil, E and Morelli, G and Russell, JE and Urwin, R and Zhang, Q and Zhou, JJ and Zurth, K and Caugant, DA and Feavers, IM and Achtman, M and Spratt, BG},
      title = {Multilocus sequence typing: A portable approach to the identification of clones within populations of pathogenic microorganisms},
      journal = {PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA},
      year = {1998},
      volume = {95},
      number = {6},
      pages = {3140-3145}
    }
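
    The bookkeeping behind MLST is simple enough to show in a few lines. A minimal Python sketch with made-up sequences and isolate names (allele numbers are arbitrary labels for distinct sequences, exactly as in the scheme described above):
      def assign_alleles(sequences):
          """Give each distinct sequence at one locus an arbitrary allele number."""
          alleles = {}
          for s in sequences:
              alleles.setdefault(s, len(alleles) + 1)
          return alleles

      # Toy data: three isolates typed at two loci (sequences are made up).
      locus1 = {"iso1": "ATG", "iso2": "ATG", "iso3": "ATC"}
      locus2 = {"iso1": "GGA", "iso2": "GGT", "iso3": "GGA"}
      a1, a2 = assign_alleles(locus1.values()), assign_alleles(locus2.values())
      profiles = {iso: (a1[locus1[iso]], a2[locus2[iso]]) for iso in locus1}

      def distance(p, q):
          """Number of loci at which two allelic profiles differ."""
          return sum(x != y for x, y in zip(p, q))

      print(profiles)                                      # {'iso1': (1, 1), ...}
      print(distance(profiles["iso1"], profiles["iso3"]))  # 1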
    
    Maiden, M.C.J. Multilocus sequence typing of bacteria {2006} ANNUAL REVIEW OF MICROBIOLOGY
    Vol. {60}, pp. {561-588} 
    article DOI  
    Abstract: Multilocus sequence typing (MLST) was proposed in 1998 as a portable, universal, and definitive method for characterizing bacteria, using the human pathogen Neisseria meningitidis as an example. In addition to providing a standardized approach to data collection, by examining the nucleotide sequences of multiple loci encoding housekeeping genes, or fragments of them, MLST data are made freely available over the Internet to ensure that a uniform nomenclature is readily available to all those interested in categorizing bacteria. At the time of writing, over thirty MLST schemes have been published and made available on the Internet, mostly for pathogenic bacteria, although there are schemes for pathogenic fungi and some nonpathogenic bacteria. MLST data have been employed in epidemiological investigations of various scales and in studies of the population biology, pathogenicity, and evolution of bacteria. The increasing speed and reduced cost of nucleotide sequence determination, together with improved web-based databases and analysis tools, present the prospect of increasingly wide application of MLST.
    BibTeX:
    @article{Maiden2006,
      author = {Maiden, Martin C. J.},
      title = {Multilocus sequence typing of bacteria},
      journal = {ANNUAL REVIEW OF MICROBIOLOGY},
      year = {2006},
      volume = {60},
      pages = {561-588},
      doi = {{10.1146/annurev.micro.59.030804.121325}}
    }
    
    Mandl, K., Kohane, I. & Brandt, A. Electronic patient-physician communication: Problems and promise {1998} ANNALS OF INTERNAL MEDICINE
    Vol. {129}({6}), pp. {495-500} 
    article  
    Abstract: A critical mass of Internet users will soon enable wide diffusion of electronic communication within medical practice. E-mail between physicians and patients offers important opportunities for better communication. Linking patients and physicians through e-mail may increase the involvement of patients in supervising and documenting their own health care, processes that may activate patients and contribute to improved health. These new linkages may have profound implications for the patient-physician relationship. Although the federal government proposes regulation of telemedicine technologies and medical software, communications technologies are evolving under less scrutiny. Unless these technologies are implemented with substantial forethought, they may disturb delicate balances in the patient-physician relationship, widen social disparities in health outcomes, and create barriers to access to health care. This paper seeks to identify the promise and pitfalls of electronic patient-physician communication before such technology becomes widely distributed. A research agenda is proposed that would provide data that are useful for careful shaping of the communications infrastructure. The paper addresses the need to 1) define appropriate use of the various modes of patient-physician communication, 2) ensure the security and confidentiality of patient information, 3) create user interfaces that guide patients in effective use of the technology, 4) proactively assess medicolegal liability, and 5) ensure access to the technology by a multicultural, multilingual population with varying degrees of literacy.
    BibTeX:
    @article{Mandl1998,
      author = {Mandl, KD and Kohane, IS and Brandt, AM},
      title = {Electronic patient-physician communication: Problems and promise},
      journal = {ANNALS OF INTERNAL MEDICINE},
      year = {1998},
      volume = {129},
      number = {6},
      pages = {495-500}
    }
    
    Masur, H., Kaplan, J. & Holmes, K. Guidelines for preventing opportunistic infections among HIV-infected persons - 2002 - Recommendations of the US Public Health Service and the Infectious Diseases Society of America {2002} ANNALS OF INTERNAL MEDICINE
    Vol. {137}({5, Part 2}), pp. {435-477} 
    article  
    Abstract: In 1995, the U.S. Public Health Service (USPHS) and the Infectious Diseases Society of America (IDSA) developed guidelines for preventing opportunistic infections (OIs) among persons infected with human immunodeficiency virus (HIV); these guidelines were updated in 1997 and 1999. This fourth edition of the guidelines, made available on the Internet in 2001, is intended for clinicians and other health-care providers who care for HIV-infected persons. The goal of these guidelines is to provide evidence-based guidelines for preventing OIs among HIV-infected adults and adolescents, including pregnant women, and HIV-exposed or infected children. Nineteen OIs, or groups of OIs, are addressed, and recommendations are included for preventing exposure to opportunistic pathogens, preventing first episodes of disease by chemoprophylaxis or vaccination (primary prophylaxis), and preventing disease recurrence (secondary prophylaxis). Major changes since the last edition of the guidelines include 1) updated recommendations for discontinuing primary and secondary OI prophylaxis among persons whose CD4(+) T lymphocyte counts have increased in response to antiretroviral therapy; 2) emphasis on screening all HIV-infected persons for infection with hepatitis C virus; 3) new information regarding transmission of human herpesvirus 8 infection; 4) new information regarding drug interactions, chiefly related to rifamycins and antiretroviral drugs; and 5) revised recommendations for immunizing HIV-infected adults and adolescents and HIV-exposed or infected children.
    BibTeX:
    @article{Masur2002,
      author = {Masur, H and Kaplan, JE and Holmes, KK},
      title = {Guidelines for preventing opportunistic infections among HIV-infected persons - 2002 - Recommendations of the US Public Health Service and the Infectious Diseases Society of America},
      journal = {ANNALS OF INTERNAL MEDICINE},
      year = {2002},
      volume = {137},
      number = {5, Part 2},
      pages = {435-477}
    }
    
    Mathwick, C., Malhotra, N. & Rigdon, E. Experiential value: conceptualization, measurement and application in the catalog and Internet shopping environment {2001} JOURNAL OF RETAILING
    Vol. {77}({1}), pp. {39-56} 
    article  
    Abstract: An experiential value scale (EVS) reflecting the benefits derived from perceptions of playfulness, aesthetics, customer ``return on investment'' and service excellence is developed and tested in the Internet and catalog shopping context. This study evaluates the psychometric properties of the EVS in both samples and tests the hypothesized hierarchical structure. Predictive modeling points to the value of the EVS as a measurement tool, useful in describing the perceived make-up of a retail value package and predicting differences in shopping preferences and patronage intent in multichannel retail systems. Study limitations and directions for future research are identified. (C) 2001 by New York University. All rights reserved.
    BibTeX:
    @article{Mathwick2001,
      author = {Mathwick, C and Malhotra, N and Rigdon, E},
      title = {Experiential value: conceptualization, measurement and application in the catalog and Internet shopping environment},
      journal = {JOURNAL OF RETAILING},
      year = {2001},
      volume = {77},
      number = {1},
      pages = {39-56}
    }
    
    McCabe, S., Boyd, C., Couper, M., Crawford, S. & D'Arcy, H. Mode effects for collecting alcohol and other drug use data: Web and US mail {2002} JOURNAL OF STUDIES ON ALCOHOL
    Vol. {63}({6}), pp. {755-761} 
    article  
    Abstract: Objective: The present study examined mode effects for collecting alcohol and other drug use data using a Web-based survey mode and a U.S. mail-based survey mode for comparison. Method: A survey regarding alcohol and other drugs was administered to a randomly selected sample of 7,000 undergraduate students attending a large midwestern research university in the spring of 2001. The sample was randomly assigned to either a Web-based survey mode (n = 3,500) or a U.S. mail-based mode (n = 3,500). Results: The Web survey mode of administration resulted in a final sample that more closely matched the target sample in gender mix than did the U.S. mail survey mode. The response rate for the Web survey mode was significantly higher than for the U.S. mail survey mode. Chi-square results indicated there were significant differences in response propensity by several sample characteristics including sex, race, class year and academic credit hours. Multivariate logistic regression results revealed significant racial and gender differences in the response propensity between and within modes. After controlling for design discrepancies, there were no significant differences between modes in data quality or substantive responses to substance-use variables. Conclusions: The findings of the present study provide strong evidence that Web surveys can be used as an effective mode for collecting alcohol and other drug use data among certain populations who have access to the Internet and high rates of use. Web surveys provide promise for enhancing survey research methodology among undergraduate college students.
    BibTeX:
    @article{McCabe2002,
      author = {McCabe, SE and Boyd, CJ and Couper, MP and Crawford, S and D'Arcy, H},
      title = {Mode effects for collecting alcohol and other drug use data: Web and US mail},
      journal = {JOURNAL OF STUDIES ON ALCOHOL},
      year = {2002},
      volume = {63},
      number = {6},
      pages = {755-761}
    }
    
    McEwen, A.S., Eliason, E.M., Bergstrom, J.W., Bridges, N.T., Hansen, C.J., Delamere, W.A., Grant, J.A., Gulick, V.C., Herkenhoff, K.E., Keszthelyi, L., Kirk, R.L., Mellon, M.T., Squyres, S.W., Thomas, N. & Weitz, C.M. Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment (HiRISE) {2007} JOURNAL OF GEOPHYSICAL RESEARCH-PLANETS
    Vol. {112}({E5}) 
    article DOI  
    Abstract: The HiRISE camera features a 0.5 m diameter primary mirror, 12 m effective focal length, and a focal plane system that can acquire images containing up to 28 Gb (gigabits) of data in as little as 6 seconds. HiRISE will provide detailed images (0.25 to 1.3 m/pixel) covering ~1% of the Martian surface during the 2-year Primary Science Phase (PSP) beginning November 2006. Most images will include color data covering 20% of the potential field of view. A top priority is to acquire ~1000 stereo pairs and apply precision geometric corrections to enable topographic measurements to better than 25 cm vertical precision. We expect to return more than 12 Tb of HiRISE data during the 2-year PSP, and use pixel binning, conversion from 14 to 8 bit values, and a lossless compression system to increase coverage. HiRISE images are acquired via 14 CCD detectors, each with 2 output channels, and with multiple choices for pixel binning and number of Time Delay and Integration lines. HiRISE will support Mars exploration by locating and characterizing past, present, and future landing sites, unsuccessful landing sites, and past and potentially future rover traverses. We will investigate cratering, volcanism, tectonism, hydrology, sedimentary processes, stratigraphy, aeolian processes, mass wasting, landscape evolution, seasonal processes, climate change, spectrophotometry, glacial and periglacial processes, polar geology, and regolith properties. An Internet Web site (HiWeb) will enable anyone in the world to suggest HiRISE targets on Mars and to easily locate, view, and download HiRISE data products.
    BibTeX:
    @article{McEwen2007,
      author = {McEwen, Alfred S. and Eliason, Eric M. and Bergstrom, James W. and Bridges, Nathan T. and Hansen, Candice J. and Delamere, W. Alan and Grant, John A. and Gulick, Virginia C. and Herkenhoff, Kenneth E. and Keszthelyi, Laszlo and Kirk, Randolph L. and Mellon, Michael T. and Squyres, Steven W. and Thomas, Nicolas and Weitz, Catherine M.},
      title = {Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment (HiRISE)},
      journal = {JOURNAL OF GEOPHYSICAL RESEARCH-PLANETS},
      year = {2007},
      volume = {112},
      number = {E5},
      doi = {{10.1029/2005JE002605}}
    }
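
    The abstract's data-volume figures can be reproduced with back-of-envelope arithmetic. The pixel counts below are hypothetical round numbers chosen to match the quoted 28 Gb maximum; only the 14-bit depth, the 8-bit conversion, and pixel binning come from the entry:
      # A 20,000 x 100,000 pixel observation at 14 bits/pixel is 28 Gb,
      # matching the quoted maximum image size (pixel counts are assumed).
      bits_full = 20_000 * 100_000 * 14
      print(bits_full / 1e9, "Gb")          # 28.0

      # 2x2 binning plus 14-to-8-bit conversion cuts the volume sevenfold.
      bits_reduced = (20_000 // 2) * (100_000 // 2) * 8
      print(bits_reduced / 1e9, "Gb")       # 4.0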
    
    McFarlane, M., Bull, S. & Rietmeijer, C. The Internet as a newly emerging risk environment for sexually transmitted diseases {2000} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {284}({4}), pp. {443+} 
    article  
    Abstract: Context Transmission of sexually transmitted diseases (STDs) such as human immunodeficiency virus (HIV) infection is associated with unprotected sex among multiple anonymous sex partners. The role of the Internet in risk of STDs is not known. Objective To compare risk of STD transmission for persons who seek sex partners on the Internet with risk for persons not seeking sex partners on the Internet. Design Cross-sectional survey conducted September 1999 through April 2000. Setting and Participants A total of 856 clients of the Denver Public Health HIV Counseling and Testing Site in Colorado. Main Outcome Measures Self-report of logging on to the Internet with the intention of finding sex partners; having sex with partners who were originally contacted via the Internet; number of such partners and use of condoms with them; and time since last sexual contact with Internet partners, linked to HIV risk assessment and test records. Results Of the 856 clients, most were white (77.8%), men (69.2%), heterosexual (65.3%), and aged 20 to 50 years (84.1%). Of those, 135 (15.8%) had sought sex partners on the Internet, and 88 (65.2%) of these reported having sex with a partner initially met via the Internet. Of those with Internet partners, 34 (38.7%) had 4 or more such partners, with 62 (71.2%) of contacts occurring within 6 months prior to the client's HIV test. Internet sex seekers were more likely to be men (P<.001) and homosexual (P<.001) than those not seeking sex via the Internet. Internet sex seekers reported more previous STDs (P=.02); more partners (P<.001); more anal sex (P<.001); and more sexual exposure to men (P<.001), men who have sex with men (P<.001), and partners known to be HIV positive (P<.001) than those not seeking sex via the Internet. Conclusions Seeking sex partners via the Internet was a relatively common practice in this sample of persons seeking HIV testing and counseling (representative of neither Denver nor the overall US population). Clients who seek sex using the Internet appear to be at greater risk for STDs than clients who do not seek sex on the Internet.
    BibTeX:
    @article{McFarlane2000,
      author = {McFarlane, M and Bull, SS and Rietmeijer, CA},
      title = {The Internet as a newly emerging risk environment for sexually transmitted diseases},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {2000},
      volume = {284},
      number = {4},
      pages = {443+}
    }
    
    McKay, H., King, D., Eakin, E., Seeley, J. & Glasgow, R. The diabetes network Internet-based physical activity intervention - A randomized pilot study {2001} DIABETES CARE
    Vol. {24}({8}), pp. {1328-1334} 
    article  
    Abstract: OBJECTIVE - Because of other competing priorities, physical activity (PA) is seldom addressed in a consistent way in either primary care or diabetes education. This 8-week pilot study evaluated the short-term benefits of an Internet-based supplement to usual care that focused on providing support for sedentary patients with type 2 diabetes to increase their PA levels. RESEARCH DESIGN AND METHODS - A total of 78 type 2 diabetic patients (53% female, average age 52.3 years) were randomized to the Diabetes Network (D-Net) Active Lives PA Intervention or an Internet information-only condition. The intervention condition received goal-setting and personalized feedback, identified and developed strategies to overcome barriers, received and could post messages to an on-line ``personal coach,'' and were invited to participate in peer group support areas. Key outcomes included minutes of PA per week and depressive symptomatology. RESULTS - There was an overall moderate improvement in PA levels within both intervention and control conditions, but no significant difference between conditions. There was substantial variability in both site use and outcomes within the intervention and control conditions. Internal analyses revealed that among intervention participants, those who used the site more regularly derived significantly greater benefits, whereas those in the control condition derived no similar benefits with increased program use. CONCLUSIONS - Internet-based self-management interventions for PA and other regimen areas have great potential to enhance the care of diabetes and other chronic conditions. We conclude that greater attention should be focused on methods to sustain involvement with Internet-based health promotion programs over time.
    BibTeX:
    @article{McKay2001,
      author = {McKay, HG and King, D and Eakin, EG and Seeley, JR and Glasgow, RE},
      title = {The diabetes network Internet-based physical activity intervention - A randomized pilot study},
      journal = {DIABETES CARE},
      year = {2001},
      volume = {24},
      number = {8},
      pages = {1328-1334}
    }
    
    McKenna, K. & Bargh, J. Plan 9 from cyberspace: The implications of the internet for personality and social psychology {2000} PERSONALITY AND SOCIAL PSYCHOLOGY REVIEW
    Vol. {4}({1}), pp. {57-75} 
    article  
    Abstract: Just as with most other communication breakthroughs before it, the initial media and popular reaction to the Internet has been largely negative, if not apocalyptic. For example, it has been described as ``awash in pornography'' and more recently as making people ``sad and lonely.'' Yet counter to the initial and widely publicized claim that Internet use causes depression and social isolation, the body of evidence (even in the initial study on which the claim was based) is mainly to the contrary. More than this, however, it is argued that like the telephone and television before it, the Internet by itself is not a main effect cause of anything, and that psychology must move beyond this notion to an informed analysis of how social identity, social interaction, and relationship formation may be different on the Internet than in real life. Four major differences and their implications for self and identity, social interaction, and relationships are identified: one's greater anonymity, the greatly reduced importance of physical appearance and physical distance as ``gating features'' to relationship development, and one's greater control over the time and pace of interactions. Existing research is reviewed along these lines and some promising directions for future research are described.
    BibTeX:
    @article{McKenna2000,
      author = {McKenna, KYA and Bargh, JA},
      title = {Plan 9 from cyberspace: The implications of the internet for personality and social psychology},
      journal = {PERSONALITY AND SOCIAL PSYCHOLOGY REVIEW},
      year = {2000},
      volume = {4},
      number = {1},
      pages = {57-75}
    }
    
    McKenna, K. & Bargh, J. Coming out in the age of the Internet: Identity ``demarginalization'' through virtual group participation {1998} JOURNAL OF PERSONALITY AND SOCIAL PSYCHOLOGY
    Vol. {75}({3}), pp. {681-694} 
    article  
    Abstract: Internet newsgroups allow individuals to interact with others in a relatively anonymous fashion and thereby provide individuals with concealable stigmatized identities a place to belong not otherwise available. Thus, membership in these groups should become an important part of identity. Study 1 found that members of newsgroups dealing with marginalized-concealable identities modified their newsgroup behavior on the basis of reactions of other members, unlike members of marginalized-conspicuous or mainstream newsgroups. This increase in identity importance from newsgroup participation was shown in both Study 2 (marginalized sexual identities) and Study 3 (marginalized ideological identities) to lead to greater self-acceptance, as well as coming out about the secret identity to family and friends. Results supported the view that Internet groups obey general principles of social group functioning and have real-life consequences for the individual.
    BibTeX:
    @article{McKenna1998,
      author = {McKenna, KYA and Bargh, JA},
      title = {Coming out in the age of the Internet: Identity ``demarginalization'' through virtual group participation},
      journal = {JOURNAL OF PERSONALITY AND SOCIAL PSYCHOLOGY},
      year = {1998},
      volume = {75},
      number = {3},
      pages = {681-694},
      note = {Annual Meeting of the Society-for-Experimental-Social-Psychology, TORONTO, CANADA, OCT, 1997}
    }
    
    McKenna, K., Green, A. & Gleason, M. Relationship formation on the Internet: What's the big attraction? {2002} JOURNAL OF SOCIAL ISSUES
    Vol. {58}({1}), pp. {9-31} 
    article  
    Abstract: We hypothesized that people who can better disclose their ``true'' or inner self to others on the Internet than in face-to-face settings will be more likely to form close relationships on-line and will tend to bring those virtual relationships into their ``real'' lives. Study 1, a survey of randomly selected Internet newsgroup posters, showed that those who better express their true self over the Internet were more likely than others to have formed close on-line relationships and moved these friendships to a face-to-face basis. Study 2 revealed that the majority of these close Internet relationships were still intact 2 years later. Finally, a laboratory experiment found that undergraduates liked each other more following an Internet compared to a face-to-face initial meeting.
    BibTeX:
    @article{McKenna2002,
      author = {McKenna, KYA and Green, AS and Gleason, MEJ},
      title = {Relationship formation on the Internet: What's the big attraction?},
      journal = {JOURNAL OF SOCIAL ISSUES},
      year = {2002},
      volume = {58},
      number = {1},
      pages = {9-31}
    }
    
    McKnight, D. & Chervany, N. What trust means in e-commerce customer relationships: An interdisciplinary conceptual typology {2001} INTERNATIONAL JOURNAL OF ELECTRONIC COMMERCE
    Vol. {6}({2}), pp. {35-59} 
    article  
    Abstract: Trust is a vital relationship concept that needs clarification because researchers across disciplines have defined it in so many different ways. A typology of trust types would make it easier to compare and communicate results, and would be especially valuable if the types of trust related to one another. The typology should be interdisciplinary because many disciplines research e-commerce. This paper justifies a parsimonious interdisciplinary typology and relates trust constructs to e-commerce consumer actions, defining both conceptual-level and operational-level trust constructs. Conceptual-level constructs consist of disposition to trust (primarily from psychology), institution-based trust (from sociology), and trusting beliefs and trusting intentions (primarily from social psychology). Each construct is decomposed into measurable subconstructs, and the typology shows how trust constructs relate to already existing Internet relationship constructs. The effects of Web vendor interventions on consumer behaviors are posited to be partially mediated by consumer trusting beliefs and trusting intentions in the e-vendor.
    BibTeX:
    @article{McKnight2001,
      author = {McKnight, DH and Chervany, NL},
      title = {What trust means in e-commerce customer relationships: An interdisciplinary conceptual typology},
      journal = {INTERNATIONAL JOURNAL OF ELECTRONIC COMMERCE},
      year = {2001},
      volume = {6},
      number = {2},
      pages = {35-59}
    }
    
    McKnight, D., Choudhury, V. & Kacmar, C. The impact of initial consumer trust on intentions to transact with a web site: a trust building model {2002} JOURNAL OF STRATEGIC INFORMATION SYSTEMS
    Vol. {11}({3-4}), pp. {297-323} 
    article  
    Abstract: This paper develops and tests a model of consumer trust in an electronic commerce vendor. Building consumer trust is a strategic imperative for web-based vendors because trust strongly influences consumer intentions to transact with unfamiliar vendors via the web. Trust allows consumers to overcome perceptions of risk and uncertainty, and to engage in the following three behaviors that are critical to the realization of a web-based vendor's strategic objectives: following advice offered by the web vendor, sharing personal information with the vendor, and purchasing from the vendor's web site. Trust in the vendor is defined as a multi-dimensional construct with two inter-related components-trusting beliefs (perceptions of the competence, benevolence, and integrity of the vendor), and trusting intentions-willingness to depend (that is, a decision to make oneself vulnerable to the vendor). Three factors are proposed for building consumer trust in the vendor: structural assurance (that is, consumer perceptions of the safety of the web environment), perceived web vendor reputation, and perceived web site quality. The model is tested in the context of a hypothetical web site offering legal advice. All three factors significantly influenced consumer trust in the web vendor. That is, these factors, especially web site quality and reputation, are powerful levers that vendors can use to build consumer trust, in order to overcome the negative perceptions people often have about the safety of the web environment. The study also demonstrates that perceived Internet risk negatively affects consumer intentions to transact with a web-based vendor. (C) 2002 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{McKnight2002,
      author = {McKnight, DH and Choudhury, V and Kacmar, C},
      title = {The impact of initial consumer trust on intentions to transact with a web site: a trust building model},
      journal = {JOURNAL OF STRATEGIC INFORMATION SYSTEMS},
      year = {2002},
      volume = {11},
      number = {3-4},
      pages = {297-323}
    }
    
    MEHTA, R. & SIVADAS, E. COMPARING RESPONSE RATES AND RESPONSE CONTENT IN MAIL VERSUS ELECTRONIC MAIL SURVEYS {1995} JOURNAL OF THE MARKET RESEARCH SOCIETY
    Vol. {37}({4}), pp. {429-439} 
    article  
    Abstract: This study reports results of an experiment conducted to compare response rates and response content in mail and electronic mail surveys. Respondents on a large global network (Internet) were sent mail and e-mail surveys assessing their attitudes towards the commercialisation of the Internet. Respondents were randomly assigned to one of five groups (group 1: regular mail, no prenotification, no incentives, no reminders; group 2: regular mail with prenotification, incentives, and wave mailing; groups 3 and 4: e-mail replications of groups 1 and 2; and group 5: international, but otherwise the same as group 4). The results highlight the strengths and weaknesses of using electronic mail for gathering information from domestic and international respondents.
    BibTeX:
    @article{MEHTA1995,
      author = {MEHTA, R and SIVADAS, E},
      title = {COMPARING RESPONSE RATES AND RESPONSE CONTENT IN MAIL VERSUS ELECTRONIC MAIL SURVEYS},
      journal = {JOURNAL OF THE MARKET RESEARCH SOCIETY},
      year = {1995},
      volume = {37},
      number = {4},
      pages = {429-439}
    }
    
    Mewes, H., Albermann, K., Heumann, K., Liebl, S. & Pfeiffer, F. MIPS: A database for protein sequences, homology data and yeast genome information {1997} NUCLEIC ACIDS RESEARCH
    Vol. {25}({1}), pp. {28-30} 
    article  
    Abstract: The MIPS group (Martinsried Institute for Protein Sequences) at the Max-Planck-Institute for Biochemistry, Martinsried near Munich, Germany, collects, processes and distributes protein sequence data within the framework of the tripartite association of the Protein Sequence Database (1,2), contributing nearly 50% of the data input to the Protein Sequence Database. The database is distributed on CD-ROM together with PATCHX, an exhaustive supplement of unique, unverified protein sequences from external sources compiled by MIPS. Through its WWW server (http://www.mips.biochem.mpg.de/) MIPS permits internet access to sequence databases, homology data and to yeast genome information. (i) Sequence similarity results from the FASTA program (3) are stored in the FASTA database for all proteins from PIR-International and PATCHX. The database is dynamically maintained and permits instant access to FASTA results. (ii) Starting with FASTA database queries, proteins have been classified into families and superfamilies (PROT-FAM). (iii) The HPT (hashed position tree) data structure (4) developed at MIPS is a new approach for rapid sequence and pattern searching. (iv) MIPS provides access to the sequence and annotation of the complete yeast genome (5), the functional classification of yeast genes (FunCat) and its graphical display, the `Genome Browser' (6). A CD-ROM based on the JAVA programming language providing dynamic interactive access to the yeast genome and the related protein sequences has been compiled and is available on request.
    BibTeX:
    @article{Mewes1997,
      author = {Mewes, HW and Albermann, K and Heumann, K and Liebl, S and Pfeiffer, F},
      title = {MIPS: A database for protein sequences, homology data and yeast genome information},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {1997},
      volume = {25},
      number = {1},
      pages = {28-30}
    }
    
    Miller, E., Neal, D., Roberts, L., Baer, J., Cressler, S., Metrik, J. & Marlatt, G. Test-retest reliability of alcohol measures: Is there a difference between Internet-based assessment and traditional methods? {2002} PSYCHOLOGY OF ADDICTIVE BEHAVIORS
    Vol. {16}({1}), pp. {56-63} 
    article DOI  
    Abstract: This study compared Web-based assessment techniques with traditional paper-based methods of commonly used measures of alcohol use. Test-retest reliabilities were obtained, and tests of validity were conducted. A total of 255 participants were randomly assigned to 1 of 3 conditions: paper-based (P&P), Web-based (Web), or Web-based with interruption (Web-I). Follow-up assessments 1 week later indicated reliabilities ranging from .59 to .93 within all measures and across all assessment methods. Significantly high test-retest reliability coefficients support the use of these measures for research and clinical applications. Furthermore, no significant differences were found between assessment techniques, suggesting that Web-based methods are a suitable alternative to more traditional methods. This cost-efficient alternative has the advantage of minimizing data collection and entry errors while increasing survey accessibility.
    BibTeX:
    @article{Miller2002,
      author = {Miller, ET and Neal, DJ and Roberts, LJ and Baer, JS and Cressler, SO and Metrik, J and Marlatt, GA},
      title = {Test-retest reliability of alcohol measures: Is there a difference between Internet-based assessment and traditional methods?},
      journal = {PSYCHOLOGY OF ADDICTIVE BEHAVIORS},
      year = {2002},
      volume = {16},
      number = {1},
      pages = {56-63},
      doi = {{10.1037//0893-164X.16.1.56}}
    }
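
    Test-retest reliability as reported above is typically the correlation between the two administrations of the same measure. A minimal Python sketch with made-up scores (not the study's data):
      from statistics import correlation  # Python 3.10+

      week1 = [12, 5, 30, 8, 22, 15]   # hypothetical drinks-per-week, time 1
      week2 = [10, 6, 28, 9, 25, 14]   # same respondents one week later
      print(round(correlation(week1, week2), 2))   # Pearson r, here ~0.98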
    
    MILLS, D. INTERNET TIME SYNCHRONIZATION - THE NETWORK TIME PROTOCOL {1991} IEEE TRANSACTIONS ON COMMUNICATIONS
    Vol. {39}({10}), pp. {1482-1493} 
    article  
    Abstract: This paper describes the network time protocol (NTP), which is designed to distribute time information in a large, diverse internet system operating at speeds from mundane to lightwave. It uses a symmetric architecture in which a distributed subnet of time servers operating in a self-organizing, hierarchical configuration synchronizes local clocks within the subnet and to national time standards via wire, radio, or calibrated atomic clock. The servers can also redistribute time information within a network via local routing algorithms and time daemons. This paper also discusses the architecture, protocol and algorithms, which were developed over several years of implementation refinement and resulted in the designation of NTP as an Internet Standard protocol. The NTP synchronization system, which has been in regular operation in the Internet for the last several years, is described along with performance data which shows that timekeeping accuracy throughout most portions of the Internet can be ordinarily maintained to within a few milliseconds, even in cases of failure or disruption of clocks, time servers or networks.
    BibTeX:
    @article{MILLS1991,
      author = {MILLS, DL},
      title = {INTERNET TIME SYNCHRONIZATION - THE NETWORK TIME PROTOCOL},
      journal = {IEEE TRANSACTIONS ON COMMUNICATIONS},
      year = {1991},
      volume = {39},
      number = {10},
      pages = {1482-1493}
    }
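
    The heart of NTP is a small computation: from the four timestamps of one request/response exchange, the client estimates its clock offset relative to the server and the round-trip delay. A sketch of that textbook calculation (timestamps are made up):
      # T1: client send, T2: server receive, T3: server send, T4: client receive.
      def ntp_offset_delay(t1, t2, t3, t4):
          offset = ((t2 - t1) + (t3 - t4)) / 2   # server clock minus client clock
          delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
          return offset, delay

      print(ntp_offset_delay(100.000, 100.120, 100.121, 100.041))
      # -> (0.1, 0.04): server ~100 ms ahead, 40 ms round trip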
    
    Miyazaki, A. & Fernandez, A. Consumer perceptions of privacy and security risks for online shopping {2001} JOURNAL OF CONSUMER AFFAIRS
    Vol. {35}({1}), pp. {27-44} 
    article  
    Abstract: Government and industry organizations have declared information privacy and security to be major obstacles in the development of consumer-related e-commerce. Risk perceptions regarding Internet privacy and security have been identified as issues for both new and experienced users of Internet technology. This paper explores risk perceptions among consumers of varying levels of Internet experience and how these perceptions relate to online shopping activity. Findings provide evidence of hypothesized relationships among consumers' levels of Internet experience, the use of alternate remote purchasing methods (such as telephone and mail-order shopping), the perceived risks of online shopping, and online purchasing activity. Implications for online commerce and consumer welfare are discussed.
    BibTeX:
    @article{Miyazaki2001,
      author = {Miyazaki, AD and Fernandez, A},
      title = {Consumer perceptions of privacy and security risks for online shopping},
      journal = {JOURNAL OF CONSUMER AFFAIRS},
      year = {2001},
      volume = {35},
      number = {1},
      pages = {27-44}
    }
    
    Moan, J. & Peng, Q. An outline of the hundred-year history of PDT {2003} ANTICANCER RESEARCH
    Vol. {23}({5A}), pp. {3591-3600} 
    article  
    Abstract: Photosensitizing drugs have been known and applied in medicine for several thousand years. However, the scientific basis for such use was vague or non-existent before about 1900. Photodynamic therapy, PDT, has now become an established treatment modality for several medical indications. Notably, in the cases of skin actinic keratosis, several forms of cancer and blindness due to age-related macular degeneration, PDT has been successful. PDT is the combined application of a lesion-localizing photosensitizer and light. PDT with porphyrin derivatives as photosensitizing drugs was developed from about 1960. The basic, underlying mechanisms for tumour localization of photosensitizers and processes explaining the effect of PDT on tumours were elucidated from about that time. It has become clear that PDT is efficient only in the presence of oxygen, and that the oxygen dependency of PDT is similar to that of X-rays. Singlet oxygen, 1O2, a short-lived product of the reaction between an excited sensitizer molecule and oxygen, plays a key role. In contrast to radiation therapy and chemotherapy, PDT has a low mutagenic potential and, except for skin phototoxicity, few adverse effects. Approvals for clinical use of PDT now exist in many countries. The annual number of scientific articles on PDT, clinical as well as basic, steadily increases and new aspects and applications of it continue to be discovered. Many of the new investigators are obviously not aware of the early work in the field and repeat many of the experiments that had been reported before the Internet and modern databases were established. Therefore, in the present historical review, the early work is weighted more heavily than recent work that is more easily accessible to the readers.
    BibTeX:
    @article{Moan2003,
      author = {Moan, J and Peng, Q},
      title = {An outline of the hundred-year history of PDT},
      journal = {ANTICANCER RESEARCH},
      year = {2003},
      volume = {23},
      number = {5A},
      pages = {3591-3600}
    }
    
    Mohan, R., Smith, J.R. & Li, C.-S. Adapting Multimedia Internet Content for Universal Access {1999} IEEE TRANSACTIONS ON MULTIMEDIA
    Vol. {1}({1}), pp. {104-114} 
    article  
    Abstract: Content delivery over the Internet needs to address both the multimedia nature of the content and the capabilities of the diverse client platforms the content is being delivered to. We present a system that adapts multimedia Web documents to optimally match the capabilities of the client device requesting it. This system has two key components. 1) A representation scheme called the InfoPyramid that provides a multimodal, multiresolution representation hierarchy for multimedia. 2) A customizer that selects the best content representation to meet the client capabilities while delivering the most value. We model the selection process as a resource allocation problem in a generalized rate-distortion framework. In this framework, we address the issue of both multiple media types in a Web document and multiple resource types at the client. We extend this framework to allow prioritization on the content items in a Web document. We illustrate our content adaptation technique with a web server that adapts multimedia news stories to clients as diverse as workstations, PDA's and cellular phones.
    BibTeX:
    @article{Mohan1999,
      author = {Mohan, Rakesh and Smith, John R. and Li, Chung-Sheng},
      title = {Adapting Multimedia Internet Content for Universal Access},
      journal = {IEEE TRANSACTIONS ON MULTIMEDIA},
      year = {1999},
      volume = {1},
      number = {1},
      pages = {104-114}
    }
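
    The customizer's selection step can be read as a small optimization: pick one version of each content item so that total value is maximized within the client's resource budget. A brute-force Python sketch under that framing, with made-up costs and values (the paper treats this more generally as rate-distortion resource allocation):
      from itertools import product

      # (cost, value) per available version of each content item.
      image = [(200, 5), (50, 3), (5, 1)]    # full-res, thumbnail, text summary
      video = [(800, 8), (100, 4)]           # clip, key frame
      budget = 300                           # hypothetical client resource limit

      best = max(
          (combo for combo in product(image, video)
           if sum(c for c, _ in combo) <= budget),
          key=lambda combo: sum(v for _, v in combo),
      )
      print(best)   # ((200, 5), (100, 4)) -> full image plus key frame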
    
    Montaner, M., Lopez, B. & de la Rosa, J. A taxonomy of recommender agents on the Internet {2003} ARTIFICIAL INTELLIGENCE REVIEW
    Vol. {19}({4}), pp. {285-330} 
    article  
    Abstract: Recently, Artificial Intelligence techniques have proved useful in helping users to handle the large amount of information on the Internet. The idea of personalized search engines, intelligent software agents, and recommender systems has been widely accepted among users who require assistance in searching, sorting, classifying, filtering and sharing this vast quantity of information. In this paper, we present a state-of-the-art taxonomy of intelligent recommender agents on the Internet. We have analyzed 37 different systems and their references and have sorted them into a list of 8 basic dimensions. These dimensions are then used to establish a taxonomy under which the systems analyzed are classified. Finally, we conclude this paper with a cross-dimensional analysis with the aim of providing a starting point for researchers to construct their own recommender system.
    BibTeX:
    @article{Montaner2003,
      author = {Montaner, M and Lopez, B and de la Rosa, JL},
      title = {A taxonomy of recommender agents on the Internet},
      journal = {ARTIFICIAL INTELLIGENCE REVIEW},
      year = {2003},
      volume = {19},
      number = {4},
      pages = {285-330}
    }
    
    Morahan-Martin, J. & Schumacher, P. Incidence and correlates of pathological Internet use among college students {2000} COMPUTERS IN HUMAN BEHAVIOR
    Vol. {16}({1}), pp. {13-29} 
    article  
    Abstract: This study surveyed 277 undergraduate Internet users, a population considered to be high risk for pathological Internet use (PIU), to assess incidence of PIU as well as characteristics of the Internet and of users associated with PIU. Pathological use was determined by responses to 13 questions which assessed evidence that Internet use was causing academic, work or interpersonal problems, distress, tolerance symptoms, and mood-altering use of the Internet. Approximately one-quarter of students (27.2%) reported no symptoms (NO) while 64.7% reported one to three symptoms (Limited Symptoms) and 8.1% reported four or more symptoms (PIU). Based on popular stereotypes as well as previous research, it was predicted that pathological Internet users would more likely be males, technologically sophisticated, use real-time interactive activities such as online games and chat lines, and feel comfortable and competent online. Further, it was hypothesized that pathological users would be more likely to be lonely and to be socially disinhibited online. Partial confirmation of this model was obtained. Pathological users were more likely to be males and to use online games as well as technologically sophisticated sites, but there was no difference in Internet Relay Chat use. Although reported comfort and competence with the Internet was in the expected direction, differences were not significant. Pathological users scored significantly higher on the UCLA Loneliness Scale, and were socially disinhibited online. (C) 2000 Elsevier Science Ltd. All rights reserved.
    BibTeX:
    @article{Morahan-Martin2000,
      author = {Morahan-Martin, J and Schumacher, P},
      title = {Incidence and correlates of pathological Internet use among college students},
      journal = {COMPUTERS IN HUMAN BEHAVIOR},
      year = {2000},
      volume = {16},
      number = {1},
      pages = {13-29},
      note = {105th Annual Meeting of the American-Psychological-Association, CHICAGO, ILLINOIS, AUG 15-19, 1997}
    }
    
    Moreno, Y., Pastor-Satorras, R. & Vespignani, A. Epidemic outbreaks in complex heterogeneous networks {2002} EUROPEAN PHYSICAL JOURNAL B
    Vol. {26}({4}), pp. {521-529} 
    article DOI  
    Abstract: We present a detailed analytical and numerical study for the spreading of infections with acquired immunity in complex population networks. We show that the large connectivity fluctuations usually found in these networks strengthen considerably the incidence of epidemic outbreaks. Scale-free networks, which are characterized by diverging connectivity fluctuations in the limit of a very large number of nodes, exhibit the lack of an epidemic threshold and always show a finite fraction of infected individuals. This particular weakness, observed also in models without immunity, defines a new epidemiological framework characterized by a highly heterogeneous response of the system to the introduction of infected individuals with different connectivity. The understanding of epidemics in complex networks might deliver new insights in the spread of information and diseases in biological and technological networks that often appear to be characterized by complex heterogeneous architectures.
    BibTeX:
    @article{Moreno2002,
      author = {Moreno, Y and Pastor-Satorras, R and Vespignani, A},
      title = {Epidemic outbreaks in complex heterogeneous networks},
      journal = {EUROPEAN PHYSICAL JOURNAL B},
      year = {2002},
      volume = {26},
      number = {4},
      pages = {521-529},
      doi = {{10.1140/epjb/e20020122}}
    }
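
    The role of connectivity fluctuations can be made concrete with the standard threshold result for epidemics on uncorrelated networks, lambda_c = <k>/(<k^2> - <k>): as the degree cutoff of a P(k) ~ k^-3 network grows, <k^2> diverges and the threshold sinks toward zero. A small numerical illustration (degree cutoffs are arbitrary):
      def threshold(kmax, gamma=3.0, kmin=2):
          """SIR epidemic threshold <k>/(<k^2>-<k>) for P(k) ~ k^-gamma."""
          ks = range(kmin, kmax + 1)
          norm = sum(k**-gamma for k in ks)
          k1 = sum(k * k**-gamma for k in ks) / norm       # <k>
          k2 = sum(k * k * k**-gamma for k in ks) / norm   # <k^2>
          return k1 / (k2 - k1)

      for kmax in (10, 100, 1000, 10000):
          print(kmax, round(threshold(kmax), 4))   # shrinks as the cutoff grows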
    
    Motter, A. Cascade control and defense in complex networks {2004} PHYSICAL REVIEW LETTERS
    Vol. {93}({9}) 
    article DOI  
    Abstract: Complex networks with a heterogeneous distribution of loads may undergo a global cascade of overload failures when highly loaded nodes or edges are removed due to attacks or failures. Since a small attack or failure has the potential to trigger a global cascade, a fundamental question regards the possible strategies of defense to prevent the cascade from propagating through the entire network. Here we introduce and investigate a costless strategy of defense based on a selective further removal of nodes and edges, right after the initial attack or failure. This intentional removal of network elements is shown to drastically reduce the size of the cascade.
    BibTeX:
    @article{Motter2004,
      author = {Motter, AE},
      title = {Cascade control and defense in complex networks},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2004},
      volume = {93},
      number = {9},
      doi = {{10.1103/PhysRevLett.93.098701}}
    }
    
    Motter, A. & Lai, Y. Cascade-based attacks on complex networks {2002} PHYSICAL REVIEW E
    Vol. {66}({6, Part 2}) 
    article DOI  
    Abstract: We live in a modern world supported by large, complex networks. Examples range from financial markets to communication and transportation systems. In many realistic situations the flow of physical quantities in the network, as characterized by the loads on nodes, is important. We show that for such networks where loads can redistribute among the nodes, intentional attacks can lead to a cascade of overload failures, which can in turn cause the entire or a substantial part of the network to collapse. This is relevant for real-world networks that possess a highly heterogeneous distribution of loads, such as the Internet and power grids. We demonstrate that the heterogeneity of these networks makes them particularly vulnerable to attacks in that a large-scale cascade may be triggered by disabling a single key node. This brings obvious concerns on the security of such systems.
    BibTeX:
    @article{Motter2002,
      author = {Motter, AE and Lai, YC},
      title = {Cascade-based attacks on complex networks},
      journal = {PHYSICAL REVIEW E},
      year = {2002},
      volume = {66},
      number = {6, Part 2},
      doi = {{10.1103/PhysRevE.66.065102}}
    }
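
    A minimal simulation of a cascade of overload failures in this spirit: take node load as (unnormalized) betweenness, give each node capacity proportional to its initial load, remove the most loaded node, and iterate removals of overloaded nodes. This is a sketch of the general load-capacity mechanism, not the paper's exact protocol; the graph and tolerance parameter are arbitrary:
      import networkx as nx

      def cascade_survivors(G, alpha=0.2):
          """Nodes left after attacking the most-loaded node."""
          G = G.copy()
          load0 = nx.betweenness_centrality(G, normalized=False)
          cap = {n: (1 + alpha) * l for n, l in load0.items()}  # C_j = (1+a)L_j
          G.remove_node(max(load0, key=load0.get))
          while True:
              load = nx.betweenness_centrality(G, normalized=False)
              failed = [n for n in G if load[n] > cap[n]]
              if not failed:
                  return G.number_of_nodes()
              G.remove_nodes_from(failed)

      G = nx.barabasi_albert_graph(200, 2, seed=1)
      print(cascade_survivors(G), "of", G.number_of_nodes(), "nodes survive")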
    
    Muller, H., Michoux, N., Bandon, D. & Geissbuhler, A. A review of content-based image retrieval systems in medical applications - clinical benefits and future directions {2004} INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS
    Vol. {73}({1}), pp. {1-23} 
    article DOI  
    Abstract: Content-based visual information retrieval (CBVIR) or content-based image retrieval (CBIR) has been one of the most vivid research areas in the field of computer vision over the last 10 years. The availability of large and steadily growing amounts of visual and multimedia data, and the development of the Internet underline the need to create thematic access methods that offer more than simple text-based queries or requests based on matching exact database fields. Many programs and tools have been developed to formulate and execute queries based on the visual or audio content and to help browsing large multimedia repositories. Still, no general breakthrough has been achieved with respect to large varied databases with documents of differing sorts and with varying characteristics. Answers to many questions with respect to speed, semantic descriptors or objective image interpretations are still unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. The Radiology Department of the University Hospital of Geneva alone produced more than 12,000 images a day in 2002. Cardiology is currently the second largest producer of digital images, especially with videos of cardiac catheterization (~1800 exams per year containing almost 2000 images each). The total amount of cardiologic image data produced in the Geneva University Hospital was around 1 TB in 2002. Endoscopic videos can equally produce enormous amounts of data. With digital imaging and communications in medicine (DICOM), a standard for image communication has been set and patient information can be stored with the actual image(s), although still a few problems prevail with respect to the standardization. In several articles, content-based access to medical images for supporting clinical decision-making has been proposed that would ease the management of clinical data, and scenarios for the integration of content-based access methods into picture archiving and communication systems (PACS) have been created. This article gives an overview of available literature in the field of content-based access to medical image data and on the technologies used in the field. Section 1 gives an introduction into generic content-based image retrieval and the technologies used. Section 2 explains the propositions for the use of image retrieval in medical practice and the various approaches. Example systems and application areas are described. Section 3 describes the techniques used in the implemented systems, their datasets and evaluations. Section 4 identifies possible clinical benefits of image retrieval systems in clinical practice as well as in research and education. New research directions are being defined that can prove to be useful. This article also identifies explanations to some of the outlined problems in the field, as it looks like many propositions for systems are made from the medical domain and research prototypes are developed in computer science departments using medical datasets. Still, there are very few systems that seem to be used in clinical practice. It needs to be stated as well that the goal is not, in general, to replace text-based retrieval methods as they exist at the moment but to complement them with visual search tools. (C) 2004 Elsevier Ireland Ltd. All rights reserved.
    BibTeX:
    @article{Muller2004,
      author = {Muller, H and Michoux, N and Bandon, D and Geissbuhler, A},
      title = {A review of content-based image retrieval systems in medical applications - clinical benefits and future directions},
      journal = {INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS},
      year = {2004},
      volume = {73},
      number = {1},
      pages = {1-23},
      doi = {{10.1016/j.ijmedinf.2003.11.024}}
    }
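
    At its core, the retrieval loop reviewed above indexes each image by a feature vector and returns the stored images nearest to a query vector. A toy Python sketch (the 4-bin histograms and image names are invented; real systems use far richer visual features):
      import math

      index = {                        # image id -> hypothetical feature vector
          "ct_001": [0.7, 0.2, 0.1, 0.0],
          "ct_002": [0.1, 0.2, 0.3, 0.4],
          "mr_001": [0.5, 0.4, 0.1, 0.0],
      }

      def retrieve(query, k=2):
          """k images whose feature vectors are closest to the query."""
          return sorted(index, key=lambda i: math.dist(query, index[i]))[:k]

      print(retrieve([0.65, 0.25, 0.1, 0.0]))   # ['ct_001', 'mr_001']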
    
    Muller, H., Schloder, F., Stutzki, J. & Winnewisser, G. The Cologne Database for Molecular Spectroscopy, CDMS: a useful tool for astronomers and spectroscopists {2005} JOURNAL OF MOLECULAR STRUCTURE
    Vol. {742}({1-3}), pp. {215-227} 
    article DOI  
    Abstract: The general features of the internet browser-accessible Cologne Database for Molecular Spectroscopy (CDMS) and recent developments in the CDMS are described in the present article. The database consists of several parts; among them is a catalog of transition frequencies from the radio-frequency to the far-infrared region covering atomic and molecular species that (may) occur in the interstellar or circumstellar medium or in planetary atmospheres. As of December 2004, 280 species are present in this catalog. The transition frequencies were predicted from fits of experimental data to established Hamiltonian models. We present some examples to demonstrate how the combination of various input data or a compact representation of the Hamiltonian can be beneficial for the prediction of the line frequencies. (C) 2005 Elsevier B.V. All rights reserved.
    BibTeX:
    @article{Muller2005,
      author = {Muller, HSP and Schloder, F and Stutzki, J and Winnewisser, G},
      title = {The Cologne Database for Molecular Spectroscopy, CDMS: a useful tool for astronomers and spectroscopists},
      journal = {JOURNAL OF MOLECULAR STRUCTURE},
      year = {2005},
      volume = {742},
      number = {1-3},
      pages = {215-227},
      doi = {{10.1016/j.molstruc.2005.01.027}}
    }
    
    Munir, S. & Book, W. Internet-based teleoperation using wave variables with prediction {2002} IEEE-ASME TRANSACTIONS ON MECHATRONICS
    Vol. {7}({2}), pp. {124-133} 
    article  
    Abstract: Wave-based teleoperation has been previously attempted over the Internet; however, performance rapidly deteriorates with increasing delay. This paper focuses on the use of a modified Smith predictor, a Kalman filter and an energy regulator to improve the performance of a wave-based teleoperator. This technique is further extended for use over the Internet, where the time delay is varying and unpredictable. It is shown that the resulting system is stable even if there are large uncertainties in the model of the remote system (used in prediction). Successful experimental results using this technique for teleoperation in a master-slave arrangement over the Internet, where the control signal is streamed between Atlanta (Georgia) and Tokyo (Japan), are also given.
    BibTeX:
    @article{Munir2002,
      author = {Munir, S and Book, WJ},
      title = {Internet-based teleoperation using wave variables with prediction},
      journal = {IEEE-ASME TRANSACTIONS ON MECHATRONICS},
      year = {2002},
      volume = {7},
      number = {2},
      pages = {124-133}
    }
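
    The wave-variable idea this work builds on (in the standard Niemeyer-Slotine form) encodes velocity and force as two travelling waves, which keeps the delayed communication channel passive. A sketch of the transformation and its inverse (the impedance and signal values are arbitrary):
      import math

      def to_wave(xdot, F, b=1.0):
          """Encode velocity xdot and force F as wave variables (impedance b)."""
          u = (b * xdot + F) / math.sqrt(2 * b)   # forward-travelling wave
          v = (b * xdot - F) / math.sqrt(2 * b)   # returning wave
          return u, v

      def from_wave(u, v, b=1.0):
          """Recover velocity and force from the wave variables."""
          xdot = (u + v) / math.sqrt(2 * b)
          F = (u - v) * math.sqrt(b / 2)
          return xdot, F

      print(from_wave(*to_wave(0.5, 2.0)))   # round-trips to (0.5, 2.0)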
    
    Natanson, C., Kern, S.J., Lurie, P., Banks, S.M. & Wolfe, S.M. Cell-free hemoglobin-based blood substitutes and risk of myocardial infarction and death - A meta-analysis {2008} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {299}({19}), pp. {2304-2312} 
    article  
    Abstract: Context Hemoglobin-based blood substitutes (HBBSs) are infusible oxygen-carrying liquids that have long shelf lives, have no need for refrigeration or cross-matching, and are ideal for treating hemorrhagic shock in remote settings. Some trials of HBBSs during the last decade have reported increased risks without clinical benefit. Objective To assess the safety of HBBSs in surgical, stroke, and trauma patients. Data Sources PubMed, EMBASE, and Cochrane Library searches for articles using hemoglobin and blood substitutes from 1980 through March 25, 2008; reviews of Food and Drug Administration (FDA) advisory committee meeting materials; and Internet searches for company press releases. Study Selection Randomized controlled trials including patients aged 19 years and older receiving HBBSs therapeutically. The database searches yielded 70 trials of which 13 met these criteria; in addition, data from 2 other trials were reported in 2 press releases, and additional data were included in 1 relevant FDA review. Data Extraction Data on death and myocardial infarction (MI) as outcome variables. Results Sixteen trials involving 5 different products and 3711 patients in varied patient populations were identified. A test for heterogeneity of the results of these trials was not significant for either mortality or MI (for both, I-2 = 0%, P >= .60), and data were combined using a fixed-effects model. Overall, there was a statistically significant increase in the risk of death (164 deaths in the HBBS-treated groups and 123 deaths in the control groups; relative risk [RR], 1.30; 95% confidence interval [CI], 1.05-1.61) and risk of MI (59 MIs in the HBBS-treated groups and 16 MIs in the control groups; RR, 2.71; 95% CI, 1.67-4.40) with these HBBSs. Subgroup analysis of these trials indicated the increased risk was not restricted to a particular HBBS or clinical indication. Conclusion Based on the available data, use of HBBSs is associated with a significantly increased risk of death and MI.
    BibTeX:
    @article{Natanson2008,
      author = {Natanson, Charles and Kern, Steven J. and Lurie, Peter and Banks, Steven M. and Wolfe, Sidney M.},
      title = {Cell-free hemoglobin-based blood substitutes and risk of myocardial infarction and death - A meta-analysis},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {2008},
      volume = {299},
      number = {19},
      pages = {2304-2312}
    }
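
    Pooled relative risks of the kind reported above are obtained by combining per-trial log relative risks with inverse-variance weights under a fixed-effects model. A Python sketch with two entirely hypothetical trials (not the paper's data):
      import math

      trials = [  # (events_treated, n_treated, events_control, n_control)
          (20, 100, 12, 100),
          (15, 150, 11, 150),
      ]

      num = den = 0.0
      for a, n1, c, n2 in trials:
          log_rr = math.log((a / n1) / (c / n2))
          var = 1/a - 1/n1 + 1/c - 1/n2      # variance of the log relative risk
          num += log_rr / var                # inverse-variance weighted sum
          den += 1 / var
      pooled, se = num / den, math.sqrt(1 / den)
      lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
      print(f"RR {math.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")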
    
    Nauck, D. & Kruse, R. A neuro-fuzzy method to learn fuzzy classification rules from data {1997} FUZZY SETS AND SYSTEMS
    Vol. {89}({3}), pp. {277-288} 
    article  
    Abstract: Neuro-fuzzy systems have recently gained a lot of interest in research and application. Neuro-fuzzy models as we understand them are fuzzy systems that use local learning strategies to learn fuzzy sets and fuzzy rules. Neuro-fuzzy techniques have been developed to support the development of e.g. fuzzy controllers and fuzzy classifiers. In this paper we discuss a learning method for fuzzy classification rules. The learning algorithm is a simple heuristic that is able to derive fuzzy rules from a set of training data very quickly, and tunes them by modifying parameters of membership functions. Our approach is based on NEFCLASS, a neuro-fuzzy model for pattern classification. We also discuss some results obtained by our software implementation of NEFCLASS, which is freely available on the Internet. (C) 1997 Elsevier Science B.V.
    BibTeX:
    @article{Nauck1997,
      author = {Nauck, D and Kruse, R},
      title = {A neuro-fuzzy method to learn fuzzy classification rules from data},
      journal = {FUZZY SETS AND SYSTEMS},
      year = {1997},
      volume = {89},
      number = {3},
      pages = {277-288},
      note = {GI-Workshop on Fuzzy-Neuro-Systems 95 - Theory and Applications, DARMSTADT, GERMANY, NOV 15-17, 1995}
    }
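
    The rule-learning heuristic the abstract describes can be illustrated in a few lines. A toy sketch (not the NEFCLASS implementation, and omitting the membership-function tuning step): fixed triangular fuzzy partitions per feature, rules ranked by summed activation over each class, classification by maximum rule activation:

      from itertools import product

      def tri(x, a, b, c):
          # Triangular membership function peaking at b.
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      # Three fuzzy sets per feature on [0, 1] (an assumed, fixed partition).
      SETS = {"low": (-0.5, 0.0, 0.5), "mid": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.5)}

      def activation(x, antecedent):
          # Fuzzy AND: minimum membership over the rule's antecedents.
          return min(tri(xi, *SETS[lbl]) for xi, lbl in zip(x, antecedent))

      def learn_rules(X, y, per_class=2):
          rules = []
          for cls in set(y):
              score = {ant: sum(activation(x, ant) for x, t in zip(X, y) if t == cls)
                       for ant in product(SETS, repeat=len(X[0]))}
              best = sorted(score, key=score.get, reverse=True)[:per_class]
              rules += [(ant, cls) for ant in best]
          return rules

      def classify(x, rules):
          return max(rules, key=lambda r: activation(x, r[0]))[1]

      X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
      y = ["A", "A", "B", "B"]
      rules = learn_rules(X, y)
      print(classify((0.15, 0.15), rules), classify((0.85, 0.9), rules))  # A B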
    
    New, B., Pallier, C., Ferrand, L. & Matos, R. A lexical database for contemporary french on internet: Lexique {2001} ANNEE PSYCHOLOGIQUE
    Vol. {101}({3}), pp. {447-462} 
    article  
    Abstract: We present a new lexical database of French, named Lexique. Based on a corpus of texts written since 1950 which contained 31 million words, Lexique yields 130 000 entries including the inflected forms of verbs, nouns and adjectives. Each entry provides several kinds of information including frequency, gender, number, phonological form, graphemic and phonemic unicity points. Several tables give additional statistics such as the frequencies of various units: letters, bigrams, trigrams, phonemes and syllables. The database is available for free on the Internet.
    BibTeX:
    @article{New2001,
      author = {New, B and Pallier, C and Ferrand, L and Matos, R},
      title = {A lexical database for contemporary french on internet: Lexique},
      journal = {ANNEE PSYCHOLOGIQUE},
      year = {2001},
      volume = {101},
      number = {3},
      pages = {447-462}
    }
    
    Newman, M. Detecting community structure in networks {2004} EUROPEAN PHYSICAL JOURNAL B
    Vol. {38}({2}), pp. {321-330} 
    article DOI  
    Abstract: There has been considerable recent interest in algorithms for finding communities in networks--groups of vertices within which connections are dense, but between which connections are sparser. Here we review the progress that has been made towards this end. We begin by describing some traditional methods of community detection, such as spectral bisection, the Kernighan-Lin algorithm and hierarchical clustering based on similarity measures. None of these methods, however, is ideal for the types of real-world network data with which current research is concerned, such as Internet and web data and biological and social networks. We describe a number of more recent algorithms that appear to work well with these data, including algorithms based on edge betweenness scores, on counts of short loops in networks and on voltage differences in resistor networks.
    BibTeX:
    @article{Newman2004a,
      author = {Newman, MEJ},
      title = {Detecting community structure in networks},
      journal = {EUROPEAN PHYSICAL JOURNAL B},
      year = {2004},
      volume = {38},
      number = {2},
      pages = {321-330},
      doi = {{10.1140/epjb/e2004-00124-y}}
    }
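
    As an illustration of the betweenness-based algorithms reviewed here, a short sketch using networkx (assumed installed; recent networkx versions also ship a ready-made girvan_newman): repeatedly remove the highest-betweenness edge, recomputing betweenness after every removal, until the network splits:

      import networkx as nx

      def betweenness_split(G, target_parts=2):
          # Remove highest-betweenness edges until G falls into target_parts pieces.
          H = G.copy()
          while nx.number_connected_components(H) < target_parts:
              bet = nx.edge_betweenness_centrality(H)   # recomputed after each removal
              H.remove_edge(*max(bet, key=bet.get))
          return list(nx.connected_components(H))

      G = nx.karate_club_graph()   # classic test network in this literature
      for part in betweenness_split(G):
          print(sorted(part))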
    
    Newman, M. The structure and function of complex networks {2003} SIAM REVIEW
    Vol. {45}({2}), pp. {167-256} 
    article  
    Abstract: Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems. Here we review developments in this field, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.
    BibTeX:
    @article{Newman2003,
      author = {Newman, MEJ},
      title = {The structure and function of complex networks},
      journal = {SIAM REVIEW},
      year = {2003},
      volume = {45},
      number = {2},
      pages = {167-256}
    }
    
    Newman, M. Mixing patterns in networks {2003} PHYSICAL REVIEW E
    Vol. {67}({2, Part 2}) 
    article DOI  
    Abstract: We study assortative mixing in networks, the tendency for vertices in networks to be connected to other vertices that are like (or unlike) them in some way. We consider mixing according to discrete characteristics such as language or race in social networks and scalar characteristics such as age. As a special example of the latter we consider mixing according to vertex degree, i.e., according to the number of connections vertices have to other vertices: do gregarious people tend to associate with other gregarious people? We propose a number of measures of assortative mixing appropriate to the various mixing types, and apply them to a variety of real-world networks, showing that assortative mixing is a pervasive phenomenon found in many networks. We also propose several models of assortatively mixed networks, both analytic ones based on generating function methods, and numerical ones based on Monte Carlo graph generation techniques. We use these models to probe the properties of networks as their level of assortativity is varied. In the particular case of mixing by degree, we find strong variation with assortativity in the connectivity of the network and in the resilience of the network to the removal of vertices.
    BibTeX:
    @article{Newman2003a,
      author = {Newman, MEJ},
      title = {Mixing patterns in networks},
      journal = {PHYSICAL REVIEW E},
      year = {2003},
      volume = {67},
      number = {2, Part 2},
      doi = {{10.1103/PhysRevE.67.026126}}
    }
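
    For the degree-mixing case, the assortativity coefficient reduces to the Pearson correlation of the degrees at either end of an edge. A minimal sketch (networkx assumed) that computes it by hand and checks against the library routine:

      import networkx as nx

      def degree_assortativity(G):
          # Pearson correlation of degrees at the two ends of each edge,
          # with each edge counted in both directions for symmetry.
          xs, ys = [], []
          for u, v in G.edges():
              xs += [G.degree(u), G.degree(v)]
              ys += [G.degree(v), G.degree(u)]
          n = len(xs)
          mx, my = sum(xs) / n, sum(ys) / n
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          var = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
          return cov / var

      G = nx.barabasi_albert_graph(2000, 3, seed=1)
      print(degree_assortativity(G))
      print(nx.degree_assortativity_coefficient(G))   # library value for comparison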
    
    Newman, M. Spread of epidemic disease on networks {2002} PHYSICAL REVIEW E
    Vol. {66}({1, Part 2}) 
    article DOI  
    Abstract: The study of social networks, and in particular the spread of disease on networks, has attracted considerable recent attention in the physics community. In this paper, we show that a large class of standard epidemiological models, the so-called susceptible/infective/removed (SIR) models can be solved exactly on a wide variety of networks. In addition to the standard but unrealistic case of fixed infectiveness time and fixed and uncorrelated probability of transmission between all pairs of individuals, we solve cases in which times and probabilities are nonuniform and correlated. We also consider one simple case of an epidemic in a structured population, that of a sexually transmitted disease in a population divided into men and women. We confirm the correctness of our exact solutions with numerical simulations of SIR epidemics on networks.
    BibTeX:
    @article{Newman2002,
      author = {Newman, MEJ},
      title = {Spread of epidemic disease on networks},
      journal = {PHYSICAL REVIEW E},
      year = {2002},
      volume = {66},
      number = {1, Part 2},
      doi = {{10.1103/PhysRevE.66.016128}}
    }
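
    While the paper solves SIR models exactly via generating functions, the setting is easy to reproduce by simulation. A minimal discrete-time sketch (networkx assumed; per-contact transmission probability beta and a one-step infectious period, all parameters illustrative):

      import random
      import networkx as nx

      def sir(G, beta, seed_node=0, rng=None):
          # Each infected node independently transmits to each susceptible
          # neighbor with probability beta, then recovers after one step.
          rng = rng or random.Random(42)
          susceptible = set(G) - {seed_node}
          infected, recovered = {seed_node}, set()
          while infected:
              new = {v for u in infected for v in G.neighbors(u)
                     if v in susceptible and rng.random() < beta}
              susceptible -= new
              recovered |= infected
              infected = new
          return len(recovered)          # final epidemic size

      G = nx.watts_strogatz_graph(1000, 6, 0.1, seed=1)
      for i, b in enumerate([0.05, 0.2, 0.5]):
          print(b, sir(G, b, rng=random.Random(i)))
      # Epidemic size grows sharply once beta crosses the threshold.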
    
    Newman, M. Scientific collaboration networks. I. Network construction and fundamental results {2001} PHYSICAL REVIEW E
    Vol. {64}({1, Part 2}) 
    article  
    Abstract: Using computer databases of scientific papers in physics, biomedical research, and computer science, we have constructed networks of collaboration between scientists in each of these disciplines. In these networks two scientists are considered connected if they have coauthored one or more papers together. We study a variety of statistical properties of our networks, including numbers of papers written by authors, numbers of authors per paper, numbers of collaborators that scientists have, existence and size of a giant component of connected scientists, and degree of clustering in the networks. We also highlight some apparent differences in collaboration patterns between the subjects studied. In the following paper, we study a number of measures of centrality and connectedness in the same networks.
    BibTeX:
    @article{Newman2001a,
      author = {Newman, MEJ},
      title = {Scientific collaboration networks. I. Network construction and fundamental results},
      journal = {PHYSICAL REVIEW E},
      year = {2001},
      volume = {64},
      number = {1, Part 2}
    }
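
    The network construction itself is one line per paper: connect every pair of coauthors. A toy sketch (networkx assumed; the author lists below stand in for a bibliographic database):

      from itertools import combinations
      import networkx as nx

      papers = [["Alice", "Bob"], ["Bob", "Carol", "Dave"], ["Eve", "Frank"],
                ["Alice", "Carol"], ["Frank", "Grace"]]

      G = nx.Graph()
      for authors in papers:
          G.add_edges_from(combinations(authors, 2))   # link every coauthor pair

      giant = max(nx.connected_components(G), key=len)
      print(f"giant component: {len(giant)}/{G.number_of_nodes()} scientists")
      print(f"clustering coefficient: {nx.average_clustering(G):.3f}")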
    
    Newman, M. Models of the small world {2000} JOURNAL OF STATISTICAL PHYSICS
    Vol. {101}({3-4}), pp. {819-841} 
    article  
    Abstract: It is believed that almost any pair of people in the world can be connected to one another by a short chain of intermediate acquaintances, of typical length about six. This phenomenon, colloquially referred to as the ``six degrees of separation,'' has been the subject of considerable recent interest within the physics community. This paper provides a short review of the topic.
    BibTeX:
    @article{Newman2000,
      author = {Newman, MEJ},
      title = {Models of the small world},
      journal = {JOURNAL OF STATISTICAL PHYSICS},
      year = {2000},
      volume = {101},
      number = {3-4},
      pages = {819-841}
    }
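
    The canonical model in this review is the Watts-Strogatz rewiring construction, which interpolates between a clustered lattice and a random graph. A short sketch (networkx assumed) showing that a little rewiring collapses path lengths while clustering stays high:

      import networkx as nx

      for p in [0.0, 0.01, 0.1, 1.0]:      # rewiring probability: lattice -> random
          G = nx.watts_strogatz_graph(1000, 10, p, seed=1)
          L = nx.average_shortest_path_length(G)
          C = nx.average_clustering(G)
          print(f"p={p:<5} L={L:6.2f}  C={C:.3f}")
      # Small p already gives short paths with near-lattice clustering.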
    
    Newman, M., Forrest, S. & Balthrop, J. Email networks and the spread of computer viruses {2002} PHYSICAL REVIEW E
    Vol. {66}({3, Part 2A}) 
    article DOI  
    Abstract: Many computer viruses spread via electronic mail, making use of computer users' email address books as a source for email addresses of new victims. These address books form a directed social network of connections between individuals over which the virus spreads. Here we investigate empirically the structure of this network using data drawn from a large computer installation, and discuss the implications of this structure for the understanding and prevention of computer virus epidemics.
    BibTeX:
    @article{Newman2002b,
      author = {Newman, MEJ and Forrest, S and Balthrop, J},
      title = {Email networks and the spread of computer viruses},
      journal = {PHYSICAL REVIEW E},
      year = {2002},
      volume = {66},
      number = {3, Part 2A},
      doi = {{10.1103/PhysRevE.66.035101}}
    }
    
    Newman, M. & Girvan, M. Finding and evaluating community structure in networks {2004} PHYSICAL REVIEW E
    Vol. {69}({2, Part 2}) 
    article DOI  
    Abstract: We propose and study a set of algorithms for discovering community structure in networks-natural divisions of network nodes into densely connected subgroups. Our algorithms all share two definitive features: first, they involve iterative removal of edges from the network to split it into communities, the edges removed being identified using any one of a number of possible ``betweenness'' measures, and second, these measures are, crucially, recalculated after each removal. We also propose a measure for the strength of the community structure found by our algorithms, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems.
    BibTeX:
    @article{Newman2004,
      author = {Newman, MEJ and Girvan, M},
      title = {Finding and evaluating community structure in networks},
      journal = {PHYSICAL REVIEW E},
      year = {2004},
      volume = {69},
      number = {2, Part 2},
      doi = {{10.1103/PhysRevE.69.026113}}
    }
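
    The strength measure proposed here is the modularity Q: the fraction of edges inside communities minus the fraction expected if edges fell at random, Q = sum_c [e_c/m - (d_c/2m)^2]. A sketch (networkx assumed) that evaluates it on the karate-club network's known two-faction split and checks against the library implementation:

      import networkx as nx

      def modularity(G, communities):
          # Q = sum over communities of (edges inside)/m - (degree sum / 2m)^2.
          m = G.number_of_edges()
          q = 0.0
          for part in communities:
              part = set(part)
              inside = sum(1 for u, v in G.edges(part) if u in part and v in part)
              degsum = sum(d for _, d in G.degree(part))
              q += inside / m - (degsum / (2 * m)) ** 2
          return q

      G = nx.karate_club_graph()
      split = [{v for v in G if G.nodes[v]["club"] == "Mr. Hi"},
               {v for v in G if G.nodes[v]["club"] != "Mr. Hi"}]
      print(modularity(G, split))                           # manual value
      print(nx.algorithms.community.modularity(G, split))   # library check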
    
    Newman, M. & Park, J. Why social networks are different from other types of networks {2003} PHYSICAL REVIEW E
    Vol. {68}({3, Part 2}) 
    article DOI  
    Abstract: We argue that social networks differ from most other types of networks, including technological and biological networks, in two important ways. First, they have nontrivial clustering or network transitivity and second, they show positive correlations, also called assortative mixing, between the degrees of adjacent vertices. Social networks are often divided into groups or communities, and it has recently been suggested that this division could account for the observed clustering. We demonstrate that group structure in networks can also account for degree correlations. We show using a simple model that we should expect assortative mixing in such networks whenever there is variation in the sizes of the groups and that the predicted level of assortative mixing compares well with that observed in real-world networks.
    BibTeX:
    @article{Newman2003b,
      author = {Newman, MEJ and Park, J},
      title = {Why social networks are different from other types of networks},
      journal = {PHYSICAL REVIEW E},
      year = {2003},
      volume = {68},
      number = {3, Part 2},
      doi = {{10.1103/PhysRevE.68.036122}}
    }
    
    Newman, M., Strogatz, S. & Watts, D. Random graphs with arbitrary degree distributions and their applications {2001} PHYSICAL REVIEW E
    Vol. {64}({2, Part 2}) 
    article  
    Abstract: Recent work on the structure of social networks and the internet has focused attention on graphs with distributions of vertex degree that are significantly different from the Poisson degree distributions that have been widely studied in the past. In this paper we develop in detail the theory of random graphs with arbitrary degree distributions. In addition to simple undirected, unipartite graphs, we examine the properties of directed and bipartite graphs. Among other results, we derive exact expressions for the position of the phase transition at which a giant component first forms, the mean component size, the size of the giant component if there is one, the mean number of vertices a certain distance away from a randomly chosen vertex, and the average vertex-vertex distance within a graph. We apply our theory to some real-world graphs, including the worldwide web and collaboration graphs of scientists and Fortune 1000 company directors. We demonstrate that in some cases random graphs with appropriate distributions of vertex degree predict with surprising accuracy the behavior of the real world, while in others there is a measurable discrepancy between theory and reality, perhaps indicating the presence of additional social structure in the network that is not captured by the random graph.
    BibTeX:
    @article{Newman2001,
      author = {Newman, MEJ and Strogatz, SH and Watts, DJ},
      title = {Random graphs with arbitrary degree distributions and their applications},
      journal = {PHYSICAL REVIEW E},
      year = {2001},
      volume = {64},
      number = {2, Part 2}
    }
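
    The giant-component phase transition derived in the paper can be probed numerically with a configuration-model graph: a giant component appears when <k^2> - 2<k> > 0. A sketch (networkx assumed; the degree distribution and cutoff are illustrative):

      import random
      import networkx as nx

      rng = random.Random(7)
      n = 10000
      # Heavy-tailed degree sequence (illustrative exponent), truncated at 100.
      degrees = [min(int(rng.paretovariate(1.5)), 100) for _ in range(n)]
      if sum(degrees) % 2:          # configuration model needs an even stub count
          degrees[0] += 1

      G = nx.Graph(nx.configuration_model(degrees, seed=7))  # collapse multi-edges
      G.remove_edges_from(nx.selfloop_edges(G))

      k1 = sum(degrees) / n
      k2 = sum(d * d for d in degrees) / n
      giant = max(nx.connected_components(G), key=len)
      print(f"<k^2> - 2<k> = {k2 - 2*k1:.1f}  (positive => giant component expected)")
      print(f"giant component fraction: {len(giant)/n:.2f}")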
    
    Newman, M., Watts, D. & Strogatz, S. Random graph models of social networks {2002} PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
    Vol. {99}({Suppl. 1}), pp. {2566-2572} 
    article DOI  
    Abstract: We describe some new exactly solvable models of the structure of social networks, based on random graphs with arbitrary degree distributions. We give models both for simple unipartite networks, such as acquaintance networks, and bipartite networks, such as affiliation networks. We compare the predictions of our models to data for a number of real-world social networks and find that in some cases, the models are in remarkable agreement with the data, whereas in others the agreement is poorer, perhaps indicating the presence of additional social structure in the network that is not captured by the random graph.
    BibTeX:
    @article{Newman2002a,
      author = {Newman, MEJ and Watts, DJ and Strogatz, SH},
      title = {Random graph models of social networks},
      journal = {PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA},
      year = {2002},
      volume = {99},
      number = {Suppl. 1},
      pages = {2566-2572},
      note = {Colloquium of the National-Academy-of-Science on Self-Organized Complexity in the Physical, Biological, and Social Sciences, IRVINE, CALIFORNIA, MAR 23-24, 2001},
      doi = {{10.1073/pnas.012582999}}
    }
    
    Nickovic, S., Kallos, G., Papadopoulos, A. & Kakaliagou, O. A model for prediction of desert dust cycle in the atmosphere {2001} JOURNAL OF GEOPHYSICAL RESEARCH-ATMOSPHERES
    Vol. {106}({D16}), pp. {18113-18129} 
    article  
    Abstract: An integrated modeling system has been developed to accurately describe the dust cycle in the atmosphere. It is based on the SKIRON/Eta modeling system and the Eta/NCEP regional atmospheric model. The dust modules of the entire system incorporate state-of-the-art parameterizations of all the major phases of the atmospheric dust life cycle, such as production, diffusion, advection, and removal. These modules also include effects of the particle size distribution on aerosol dispersion. The dust production mechanism is based on the viscous/turbulent mixing, shear-free convection diffusion, and soil moisture. In addition to these sophisticated mechanisms, very high resolution databases, including elevation, soil properties, and vegetation cover, are utilized. The entire system is easily configurable and transferable to any place on the Earth, it can cover domains of almost any size, and its horizontal resolution can vary from about 100 km to approximately 4 km. It can run in one-way-nested form if necessary. The performance of the system has been tested for various dust storm episodes, in various places and at various resolutions, using gridded analysis or forecasting fields from various sources (ECMWF and NCEP) for initial and boundary conditions. The system has been in operational use for the last two years, providing 72 hour forecasts for the Mediterranean region. The results are available on the Internet (http://www.icod.org.mt and http://forecast.uoa.gr).
    BibTeX:
    @article{Nickovic2001,
      author = {Nickovic, S and Kallos, G and Papadopoulos, A and Kakaliagou, O},
      title = {A model for prediction of desert dust cycle in the atmosphere},
      journal = {JOURNAL OF GEOPHYSICAL RESEARCH-ATMOSPHERES},
      year = {2001},
      volume = {106},
      number = {D16},
      pages = {18113-18129}
    }
    
    Nilsson, S. & Karlsson, G. IP-address lookup using LC-tries {1999} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {17}({6}), pp. {1083-1092} 
    article  
    Abstract: There has recently been a notable interest in the organization of routing information to enable fast lookup of IP addresses. The interest is primarily motivated by the goal of building multigigabit routers for the Internet, without having to rely on multilayer switching techniques. We address this problem by using an LC-trie, a trie structure with combined path and level compression. This data structure enables us to build efficient, compact, and easily searchable implementations of an IP-routing table. The structure can store both unicast and multicast addresses with the same average search times. The search depth increases as Theta(log log n) with the number of entries in the table for a large class of distributions, and it is independent of the length of the addresses. A node in the trie can be coded with four bytes. Only the size of the base vector, which contains the search strings, grows linearly with the length of the addresses when extended from 4 to 16 bytes, as mandated by the shift from IP version 4 to IP version 6. We present the basic structure as well as an adaptive version that roughly doubles the number of lookups/s. More general classifications of packets that are needed for link sharing, quality-of-service provisioning, and multicast and multipath routing are also discussed. Our experimental results compare favorably with those reported previously in the research literature.
    BibTeX:
    @article{Nilsson1999,
      author = {Nilsson, S and Karlsson, G},
      title = {IP-address lookup using LC-tries},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {1999},
      volume = {17},
      number = {6},
      pages = {1083-1092}
    }
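
    The underlying structure is a binary trie searched by longest-prefix match; the paper's contribution is compressing its paths and levels. A sketch of the uncompressed baseline (pure Python, IPv4 routes held as integers; routes and next hops invented):

      class TrieNode:
          __slots__ = ("children", "next_hop")
          def __init__(self):
              self.children = [None, None]   # one branch per address bit
              self.next_hop = None           # set if a route terminates here

      def insert(root, prefix, length, next_hop):
          # prefix is an int holding the top `length` bits of an IPv4 route.
          node = root
          for i in range(length):
              bit = (prefix >> (31 - i)) & 1
              if node.children[bit] is None:
                  node.children[bit] = TrieNode()
              node = node.children[bit]
          node.next_hop = next_hop

      def lookup(root, addr):
          # Walk the trie, remembering the last next hop seen (longest match).
          node, best = root, None
          for i in range(32):
              if node.next_hop is not None:
                  best = node.next_hop
              node = node.children[(addr >> (31 - i)) & 1]
              if node is None:
                  return best
          return node.next_hop or best

      def ip(s):
          a, b, c, d = map(int, s.split("."))
          return (a << 24) | (b << 16) | (c << 8) | d

      root = TrieNode()
      insert(root, ip("10.0.0.0"), 8, "gw-A")
      insert(root, ip("10.1.0.0"), 16, "gw-B")
      print(lookup(root, ip("10.1.2.3")), lookup(root, ip("10.9.9.9")))  # gw-B gw-A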
    
    Nishikawa, T., Motter, A., Lai, Y. & Hoppensteadt, F. Heterogeneity in oscillator networks: Are smaller worlds easier to synchronize? {2003} PHYSICAL REVIEW LETTERS
    Vol. {91}({1}) 
    article DOI  
    Abstract: Small-world and scale-free networks are known to be more easily synchronized than regular lattices, which is usually attributed to the smaller network distance between oscillators. Surprisingly, we find that networks with a homogeneous distribution of connectivity are more synchronizable than heterogeneous ones, even though the average network distance is larger. We present numerical computations and analytical estimates on synchronizability of the network in terms of its heterogeneity parameters. Our results suggest that some degree of homogeneity is expected in naturally evolved structures, such as neural networks, where synchronizability is desirable.
    BibTeX:
    @article{Nishikawa2003,
      author = {Nishikawa, T and Motter, AE and Lai, YC and Hoppensteadt, FC},
      title = {Heterogeneity in oscillator networks: Are smaller worlds easier to synchronize?},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2003},
      volume = {91},
      number = {1},
      doi = {{10.1103/PhysRevLett.91.014101}}
    }
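
    Synchronizability in this line of work is commonly scored by the Laplacian eigenratio lambda_max/lambda_2 (smaller means easier to synchronize). A sketch of the homogeneous-versus-heterogeneous comparison (networkx, numpy and scipy assumed; network sizes illustrative):

      import networkx as nx
      import numpy as np

      def eigenratio(G):
          # lambda_max / lambda_2 of the graph Laplacian (G must be connected).
          lam = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
          return lam[-1] / lam[1]

      homog = nx.random_regular_graph(6, 500, seed=1)     # narrow degree distribution
      heterog = nx.barabasi_albert_graph(500, 3, seed=1)  # scale-free, heterogeneous
      print(f"regular:    {eigenratio(homog):7.1f}")
      print(f"scale-free: {eigenratio(heterog):7.1f}")    # typically a larger ratio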
    
    Noh, J. & Rieger, H. Random walks on complex networks {2004} PHYSICAL REVIEW LETTERS
    Vol. {92}({11}) 
    article DOI  
    Abstract: We investigate random walks on complex networks and derive an exact expression for the mean first-passage time (MFPT) between two nodes. We introduce for each node the random walk centrality C, which is the ratio between its coordination number and a characteristic relaxation time, and show that it determines essentially the MFPT. The centrality of a node determines the relative speed by which a node can receive and spread information over the network in a random process. Numerical simulations of an ensemble of random walkers moving on paradigmatic network models confirm this analytical prediction.
    BibTeX:
    @article{Noh2004,
      author = {Noh, JD and Rieger, H},
      title = {Random walks on complex networks},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2004},
      volume = {92},
      number = {11},
      doi = {{10.1103/PhysRevLett.92.118701}}
    }
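
    The MFPT and the degree dependence it encodes are easy to probe by simulation. A Monte Carlo sketch (networkx assumed; walk counts illustrative) showing that hubs, which carry high random walk centrality, are reached far faster than peripheral nodes:

      import random
      import networkx as nx

      def mfpt(G, source, target, walks=500, rng=random.Random(0)):
          # Monte Carlo estimate of the mean first-passage time source -> target.
          total = 0
          for _ in range(walks):
              node, steps = source, 0
              while node != target:
                  node = rng.choice(list(G.neighbors(node)))   # unbiased walk
                  steps += 1
              total += steps
          return total / walks

      G = nx.barabasi_albert_graph(200, 2, seed=3)
      hub = max(G, key=G.degree)
      leaf = min(G, key=G.degree)
      print(f"MFPT into hub:  {mfpt(G, leaf, hub):7.1f}")
      print(f"MFPT into leaf: {mfpt(G, hub, leaf):7.1f}")  # hubs are hit much faster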
    
    Nosek, B., Banaji, M. & Greenwald, A. Harvesting implicit group attitudes and beliefs from a demonstration web site {2002} GROUP DYNAMICS-THEORY RESEARCH AND PRACTICE
    Vol. {6}({1}), pp. {101-115} 
    article DOI  
    Abstract: Respondents at an Internet site completed over 600,000 tasks between October 1998 and April 2000 measuring attitudes toward and stereotypes of social groups. Their responses demonstrated, on average, implicit preference for White over Black and young over old and stereotypic associations linking male terms with science and career and female terms with liberal arts and family. The main purpose was to provide a demonstration site at which respondents could experience their implicit attitudes and stereotypes toward social groups. Nevertheless, the data collected are rich in information regarding the operation of attitudes and stereotypes, most notably the strength of implicit attitudes, the association and dissociation between implicit and explicit attitudes, and the effects of group membership on attitudes and stereotypes.
    BibTeX:
    @article{Nosek2002,
      author = {Nosek, BA and Banaji, MR and Greenwald, AG},
      title = {Harvesting implicit group attitudes and beliefs from a demonstration web site},
      journal = {GROUP DYNAMICS-THEORY RESEARCH AND PRACTICE},
      year = {2002},
      volume = {6},
      number = {1},
      pages = {101-115},
      doi = {{10.1037//1089-2699.6.1.101}}
    }
    
    Nosek, B., Greenwald, A. & Banaji, M. Understanding and using the Implicit Association Test: II. Method variables and construct validity {2005} PERSONALITY AND SOCIAL PSYCHOLOGY BULLETIN
    Vol. {31}({2}), pp. {166-180} 
    article DOI  
    Abstract: The Implicit Association Test (IAT) assesses relative strengths of four associations involving two pairs of contrasted concepts (e.g., male-female and family-career). In four studies, analyses of data from 11 Web IATs, averaging 12,000 respondents per data set, supported the following conclusions: (a) Sorting IAT trials into subsets does not yield conceptually distinct measures; (b) valid IAT measures can be produced using as few as two items to represent each concept; (c) there are conditions for which the administration order of IAT and self-report measures does not alter psychometric properties of either measure; and (d) a known extraneous effect of IAT task block order was sharply reduced by using extra practice trials. Together, these analyses provide additional construct validation for the IAT and suggest practical guidelines to users of the IAT.
    BibTeX:
    @article{Nosek2005,
      author = {Nosek, BA and Greenwald, AG and Banaji, MR},
      title = {Understanding and using the Implicit Association Test: II. Method variables and construct validity},
      journal = {PERSONALITY AND SOCIAL PSYCHOLOGY BULLETIN},
      year = {2005},
      volume = {31},
      number = {2},
      pages = {166-180},
      doi = {{10.1177/0146167204271418}}
    }
    
    Novak, T., Hoffman, D. & Yung, Y. Measuring the customer experience in online environments: A structural modeling approach {2000} MARKETING SCIENCE
    Vol. {19}({1}), pp. {22-42} 
    article  
    Abstract: Intuition and previous research suggest that creating a compelling online environment for Web consumers will have numerous positive consequences for commercial Web providers. Online executives note that creating a compelling online experience for cyber customers is critical to creating competitive advantage on the Internet. Yet, very little is known about the factors that make using the Web a compelling experience for its users, and of the key consumer behavior outcomes of this compelling experience. Recently, the flow construct has been proposed as important for understanding consumer behavior on the World Wide Web, and as a way of defining the nature of compelling online experience. Although widely studied over the past 20 years, quantitative modeling efforts of the flow construct have been neither systematic nor comprehensive. In large part, these efforts have been hampered by considerable confusion regarding the exact conceptual definition of flow. Lacking precise definition, it has been difficult to measure flow empirically, let alone apply the concept in practice. Following the conceptual model of flow proposed by Hoffman and Novak (1996), we conceptualize flow on the Web as a cognitive state experienced during navigation that is determined by (1) high levels of skill and control; (2) high levels of challenge and arousal; and (3) focused attention; and (4) is enhanced by interactivity and telepresence. Consumers who achieve flow on the Web are so acutely involved in the act of online navigation that thoughts and perceptions not relevant to navigation are screened out, and the consumer focuses entirely on the interaction. Concentration on the navigation experience is so intense that there is little attention left to consider anything else, and consequently, other events occurring in the consumer's surrounding physical environment lose significance. Self-consciousness disappears, the consumer's sense of time becomes distorted, and the state of mind arising as a result of achieving flow on the Web is extremely gratifying. In a quantitative modeling framework, we develop a structural model based on our previous conceptual model of flow that embodies the components of what makes for a compelling online experience. We use data collected from a large-sample, Web-based consumer survey to measure these constructs, and we fit a series of structural equation models that test related prior theory. The conceptual model is largely supported, and the improved fit offered by the revised model provides additional insights into the direct and indirect influences of flow, as well as into the relationship of flow to key consumer behavior and Web usage variables. Our formulation provides marketing scientists with operational definitions of key model constructs and establishes reliability and validity in a comprehensive measurement framework. A key insight from the paper is that the degree to which the online experience is compelling can be defined, measured, and related well to important marketing variables. Our model constructs relate in significant ways to key consumer behavior variables, including online shopping and Web use applications such as the extent to which consumers search for product information and participate in chat rooms. As such, our model may be useful both theoretically and in practice as marketers strive to decipher the secrets of commercial success in interactive online environments.
    BibTeX:
    @article{Novak2000,
      author = {Novak, TP and Hoffman, DL and Yung, YF},
      title = {Measuring the customer experience in online environments: A structural modeling approach},
      journal = {MARKETING SCIENCE},
      year = {2000},
      volume = {19},
      number = {1},
      pages = {22-42}
    }
    
    O'Mahony, M., Simeonidou, D., Hunter, D. & Tzanakaki, A. The application of optical packet switching in future communication networks {2001} IEEE COMMUNICATIONS MAGAZINE
    Vol. {39}({3}), pp. {128-135} 
    article  
    Abstract: Telecommunication networks are experiencing a dramatic increase in demand for capacity, much of it related to the exponential takeup of the Internet and associated services. To support this demand economically, transport networks are evolving to provide a reconfigurable optical layer which, with optical cross-connects, will realize a high-bandwidth flexible core. As well as providing large capacity, this new layer will be required to support new services such as rapid provisioning of an end-to-end connection under customer control. The first phase of network evolution, therefore, will provide a circuit-switched optical layer characterized by high capacity and fast circuit provisioning. In the longer term, it is currently envisaged that the bandwidth efficiency associated with optical packet switching (a transport technology that matches the bursty nature of multimedia traffic) will be required to ensure economic use of network resources. This article considers possible network application scenarios for optical packet switching. In particular, it focuses on the concept of an optical packet router as an edge network device, functioning as an interface between the electronic and optical domains. In this application it can provide a scalable and efficient IP traffic aggregator that may provide greater flexibility and efficiency than an electronic terabit router with reduced cost. The discussion considers the main technical issues relating to the concept and its implementation.
    BibTeX:
    @article{O'Mahony2001,
      author = {O'Mahony, MJ and Simeonidou, D and Hunter, DK and Tzanakaki, A},
      title = {The application of optical packet switching in future communication networks},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {2001},
      volume = {39},
      number = {3},
      pages = {128-135}
    }
    
    Ohmori, S., Yamao, Y. & Nakajima, N. The future generations of mobile communications based on broadband access technologies {2000} IEEE COMMUNICATIONS MAGAZINE
    Vol. {38}({12}), pp. {134-142} 
    article  
    Abstract: The forthcoming mobile communication systems are expected to provide a wide variety of services, from high-quality voice to high-definition videos, through high-data-rate wireless channels anywhere in the world. The high data rate requires broad frequency bands, and sufficient broadband can be achieved in higher frequency bands such as microwave, Ka-band, and millimeter-wave. Broadband wireless channels have to be connected to broadband fixed networks such as the Internet and local area networks. The future-generation systems will include not only cellular phones, but also many new types of communication systems such as broadband wireless access systems, millimeter-wave LANs, intelligent transport systems, and high altitude stratospheric platform station systems. Key to the future generations of mobile communications are multimedia communications, wireless access to broad-band fixed networks, and seamless roaming among different systems. This article discusses future-generation mobile communication systems.
    BibTeX:
    @article{Ohmori2000,
      author = {Ohmori, S and Yamao, Y and Nakajima, N},
      title = {The future generations of mobile communications based on broadband access technologies},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {2000},
      volume = {38},
      number = {12},
      pages = {134-142}
    }
    
    Pal, M. & Mather, P. An assessment of the effectiveness of decision tree methods for land cover classification {2003} REMOTE SENSING OF ENVIRONMENT
    Vol. {86}({4}), pp. {554-565} 
    article DOI  
    Abstract: Choice of a classification algorithm is generally based upon a number of factors, among which are availability of software, ease of use, and performance, measured here by overall classification accuracy. The maximum likelihood (ML) procedure is, for many users, the algorithm of choice because of its ready availability and the fact that it does not require an extended training process. Artificial neural networks (ANNs) are now widely used by researchers, but their operational applications are hindered by the need for the user to specify the configuration of the network architecture and to provide values for a number of parameters, both of which affect performance. The ANN also requires an extended training phase. In the past few years, the use of decision trees (DTs) to classify remotely sensed data has increased. Proponents of the method claim that it has a number of advantages over the ML and ANN algorithms. The DT is computationally fast, makes no statistical assumptions, and can handle data that are represented on different measurement scales. Software to implement DTs is readily available over the Internet. Pruning of DTs can make them smaller and more easily interpretable, while the use of boosting techniques can improve performance. In this study, separate test and training data sets from two different geographical areas and two different sensors-multispectral Landsat ETM+ and hyperspectral DAIS-are used to evaluate the performance of univariate and multivariate DTs for land cover classification. Factors considered are: the effects of variations in training data set size and of the dimensionality of the feature space, together with the impact of boosting, attribute selection measures, and pruning. The level of classification accuracy achieved by the DT is compared to results from back-propagating ANN and the ML classifiers. Our results indicate that the performance of the univariate DT is acceptably good in comparison with that of other classifiers, except with high-dimensional data. Classification accuracy increases linearly with training data set size to a limit of 300 pixels per class in this case. Multivariate DTs do not appear to perform better than univariate DTs. While boosting produces an increase in classification accuracy of between 3% and 6%, the use of attribute selection methods does not appear to be justified in terms of accuracy increases. However, neither the univariate DT nor the multivariate DT performed as well as the ANN or ML classifiers with high-dimensional data. (C) 2003 Elsevier Inc. All rights reserved.
    BibTeX:
    @article{Pal2003,
      author = {Pal, M and Mather, PM},
      title = {An assessment of the effectiveness of decision tree methods for land cover classification},
      journal = {REMOTE SENSING OF ENVIRONMENT},
      year = {2003},
      volume = {86},
      number = {4},
      pages = {554-565},
      doi = {{10.1016/S0034-4257(03)00132-9}}
    }
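
    A univariate decision tree of the kind evaluated here is a few lines with scikit-learn (assumed installed). A sketch on synthetic "multispectral" pixels, tracing the accuracy-versus-training-size behavior the paper measures; the data and sizes are illustrative:

      from sklearn.tree import DecisionTreeClassifier
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split

      # Synthetic stand-in for multispectral pixels: 6 "bands", 4 cover classes.
      X, y = make_classification(n_samples=4000, n_features=6, n_informative=5,
                                 n_redundant=0, n_classes=4, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

      for n in [50, 100, 300, 1000]:    # total training pixels, by analogy with
          clf = DecisionTreeClassifier(random_state=0).fit(X_tr[:n], y_tr[:n])
          print(f"train size {n:4d}: accuracy {clf.score(X_te, y_te):.3f}")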
    
    Palmer, J. Web site usability, design, and performance metrics {2002} INFORMATION SYSTEMS RESEARCH
    Vol. {13}({2}), pp. {151-167} 
    article  
    Abstract: Web sites provide the key interface for consumer use of the Internet. This research reports on a series of three studies that develop and validate Web site usability, design and performance metrics, including download delay, navigability, site content, interactivity, and responsiveness. The performance metric that was developed includes the subconstructs user satisfaction, the likelihood of return, and the frequency of use. Data was collected in 1997, 1999, and 2000 from corporate Web sites via three methods, namely, a jury, third-party ratings, and a software agent. Significant associations between Web site design elements and Web site performance indicate that the constructs demonstrate good nomological validity. Together, the three studies provide a set of measures with acceptable validity and reliability. The findings also suggest lack of significant common methods biases across the jury-collected data, third-party data, and agent-collected data. Results suggest that Web site success is a first-order construct. Moreover, Web site success is significantly associated with Web site download delay (speed of access and display rate within the Web site), navigation (organization, arrangement, layout, and sequencing), content (amount and variety of product information), interactivity (customization and interactivity), and responsiveness (feedback options and FAQs).
    BibTeX:
    @article{Palmer2002,
      author = {Palmer, JW},
      title = {Web site usability, design, and performance metrics},
      journal = {INFORMATION SYSTEMS RESEARCH},
      year = {2002},
      volume = {13},
      number = {2},
      pages = {151-167}
    }
    
    Paolucci, M., Kawamura, T., Payne, T. & Sycara, K. Semantic matching of Web services capabilities {2002}
    Vol. {2342}, SEMANTIC WEB - ISWC 2002, pp. {333-347} 
    inproceedings  
    Abstract: The Web is moving from being a collection of pages toward a collection of services that interoperate through the Internet. The first step toward this interoperation is the location of other services that can help toward the solution of a problem. In this paper we claim that location of web services should be based on the semantic match between a declarative description of the service being sought, and a description of the service being offered. Furthermore, we claim that this match is outside the representation capabilities of registries such as UDDI and languages such as WSDL. We propose a solution based on DAML-S, a DAML-based language for service description, and we show how service capabilities are presented in the Profile section of a DAML-S description and how a semantic match between advertisements and requests is performed.
    BibTeX:
    @inproceedings{Paolucci2002,
      author = {Paolucci, M and Kawamura, T and Payne, TR and Sycara, K},
      title = {Semantic matching of Web services capabilities},
      booktitle = {SEMANTIC WEB - ISWC 2002},
      year = {2002},
      volume = {2342},
      pages = {333-347},
      note = {1st International Semantic Web Conference (ISWC), SARDINIA, ITALY, JUN 09-12, 2002}
    }
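
    The core of the matchmaker is a degree-of-match ranking between advertised and requested concepts: exact > plug-in > subsumes > fail. A toy sketch over a hand-built subclass hierarchy (the real system reasons over DAML-S ontologies; the concept names here are invented):

      # Toy subclass hierarchy: child -> parent (single inheritance for brevity).
      PARENT = {"Sedan": "Car", "Car": "Vehicle", "Truck": "Vehicle"}

      def ancestors(c):
          out = set()
          while c in PARENT:
              c = PARENT[c]
              out.add(c)
          return out

      def degree_of_match(advertised, requested):
          # Ranking in the spirit of DAML-S matchmaking.
          if advertised == requested:
              return "exact"
          if advertised in ancestors(requested):   # advert subsumes the request
              return "plug-in"
          if requested in ancestors(advertised):   # request subsumes the advert
              return "subsumes"
          return "fail"

      for adv in ["Car", "Vehicle", "Sedan", "Truck"]:
          print(adv, "->", degree_of_match(adv, "Car"))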
    
    Papacharissi, Z. & Rubin, A. Predictors of Internet use {2000} JOURNAL OF BROADCASTING & ELECTRONIC MEDIA
    Vol. {44}({2}), pp. {175-196} 
    article  
    Abstract: We examined audience uses of the Internet from a uses-and-gratifications perspective. We expected contextual age, unwillingness to communicate, social presence, and Internet motives to predict outcomes of Internet exposure, affinity, and satisfaction. The analyses identified five motives for using the Internet and multivariate links among the antecedents and motives. The results suggested distinctions between instrumental and ritualized Internet use, as well as Internet use serving as a functional alternative to face-to-face interaction.
    BibTeX:
    @article{Papacharissi2000,
      author = {Papacharissi, Z and Rubin, AM},
      title = {Predictors of Internet use},
      journal = {JOURNAL OF BROADCASTING & ELECTRONIC MEDIA},
      year = {2000},
      volume = {44},
      number = {2},
      pages = {175-196},
      note = {Meeting of the National-Communication-Association, NEW YORK, NEW YORK, NOV, 1998}
    }
    
    Papadimitriou, G., Papazoglou, C. & Pomportsis, A. Optical switching: Switch fabrics, techniques, and architectures {2003} JOURNAL OF LIGHTWAVE TECHNOLOGY
    Vol. {21}({2}), pp. {384-405} 
    article DOI  
    Abstract: The switching speeds of electronics cannot keep up with the transmission capacity offered by optics. All-optical switch fabrics play a central role in the effort to migrate the switching functions to the optical layer. Optical packet switching provides an almost arbitrary fine granularity but faces significant challenges in the processing and buffering of bits at high speeds. Generalized multiprotocol label switching seeks to eliminate the asynchronous transfer mode and synchronous optical network layers, thus implementing Internet protocol over wavelength-division multiplexing. Optical burst switching attempts to minimize the need for processing and buffering by aggregating flows of data packets into bursts. In this paper, we present an extensive overview of the current technologies and techniques concerning optical switching.
    BibTeX:
    @article{Papadimitriou2003,
      author = {Papadimitriou, GI and Papazoglou, C and Pomportsis, AS},
      title = {Optical switching: Switch fabrics, techniques, and architectures},
      journal = {JOURNAL OF LIGHTWAVE TECHNOLOGY},
      year = {2003},
      volume = {21},
      number = {2},
      pages = {384-405},
      doi = {{10.1109/JLT.2003.808766}}
    }
    
    Park, K. & Kanehisa, M. Prediction of protein subcellular locations by support vector machines using compositions of amino acids and amino acid pairs {2003} BIOINFORMATICS
    Vol. {19}({13}), pp. {1656-1663} 
    article DOI  
    Abstract: Motivation: The subcellular location of a protein is closely correlated to its function. Thus, computational prediction of subcellular locations from the amino acid sequence information would help annotation and functional prediction of protein coding genes in complete genomes. We have developed a method based on support vector machines (SVMs). Results: We considered 12 subcellular locations in eukaryotic cells: chloroplast, cytoplasm, cytoskeleton, endoplasmic reticulum, extracellular medium, Golgi apparatus, lysosome, mitochondrion, nucleus, peroxisome, plasma membrane, and vacuole. We constructed a data set of proteins with known locations from the SWISS-PROT database. A set of SVMs was trained to predict the subcellular location of a given protein based on its amino acid, amino acid pair, and gapped amino acid pair compositions. The predictors based on these different compositions were then combined using a voting scheme. Results obtained through 5-fold cross-validation tests showed an improvement in prediction accuracy over the algorithm based on the amino acid composition only. This prediction method is available via the Internet.
    BibTeX:
    @article{Park2003,
      author = {Park, KJ and Kanehisa, M},
      title = {Prediction of protein subcellular locations by support vector machines using compositions of amino acids and amino acid pairs},
      journal = {BIOINFORMATICS},
      year = {2003},
      volume = {19},
      number = {13},
      pages = {1656-1663},
      doi = {{10.1093/bioinformatics/btg222}}
    }
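
    The amino-acid-composition predictor is straightforward to sketch with scikit-learn (assumed installed). The sequences and labels below are invented toys; the paper additionally uses amino acid pair and gapped-pair compositions, combined by voting:

      from collections import Counter
      from sklearn.svm import SVC

      AMINO = "ACDEFGHIKLMNPQRSTVWY"

      def composition(seq):
          # Fraction of each of the 20 amino acids in the sequence.
          counts = Counter(seq)
          return [counts.get(a, 0) / len(seq) for a in AMINO]

      # Toy training data; real work would use labeled SWISS-PROT sequences.
      seqs = ["MKKLLLAAA", "MKRRRKKRK", "MAAALLLIV", "MRKKRRKRK"]
      locs = ["cytoplasm", "nucleus", "cytoplasm", "nucleus"]

      clf = SVC(kernel="linear").fit([composition(s) for s in seqs], locs)
      print(clf.predict([composition("MKKKRRRRK")]))   # expected: ['nucleus']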
    
    Park, S., Lee, C., Jeong, K., Park, H., Ahn, J. & Song, K. Fiber-to-the-home services based on wavelength-division-multiplexing passive optical network {2004} JOURNAL OF LIGHTWAVE TECHNOLOGY
    Vol. {22}({11}), pp. {2582-2591} 
    article DOI  
    Abstract: It is anticipated that more than 75 Mb/s per subscriber is required for the convergence service such as triple-play service (TPS). Among several types of high-speed access network technologies, wavelength-division-multiplexing passive optical network (WDM-PON) is the most favorable for the required bandwidth in the near future. Furthermore, WDM technologies, such as athermal arrayed-waveguide grating (AWG) and low-cost light source, have matured enough to be applied in the access network. In this paper, the authors propose and implement a WDM-PON system as a platform for TPS. The system employs an amplified spontaneous emission (ASE)-injected Fabry-Perot laser diode scheme. It has 32 channels of 125 Mb/s and adopts Ethernet as Layer 2. Multicast and virtual local area network features are used for the integration of services such as Internet protocol high-definition broadcast, voice-over Internet protocol, video on demand, and video telephone. The services were demonstrated using the WDM-PON system.
    BibTeX:
    @article{Park2004,
      author = {Park, SJ and Lee, CH and Jeong, KT and Park, HJ and Ahn, JG and Song, KH},
      title = {Fiber-to-the-home services based on wavelength-division-multiplexing passive optical network},
      journal = {JOURNAL OF LIGHTWAVE TECHNOLOGY},
      year = {2004},
      volume = {22},
      number = {11},
      pages = {2582-2591},
      doi = {{10.1109/JLT.2004.834504}}
    }
    
    Parker, R., Williams, M., Weiss, B., Baker, D., Davis, T., Doak, C., Doak, L., Hein, K., Meade, C., Nurss, J., Schwartzberg, J., Somers, S., Davis, R., Riggs, J., Champion, H., Howe, J., Altman, R., Deitchman, S., Genel, M., Karlan, M., Khan, M., Nielsen, N., Williams, M., Young, D., Schwartzberg, J., Bresolin, L., Dickinson, B. & Amer Med Assoc Health literacy - Report of the Council on Scientific Affairs {1999} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {281}({6}), pp. {552-557} 
    article  
    Abstract: Context Patients with the greatest health care needs may have the least ability to read and comprehend information needed to function successfully as patients. Objective To examine the scope and consequences of poor health literacy in the United States, characterize its implications for patients and physicians, and identify policy and research issues. Participants The 12 members of the Ad Hoc Committee on Health Literacy, American Medical Association Council on Scientific Affairs, were selected by a key informant process as experts in the field of health literacy from a variety of backgrounds in clinical medicine, medical and health services research, medical education, psychology, adult literacy, nursing, and health education. Evidence Literature review using the MEDLINE database for January 1966 through October 1, 1996, searching Medical Subject Heading (MeSH) reading combined with text words health or literacy in the title, abstract, or MeSH. A subsequent search using reading as a search term identified articles published between 1993 and August 1998. Authors of relevant published abstracts were asked to provide manuscripts. Experts in health services research, health education, and medical law identified proprietary and other unpublished references. Consensus Process Consensus among committee members was reached through review of 216 published articles and additional unpublished manuscripts and telephone and Internet conferencing. All committee members approved the final report. Conclusions Patients with inadequate health literacy have a complex array of communications difficulties, which may interact to influence health outcome. These patients report worse health status and have less understanding about their medical conditions and treatment. Preliminary studies indicate inadequate health literacy may increase the risk of hospitalization. Professional and public awareness of the health literacy issue must be increased, beginning with education of medical students and physicians and improved patient-physician communication skills. Future research should focus on optimal methods of screening patients to identify those with poor health literacy, effective health education techniques, outcomes and costs associated with poor health literacy, and the causal pathway of how poor health literacy influences health.
    BibTeX:
    @article{Parker1999,
      author = {Parker, RM and Williams, MV and Weiss, BD and Baker, DW and Davis, TC and Doak, CC and Doak, LG and Hein, K and Meade, CD and Nurss, J and Schwartzberg, JG and Somers, SA and Davis, RM and Riggs, JA and Champion, HC and Howe, JP and Altman, RD and Deitchman, SD and Genel, M and Karlan, MS and Khan, MK and Nielsen, NH and Williams, MA and Young, DC and Schwartzberg, J and Bresolin, LB and Dickinson, BD and Amer Med Assoc},
      title = {Health literacy - Report of the Council on Scientific Affairs},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {1999},
      volume = {281},
      number = {6},
      pages = {552-557}
    }
    
    Paschalidis, I. & Tsitsiklis, J. Congestion-dependent pricing of network services {2000} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {8}({2}), pp. {171-184} 
    article  
    Abstract: We consider a service provider (SP) who provides access to a communication network or some other form of on-line services. Users initiate calls that belong to a set of diverse service classes, differing in resource requirements, demand pattern, and call duration. The SP charges a fee per call, which can depend on the current congestion level, and which affects users' demand for calls. We provide a dynamic programming formulation of the problems of revenue and welfare maximization, and derive some qualitative properties of the optimal solution. We also provide a number of approximate approaches, together with an analysis that indicates that near-optimality is obtained for the case of many, relatively small, users. In particular, we show analytically as well as computationally, that the performance of an optimal pricing strategy is closely matched by a suitably chosen static price, which does not depend on instantaneous congestion. This indicates that the easily implementable time-of-day pricing will often suffice. Throughout, we compare the alternative formulations involving revenue or welfare maximization, respectively, and draw some qualitative conclusions.
    BibTeX:
    @article{Paschalidis2000,
      author = {Paschalidis, IC and Tsitsiklis, JN},
      title = {Congestion-dependent pricing of network services},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {2000},
      volume = {8},
      number = {2},
      pages = {171-184}
    }
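
    The dynamic programming formulation can be illustrated on a heavily simplified single-class version (not the paper's exact model): C circuits, price-dependent Poisson demand lambda(p) = lambda_0*exp(-p), unit service rates, and discounted revenue, solved by value iteration on the uniformized chain. All parameters are illustrative; the output is a price schedule that depends on the current occupancy:

      import math

      C, LAM0, MU, BETA = 10, 5.0, 1.0, 0.1
      PRICES = [0.2 * i for i in range(1, 26)]   # candidate posted prices
      RATE = LAM0 + C * MU                       # uniformization constant

      def q_value(V, n, p):
          lam = LAM0 * math.exp(-p) if n < C else 0.0   # arrivals blocked at capacity
          dep = n * MU
          stay = RATE - lam - dep
          # Revenue p is collected as a lump sum at each accepted arrival.
          return (lam * (p + V[min(n + 1, C)]) + dep * V[n - 1 if n else 0]
                  + stay * V[n]) / (RATE + BETA)

      V = [0.0] * (C + 1)
      for _ in range(2000):                      # value iteration to convergence
          V = [max(q_value(V, n, p) for p in PRICES) for n in range(C + 1)]

      for n in range(C):                         # recover the greedy price per state
          p_star = max(PRICES, key=lambda p: q_value(V, n, p))
          print(f"occupancy {n:2d}: posted price {p_star:.1f}")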
    
    Pastor-Satorras, R., Vazquez, A. & Vespignani, A. Dynamical and correlation properties of the Internet {2001} PHYSICAL REVIEW LETTERS
    Vol. {87}({25}) 
    article DOI  
    Abstract: The description of the Internet topology is an important open problem, recently tackled with the introduction of scale-free networks. We focus on the topological and dynamical properties of real Internet maps in a three-year time interval. We study higher order correlation functions as well as the dynamics of several quantities. We find that the Internet is characterized by nontrivial correlations among nodes and different dynamical regimes. We point out the importance of node hierarchy and aging in the Internet structure and growth. Our results provide hints towards the realistic modeling of the Internet evolution.
    BibTeX:
    @article{Pastor-Satorras2001a,
      author = {Pastor-Satorras, R and Vazquez, A and Vespignani, A},
      title = {Dynamical and correlation properties of the Internet},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2001},
      volume = {87},
      number = {25},
      doi = {{10.1103/PhysRevLett.87.258701}}
    }
    
    Pastor-Satorras, R. & Vespignani, A. Immunization of complex networks {2002} PHYSICAL REVIEW E
    Vol. {65}({3, Part 2A}) 
    article DOI  
    Abstract: Complex networks such as the sexual partnership web or the Internet often show a high degree of redundancy and heterogeneity in their connectivity properties. This peculiar connectivity provides an ideal environment for the spreading of infective agents. Here we show that the random uniform immunization of individuals does not lead to the eradication of infections in all complex networks. Namely, networks with scale-free properties do not acquire global immunity from major epidemic outbreaks even in the presence of unrealistically high densities of randomly immunized individuals. The absence of any critical immunization threshold is due to the unbounded connectivity fluctuations of scale-free networks. Successful immunization strategies can be developed only by taking into account the inhomogeneous connectivity properties of scale-free networks. In particular, targeted immunization schemes, based on the nodes' connectivity hierarchy, sharply lower the network's vulnerability to epidemic attacks.
    BibTeX:
    @article{Pastor-Satorras2002,
      author = {Pastor-Satorras, R and Vespignani, A},
      title = {Immunization of complex networks},
      journal = {PHYSICAL REVIEW E},
      year = {2002},
      volume = {65},
      number = {3, Part 2A},
      doi = {{10.1103/PhysRevE.65.036104}}
    }
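
    The random-versus-targeted comparison is easy to reproduce: immunize a fraction of nodes, then measure the largest cluster of susceptibles left for an epidemic to spread on. A sketch (networkx assumed; sizes and fractions illustrative):

      import random
      import networkx as nx

      def giant_after(G, immunized):
          # Fraction of nodes in the largest connected cluster of non-immunized nodes.
          H = G.copy()
          H.remove_nodes_from(immunized)
          if H.number_of_nodes() == 0:
              return 0.0
          return len(max(nx.connected_components(H), key=len)) / G.number_of_nodes()

      G = nx.barabasi_albert_graph(5000, 3, seed=0)      # scale-free test network
      k = int(0.2 * G.number_of_nodes())                 # immunize 20% of nodes

      random_set = random.Random(0).sample(list(G), k)
      targeted = sorted(G, key=G.degree, reverse=True)[:k]   # highest degree first

      print(f"random:   remaining giant cluster {giant_after(G, random_set):.2f}")
      print(f"targeted: remaining giant cluster {giant_after(G, targeted):.2f}")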
    
    Pastor-Satorras, R. & Vespignani, A. Epidemic dynamics in finite size scale-free networks {2002} PHYSICAL REVIEW E
    Vol. {65}({3, Part 2A}) 
    article DOI  
    Abstract: Many real networks present a bounded scale-free behavior with a connectivity cutoff due to physical constraints or a finite network size. We study epidemic dynamics in bounded scale-free networks with soft and hard connectivity cutoffs. The finite size effects introduced by the cutoff induce an epidemic threshold that approaches zero at increasing sizes. The induced epidemic threshold is very small even at a relatively small cutoff, showing that the neglect of connectivity fluctuations in bounded scale-free networks leads to a strong overestimation of the epidemic threshold. We provide the expression for the infection prevalence and discuss its finite size corrections. The present paper shows that the highly heterogeneous nature of scale-free networks does not allow the use of homogeneous approximations even for systems of a relatively small number of nodes.
    BibTeX:
    @article{Pastor-Satorras2002a,
      author = {Pastor-Satorras, R and Vespignani, A},
      title = {Epidemic dynamics in finite size scale-free networks},
      journal = {PHYSICAL REVIEW E},
      year = {2002},
      volume = {65},
      number = {3, Part 2A},
      doi = {{10.1103/PhysRevE.65.035108}}
    }
    
    Pastor-Satorras, R. & Vespignani, A. Epidemic spreading in scale-free networks {2001} PHYSICAL REVIEW LETTERS
    Vol. {86}({14}), pp. {3200-3203} 
    article  
    Abstract: The Internet has a very complex connectivity recently modeled by the class of scale-free networks. This feature, which appears to be very efficient for a communications network, favors at the same time the spreading of computer viruses. We analyze real data from computer virus infections and find the average lifetime and persistence of viral strains on the Internet. We define a dynamical model for the spreading of infections on scale-free networks, finding the absence of an epidemic threshold and its associated critical behavior. This new epidemiological framework rationalizes data of computer viruses and could help in the understanding of other spreading phenomena on communication and social networks.
    BibTeX:
    @article{Pastor-Satorras2001,
      author = {Pastor-Satorras, R and Vespignani, A},
      title = {Epidemic spreading in scale-free networks},
      journal = {PHYSICAL REVIEW LETTERS},
      year = {2001},
      volume = {86},
      number = {14},
      pages = {3200-3203}
    }
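
    The vanishing threshold can be seen directly from the mean-field estimate lambda_c ~ <k>/<k^2>, since <k^2> diverges with network size in scale-free graphs. A sketch (networkx assumed) of how the estimate shrinks as a Barabasi-Albert network grows:

      import networkx as nx

      # Mean-field estimate of the epidemic threshold: lambda_c ~ <k> / <k^2>.
      for n in [1000, 10000, 100000]:
          G = nx.barabasi_albert_graph(n, 3, seed=1)
          degs = [d for _, d in G.degree()]
          k1 = sum(degs) / n
          k2 = sum(d * d for d in degs) / n
          print(f"n={n:6d}: <k>={k1:.2f}  <k^2>={k2:8.1f}  lambda_c~{k1/k2:.4f}")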
    
    Pastor-Satorras, R. & Vespignani, A. Epidemic dynamics and endemic states in complex networks {2001} PHYSICAL REVIEW E
    Vol. {63}({6, Part 2}) 
    article  
    Abstract: We study by analytical methods and large scale simulations a dynamical model for the spreading of epidemics in complex networks. In networks with exponentially bounded connectivity, we recover the usual epidemic behavior with a threshold defining a critical point below which the infection prevalence is null. On the contrary, on a wide range of scale-free networks we observe the absence of an epidemic threshold and its associated critical behavior. This implies that scale-free networks are prone to the spreading and the persistence of infections whatever spreading rate the epidemic agents might possess. These results can help in understanding computer virus epidemics and other spreading phenomena on communication and social networks.
    BibTeX:
    @article{Pastor-Satorras2001b,
      author = {Pastor-Satorras, R and Vespignani, A},
      title = {Epidemic dynamics and endemic states in complex networks},
      journal = {PHYSICAL REVIEW E},
      year = {2001},
      volume = {63},
      number = {6, Part 2}
    }
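
    The vanishing threshold quoted in this and the two preceding entries follows from the degree-based mean-field equations; a compact sketch of the standard derivation, with rho_k the density of infected nodes of degree k, lambda the spreading rate, and Theta the probability that a given link points to an infected node:

    \partial_t \rho_k = -\rho_k + \lambda k \,(1 - \rho_k)\,\Theta,
    \qquad
    \Theta = \frac{\sum_k k\,P(k)\,\rho_k}{\langle k \rangle}

    \text{Stationarity gives } \rho_k = \frac{\lambda k \Theta}{1 + \lambda k \Theta},
    \text{ and self-consistency in } \Theta \text{ yields }
    \lambda_c = \frac{\langle k \rangle}{\langle k^2 \rangle}

    \text{so } \lambda_c \to 0 \text{ whenever } \langle k^2 \rangle \text{ diverges, as for } P(k) \sim k^{-\gamma},\ 2 < \gamma \le 3.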
    
    Paul, S., Sabnani, K., Lin, J. & Bhattacharyya, S. Reliable multicast transport protocol (RMTP) {1997} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {15}({3}), pp. {407-421} 
    article  
    Abstract: This paper presents the design, implementation, and performance of a reliable multicast transport protocol (RMTP). RMTP is based on a hierarchical structure in which receivers are grouped into local regions or domains and in each domain there is a special receiver called a designated receiver (DR) which is responsible for sending acknowledgments periodically to the sender, for processing acknowledgments from receivers in its domain, and for retransmitting lost packets to the corresponding receivers. Since lost packets are recovered by local retransmissions as opposed to retransmissions from the original sender, end-to-end latency is significantly reduced, and the overall throughput is improved as well. Also, since only the DRs send their acknowledgments to the sender, instead of all receivers sending their acknowledgments to the sender, a single acknowledgment is generated per local region, and this prevents acknowledgment implosion. Receivers in RMTP send their acknowledgments to the DRs periodically, thereby simplifying error recovery. In addition, lost packets are recovered by selective repeat retransmissions, leading to improved throughput at the cost of minimal additional buffering at the receivers. This paper also describes the implementation of RMTP and its performance on the Internet.
    BibTeX:
    @article{Paul1997,
      author = {Paul, S and Sabnani, KK and Lin, JCH and Bhattacharyya, S},
      title = {Reliable multicast transport protocol (RMTP)},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {1997},
      volume = {15},
      number = {3},
      pages = {407-421}
    }
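
    The DR mechanism described above boils down to a little per-domain state plus a minimum over receiver ACKs; a toy sketch (class and method names are inventions of this illustration, not RMTP's actual interfaces):

    class DesignatedReceiver:
        """Toy RMTP designated receiver for one local region."""
        def __init__(self):
            self.acked = {}                  # receiver id -> highest seq ACKed

        def on_receiver_ack(self, receiver_id, seq):
            self.acked[receiver_id] = seq

        def ack_to_sender(self):
            # Report only what every receiver in the domain already has;
            # gaps above this level are repaired by local retransmission.
            return min(self.acked.values(), default=0)

    dr = DesignatedReceiver()
    for rid, seq in [("r1", 42), ("r2", 40), ("r3", 42)]:
        dr.on_receiver_ack(rid, seq)
    print("single ACK sent upstream:", dr.ack_to_sender())   # -> 40

    The sender thus sees one ACK per region regardless of how many receivers the region contains, which is what prevents ACK implosion.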
    
    Pavlik, I., Horvathova, A., Dvorska, L., Bartl, J., Svastova, P., du Maine, R. & Rychlik, I. Standardisation of restriction fragment length polymorphism analysis for Mycobacterium avium subspecies paratuberculosis {1999} JOURNAL OF MICROBIOLOGICAL METHODS
    Vol. {38}({1-2}), pp. {155-167} 
    article  
    Abstract: DNA from 1008 strains of Mycobacterium avium subspecies paratuberculosis, digested by restriction endonucleases PstI and BstEII, was hybridised with a standard IS900 probe prepared by PCR and labelled non-radioactively by ECL. DNA fingerprints were scanned by CCD camera and analysed using the software Gel Compar (Applied Maths, Kortrijk, Belgium). Thirteen restriction fragment length polymorphism (RFLP) (PstI) types were detected, which were designated as A, B, C, D, E, F, G, H, I, J, K, L and M in accordance with the study of Pavlik et al. (1995) [Pavlik, I., Bejckova, L., Pavlas, M., Rozsypalova, V., Koskova, S., 1995. Characterization by restriction endonuclease analysis and DNA hybridization using IS900 of bovine, ovine, caprine and human dependent strains of Mycobacterium paratuberculosis isolated in various localities. Vet. Microbiol. 45, 311-318]. Twenty RFLP (BstEII) types were detected and designated as C1-3, C5, C7-20, S1 and Il in accordance with the study by Collins et al. 1990 [Collins, D.M., Gabric, D.M., de Lisle, G.W., 1990. Identification of two groups of Mycobacterium paratuberculosis strains by restriction endonuclease analysis and DNA hybridization. J. Clin. Microbiol. 28, 1591-1596]. A combination of both RFLP (PstI) and RFLP (BstEII) results revealed a total of 28 different RFLP types. All the RFLP types and detailed protocols are available at the Internet web site http://www.vri.cz/wwwrflptext.htm. (C) 1999 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Pavlik1999,
      author = {Pavlik, I and Horvathova, A and Dvorska, L and Bartl, J and Svastova, P and du Maine, R and Rychlik, I},
      title = {Standardisation of restriction fragment length polymorphism analysis for Mycobacterium avium subspecies paratuberculosis},
      journal = {JOURNAL OF MICROBIOLOGICAL METHODS},
      year = {1999},
      volume = {38},
      number = {1-2},
      pages = {155-167}
    }
    
    Pavlou, P. Consumer acceptance of electronic commerce: Integrating trust and risk with the technology acceptance model {2003} INTERNATIONAL JOURNAL OF ELECTRONIC COMMERCE
    Vol. {7}({3}), pp. {101-134} 
    article  
    Abstract: This paper aims to predict consumer acceptance of e-commerce by proposing a set of key drivers for engaging consumers in on-line transactions. The primary constructs for capturing consumer acceptance of e-commerce are intention to transact and on-line transaction behavior. Following the theory of reasoned action (TRA) as applied to a technology-driven environment, technology acceptance model (TAM) variables (perceived usefulness and ease of use) are posited as key drivers of e-commerce acceptance. The practical utility of TAM stems from the fact that e-commerce is technology-driven. The proposed model integrates trust and perceived risk, which are incorporated given the implicit uncertainty of the e-commerce environment. The proposed integration of the hypothesized independent variables is justified by placing all the variables under the nomological TRA structure and proposing their interrelationships. The resulting research model is tested using data from two empirical studies. The first, exploratory study comprises three experiential scenarios with 103 students. The second, confirmatory study uses a sample of 155 on-line consumers. Both studies strongly support the e-commerce acceptance model by validating the proposed hypotheses. The paper discusses the implications for e-commerce theory, research, and practice, and makes several suggestions for future research.
    BibTeX:
    @article{Pavlou2003,
      author = {Pavlou, PA},
      title = {Consumer acceptance of electronic commerce: Integrating trust and risk with the technology acceptance model},
      journal = {INTERNATIONAL JOURNAL OF ELECTRONIC COMMERCE},
      year = {2003},
      volume = {7},
      number = {3},
      pages = {101-134}
    }
    
    Paxson, V. End-to-end internet packet dynamics {1999} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {7}({3}), pp. {277-292} 
    article  
    Abstract: We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20 000 TCP bulk transfers between 35 Internet sites. Because we traced each 100-kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end behaviors due to the different directions of the Internet paths, which often exhibit asymmetries. We: 1) characterize the prevalence of unusual network events such as out-of-order delivery and packet replication; 2) discuss a robust receiver-based algorithm for estimating ``bottleneck bandwidth'' that addresses deficiencies discovered in techniques based on ``packet pair;'' 3) investigate patterns of packet loss, finding that loss events are not well modeled as independent and, furthermore, that the distribution of the duration of loss events exhibits infinite variance; and 4) analyze variations in packet transit delays as indicators of congestion periods, finding that congestion periods also span a wide range of time scales.
    BibTeX:
    @article{Paxson1999,
      author = {Paxson, V},
      title = {End-to-end internet packet dynamics},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {1999},
      volume = {7},
      number = {3},
      pages = {277-292}
    }
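
    The receiver-based bottleneck-bandwidth idea in item 2 rests on the spacing a bottleneck link imposes on back-to-back packets; a much-simplified sketch (the timestamps are invented, and the paper's actual algorithm is substantially more robust than a plain median over pairs):

    import statistics

    def bottleneck_estimate_bps(arrival_times, pkt_bytes):
        # arrival_times: receiver timestamps (s) of packets sent back-to-back;
        # each inter-arrival gap bounds the bottleneck's per-packet service time.
        gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:]) if b > a]
        return statistics.median(8 * pkt_bytes / g for g in gaps)

    trace = [0.0000, 0.0012, 0.0025, 0.0036, 0.0090]   # hypothetical timestamps
    print(f"{bottleneck_estimate_bps(trace, 1500) / 1e6:.1f} Mbit/s")

    The median discards pairs stretched by cross-traffic queueing (like the long final gap here), which is the basic deficiency of naive packet-pair that the paper's receiver-based algorithm addresses more thoroughly.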
    
    Paxson, V. End-to-end routing behavior in the Internet {1997} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {5}({5}), pp. {601-615} 
    article  
    Abstract: The large-scale behavior of routing in the Internet has gone virtually without any formal study, the exceptions being Chinoy's analysis of the dynamics of Internet routing information, and recent work, similar in spirit, by Labovitz, Malan, and Jahanian. We report on an analysis of 40 000 end-to-end route measurements conducted using repeated ``traceroutes'' between 37 Internet sites. We analyze the routing behavior for pathological conditions, routing stability, and routing symmetry. For pathologies, we characterize the prevalence of routing loops, erroneous routing, infrastructure failures, and temporary outages. We find that the likelihood of encountering a major routing pathology more than doubled between the end of 1994 and the end of 1995, rising from 1.5% to 3.3%. For routing stability, we define two separate types of stability, ``prevalence,'' meaning the overall likelihood that a particular route is encountered, and ``persistence,'' the likelihood that a route remains unchanged over a long period of time. We find that Internet paths are heavily dominated by a single prevalent route, but that the time periods over which routes persist show wide variation, ranging from seconds up to days. About two-thirds of the Internet paths had routes persisting for either days or weeks. For routing symmetry, we look at the likelihood that a path through the Internet visits at least one different city in the two directions. At the end of 1995, this was the case half the time, and at least one different autonomous system was visited 30% of the time.
    BibTeX:
    @article{Paxson1997,
      author = {Paxson, V},
      title = {End-to-end routing behavior in the Internet},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {1997},
      volume = {5},
      number = {5},
      pages = {601-615}
    }
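
    The two stability notions have direct operational definitions; a small sketch computing both from a hypothetical sequence of per-measurement routes (the route strings are made up):

    from collections import Counter

    observations = ["A-B-C", "A-B-C", "A-D-C", "A-B-C", "A-B-C", "A-B-C"]

    # Prevalence: overall likelihood that the dominant route is encountered.
    route, n = Counter(observations).most_common(1)[0]
    print(f"prevalent route {route}: prevalence {n / len(observations):.2f}")

    # Persistence: how long a route remains unchanged (run lengths here).
    runs, prev = [], None
    for r in observations:
        if r == prev:
            runs[-1] += 1
        else:
            runs.append(1)
            prev = r
    print("mean run length:", sum(runs) / len(runs))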
    
    Peitsch, M. ProMod and Swiss-model: Internet-based tools for automated comparative protein modelling {1996} BIOCHEMICAL SOCIETY TRANSACTIONS
    Vol. {24}({1}), pp. {274-279} 
    article  
    BibTeX:
    @article{Peitsch1996,
      author = {Peitsch, MC},
      title = {ProMod and Swiss-model: Internet-based tools for automated comparative protein modelling},
      journal = {BIOCHEMICAL SOCIETY TRANSACTIONS},
      year = {1996},
      volume = {24},
      number = {1},
      pages = {274-279},
      note = {656th Meeting of the Biochemical-Society, DUBLIN, IRELAND, SEP 11-15, 1995}
    }
    
    Pennock, D., Flake, G., Lawrence, S., Glover, E. & Giles, C. Winners don't take all: Characterizing the competition for links on the web {2002} PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
    Vol. {99}({8}), pp. {5207-5211} 
    article  
    Abstract: As a whole, the World Wide Web displays a striking ``rich get richer'' behavior, with a relatively small number of sites receiving a disproportionately large share of hyperlink references and traffic. However, hidden in this skewed global distribution, we discover a qualitatively different and considerably less biased link distribution among subcategories of pages - for example, among all university homepages or all newspaper homepages. Although the connectivity distribution over the entire web is close to a pure power law, we find that the distribution within specific categories is typically unimodal on a log scale, with the location of the mode, and thus the extent of the rich get richer phenomenon, varying across different categories. Similar distributions occur in many other naturally occurring networks, including research paper citations, movie actor collaborations, and United States power grid connections. A simple generative model, incorporating a mixture of preferential and uniform attachment, quantifies the degree to which the rich nodes grow richer, and how new (and poorly connected) nodes can compete. The model accurately accounts for the true connectivity distributions of category-specific web pages, the web as a whole, and other social networks.
    BibTeX:
    @article{Pennock2002,
      author = {Pennock, DM and Flake, GW and Lawrence, S and Glover, EJ and Giles, CL},
      title = {Winners don't take all: Characterizing the competition for links on the web},
      journal = {PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA},
      year = {2002},
      volume = {99},
      number = {8},
      pages = {5207-5211}
    }
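
    The generative model in the abstract mixes preferential and uniform attachment; a compact simulation of that mixture (alpha and the sizes are arbitrary illustration values):

    import random

    def grow_network(n, m=2, alpha=0.6):
        """Attach each new link preferentially with prob. alpha, uniformly otherwise."""
        degree = [0] * n
        endpoints = []                       # node listed once per link endpoint
        for new in range(m, n):
            targets = set()
            while len(targets) < m:
                if endpoints and random.random() < alpha:
                    targets.add(random.choice(endpoints))   # preferential draw
                else:
                    targets.add(random.randrange(new))      # uniform over existing
            for t in targets:
                degree[t] += 1
                degree[new] += 1
                endpoints += [t, new]
        return degree

    deg = grow_network(20000)
    print("max degree:", max(deg), "mean degree:", sum(deg) / len(deg))

    With alpha near 1 this recovers the pure rich-get-richer power law; lowering alpha flattens the head of the distribution, which is how the paper accounts for the less biased within-category distributions.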
    
    Perkins, C. Mobile IP {1997} IEEE COMMUNICATIONS MAGAZINE
    Vol. {35}({5}), pp. {84-&} 
    article  
    Abstract: Mobile IP has been designed within the IETF to serve the needs of the burgeoning population of mobile computer users who wish to connect to the Internet and maintain communications as they move from place to place. The basic protocol is described, with details given on the three major component protocols: Agent Advertisement, Registration, and Tunneling. Then route optimization procedures are outlined, and further topics of current interest are described.
    BibTeX:
    @article{Perkins1997,
      author = {Perkins, CE},
      title = {Mobile IP},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {1997},
      volume = {35},
      number = {5},
      pages = {84-&}
    }
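
    The three component protocols culminate in a simple forwarding rule at the home agent; a schematic sketch (the dataclass, field names, and addresses are invented, and real Mobile IP additionally involves agent discovery, authentication, and lifetimes):

    from dataclasses import dataclass

    @dataclass
    class Registration:
        home_address: str
        care_of_address: str
        lifetime_s: int

    # Bindings learned through the Registration protocol.
    bindings = {"10.0.0.7": Registration("10.0.0.7", "192.0.2.33", 600)}

    def forward(dst_ip, payload):
        reg = bindings.get(dst_ip)
        if reg is None:
            return ("deliver_locally", dst_ip, payload)
        # Tunneling: the original packet becomes the payload of a packet
        # addressed to the mobile node's current care-of address.
        return ("tunnel", reg.care_of_address, (dst_ip, payload))

    print(forward("10.0.0.7", b"hello"))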
    
    Pesole, G., Liuni, S., Grillo, G., Licciulli, F., Mignone, F., Gissi, C. & Saccone, C. UTRdb and UTRsite: specialized databases of sequences and functional elements of 5' and 3' untranslated regions of eukaryotic mRNAs. Update 2002 {2002} NUCLEIC ACIDS RESEARCH
    Vol. {30}({1}), pp. {335-340} 
    article  
    Abstract: The 5'- and 3'-untranslated regions (5'- and 3'-UTRs) of eukaryotic mRNAs are known to play a crucial role in post-transcriptional regulation of gene expression modulating nucleo-cytoplasmic mRNA transport, translation efficiency, subcellular localization and stability. UTRdb is a specialized database of 5' and 3' untranslated sequences of eukaryotic mRNAs cleaned from redundancy. UTRdb entries are enriched with specialized information not present in the primary databases including the presence of nucleotide sequence patterns already demonstrated by experimental analysis to have some functional role. All these patterns have been collected in the UTRsite database so that it is possible to search any input sequence for the presence of annotated functional motifs. Furthermore, UTRdb entries have been annotated for the presence of repetitive elements. All Internet resources we implemented for retrieval and functional analysis of 5'- and 3'-UTRs of eukaryotic mRNAs are accessible at http://bighost.area.ba.cnr.it/BIG/UTRHome/.
    BibTeX:
    @article{Pesole2002,
      author = {Pesole, G and Liuni, S and Grillo, G and Licciulli, F and Mignone, F and Gissi, C and Saccone, C},
      title = {UTRdb and UTRsite: specialized databases of sequences and functional elements of 5' and 3' untranslated regions of eukaryotic mRNAs. Update 2002},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {2002},
      volume = {30},
      number = {1},
      pages = {335-340}
    }
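
    Pattern search of the UTRsite kind reduces, in its simplest form, to scanning UTR sequences for annotated motifs; a tiny sketch using the well-known AUUUA core of AU-rich elements (the sequence is invented, and real UTRsite patterns are far richer than a literal string match):

    import re

    utr3 = "AGCUAUUUAUUUAGCGCAUUUACGUA"        # invented 3'-UTR sequence
    for m in re.finditer("AUUUA", utr3):
        print(f"ARE pentamer at position {m.start()}")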
    
    Peterson, R., Balasubramanian, S. & Bronnenberg, B. Exploring the implications of the Internet for consumer marketing {1997} JOURNAL OF THE ACADEMY OF MARKETING SCIENCE
    Vol. {25}({4}), pp. {329-346} 
    article  
    Abstract: Past commentaries on the potential impact of the Internet on consumer marketing have typically failed to acknowledge that consumer markets are heterogeneous and complex and that the Internet is but one possible distribution, transaction, and communication channel in a world dominated by conventional retailing channels. This failure has led to excessively broad predictions regarding the effect of the Internet on the structure and performance of product and service markets. The objective of this article is to provide a framework for understanding possible impacts of the Internet on marketing to consumers. This is done by analyzing channel intermediary functions that can be performed on the Internet, suggesting classification schemes that clarify the potential impact of the Internet across different products and services, positioning the Internet against conventional retailing channels, and identifying similarities and differences that exist between them. The article concludes with a series of questions designed to stimulate the development of theory and strategy in the context of Internet-based marketing.
    BibTeX:
    @article{Peterson1997,
      author = {Peterson, RA and Balasubramanian, S and Bronnenberg, BJ},
      title = {Exploring the implications of the Internet for consumer marketing},
      journal = {JOURNAL OF THE ACADEMY OF MARKETING SCIENCE},
      year = {1997},
      volume = {25},
      number = {4},
      pages = {329-346}
    }
    
    Pham, V. & Karmouch, A. Mobile software agents: An overview {1998} IEEE COMMUNICATIONS MAGAZINE
    Vol. {36}({7}), pp. {26-37} 
    article  
    Abstract: The anticipated increase in popular use of the Internet will create more opportunities in distance learning, electronic commerce, and multimedia communication, but it will also create more challenges in organizing information and facilitating its efficient retrieval. From the network perspective, there will be additional challenges and problems in meeting bandwidth requirements and network management. Many researchers believed that the mobile agent paradigm (mobile object) could offer several attractive solutions to such challenges and problems. A number of mobile agent systems have been designed and implemented in academic institutions and commercial firms. However, few applications were found to take advantage of mobile agents. Among the hurdles facing this emerging paradigm are concerns about security requirements and efficient resource management. This article introduces the core concepts of this emerging paradigm, and attempts to present an account of current research efforts in the context of telecommunications. The goal is to provide the interested reader with a clear background of the opportunities and challenges this emerging paradigm brings about, and a descriptive look at some of the forerunners that are providing experimental technologies supporting this paradigm.
    BibTeX:
    @article{Pham1998,
      author = {Pham, VA and Karmouch, A},
      title = {Mobile software agents: An overview},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {1998},
      volume = {36},
      number = {7},
      pages = {26-37}
    }
    
    Piccoli, G., Ahmad, R. & Ives, B. Web-based virtual learning environments: A research framework and a preliminary assessment of effectiveness in basic IT skills training {2001} MIS QUARTERLY
    Vol. {25}({4}), pp. {401-426} 
    article  
    Abstract: Internet technologies are having a significant impact on the learning industry. For-profit organizations and traditional institutions of higher education have developed and are using web-based courses, but little is known about their effectiveness compared to traditional classroom education. Our work focuses on the effectiveness of a web-based virtual learning environment (VLE) in the context of basic information technology skills training. This article provides three main contributions. First, it introduces and defines the concept of VLE, discussing how a VLE differs from the traditional classroom and differentiating it from the related, but narrower, concept of computer aided instruction (CAI). Second, it presents a framework of VLE effectiveness, grounded in the technology-mediated learning literature, which frames the VLE research domain, and addresses the relationship between the main constructs. Finally, it focuses on one essential VLE design variable, learner control, and compares a web-based VLE to a traditional classroom through a longitudinal experimental design. Our results indicate that, in the context of IT basic skills training in undergraduate education, there are no significant differences in performance between students enrolled in the two environments. However, the VLE leads to higher reported computer self-efficacy, while participants report being less satisfied with the learning process.
    BibTeX:
    @article{Piccoli2001,
      author = {Piccoli, G and Ahmad, R and Ives, B},
      title = {Web-based virtual learning environments: A research framework and a preliminary assessment of effectiveness in basic IT skills training},
      journal = {MIS QUARTERLY},
      year = {2001},
      volume = {25},
      number = {4},
      pages = {401-426}
    }
    
    Podilchuk, C. & Zeng, W. Image-adaptive watermarking using visual models {1998} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {16}({4}), pp. {525-539} 
    article  
    Abstract: The huge success of the Internet allows for the transmission, wide distribution, and access of electronic data in an effortless manner. Content providers are faced with the challenge of how to protect their electronic data. This problem has generated a flurry of recent research activity in the area of digital watermarking of electronic content for copyright protection. Unlike the traditional visible watermark found on paper, the challenge here is to introduce a digital watermark that does not alter the perceived quality of the electronic content, while being extremely robust to attack. For instance, in the case of image data, editing the picture or illegal tampering should not destroy or transform the watermark into another valid signature. Equally important, the watermark should not alter the perceived visual quality of the image. From a signal processing perspective, the two basic requirements for an effective watermarking scheme, robustness and transparency, conflict with each other. We propose two watermarking techniques for digital images that are based on utilizing visual models which have been developed in the context of image compression. Specifically, we propose watermarking schemes where visual models are used to determine image dependent upper bounds on watermark insertion. This allows us to provide the maximum strength transparent watermark which, in turn, is extremely robust to common image processing and editing such as JPEG compression, rescaling, and cropping. We propose perceptually based watermarking schemes in two frameworks: the block-based discrete cosine transform and multiresolution wavelet framework and discuss the merits of each one. Our schemes are shown to provide very good results both in terms of image transparency and robustness.
    BibTeX:
    @article{Podilchuk1998,
      author = {Podilchuk, CI and Zeng, WJ},
      title = {Image-adaptive watermarking using visual models},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {1998},
      volume = {16},
      number = {4},
      pages = {525-539}
    }
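
    The paper's central move, inserting the watermark at the visibility bound a visual model supplies per coefficient, can be miniaturized as follows; the flat JND matrix, the random block, and the use of scipy's DCT are stand-ins for the paper's actual perceptual models:

    import numpy as np
    from scipy.fftpack import dct, idct

    rng = np.random.default_rng(1)
    block = rng.integers(0, 256, (8, 8)).astype(float)    # stand-in image block
    coeffs = dct(dct(block.T, norm='ortho').T, norm='ortho')   # 2-D DCT

    jnd = np.full((8, 8), 2.0)   # per-coefficient visibility thresholds (flat stand-in)
    watermark = rng.choice([-1.0, 1.0], size=(8, 8))
    marked = coeffs + jnd * watermark          # insert at, not above, the JND bound
    marked[0, 0] = coeffs[0, 0]                # a real scheme leaves the DC term alone

    restored = idct(idct(marked.T, norm='ortho').T, norm='ortho')
    print("max pixel change:", float(np.abs(restored - block).max()))

    Because the perturbation never exceeds the per-coefficient threshold, the mark is maximally strong among transparent marks, which is what buys robustness to compression and editing.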
    
    Pojmanski, G. The All Sky Automated Survey. Catalog of variable stars. I. 0(h)-6(h) quarter of the southern hemisphere {2002} ACTA ASTRONOMICA
    Vol. {52}({4}), pp. {397-427} 
    article  
    Abstract: This paper describes the first part of the photometric data from the 9° x 9° ASAS camera monitoring the whole southern hemisphere in the V-band. Data acquisition and reduction pipeline are described and a preliminary list of variable stars is presented. Over 1 300 000 stars brighter than V = 15 mag on 10000 frames were analyzed and 3126 were found to be variable (1055 eclipsing, 770 regularly pulsating, 132 Mira and 1169 other, mostly SR, IR and LPV stars). Periodic light curves have been classified using the fully automated algorithm, which is described in detail. Basic photometric properties are presented in the tables and exemplary light curves are printed for reference. All photometric data are available over the INTERNET at http://www.astrouw.edu.pl/gp/asas/asas.html or http://archive.princeton.edu/asas.
    BibTeX:
    @article{Pojmanski2002,
      author = {Pojmanski, G},
      title = {The All Sky Automated Survey. Catalog of variable stars. I. 0(h)-6(h) quarter of the southern hemisphere},
      journal = {ACTA ASTRONOMICA},
      year = {2002},
      volume = {52},
      number = {4},
      pages = {397-427}
    }
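
    Automated classification of periodic variables starts from phase folding at trial periods; a minimal sketch of the idea (synthetic data, and the statistic below is a generic phase-dispersion measure, not ASAS's exact algorithm):

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 100, 200))              # observation times (days)
    true_period = 2.37
    mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.02, t.size)

    def phase_dispersion(t, mag, period):
        order = np.argsort((t / period) % 1.0)
        # A good trial period makes phase-adjacent magnitudes similar
        # (string-length / phase-dispersion idea).
        return float(np.sum(np.diff(mag[order]) ** 2))

    for trial in (1.0, 2.37, 5.0):
        print(f"P={trial}: dispersion {phase_dispersion(t, mag, trial):.3f}")

    The true period gives the smallest dispersion; a search over trial periods plus features of the folded curve is the raw material for the kind of automated classification the paper describes.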
    
    Porter, M. Strategy and the Internet {2001} HARVARD BUSINESS REVIEW
    Vol. {79}({3}), pp. {62+} 
    article  
    Abstract: Many of the pioneers of Internet business, both dot-coms and established companies, have competed in ways that violate nearly every precept of good strategy. Rather than focus on profits, they have chased customers indiscriminately through discounting, channel incentives, and advertising. Rather than concentrate on delivering value that earns an attractive price from customers, they have pursued indirect revenues such as advertising and click-through fees. Rather than make trade-offs, they have rushed to offer every conceivable product or service. It did not have to be this way - and it does not have to be in the future. When it comes to reinforcing a distinctive strategy, Michael Porter argues, the Internet provides a better technological platform than previous generations of IT. Gaining competitive advantage does not require a radically new approach to business; it requires building on the proven principles of effective strategy. Porter argues that, contrary to recent thought, the Internet is not disruptive to most existing industries and established companies. It rarely nullifies important sources of competitive advantage in an industry; it often makes them even more valuable. And as all companies embrace Internet technology, the Internet itself will be neutralized as a source of advantage. Robust competitive advantages will arise instead from traditional strengths such as unique products, proprietary content, and distinctive physical activities. Internet technology may be able to fortify those advantages, but it is unlikely to supplant them. Porter debunks such Internet myths as first-mover advantage, the power of virtual companies, and the multiplying rewards of network effects. He disentangles the distorted signals from the marketplace, explains why the Internet complements rather than cannibalizes existing ways of doing business, and outlines strategic imperatives for dot-coms and traditional companies.
    BibTeX:
    @article{Porter2001,
      author = {Porter, ME},
      title = {Strategy and the Internet},
      journal = {HARVARD BUSINESS REVIEW},
      year = {2001},
      volume = {79},
      number = {3},
      pages = {62+}
    }
    
    Prahalad, C. & Ramaswamy, V. Co-opting customer competence {2000} HARVARD BUSINESS REVIEW
    Vol. {78}({1}), pp. {79+} 
    article  
    Abstract: Major business trends such as deregulation, globalization, technological convergence, and the rapid evolution of the Internet have transformed the roles that companies play in their dealings with other companies. Business practitioners and scholars talk about alliances, networks, and collaboration among companies. But managers and researchers have largely ignored the agent that is most dramatically transforming the industrial system as we know it: the consumer. In a market in which technology-enabled consumers can now engage themselves in an active dialogue with manufacturers - a dialogue that customers can control - companies have to recognize that the customer is becoming a partner in creating value. In this article, authors C.K. Prahalad and Venkatram Ramaswamy demonstrate how the shifting role of the consumer affects the notion of a company's core competencies. Where previously, businesses learned to draw on the competencies and resources of their business partners and suppliers to compete effectively, they must now include consumers as part of the extended enterprise, the authors say. Harnessing those customer competencies won't be easy. At a minimum, managers must come to grips with four fundamental realities in co-opting customer competence: they have to engage their customers in an active, explicit, and ongoing dialogue; mobilize communities of customers; manage customer diversity; and engage customers in cocreating personalized experiences. Companies will also need to revise some of the traditional mechanisms of the marketplace - pricing and billing systems, for instance - to account for their customers' new role.
    BibTeX:
    @article{Prahalad2000,
      author = {Prahalad, CK and Ramaswamy, V},
      title = {Co-opting customer competence},
      journal = {HARVARD BUSINESS REVIEW},
      year = {2000},
      volume = {78},
      number = {1},
      pages = {79+}
    }
    
    Pravikoff, D., Tanner, A. & Pierce, S. Readiness of U.S. nurses for evidence-based practice {2005} AMERICAN JOURNAL OF NURSING
    Vol. {105}({9}), pp. {40-51} 
    article  
    Abstract: Evidence-based practice is a systematic approach to problem solving for health care providers, including RNs, characterized by the use of the best evidence currently available for clinical decision making, in order to provide the most consistent and best possible care to patients. Are RNs in the United States prepared to engage in this process? This study examines nurses' perceptions of their access to tools with which to obtain evidence and whether they have the skills to do so. Using a stratified random sample of 3,000 RNs across the United States, 1,097 nurses (37%) responded to the 93-item questionnaire. Seven hundred sixty respondents (77% of those who were employed at the time of the survey) worked in clinical settings and are the focus of this article. Although these nurses acknowledge that they frequently need information for practice, they feel much more confident asking colleagues or peers and searching the Internet and World Wide Web than they do using bibliographic databases such as PubMed or CINAHL to find specific information. They don't understand or value research and have received little or no training in the use of tools that would help them find evidence on which to base their practice. Implications for nursing and nursing education are discussed.
    BibTeX:
    @article{Pravikoff2005,
      author = {Pravikoff, DS and Tanner, AB and Pierce, ST},
      title = {Readiness of U.S. nurses for evidence-based practice},
      journal = {AMERICAN JOURNAL OF NURSING},
      year = {2005},
      volume = {105},
      number = {9},
      pages = {40-51}
    }
    
    Prelec, D. & Loewenstein, G. The red and the black: Mental accounting of savings and debt {1998} MARKETING SCIENCE
    Vol. {17}({1}), pp. {4-28} 
    article  
    Abstract: In the standard economic account of consumer behavior the cost of a purchase takes the form of a reduction in future utility when expenditures that otherwise could have been made are forgone. The reality of consumer hedonics is different. When people make purchases, they often experience an immediate pain of paying, which can undermine the pleasure derived from consumption. The ticking of the taxi meter, for example, reduces one's pleasure from the ride. We propose a ``double-entry'' mental accounting theory that describes the nature of these reciprocal interactions between the pleasure of consumption and the pain of paying and draws out their implications for consumer behavior and hedonics. A central assumption of the model, which we call prospective accounting, is that consumption that has already been paid for can be enjoyed as if it were free and that the pain associated with payments made prior to consumption (but not after) is buffered by thoughts of the benefits that the payments will finance. Another important concept is coupling, which refers to the degree to which consumption calls to mind thoughts of payment, and vice versa. Some financing methods, such as credit cards, tend to weaken coupling, whereas others, such as cash payment, produce tight coupling. Our model makes a variety of predictions that are at variance with economic formulations. Contrary to the standard prediction that people will finance purchases to minimize the present value of payments, our model predicts strong debt aversion - that they should prefer to prepay for consumption or to get paid for work after it is performed. Such pay-before sequences confer hedonic benefits because consumption can be enjoyed without thinking about the need to pay for it in the future. Likewise, when paying beforehand, the pain of paying is mitigated by thoughts of future consumption benefits. Contrary to the economic prediction that consumers should prefer to pay, at the margin, for what they consume, our model predicts that consumers will find it less painful to pay for, and hence will prefer, flat-rate pricing schemes such as unlimited Internet access at a fixed monthly price, even if it involves paying more for the same usage. Other predictions concern spending patterns with cash, charge, or credit cards, and preferences for the earmarking of purchases. We test these predictions in a series of surveys and in a conjoint-like analysis that pitted our double-entry mental accounting model against a standard discounting formulation and another benchmark that did not incorporate hedonic interactions between consumption and payments. Our model provides a better fit of the data for 60% of the subjects; the discounting formulation provides a better fit for only 29% of the subjects (even when allowing for positive and negative discount rates). The pain of paying, we argue, plays an important role in consumer self-regulation, but is hedonically costly. From a hedonic perspective the ideal situation is one in which payments are tightly coupled to consumption (so that paying evokes thoughts about the benefits being financed) but consumption is decoupled from payments (so that consumption does not evoke thoughts about payment). From an efficiency perspective, however, it is important for consumers to be aware of what they are paying for consumption. This creates a tension between hedonic efficiency and what we call decision efficiency. Various institutional arrangements, such as financing of public parks through taxes or usage fees, play into this tradeoff. A producer developing a pricing structure for their product or service should be aware of these two conflicting objectives, and should try to devise a structure that reconciles them.
    BibTeX:
    @article{Prelec1998,
      author = {Prelec, D and Loewenstein, G},
      title = {The red and the black: Mental accounting of savings and debt},
      journal = {MARKETING SCIENCE},
      year = {1998},
      volume = {17},
      number = {1},
      pages = {4-28}
    }
    
    Prendergast, J., Quinn, R. & Lawton, J. The gaps between theory and practice in selecting nature reserves {1999} CONSERVATION BIOLOGY
    Vol. {13}({3}), pp. {484-492} 
    article  
    Abstract: Over the last three decades a great deal of research, money, and effort have been put into the development of theory and techniques designed to make conservation more efficient. Much of the recent emphasis has been on methods to identify areas of high conservation interest and to design efficient networks of nature reserves. Reserve selection algorithms, gap analysis, and other computerized approaches have much potential to transform conservation planning, yet these methods are used only infrequently by those charged with managing landscapes. We briefly describe different approaches to identifying potentially valuable areas and methods for reserve selection and then discuss the reasons they remain largely unused by conservationists and land-use planners. Our informal discussions with ecologists, conservationists, and land managers from Europe and the United States suggested that the main reason for the low level of adoption of these sophisticated tools is simply that land managers have been unaware of them. Where this has been the case, low levels of funding, lack of understanding about the purpose of these tools, and general antipathy toward what is seen as a prescriptive approach to conservation all play a part. We recognize there is no simple solution but call for a closer dialogue between theoreticians and practitioners in conservation biology. The two communities might be brought into closer contact in numerous ways, including carefully targeted publication of research and Internet communication. However it is done, we feel that the needs of land managers need to be catered to by those engaged in conservation research and that managers need to be more aware of what science can contribute to practical conservation.
    BibTeX:
    @article{Prendergast1999,
      author = {Prendergast, JR and Quinn, RM and Lawton, JH},
      title = {The gaps between theory and practice in selecting nature reserves},
      journal = {CONSERVATION BIOLOGY},
      year = {1999},
      volume = {13},
      number = {3},
      pages = {484-492}
    }
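
    Reserve selection algorithms of the kind the paper discusses are often greedy complementarity heuristics; a toy instance (sites and species are invented):

    # Greedy complementarity: repeatedly pick the site that adds the
    # most species not yet represented in the reserve network.
    sites = {
        "s1": {"a", "b", "c"},
        "s2": {"c", "d"},
        "s3": {"d", "e", "f"},
        "s4": {"a", "f"},
    }

    all_species = set().union(*sites.values())
    covered, chosen = set(), []
    while covered != all_species:
        best = max(sites, key=lambda s: len(sites[s] - covered))
        chosen.append(best)
        covered |= sites[best]
    print("selected reserve network:", chosen)   # e.g. ['s1', 's3']

    Greedy selection is not guaranteed optimal, but it is fast, transparent, and close to what several published reserve-selection tools implement.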
    
    Pronovost, P., Angus, D., Dorman, T., Robinson, K., Dremsizov, T. & Young, T. Physician staffing patterns and clinical outcomes in critically ill patients - A systematic review {2002} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {288}({17}), pp. {2151-2162} 
    article  
    Abstract: Context Intensive care unit (ICU) physician staffing varies widely, and its association with patient outcomes remains unclear. Objective To evaluate the association between ICU physician staffing and patient outcomes. Data Sources We searched MEDLINE (January 1, 1965, through September 30, 2001) for the following medical subject heading (MeSH) terms: intensive care units, ICU, health resources/utilization, hospitalization, medical staff hospital organization and administration, personnel staffing and scheduling, length of stay, and LOS. We also used the following text words: staffing, intensivist, critical, care, and specialist. To identify observational studies, we added the MeSH terms case-control study and retrospective study. Although we searched for non-English-language citations, we reviewed only English-language articles. We also searched EMBASE, HealthStar (Health Services, Technology, Administration, and Research), and HSRPROJ (Health Services Research Projects in Progress) via Internet Grateful Med and The Cochrane Library and hand searched abstract proceedings from intensive care national scientific meetings (January 1, 1994, through December 31, 2001). Study Selection We selected randomized and observational controlled trials of critically ill adults or children. Studies examined ICU attending physician staffing strategies and the outcomes of hospital and ICU mortality and length of stay (LOS). Studies were selected and critiqued by 2 reviewers. We reviewed 2590 abstracts and identified 26 relevant observational studies (of which 1 included 2 comparisons), resulting in 27 comparisons of alternative staffing strategies. Twenty studies focused on a single ICU. Data Synthesis We grouped ICU physician staffing into low-intensity (no intensivist or elective intensivist consultation) or high-intensity (mandatory intensivist consultation or closed ICU [all care directed by intensivist]) groups. High-intensity staffing was associated with lower hospital mortality in 16 of 17 studies (94%) and with a pooled estimate of the relative risk for hospital mortality of 0.71 (95% confidence interval [CI], 0.62-0.82). High-intensity staffing was associated with a lower ICU mortality in 14 of 15 studies (93%) and with a pooled estimate of the relative risk for ICU mortality of 0.61 (95% CI, 0.50-0.75). High-intensity staffing reduced hospital LOS in 10 of 13 studies and reduced ICU LOS in 14 of 18 studies without case-mix adjustment. High-intensity staffing was associated with reduced hospital LOS in 2 of 4 studies and ICU LOS in both studies that adjusted for case mix. No study found increased LOS with high-intensity staffing after case-mix adjustment.
    BibTeX:
    @article{Pronovost2002,
      author = {Pronovost, PJ and Angus, DC and Dorman, T and Robinson, KA and Dremsizov, TT and Young, TL},
      title = {Physician staffing patterns and clinical outcomes in critically ill patients - A systematic review},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {2002},
      volume = {288},
      number = {17},
      pages = {2151-2162},
      note = {Society-of-Critical-Care-Medicine-Educational-and-Scientific Symposium, ORLANDO, FLORIDA, FEB 12-16, 2000}
    }
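
    Pooled relative risks such as RR 0.71 (95% CI 0.62-0.82) are typically produced by inverse-variance weighting of log relative risks; a generic fixed-effect sketch with fabricated study inputs (these are not the review's data):

    import math

    # Each study: (relative risk, 95% CI lower bound, 95% CI upper bound).
    studies = [(0.75, 0.60, 0.94), (0.68, 0.52, 0.89), (0.72, 0.55, 0.94)]

    wsum = lsum = 0.0
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE of log RR from the CI
        w = 1 / se**2                                     # inverse-variance weight
        wsum += w
        lsum += w * math.log(rr)

    mean_log = lsum / wsum
    half = 1.96 / math.sqrt(wsum)
    print(f"pooled RR {math.exp(mean_log):.2f} "
          f"(95% CI {math.exp(mean_log - half):.2f}-{math.exp(mean_log + half):.2f})")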
    
    Qiao, C. Labeled optical burst switching for IP-over-WDM integration {2000} IEEE COMMUNICATIONS MAGAZINE
    Vol. {38}({9}), pp. {104-114} 
    article  
    Abstract: The rapid pace of development in both Internet applications and emerging optical technologies is bringing about fundamental changes in networking philosophies. Key trends are the emergence of dynamic wavelength provisioning and a corresponding reduction in wavelength provisioning timescales. As this transition continues, the current use of the wavelength-routing paradigm for carrying bursty Internet traffic will likely suffer from various shortcomings associated with circuit-switched networks. Meanwhile, optical packet switching technology is still facing significant cost and technological hurdles. Recently, optical burst switching, or OBS, which represents a balance between circuit and packet switching, has opened up some exciting new dimensions in optical networking. This article describes the OBS paradigm, and also proposes the use of labeled OBS, or LOBS, as a natural control and provisioning solution under the ubiquitous IP multiprotocol label switching framework.
    BibTeX:
    @article{Qiao2000,
      author = {Qiao, CM},
      title = {Labeled optical burst switching for IP-over-WDM integration},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {2000},
      volume = {38},
      number = {9},
      pages = {104-114}
    }
    
    Qiao, C. & Yoo, M. Optical burst switching (OBS) - a new paradigm for an optical Internet {1999} JOURNAL OF HIGH SPEED NETWORKS
    Vol. {8}({1}), pp. {69-84} 
    article  
    Abstract: To support bursty traffic on the Internet (and especially WWW) efficiently, optical burst switching (OBS) is proposed as a way to streamline both protocols and hardware in building the future generation Optical Internet. By leveraging the attractive properties of optical communications and at the same time, taking into account its limitations, OBS combines the best of optical circuit switching and packet/cell switching. In this paper, the general concept of OBS protocols and in particular, those based on Just-Enough-Time (JET), is described, along with the applicability of OBS protocols to IP over WDM. Specific issues such as the use of fiber delay-lines (FDLs) for accommodating processing delay and/or resolving conflicts are also discussed. In addition, the performance of JET-based OBS protocols which use an offset time along with delayed reservation to achieve efficient utilization of both bandwidth and FDLs as well as to support priority-based routing is evaluated.
    BibTeX:
    @article{Qiao1999,
      author = {Qiao, CM and Yoo, MS},
      title = {Optical burst switching (OBS) - a new paradigm for an optical Internet},
      journal = {JOURNAL OF HIGH SPEED NETWORKS},
      year = {1999},
      volume = {8},
      number = {1},
      pages = {69-84}
    }
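
    JET's two ingredients, an offset between the control packet and the burst plus delayed reservation of only the burst's own interval, fit in a few lines; the interval bookkeeping here is this sketch's simplification of a real wavelength scheduler:

    def try_reserve(channel_reservations, t_control, offset, duration):
        """JET-style reservation on one channel; intervals are (start, end)."""
        start = t_control + offset       # delayed reservation: burst arrival time
        end = start + duration
        for s, e in channel_reservations:
            if start < e and s < end:    # interval overlap test
                return False
        channel_reservations.append((start, end))
        return True

    ch = []
    print(try_reserve(ch, t_control=0.0, offset=0.5, duration=1.0))  # True
    print(try_reserve(ch, t_control=0.2, offset=0.1, duration=0.2))  # True: fits in the void before 0.5
    print(try_reserve(ch, t_control=0.4, offset=0.3, duration=1.0))  # False: overlaps (0.5, 1.5)

    Because the channel is reserved only for the burst's own interval rather than from the control packet onward, later bursts can fill the voids, which is the source of JET's bandwidth efficiency.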
    
    Quelch, J. & Klein, L. The Internet and international marketing {1996} SLOAN MANAGEMENT REVIEW
    Vol. {37}({3}), pp. {60-75} 
    article  
    Abstract: Is the Internet just another marketing channel like direct mail or home shopping? Or will it revolutionize global marketing? Will large multinationals lose the advantages of size, while small start-ups leverage the technology and become big players internationally? The authors discuss the different opportunities and challenges that the Internet offers to large and small companies worldwide. They examine the impact on global markets and new product development, the advantages of an intranet for large corporations, and the need for foreign government support and cooperation.
    BibTeX:
    @article{Quelch1996,
      author = {Quelch, JA and Klein, LR},
      title = {The Internet and international marketing},
      journal = {SLOAN MANAGEMENT REVIEW},
      year = {1996},
      volume = {37},
      number = {3},
      pages = {60-75}
    }
    
    Racusen, L., Solez, K., Colvin, R., Bonsib, S., Castro, M., Cavallo, T., Croker, B., Demetris, A., Drachenberg, C., Fogo, A., Furness, P., Gaber, L., Gibson, I., Glotz, D., Goldberg, J., Grande, J., Halloran, P., Hansen, H., Hartley, B., Hayry, P., Hill, C., Hoffman, E., Hunsicker, L., Lindblad, A., Marcussen, N., Mihatsch, M., Nadasdy, T., Nickerson, P., Olsen, T., Papadimitriou, J., Randhawa, P., Rayner, D., Roberts, I., Rose, S., Rush, D., Salinas-Madrigal, L., Salomon, D., Sund, S., Taskinen, E., Trpkov, K. & Yamaguchi, Y. The Banff 97 working classification of renal allograft pathology {1999} KIDNEY INTERNATIONAL
    Vol. {55}({2}), pp. {713-723} 
    article  
    Abstract: Background. Standardization of renal allograft biopsy interpretation is necessary to guide therapy and to establish an objective end point for clinical trials. This manuscript describes a classification, Banff 97, developed by investigators using the Banff Schema and the Collaborative Clinical Trials in Transplantation (CCTT) modification for diagnosis of renal allograft pathology. Methods. Banff 97 grew from an international consensus discussion begun at Banff and continued via the Internet. This schema developed from (a) analysis of data using the Banff classification, (b) publication of and experience with the CCTT modification, (c) international conferences, and (d) data from recent studies on impact of vasculitis on transplant outcome. Results. Semiquantitative lesion scoring continues to focus on tubulitis and arteritis but includes a minimum threshold for interstitial inflammation. Banff 97 defines ``types'' of acute/active rejection. Type I is tubulointerstitial rejection without arteritis. Type II is vascular rejection with intimal arteritis, and type III is severe rejection with transmural arterial changes. Biopsies with only mild inflammation are graded as ``borderline/suspicious for rejection.'' Chronic/sclerosing allograft changes are graded based on severity of tubular atrophy and interstitial fibrosis. Antibody-mediated rejection, hyperacute or accelerated acute in presentation, is also categorized, as are other significant allograft findings. Conclusions. The Banff 97 working classification refines earlier schemas and represents input from two classifications most widely used in clinical rejection trials and in clinical practice worldwide. Major changes include the following: rejection with vasculitis is separated from tubulointerstitial rejection; severe rejection requires transmural changes in arteries; ``borderline'' rejection can only be interpreted in a clinical context; antibody-mediated rejection is further defined, and lesion scoring focuses on most severely involved structures. Criteria for specimen adequacy have also been modified. Banff 97 represents a significant refinement of allograft assessment, developed via international consensus discussions.
    BibTeX:
    @article{Racusen1999,
      author = {Racusen, LC and Solez, K and Colvin, RB and Bonsib, SM and Castro, MC and Cavallo, T and Croker, BP and Demetris, AJ and Drachenberg, CB and Fogo, AB and Furness, P and Gaber, LW and Gibson, IW and Glotz, D and Goldberg, JC and Grande, J and Halloran, PF and Hansen, HE and Hartley, B and Hayry, PJ and Hill, CM and Hoffman, EO and Hunsicker, LG and Lindblad, AS and Marcussen, N and Mihatsch, MJ and Nadasdy, T and Nickerson, P and Olsen, TS and Papadimitriou, JC and Randhawa, PS and Rayner, DC and Roberts, I and Rose, S and Rush, D and Salinas-Madrigal, L and Salomon, DR and Sund, S and Taskinen, E and Trpkov, K and Yamaguchi, Y},
      title = {The Banff 97 working classification of renal allograft pathology},
      journal = {KIDNEY INTERNATIONAL},
      year = {1999},
      volume = {55},
      number = {2},
      pages = {713-723}
    }
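
    The acute-rejection ``types'' defined in the abstract amount to a small decision rule; a schematic encoding (the boolean inputs are a simplification of the Banff semiquantitative lesion scores, and real grading involves thresholds this sketch omits):

    def banff97_acute_type(tubulitis, intimal_arteritis, transmural_changes):
        """Map coarse lesion findings to the Banff 97 acute rejection types."""
        if transmural_changes:
            return "Type III (severe, transmural arterial changes)"
        if intimal_arteritis:
            return "Type II (vascular, intimal arteritis)"
        if tubulitis:
            return "Type I (tubulointerstitial, no arteritis)"
        return "borderline/suspicious (interpret in clinical context)"

    print(banff97_acute_type(tubulitis=True, intimal_arteritis=False,
                             transmural_changes=False))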
    
    Radha, H., van der Schaar, M. & Chen, Y. The MPEG-4 fine-grained scalable video coding method for multimedia streaming over IP {2001} IEEE TRANSACTIONS ON MULTIMEDIA
    Vol. {3}({1}), pp. {53-68} 
    article  
    Abstract: Real-time streaming of audiovisual content over the Internet is emerging as an important technology area in multimedia communications. Due to the wide variation of available bandwidth over Internet sessions, there is a need for scalable video coding methods and (corresponding) flexible streaming approaches that are capable of adapting to changing network conditions in real time. In this paper, we describe a new scalable video-coding framework that has been adopted recently by the MPEG-4 video standard. This new MPEG-4 video approach, which is known as Fine-Granular-Scalability (FGS), consists of a rich set of video coding tools that support quality (i.e., SNR), temporal, and hybrid temporal-SNR scalabilities. Moreover, one of the desired features of the MPEG-4 FGS method is its simplicity and flexibility in supporting unicast and multicast streaming applications over IP.
    BibTeX:
    @article{Radha2001,
      author = {Radha, HM and van der Schaar, M and Chen, YW},
      title = {The MPEG-4 fine-grained scalable video coding method for multimedia streaming over IP},
      journal = {IEEE TRANSACTIONS ON MULTIMEDIA},
      year = {2001},
      volume = {3},
      number = {1},
      pages = {53-68}
    }
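
    FGS owes its flexibility to bit-plane coding of the enhancement layer, which lets a server truncate the stream at any point to match available bandwidth; the bit-plane part in miniature (the residual values and plane count are illustrative, not the codec's actual entropy-coded representation):

    import numpy as np

    residual = np.array([37, -5, 12, 0, -22, 9])    # base-layer residuals

    def truncate_bitplanes(x, keep, total_planes=6):
        """Keep only the `keep` most significant bit planes of the magnitudes."""
        drop = total_planes - keep
        return np.sign(x) * ((np.abs(x) >> drop) << drop)

    for keep in (2, 4, 6):
        print(keep, "planes:", truncate_bitplanes(residual, keep))

    Every prefix of the bit-plane stream is a valid (coarser) enhancement layer, which is exactly what lets the sender adapt SNR quality to the channel in real time.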
    
    Ragnarsson, K., Moses, L., Clarke, W., Daling, J., Garber, S., Gustafson, C., Holland, A., Jordan, B., Parker, J., Riddle, M., Roth, E., Seltzer, M., Small, S., Therrien, B., Wexler, B., Yawn, B. & NIH Consensus Dev Panel Rehabil Persons Traumat Rehabilitation of persons with traumatic brain injury {1999} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {282}({10}), pp. {974-983} 
    article  
    Abstract: Objective To provide biomedical researchers and clinicians with information regarding and recommendations for effective rehabilitation measures for persons who have experienced a traumatic brain injury (TBI). Participants A nonfederal, nonadvocate, 16-member panel representing the fields of neuropsychology, neurology, psychiatry, behavioral medicine, family medicine, pediatrics, physical medicine and rehabilitation, speech and hearing, occupational therapy, nursing, epidemiology, biostatistics, and the public. In addition, 31 experts from these same fields presented data to the panel and a conference audience of 883 members of the public. The conference consisted of (1) presentations by investigators working in areas relevant to the consensus questions during a 2-day public session; (2) questions and statements from conference attendees during open discussions that were part of the public session; and (3) closed deliberations by the panel during the remainder of the second day and part of the third. Primary sponsors of the conference were the National Institute of Child Health and Human Development and the National Institutes of Health Office of Medical Applications of Research. Evidence The literature was searched through MEDLINE for articles from January 1988 through August 1998 and an extensive bibliography of 2563 references was provided to the panel and the conference audience. Experts prepared abstracts for their conference presentations with relevant citations from the literature. The panel prepared a compendium of evidence, including a patient contribution and reports from federal agencies. Scientific evidence was given precedence over clinical anecdotal experience. Consensus Process The panel, answering predefined questions, developed their conclusions based on the scientific evidence presented during the open forum (October 26-28, 1998) and in the scientific literature. The panel composed a draft statement that was read in its entirety and circulated to the experts and the audience for comment. Thereafter the panel resolved conflicting recommendations and released a revised statement at the end of the conference. The panel finalized the revisions within a few weeks after the conference. The draft statement was made available on the internet immediately following its release at the conference and was updated with the panel's final revisions. Conclusions Traumatic brain injury results principally from vehicular incidents, falls, acts of violence, and sports injuries and is more than twice as likely to occur in men as in women. The estimated incidence rate is 100 per 100 000 persons, with 52 000 annual deaths. The highest incidence is among persons aged 15 to 24 years and 75 years or older, with a less striking peak in incidence in children aged 5 years or younger. Since TBI may result in lifelong impairment of physical, cognitive, and psychosocial functioning and prevalence is estimated at 2.5 million to 6.5 million individuals, TBI is a disorder of major public health significance. Mild TBI is significantly underdiagnosed and the likely societal burden is therefore even greater. Given the large toll of TBI and absence of a cure, prevention is of paramount importance. However, the focus of this conference was the evaluation of rehabilitative measures for the cognitive and behavioral consequences of TBI. Evidence supports the use of certain cognitive and behavioral rehabilitation strategies for individuals with TBI. 
This research needs to be replicated in larger, more definitive clinical trials and, thus, funding for research on TBI needs to be increased.
    BibTeX:
    @article{Ragnarsson1999,
      author = {Ragnarsson, KT and Moses, LG and Clarke, WR and Daling, JR and Garber, SL and Gustafson, CF and Holland, AL and Jordan, BD and Parker, JC and Riddle, MA and Roth, EJ and Seltzer, MM and Small, SL and Therrien, B and Wexler, BE and Yawn, BP and NIH Consensus Dev Panel Rehabil Persons Traumat},
      title = {Rehabilitation of persons with traumatic brain injury},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {1999},
      volume = {282},
      number = {10},
      pages = {974-983}
    }
    
    Ranganathan, C. & Ganapathy, S. Key dimensions of business-to-consumer web sites {2002} INFORMATION & MANAGEMENT
    Vol. {39}({6}), pp. {457-465} 
    article  
    Abstract: The rapid growth in electronic commerce over the Internet has fuelled predictions and speculations about what makes a business-to-consumer (B2C) web site effective. Yet, there are very few empirical studies that examine this issue. We examined the key characteristics of a B2C web site as perceived by online consumers. Based on a questionnaire survey of 214 online shoppers, we empirically derived four key dimensions of B2C web sites: information content, design, security, and privacy. Though all these dimensions seem to have an impact on the online purchase intent of consumers, security and privacy were found to have greater effect on the purchase intent of consumers. The implications of the findings for online merchants are discussed. (C) 2002 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Ranganathan2002,
      author = {Ranganathan, C and Ganapathy, S},
      title = {Key dimensions of business-to-consumer web sites},
      journal = {INFORMATION & MANAGEMENT},
      year = {2002},
      volume = {39},
      number = {6},
      pages = {457-465}
    }
    
    Rao, F. & Caflisch, A. The protein folding network {2004} JOURNAL OF MOLECULAR BIOLOGY
    Vol. {342}({1}), pp. {299-306} 
    article DOI  
    Abstract: The conformation space of a 20 residue antiparallel beta-sheet peptide, sampled by molecular dynamics simulations, is mapped to a network. Snapshots saved along the trajectory are grouped according to secondary structure into nodes of the network and the transitions between them are links. The conformation space network describes the significant free energy minima and their dynamic connectivity without requiring arbitrarily chosen reaction coordinates. As previously found for the Internet and the World-Wide Web as well as for social and biological networks, the conformation space network is scale-free and contains highly connected hubs like the native state which is the most populated free energy basin. Furthermore, the native basin exhibits a hierarchical organization, which is not found for a random heteropolymer lacking a predominant free-energy minimum. The network topology is used to identify conformations in the folding transition state (TS) ensemble, and provides a basis for understanding the heterogeneity of the TS and denatured state ensemble as well as the existence of multiple pathways. (C) 2004 Elsevier Ltd. All rights reserved.
    BibTeX:
    @article{Rao2004,
      author = {Rao, F and Caflisch, A},
      title = {The protein folding network},
      journal = {JOURNAL OF MOLECULAR BIOLOGY},
      year = {2004},
      volume = {342},
      number = {1},
      pages = {299-306},
      doi = {{10.1016/j.jmb.2004.06.063}}
    }
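
    The construction the abstract describes, in which snapshots become nodes and observed transitions become links, is simple enough to sketch. The following is a minimal illustration, not the authors' code; the trajectory of secondary-structure strings is invented for the example.

    from collections import Counter, defaultdict

    # Hypothetical trajectory: each saved snapshot reduced to a
    # secondary-structure string, which labels a node of the network.
    trajectory = ["EEHH", "EEHH", "EEEE", "EEHH", "HHHH", "EEEE", "EEHH"]

    population = Counter(trajectory)        # how often each basin is visited
    links = defaultdict(int)                # transition counts between nodes
    for a, b in zip(trajectory, trajectory[1:]):
        if a != b:
            links[frozenset((a, b))] += 1

    degree = Counter()
    for pair in links:
        for node in pair:
            degree[node] += 1

    # The most populated, most connected node plays the role of the native hub.
    print(population.most_common(1), degree.most_common(1))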
    
    Ratnasamy, S., Karp, B., Shenker, S., Estrin, D., Govindan, R., Yin, L. & Yu, F. Data-centric storage in sensornets with GHT, a geographic hash table {2003} MOBILE NETWORKS & APPLICATIONS
    Vol. {8}({4}), pp. {427-442} 
    article  
    Abstract: Making effective use of the vast amounts of data gathered by large-scale sensor networks (sensornets) will require scalable, self-organizing, and energy-efficient data dissemination algorithms. For sensornets, where the content of the data is more important than the identity of the node that gathers them, researchers have found it useful to move away from the Internet's point-to-point communication abstraction and instead adopt abstractions that are more data-centric. This approach entails naming the data and using communication abstractions that refer to those names rather than to node network addresses [1,11]. Previous work on data-centric routing has shown it to be an energy-efficient data dissemination method for sensornets [12]. Herein, we argue that a companion method, data-centric storage (DCS), is also a useful approach. Under DCS, sensed data are stored at a node determined by the name associated with the sensed data. In this paper, we first define DCS and predict analytically where it outperforms other data dissemination approaches. We then describe GHT, a Geographic Hash Table system for DCS on sensornets. GHT hashes keys into geographic coordinates, and stores a key-value pair at the sensor node geographically nearest the hash of its key. The system replicates stored data locally to ensure persistence when nodes fail. It uses an efficient consistency protocol to ensure that key-value pairs are stored at the appropriate nodes after topological changes. And it distributes load throughout the network using a geographic hierarchy. We evaluate the performance of GHT as a DCS system in simulation against two other dissemination approaches. Our results demonstrate that GHT is the preferable approach for the application workloads we analytically predict, offers high data availability, and scales to large sensornet deployments, even when nodes fail or are mobile.
    BibTeX:
    @article{Ratnasamy2003,
      author = {Ratnasamy, S and Karp, B and Shenker, S and Estrin, D and Govindan, R and Yin, L and Yu, F},
      title = {Data-centric storage in sensornets with GHT, a geographic hash table},
      journal = {MOBILE NETWORKS & APPLICATIONS},
      year = {2003},
      volume = {8},
      number = {4},
      pages = {427-442},
      note = {1st International Workshop on Wireless Sensor Networks and Applications (WSNA 2002), ATLANTA, GEORGIA, SEP 28, 2002}
    }
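
    The put/get primitive at the heart of GHT, hashing a data name to a location and routing to the node nearest that point, can be sketched in a few lines. The field size, node list, and SHA-1-based hash below are illustrative assumptions, not the paper's protocol details (which also include local replication and a consistency protocol).

    import hashlib

    FIELD = 100.0  # assumed square deployment region, 100 x 100 units

    def hash_to_point(key):
        """Hash a data name to (x, y) coordinates inside the field."""
        d = hashlib.sha1(key.encode()).digest()
        x = int.from_bytes(d[:4], "big") / 2**32 * FIELD
        y = int.from_bytes(d[4:8], "big") / 2**32 * FIELD
        return x, y

    def home_node(key, nodes):
        """Return the node geographically nearest the hashed point;
        both put(key, value) and get(key) route to this same node."""
        px, py = hash_to_point(key)
        return min(nodes, key=lambda n: (n[0] - px) ** 2 + (n[1] - py) ** 2)

    nodes = [(12.0, 88.1), (45.3, 9.7), (71.4, 66.0), (5.5, 33.3)]
    print(home_node("elephant-sightings", nodes))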
    
    Ravasz, E. & Barabasi, A. Hierarchical organization in complex networks {2003} PHYSICAL REVIEW E
    Vol. {67}({2, Part 2}) 
    article DOI  
    Abstract: Many real networks in nature and society share two generic properties: they are scale-free and they display a high degree of clustering. We show that these two features are the consequence of a hierarchical organization, implying that small groups of nodes organize in a hierarchical manner into increasingly large groups, while maintaining a scale-free topology. In hierarchical networks, the degree of clustering characterizing the different groups follows a strict scaling law, which can be used to identify the presence of a hierarchical organization in real networks. We find that several real networks, such as the World Wide Web, actor network, the Internet at the domain level, and the semantic web obey this scaling law, indicating that hierarchy is a fundamental characteristic of many complex systems.
    BibTeX:
    @article{Ravasz2003,
      author = {Ravasz, E and Barabasi, AL},
      title = {Hierarchical organization in complex networks},
      journal = {PHYSICAL REVIEW E},
      year = {2003},
      volume = {67},
      number = {2, Part 2},
      doi = {{10.1103/PhysRevE.67.026112}}
    }
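
    The scaling-law diagnostic mentioned in the abstract, the clustering coefficient C(k) of degree-k nodes decaying roughly as 1/k in hierarchical networks, is straightforward to compute. A sketch using the networkx library (an assumed dependency; any graph library would do) on a synthetic clustered graph:

    import networkx as nx
    from collections import defaultdict

    # Any clustered scale-free graph will do for illustration; the paper's
    # tests use real data such as the WWW and the domain-level Internet.
    G = nx.powerlaw_cluster_graph(n=2000, m=4, p=0.3, seed=1)

    clustering = nx.clustering(G)
    by_degree = defaultdict(list)
    for node, k in G.degree():
        if k > 1:
            by_degree[k].append(clustering[node])

    # Hierarchical organization shows up as C(k) decaying roughly as 1/k,
    # i.e. k * C(k) staying approximately constant.
    for k in sorted(by_degree)[:10]:
        c = sum(by_degree[k]) / len(by_degree[k])
        print(f"k={k:3d}  C(k)={c:.3f}  k*C(k)={k * c:.2f}")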
    
    Ravdin, P., Siminoff, L., Davis, G., Mercer, M., Hewlett, J., Gerson, N. & Parker, H. Computer program to assist in making decisions about adjuvant therapy for women with early breast cancer {2001} JOURNAL OF CLINICAL ONCOLOGY
    Vol. {19}({4}), pp. {980-991} 
    article  
    Abstract: Purpose: The goal of the computer program Adjuvant! is to allow health professionals and their patients with early breast cancer to make more informed decisions about adjuvant therapy. Methods: Actuarial analysis was used to project outcomes of patients with and without adjuvant therapy based on estimates of prognosis largely derived from Surveillance, Epidemiology, and End-Results data and estimates of the efficacy of adjuvant therapy based on the 1998 overviews of randomized trials of adjuvant therapy. These estimates can be refined using the Prognostic Factor Impact Calculator, which uses a Bayesian method to make adjustments based on relative risks conferred and prevalence of positive test results. Results: From the entries of patient information (age, menopausal status, comorbidity estimate) and tumor staging and characteristics (tumor size, number of positive axillary nodes, estrogen receptor status), baseline prognostic estimates are made. Estimates for the efficacy of endocrine therapy (5 years of tamoxifen) and of polychemotherapy (cyclophosphamide/methotrexate/fluorouracil-like regimens, or anthracycline-based therapy, or therapy based on both an anthracycline and a taxane) can then be used to project outcomes presented in both numerical and graphical formats. Outcomes for overall survival and disease-free survival and the improvement seen in clinical trials are reasonably modeled by Adjuvant!, although an ideal validation for all patient subsets with all treatment options is not possible. Additional speculative estimates of years of remaining life expectancy and long-term survival curves can also be produced. Help files supply general information about breast cancer. The program's Internet links supply national treatment guidelines, cooperative group trial options, and other related information. Conclusion: The computer program Adjuvant! can play practical and educational roles in clinical settings.
    BibTeX:
    @article{Ravdin2001,
      author = {Ravdin, PM and Siminoff, LA and Davis, GJ and Mercer, MB and Hewlett, J and Gerson, N and Parker, HL},
      title = {Computer program to assist in making decisions about adjuvant therapy for women with early breast cancer},
      journal = {JOURNAL OF CLINICAL ONCOLOGY},
      year = {2001},
      volume = {19},
      number = {4},
      pages = {980-991}
    }
    
    Raymo, F. Digital processing and communication with molecular switches {2002} ADVANCED MATERIALS
    Vol. {14}({6}), pp. {401+} 
    article  
    Abstract: Molecular switches based on chemical, electrical, or optical stimulations are a hot topic, as they may provide the basis for faster computers and Internet applications. Basic logic operations of AND, NOT, and OR gates have been reproduced relying on simple molecular switches (see Figure). The fabrication of nanoelectronic circuits and all-optical networks from molecular components can be envisaged.
    BibTeX:
    @article{Raymo2002,
      author = {Raymo, FM},
      title = {Digital processing and communication with molecular switches},
      journal = {ADVANCED MATERIALS},
      year = {2002},
      volume = {14},
      number = {6},
      pages = {401+}
    }
    
    Reed, M., Syverson, P. & Goldschlag, D. Anonymous connections and onion routing {1998} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {16}({4}), pp. {482-494} 
    article  
    Abstract: Onion routing is an infrastructure for private communication over a public network. It provides anonymous connections that are strongly resistant to both eavesdropping and traffic analysis. Onion routing's anonymous connections are bidirectional, near real-time, and can be used anywhere a socket connection can be used. Any identifying information must be in the data stream carried over the anonymous connection. An onion is a data structure that is treated as the destination address by onion routers; thus, it is used to establish an anonymous connection. Onions themselves appear different to each onion router as well as to network observers. The same goes for data carried over the connections they establish. Proxy-aware applications, such as web browsers and e-mail clients, require no modification to use onion routing, and do so through a series of proxies. A prototype onion routing network is running between our lab and other sites. This paper describes anonymous connections and their implementation using onion routing. It also describes several application proxies for onion routing, as well as configurations of onion routing networks.
    BibTeX:
    @article{Reed1998,
      author = {Reed, MG and Syverson, PF and Goldschlag, DM},
      title = {Anonymous connections and onion routing},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {1998},
      volume = {16},
      number = {4},
      pages = {482-494}
    }
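
    The layered ``onion'' can be illustrated with nested symmetric encryption: each router strips exactly one layer, learning only the next hop. This sketch uses Fernet from the third-party cryptography package as a stand-in cipher; key establishment, padding, and the real header formats are omitted, so it is an illustration of the idea rather than the protocol.

    from cryptography.fernet import Fernet

    # One symmetric key per onion router (stand-in for real key setup).
    route = ["routerA", "routerB", "routerC"]
    keys = {r: Fernet.generate_key() for r in route}

    def build_onion(payload):
        """Wrap the payload in one encryption layer per hop, last hop innermost."""
        onion = payload
        for hop in reversed(route):
            onion = Fernet(keys[hop]).encrypt(hop.encode() + b"|" + onion)
        return onion

    def peel(hop, onion):
        """Each router removes exactly one layer with its own key."""
        name, _, inner = Fernet(keys[hop]).decrypt(onion).partition(b"|")
        assert name.decode() == hop  # a router sees only its own layer
        return inner

    msg = build_onion(b"GET / HTTP/1.0")
    for hop in route:
        msg = peel(hop, msg)
    print(msg)  # the payload emerges only after the final hop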
    
    Reichheld, F. & Schefter, P. E-loyalty - Your secret weapon on the Web {2000} HARVARD BUSINESS REVIEW
    Vol. {78}({4}), pp. {105+} 
    article  
    Abstract: In the rush to build Internet businesses, many executives mistakenly concentrate all their attention on attracting customers rather than retaining them. But chief executives at the cutting edge of e-commerce-from eBay's Meg Whitman to Vanguard's Jack Brennan-know that customer loyalty is an economic necessity: acquiring customers on the Internet is very expensive, and unless customers stick around and make lots of repeat purchases, profits will remain elusive. For the past two years, Frederick Reichheld and Phil Schefter have studied e-loyalty-analyzing the strategies and practices of many leading Internet companies and surveying thousands of their customers-with surprising results. Contrary to the popular perception that online customers are fickle by nature, they found that the Web is actually a very sticky space. Most of today's on-line consumers exhibit a clear proclivity toward loyalty, and Web technologies, if used correctly, reinforce that inherent loyalty. In this article, the authors explain the enormous advantages of retaining online buyers. They warn that if executives don't quickly gain the loyalty of their most profitable existing customers and acquire the right new customers, they'll end up catering to the whims of only the most price-sensitive customers. They also describe what Grainger, Dell, America Online, and other Internet leaders are doing to gain their customers' trust and earn their loyalty. By encouraging repeat purchases among a core of profitable customers, companies can initiate a spiral of economic advantages. This loyalty effect enables them to compensate their employees more generously, provide investors with superior cash flows, and reinvest more aggressively to further enhance the value delivered to customers.
    BibTeX:
    @article{Reichheld2000,
      author = {Reichheld, FF and Schefter, P},
      title = {E-loyalty - Your secret weapon on the Web},
      journal = {HARVARD BUSINESS REVIEW},
      year = {2000},
      volume = {78},
      number = {4},
      pages = {105+}
    }
    
    Rindova, V. & Kotha, S. Continuous ``morphing'': Competing through dynamic capabilities, form, and function {2001} ACADEMY OF MANAGEMENT JOURNAL
    Vol. {44}({6}), pp. {1263-1280} 
    article  
    Abstract: In hypercompetitive environments, the established paradigms of sustainability of competitive advantage and stability of organizational form may have limited applicability. Using an in-depth case analysis of the firms Yahoo! and Excite, this study examines how the organizational form, function, and competitive advantage of these firms dynamically coevolved. The study introduces the concept of continuous morphing to describe the comprehensive ongoing transformations through which the focal firms sought to regenerate their transient competitive advantage on the Internet.
    BibTeX:
    @article{Rindova2001,
      author = {Rindova, VP and Kotha, S},
      title = {Continuous ``morphing'': Competing through dynamic capabilities, form, and function},
      journal = {ACADEMY OF MANAGEMENT JOURNAL},
      year = {2001},
      volume = {44},
      number = {6},
      pages = {1263-1280}
    }
    
    Robertson, B., Myers, G., Howard, C., Brettin, T., Bukh, J., Gaschen, B., Gojobori, T., Maertens, G., Mizokami, M., Nainan, O., Netesov, S., Nishioka, K., Shin-i, T., Simmonds, P., Smith, D., Stuyver, L. & Weiner, A. Classification, nomenclature, and database development for hepatitis C virus (HCV) and related viruses: proposals for standardization {1998} ARCHIVES OF VIROLOGY
    Vol. {143}({12}), pp. {2493-2503} 
    article  
    Abstract: This paper presents a summary of the recommendations that were formulated for the purposes of unifying the nomenclature for hepatitis C virus (HCV), based upon guidelines of the International Committee on Virus Taxonomy (ICTV), and provides guidelines for the incorporation of sequence data into an HCV database that will be available to researchers through the internet. Based upon the available data, the genus Hepacivirus should be regarded as comprising a single species with HCV-1 as the prototype. All currently known isolates of HCV can be divided into six phylogenetically distinct groups, and we recommend that these groups are described as clades 1 to 6. Whether or not these should be regarded as different species within the Hepacivirus genus requires additional clinical, virological, and immunological information. Clades 1, 2, 4, and 5 would correspond to genotypes 1, 2, 4, and 5, while clade 3 would comprise genotype 3 and genotype 10, and clade 6 would comprise genotypes 6, 7, 8, 9, and 11. We propose that existing subtype designations are reassigned within these clades based upon publication priority, the existence of a complete genome sequence, and prevalence. The assignment of isolates to new clades and subtypes should be confined to isolates characterized from epidemiologically unlinked individuals. Comparisons should be based on nucleotide sequences of at least two coding regions and preferably of complete genome sequences, and should be based on phylogenetic analysis rather than percent identity. A forum for discussion and contributions to these recommendations will be made available at the international HCV database at http://s2as02.genes.nig.ac.jp.
    BibTeX:
    @article{Robertson1998,
      author = {Robertson, B and Myers, G and Howard, C and Brettin, T and Bukh, J and Gaschen, B and Gojobori, T and Maertens, G and Mizokami, M and Nainan, O and Netesov, S and Nishioka, K and Shin-i, T and Simmonds, P and Smith, D and Stuyver, L and Weiner, A},
      title = {Classification, nomenclature, and database development for hepatitis C virus (HCV) and related viruses: proposals for standardization},
      journal = {ARCHIVES OF VIROLOGY},
      year = {1998},
      volume = {143},
      number = {12},
      pages = {2493-2503}
    }
    
    Robinson, J. The end of managed care {2001} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {285}({20}), pp. {2622-2628} 
    article  
    Abstract: Managed care embodies an effort by employers, the insurance industry, and some elements of the medical profession to establish priorities and decide who gets what from the health care system. After a turbulent decade of trial and error, that experiment can be characterized as an economic success but a political failure. The strategy of giving with one hand while taking away with the other, of offering comprehensive benefits while restricting access through utilization review, has infuriated everyone involved. The protagonists of managed care now are in full retreat, broadening physician panels, removing restrictions, and reverting to fee-for-service payment. Governmental entities are avoiding politically volatile initiatives to balance limited resources and unlimited expectations. By default, if not by design, the consumer is emerging as the locus of priority setting in health care. The shift to consumerism is driven by a widespread skepticism of governmental, corporate, and professional dominance; unprecedented economic prosperity that reduces social tolerance for interference with individual autonomy; and the Internet technology revolution, which broadens access to information and facilitates the mass customization of insurance and delivery.
    BibTeX:
    @article{Robinson2001,
      author = {Robinson, JC},
      title = {The end of managed care},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {2001},
      volume = {285},
      number = {20},
      pages = {2622-2628}
    }
    
    Robinson, T., Patrick, K., Eng, T., Gustafson, D. & Sci Panel Interactive Commun Hlth An evidence-based approach to interactive health communication - A challenge to medicine in the information age {1998} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {280}({14}), pp. {1264-1269} 
    article  
    Abstract: Objective.-To examine the current status of interactive health communication (IHC) and propose evidence-based approaches to improve the quality of such applications. Participants.-The Science Panel on Interactive Communication and Health, a 14-member, nonfederal panel with expertise in clinical medicine and nursing, public health, media and instructional design, health systems engineering, decision sciences, computer and communication technologies, and health communication, convened by the Office of Disease Prevention and Health Promotion, US Department of Health and Human Services. Evidence.-Published studies, online resources, expert panel opinions, and opinions from outside experts in fields related to IHC. Consensus Process.-The panel met 9 times during more than 2 years. Government agencies and private-sector experts provided review and feedback on the panel's work. Conclusions.-Interactive health communication applications have great potential to improve health, but they may also cause harm. To date, few applications have been adequately evaluated. Physicians and other health professionals should promote and participate in an evidence-based approach to the development and diffusion of IHC applications and endorse efforts to rigorously evaluate the safety, quality, and utility of these resources. A standardized reporting template is proposed to help developers and evaluators of IHC applications conduct evaluations and disclose their results and to help clinicians, purchasers, and consumers judge the quality of IHC applications.
    BibTeX:
    @article{Robinson1998,
      author = {Robinson, TN and Patrick, K and Eng, TR and Gustafson, D and Sci Panel Interactive Commun Hlth},
      title = {An evidence-based approach to interactive health communication - A challenge to medicine in the information age},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {1998},
      volume = {280},
      number = {14},
      pages = {1264-1269}
    }
    
    Roewer, L., Krawczak, M., Willuweit, S., Nagy, M., Alves, C., Amorim, A., Anslinger, K., Augustin, C., Betz, A., Bosch, E., Caglia, A., Carracedo, A., Corach, D., Dekairelle, A., Dobosz, T., Dupuy, B., Furedi, S., Gehrig, C., Gusmao, L., Henke, J., Henke, L., Hidding, M., Hohoff, C., Hoste, B., Jobling, M., Kargel, H., de Knijff, P., Lessig, R., Liebeherr, E., Lorente, M., Martinez-Jarreta, B., Nievas, P., Nowak, M., Parson, W., Pascali, V., Penacino, G., Ploski, R., Rolf, B., Sala, A., Schmidt, U., Schmitt, C., Schneider, P., Szibor, R., Teifel-Greding, J. & Kayser, M. Online reference database of European Y-chromosomal short tandem repeat (STR) haplotypes {2001} FORENSIC SCIENCE INTERNATIONAL
    Vol. {118}({2-3}), pp. {106-113} 
    article  
    Abstract: The reference database of highly informative Y-chromosomal short tandem repeat (STR) haplotypes (YHRD), available online at http://ystr.charite.de, represents the largest collection of male-specific genetic profiles currently available for European populations. By September 2000, YHRD contained 4688 9-locus (so-called ``minimal'') haplotypes, 40% of which have been extended further to include two additional loci. Establishment of YHRD has been facilitated by the joint efforts of 31 forensic and anthropological institutions. All contributing laboratories have agreed to standardize their Y-STR haplotyping protocols and to participate in a quality assurance exercise prior to the inclusion of any data. In view of its collaborative character, and in order to put YHRD to its intended use, viz. the support of forensic caseworkers in their routine decision-making process, the database has been made publicly available via the Internet in February 2000. Online searches for complete or partial Y-STR haplotypes from evidentiary or non-probative material can be performed on a non-commercial basis, and yield observed haplotype counts as well as extrapolated population frequency estimates. In addition, the YHRD website provides information about the quality control test, genotyping protocols, haplotype formats and informativity, population genetic analysis, literature references, and a list of contact addresses of the contributing laboratories. (C) 2001 Elsevier Science Ireland Ltd. All rights reserved.
    BibTeX:
    @article{Roewer2001,
      author = {Roewer, L and Krawczak, M and Willuweit, S and Nagy, M and Alves, C and Amorim, A and Anslinger, K and Augustin, C and Betz, A and Bosch, E and Caglia, A and Carracedo, A and Corach, D and Dekairelle, AF and Dobosz, T and Dupuy, BM and Furedi, S and Gehrig, C and Gusmao, L and Henke, J and Henke, L and Hidding, M and Hohoff, C and Hoste, B and Jobling, MA and Kargel, HJ and de Knijff, P and Lessig, R and Liebeherr, E and Lorente, M and Martinez-Jarreta, B and Nievas, P and Nowak, M and Parson, W and Pascali, VL and Penacino, G and Ploski, R and Rolf, B and Sala, A and Schmidt, U and Schmitt, C and Schneider, PM and Szibor, R and Teifel-Greding, J and Kayser, M},
      title = {Online reference database of European Y-chromosomal short tandem repeat (STR) haplotypes},
      journal = {FORENSIC SCIENCE INTERNATIONAL},
      year = {2001},
      volume = {118},
      number = {2-3},
      pages = {106-113},
      note = {2nd Forensic Y Chromosome User Workshop, BERLIN, GERMANY, JUN 16-17, 2000}
    }
    
    Rost, B. & Liu, J. The PredictProtein server {2003} NUCLEIC ACIDS RESEARCH
    Vol. {31}({13}), pp. {3300-3304} 
    article DOI  
    Abstract: PredictProtein (PP, http://cubic.bioc.columbia.edu/pp/) is an internet service for sequence analysis and the prediction of aspects of protein structure and function. Users submit protein sequences or alignments; the server returns a multiple sequence alignment, PROSITE sequence motifs, low-complexity regions (SEG), ProDom domain assignments, nuclear localisation signals, regions lacking regular structure and predictions of secondary structure, solvent accessibility, globular regions, transmembrane helices, coiled-coil regions, structural switch regions and disulfide-bonds. Upon request, fold recognition by prediction-based threading is available. For all services, users can submit their query either by electronic mail or interactively from the World Wide Web.
    BibTeX:
    @article{Rost2003,
      author = {Rost, B and Liu, JF},
      title = {The PredictProtein server},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {2003},
      volume = {31},
      number = {13},
      pages = {3300-3304},
      doi = {{10.1093/nar/gkg508}}
    }
    
    Rost, B., Yachdav, G. & Liu, J. The PredictProtein server {2004} NUCLEIC ACIDS RESEARCH
    Vol. {32}({Suppl. 2}), pp. {W321-W326} 
    article DOI  
    Abstract: PredictProtein (http://www.predictprotein.org) is an Internet service for sequence analysis and the prediction of protein structure and function. Users submit protein sequences or alignments; PredictProtein returns multiple sequence alignments, PROSITE sequence motifs, low-complexity regions (SEG), nuclear localization signals, regions lacking regular structure (NORS) and predictions of secondary structure, solvent accessibility, globular regions, transmembrane helices, coiled-coil regions, structural switch regions, disulfide-bonds, sub-cellular localization and functional annotations. Upon request fold recognition by prediction-based threading, CHOP domain assignments, predictions of transmembrane strands and inter-residue contacts are also available. For all services, users can submit their query either by electronic mail or interactively via the World Wide Web.
    BibTeX:
    @article{Rost2004,
      author = {Rost, B and Yachdav, G and Liu, JF},
      title = {The PredictProtein server},
      journal = {NUCLEIC ACIDS RESEARCH},
      year = {2004},
      volume = {32},
      number = {Suppl. 2},
      pages = {W321-W326},
      doi = {{10.1093/nar/gkh377}}
    }
    
    Roth, A. & Ockenfels, A. Last-minute bidding and the rules for ending second-price auctions: Evidence from eBay and Amazon auctions on the Internet {2002} AMERICAN ECONOMIC REVIEW
    Vol. {92}({4}), pp. {1093-1103} 
    article  
    BibTeX:
    @article{Roth2002,
      author = {Roth, AE and Ockenfels, A},
      title = {Last-minute bidding and the rules for ending second-price auctions: Evidence from eBay and Amazon auctions on the Internet},
      journal = {AMERICAN ECONOMIC REVIEW},
      year = {2002},
      volume = {92},
      number = {4},
      pages = {1093-1103}
    }
    
    Ruiz-Sanchez, M., Biersack, E. & Dabbous, W. Survey and taxonomy of IP address lookup algorithms {2001} IEEE NETWORK
    Vol. {15}({2}), pp. {8-23} 
    article  
    Abstract: Due to the rapid growth of traffic in the Internet, backbone links of several gigabits per second are commonly deployed. To handle gigabit-per-second traffic rates, the backbone routers must be able to forward millions of packets per second on each of their ports. Fast IP address lookup in the routers, which uses the packet's destination address to determine for each packet the next hop, is therefore crucial to achieve the packet forwarding rates required. IP address lookup is difficult because it requires a longest matching prefix search. In the last couple of years, various algorithms for high-performance IP address lookup have been proposed. We present a survey of state-of-the-art IP address lookup algorithms and compare their performance in terms of lookup speed, scalability, and update overhead.
    BibTeX:
    @article{Ruiz-Sanchez2001,
      author = {Ruiz-Sanchez, MA and Biersack, EW and Dabbous, W},
      title = {Survey and taxonomy of IP address lookup algorithms},
      journal = {IEEE NETWORK},
      year = {2001},
      volume = {15},
      number = {2},
      pages = {8-23}
    }
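
    The operation every surveyed algorithm implements is longest matching prefix, and the baseline structure the survey builds from is the binary trie. A minimal sketch (IPv4 addresses as 32-bit integers; the routing entries are invented for the example):

    class TrieNode:
        __slots__ = ("children", "next_hop")
        def __init__(self):
            self.children = [None, None]   # one child per bit value
            self.next_hop = None           # set if a prefix ends here

    root = TrieNode()

    def insert(prefix, length, next_hop):
        node = root
        for i in range(length):
            bit = (prefix >> (31 - i)) & 1
            if node.children[bit] is None:
                node.children[bit] = TrieNode()
            node = node.children[bit]
        node.next_hop = next_hop

    def lookup(addr):
        """Walk the trie, remembering the last (= longest) matching prefix."""
        node, best = root, None
        for i in range(32):
            if node.next_hop is not None:
                best = node.next_hop
            node = node.children[(addr >> (31 - i)) & 1]
            if node is None:
                break
        else:
            if node.next_hop is not None:
                best = node.next_hop
        return best

    insert(0x0A000000, 8, "if0")    # 10.0.0.0/8
    insert(0x0A010000, 16, "if1")   # 10.1.0.0/16
    print(lookup(0x0A010203))       # 10.1.2.3 -> "if1": the longest match wins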
    
    Sacks, D., Bruns, D., Goldstein, D., Maclaren, N., McDonald, J. & Parrott, M. Guidelines and recommendations for laboratory analysis in the diagnosis and management of diabetes mellitus {2002} CLINICAL CHEMISTRY
    Vol. {48}({3}), pp. {436-472} 
    article  
    Abstract: Background: Multiple laboratory tests are used in the diagnosis and management of patients with diabetes mellitus. The quality of the scientific evidence supporting the use of these assays varies substantially. Approach: An expert committee drafted evidence-based recommendations for the use of laboratory analysis in patients with diabetes. An external panel of experts reviewed a draft of the guidelines, which were modified in response to the reviewers' suggestions. A revised draft was posted on the Internet and was presented at the AACC Annual Meeting in July, 2000. The recommendations were modified again in response to oral and written comments. The guidelines were reviewed by the Professional Practice Committee of the American Diabetes Association. Content: Measurement of plasma glucose remains the sole diagnostic criterion for diabetes. Monitoring of glycemic control is performed by the patients, who measure their own plasma or blood glucose with meters, and by laboratory analysis of glycated hemoglobin. The potential roles of noninvasive glucose monitoring, genetic testing, autoantibodies, microalbumin, proinsulin, C-peptide, and other analytes are addressed. Summary: The guidelines provide specific recommendations based on published data or derived from expert consensus. Several analytes are of minimal clinical value at the present time, and measurement of them is not recommended. (C) 2002 American Association for Clinical Chemistry.
    BibTeX:
    @article{Sacks2002,
      author = {Sacks, DB and Bruns, DE and Goldstein, DE and Maclaren, NK and McDonald, JM and Parrott, M},
      title = {Guidelines and recommendations for laboratory analysis in the diagnosis and management of diabetes mellitus},
      journal = {CLINICAL CHEMISTRY},
      year = {2002},
      volume = {48},
      number = {3},
      pages = {436-472}
    }
    
    Salvi, J., Armangue, X. & Batlle, J. A comparative review of camera calibrating methods with accuracy evaluation {2002} PATTERN RECOGNITION
    Vol. {35}({7}), pp. {1617-1635} 
    article  
    Abstract: Camera calibrating is a crucial problem for further metric scene measurement. Many techniques and some studies concerning calibration have been presented in the last few years. However, it is still difficult to go into details of a particular calibrating technique and compare its accuracy with respect to other methods. Principally, this problem emerges from the lack of a standardized notation and the existence of various methods of accuracy evaluation to choose from. This article presents a detailed review of some of the most used calibrating techniques in which the principal idea has been to present them all with the same notation. Furthermore, the techniques surveyed have been tested and their accuracy evaluated. Comparative results are shown and discussed in the article. Moreover, code and results are available on the Internet. (C) 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
    BibTeX:
    @article{Salvi2002,
      author = {Salvi, J and Armangue, X and Batlle, J},
      title = {A comparative review of camera calibrating methods with accuracy evaluation},
      journal = {PATTERN RECOGNITION},
      year = {2002},
      volume = {35},
      number = {7},
      pages = {1617-1635}
    }
    
    Sampath, H., Talwar, S., Tellado, J., Erceg, V. & Paulraj, A. A fourth-generation MIMO-OFDM broadband wireless system: Design, performance, and field trial results {2002} IEEE COMMUNICATIONS MAGAZINE
    Vol. {40}({9}), pp. {143-149} 
    article  
    Abstract: Increasing demand for high-performance 4G broadband wireless is enabled by the use of multiple antennas at both base station and subscriber ends. Multiple antenna technologies enable high capacities suited for Internet and multimedia services, and also dramatically increase range and reliability. In this article we describe a multiple-input multiple-output OFDM wireless communication system, lab test results, and recent field test results obtained in San Jose, California. These are the first MIMO system field tests to establish the performance of MIMO communication systems. Increased capacity, coverage, and reliability are clearly evident from the test results presented in this article.
    BibTeX:
    @article{Sampath2002,
      author = {Sampath, H and Talwar, S and Tellado, J and Erceg, V and Paulraj, A},
      title = {A fourth-generation MIMO-OFDM broadband wireless system: Design, performance, and field trial results},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {2002},
      volume = {40},
      number = {9},
      pages = {143-149}
    }
    
    SATO, M., HANSEN, J., MCCORMICK, M. & POLLACK, J. STRATOSPHERIC AEROSOL OPTICAL DEPTHS, 1850-1990 {1993} JOURNAL OF GEOPHYSICAL RESEARCH-ATMOSPHERES
    Vol. {98}({D12}), pp. {22987-22994} 
    article  
    Abstract: A global stratospheric aerosol database employed for climate simulations is described. For the period 1883-1990, aerosol optical depths are estimated from optical extinction data, whose quality increases with time over that period. For the period 1850-1882, aerosol optical depths are more crudely estimated from volcanological evidence for the volume of ejecta from major known volcanoes. The data set is available over the Internet.
    BibTeX:
    @article{SATO1993,
      author = {SATO, M and HANSEN, JE and MCCORMICK, MP and POLLACK, JB},
      title = {STRATOSPHERIC AEROSOL OPTICAL DEPTHS, 1850-1990},
      journal = {JOURNAL OF GEOPHYSICAL RESEARCH-ATMOSPHERES},
      year = {1993},
      volume = {98},
      number = {D12},
      pages = {22987-22994}
    }
    
    Savage, S., Wetherall, D., Karlin, A. & Anderson, T. Network support for IP traceback {2001} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {9}({3}), pp. {226-237} 
    article  
    Abstract: This paper describes a technique for tracing anonymous packet flooding attacks in the Internet back toward their source. This work is motivated by the increased frequency and sophistication of denial-of-service attacks and by the difficulty in tracing packets with incorrect, or ``spoofed,'' source addresses. In this paper, we describe a general purpose traceback mechanism based on probabilistic packet marking in the network. Our approach allows a victim to identify the network path(s) traversed by attack traffic without requiring interactive operational support from Internet Service Providers (ISPs). Moreover, this traceback can be performed ``post mortem''-after an attack has completed. We present an implementation of this technology that is incrementally deployable, (mostly) backward compatible, and can be efficiently implemented using conventional technology.
    BibTeX:
    @article{Savage2001,
      author = {Savage, S and Wetherall, D and Karlin, A and Anderson, T},
      title = {Network support for IP traceback},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {2001},
      volume = {9},
      number = {3},
      pages = {226-237}
    }
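
    The core idea is that each router overwrites a single mark field with small probability, so a victim collecting enough packets eventually observes every router on the attack path. This can be shown with a toy simulation; the path, marking probability, and packet count below are illustrative, and the paper's edge-sampling encoding is not reproduced here.

    import random

    random.seed(42)
    path = ["R1", "R2", "R3", "R4"]   # routers between attacker and victim
    P_MARK = 0.04                     # per-router marking probability

    def send_packet():
        """Each router on the path overwrites the mark with probability p,
        so the surviving mark names the last router that chose to mark."""
        mark = None
        for router in path:
            if random.random() < P_MARK:
                mark = router
        return mark

    # Post mortem, the victim reconstructs the path from marked packets alone
    # (routers far from the victim appear less often, which also reveals order).
    seen = {send_packet() for _ in range(20000)} - {None}
    print(sorted(seen))  # with enough traffic, all of R1..R4 appear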
    
    Scollo, M., Lal, A., Hyland, A. & Glantz, S. Review of the quality of studies on the economic effects of smoke-free policies on the hospitality industry {2003} TOBACCO CONTROL
    Vol. {12}({1}), pp. {13-20} 
    article  
    Abstract: Objective: To compare the quality and funding source of studies concluding a negative economic impact of smoke-free policies in the hospitality industry to studies concluding no such negative impact. Data sources: Researchers sought all studies produced before 31 August 2002. Articles published in scientific journals were located with Medline, Science Citation Index, Social Sciences Citation Index, Current Contents, PsychInfo, Econlit, and Healthstar. Unpublished studies were located from tobacco company websites and through internet searches. Study selection: 97 studies that made statements about economic impact were included. 93% of the studies located met the selection criteria as determined by consensus between multiple reviewers. Data extraction: Findings and characteristics of studies (apart from funding source) were classified independently by two researchers. A third assessor, blind both to the objective of the present study and to funding source, also classified each study. Data synthesis: In studies concluding a negative impact, the odds of using a subjective outcome measure were 4.0 times (95% confidence interval (CI) 1.4 to 9.6; p = 0.007) and the odds of not being peer reviewed were 20 times (95% CI 2.6 to 166.7; p = 0.004) those of studies concluding no such negative impact. All of the studies concluding a negative impact were supported by the tobacco industry. 94% of the tobacco industry supported studies concluded a negative economic impact compared to none of the non-industry supported studies. Conclusion: All of the best designed studies report no impact or a positive impact of smoke-free restaurant and bar laws on sales or employment. Policymakers can act to protect workers and patrons from the toxins in secondhand smoke, confident in rejecting industry claims that there will be an adverse economic impact.
    BibTeX:
    @article{Scollo2003,
      author = {Scollo, M and Lal, A and Hyland, A and Glantz, S},
      title = {Review of the quality of studies on the economic effects of smoke-free policies on the hospitality industry},
      journal = {TOBACCO CONTROL},
      year = {2003},
      volume = {12},
      number = {1},
      pages = {13-20}
    }
    
    Seligman, M., Steen, T., Park, N. & Peterson, C. Positive psychology progress - Empirical validation of interventions {2005} AMERICAN PSYCHOLOGIST
    Vol. {60}({5}), pp. {410-421} 
    article DOI  
    Abstract: Positive psychology has flourished in the last 5 years. The authors review recent developments in the field, including books, meetings, courses, and conferences. They also discuss the newly created classification of character strengths and virtues, a positive complement to the various editions of the Diagnostic and Statistical Manual of Mental Disorders (e.g., American Psychiatric Association, 1994), and present some cross-cultural findings that suggest a surprising ubiquity of strengths and virtues. Finally, the authors focus on psychological interventions that increase individual happiness. In a 6-group, random-assignment, placebo-controlled Internet study, the authors tested 5 purported happiness interventions and 1 plausible control exercise. They found that 3 of the interventions lastingly increased happiness and decreased depressive symptoms. Positive interventions can supplement traditional interventions that relieve suffering and may someday be the practical legacy of positive psychology.
    BibTeX:
    @article{Seligman2005,
      author = {Seligman, MEP and Steen, TA and Park, N and Peterson, C},
      title = {Positive psychology progress - Empirical validation of interventions},
      journal = {AMERICAN PSYCHOLOGIST},
      year = {2005},
      volume = {60},
      number = {5},
      pages = {410-421},
      doi = {{10.1037/0003-066X.60.5.410}}
    }
    
    Shah, D., Kwak, N. & Holbert, R. ``Connecting'' and ``disconnecting'' with civic life: Patterns of Internet use and the production of social capital {2001} POLITICAL COMMUNICATION
    Vol. {18}({2}), pp. {141-162} 
    article  
    Abstract: This article explores the relationship between Internet use and the individual-level production of social capital. To do so, the authors adopt a motivational perspective to distinguish among types of Internet use when examining the factors predicting civic engagement, interpersonal trust, and life contentment. The predictive power of new media use is then analyzed relative to key demographic, contextual, and traditional media use variables using the 1999 DDB Life Style Study. Although the size of associations is generally small, the data suggest that informational uses of the Internet are positively related to individual differences in the production of social capital, whereas social-recreational uses are negatively related to these civic indicators. Analyses within subsamples defined by generational age breaks further suggest that social capital production is related to Internet use among Generation X, while it is tied to television use among Baby Boomers and newspaper use among members of the Civic Generation. The possibility of life cycle and cohort effects is discussed.
    BibTeX:
    @article{Shah2001,
      author = {Shah, DV and Kwak, N and Holbert, RL},
      title = {``Connecting'' and ``disconnecting'' with civic life: Patterns of Internet use and the production of social capital},
      journal = {POLITICAL COMMUNICATION},
      year = {2001},
      volume = {18},
      number = {2},
      pages = {141-162}
    }
    
    Shah, D., McLeod, J. & Yoon, S. Communication, context, and community - An exploration of print, broadcast, and Internet influences {2001} COMMUNICATION RESEARCH
    Vol. {28}({4}), pp. {464-506} 
    article  
    Abstract: This research explores the influence of mass media use and community context on civic engagement. The article presents a multilevel test of print, broadcast, and Internet effects on interpersonal trust and civic participation that acknowledges there are (a) micro-level differences in the motives underlying media use, (b) age-cohort differences in patterns of media use and levels of civic engagement, and (c) macro-level differences in community/communication context. Accordingly, the effects of individual differences in media use and aggregate differences in community context are analyzed within generational subsamples using a pooled data set developed from the 1998 and 1999 DDB Life Style Studies. The data suggest that informational uses of mass media are positively related to the production of social capital, whereas social-recreational uses are negatively related to these civic indicators. Informational uses of mass media were also found to interact with community context to influence civic engagement. Analyses within subsamples find that among the youngest adult Americans, use of the Internet for information exchange more strongly influences trust in people and civic participation than do uses of traditional print and broadcast news media.
    BibTeX:
    @article{Shah2001a,
      author = {Shah, DV and McLeod, JM and Yoon, SH},
      title = {Communication, context, and community - An exploration of print, broadcast, and Internet influences},
      journal = {COMMUNICATION RESEARCH},
      year = {2001},
      volume = {28},
      number = {4},
      pages = {464-506}
    }
    
    Shakkottai, S., Rappaport, T. & Karlsson, P. Cross-layer design for wireless networks {2003} IEEE COMMUNICATIONS MAGAZINE
    Vol. {41}({10}), pp. {74-80} 
    article  
    Abstract: As the cellular and PCS world collides with wireless LANs and Internet-based packet data, new networking approaches will support the integration of voice and data on the composite infrastructure of cellular base stations and Ethernet-based wireless access points. This article highlights some of the past accomplishments and promising research avenues for an important topic in the creation of future wireless networks. In this article we address the issue of cross-layer networking, where the physical and MAC layer knowledge of the wireless medium is shared with higher layers, in order to provide efficient methods of allocating network resources and applications over the Internet. In essence, future networks will need to provide ``impedance matching'' of the instantaneous radio channel conditions and capacity needs with the traffic and congestion conditions found over the packet-based world of the Internet. Furthermore, such matching will need to be coordinated with a wide range of particular applications and user expectations, making the topic of cross-layer networking increasingly important for the evolving wireless buildout.
    BibTeX:
    @article{Shakkottai2003,
      author = {Shakkottai, S and Rappaport, TS and Karlsson, PC},
      title = {Cross-layer design for wireless networks},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {2003},
      volume = {41},
      number = {10},
      pages = {74-80}
    }
    
    Shankar, V., Smith, A. & Rangaswamy, A. Customer satisfaction and loyalty in online and offline environments {2003} INTERNATIONAL JOURNAL OF RESEARCH IN MARKETING
    Vol. {20}({2}), pp. {153-175} 
    article DOI  
    Abstract: We address the following questions that are becoming increasingly important to managers in service industries: Are the levels of customer satisfaction and loyalty for the same service different when customers choose the service online versus offline? If yes, what factors might explain these differences? How is the relationship between customer satisfaction and loyalty in the online environment different from that in the offline environment? We propose a conceptual framework and develop hypotheses about the effects of the online medium on customer satisfaction and loyalty and on the relationships between satisfaction and loyalty. We test the hypotheses through a simultaneous equation model using two data sets of online and offline customers of the lodging industry. The results are somewhat counterintuitive in that they show that whereas the level of customer satisfaction for a service chosen online is the same as when it is chosen offline, loyalty to the service provider is higher when the service is chosen online than offline. We also find that loyalty and satisfaction have a reciprocal relationship such that each positively reinforces the other, and this relationship between overall satisfaction and loyalty is further strengthened online. (C) 2003 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Shankar2003,
      author = {Shankar, V and Smith, AK and Rangaswamy, A},
      title = {Customer satisfaction and loyalty in online and offline environments},
      journal = {INTERNATIONAL JOURNAL OF RESEARCH IN MARKETING},
      year = {2003},
      volume = {20},
      number = {2},
      pages = {153-175},
      doi = {{10.1016/S0167-8116(03)00016-8}}
    }
    
    Shapira, N., Goldsmith, T., Keck, P., Khosla, U. & McElroy, S. Psychiatric features of individuals with problematic internet use {2000} JOURNAL OF AFFECTIVE DISORDERS
    Vol. {57}({1-3}), pp. {267-272} 
    article  
    Abstract: Background: Problematic internet use has been described in the psychological literature as `internet addiction' and `pathological internet use'. However, there are no studies using face-to-face standardized psychiatric evaluations to identify behavioral characteristics, psychiatric comorbidity or family psychiatric history of individuals with this behavior. Methods: Twenty individuals with problematic internet use were evaluated. Problematic internet use was defined as (1) uncontrollable, (2) markedly distressing, time-consuming or resulting in social, occupational or financial difficulties and (3) not solely present during hypomanic or manic symptoms. Evaluations included a semistructured interview about subjects' internet use, the Structured Clinical Interview for Diagnostic and Statistical Manual of Mental Disorders-IV (SCID-IV), family psychiatric history and the Yale-Brown Obsessive-Compulsive Scale (Y-BOCS) modified for internet use. Results: All 20 (100%) subjects' problematic internet use met DSM-IV criteria for an impulse control disorder (ICD) not otherwise specified (NOS). All 20 subjects had at least one lifetime DSM-IV Axis I diagnosis in addition to their problematic internet use (mean +/- SD = 5.1 +/- 3.5 diagnoses); 14 (70.0%) had a lifetime diagnosis of bipolar disorder (with 12 having bipolar I disorder). Limitations: Methodological limitations of this study included its small sample size, evaluation of psychiatric diagnoses by unblinded investigators, and lack of a control group. Conclusions: Problematic internet use may be associated with subjective distress, functional impairment and Axis I psychiatric disorders. (C) 2000 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Shapira2000,
      author = {Shapira, NA and Goldsmith, TD and Keck, PE and Khosla, UM and McElroy, SL},
      title = {Psychiatric features of individuals with problematic internet use},
      journal = {JOURNAL OF AFFECTIVE DISORDERS},
      year = {2000},
      volume = {57},
      number = {1-3},
      pages = {267-272}
    }
    
    Sharf, B. Communicating breast cancer on-line: Support and empowerment on the Internet {1997} WOMEN & HEALTH
    Vol. {26}({1}), pp. {65-84} 
    article  
    Abstract: Using participant-observation and discourse analysis, this study explores the communication occurring on the Breast Cancer List, an on-line discussion group which continues to grow in membership and activity. Issues discussed include the evolution of the List, who participates, and what topics are discussed. Three major dimensions are identified: exchange of information, social support, and personal empowerment. Social support via computer is compared with face-to-face groups. Empowerment centers on enhanced decision-making and preparation for new illness-related experiences. The influence of gender is considered in terms of communicative style and limitations of access. It is concluded that the List fulfills the functions of a community, with future concerns about information control and the potential to enhance patient-provider understanding.
    BibTeX:
    @article{Sharf1997,
      author = {Sharf, BF},
      title = {Communicating breast cancer on-line: Support and empowerment on the Internet},
      journal = {WOMEN & HEALTH},
      year = {1997},
      volume = {26},
      number = {1},
      pages = {65-84}
    }
    
    Shattuck, D., Sandor-Leahy, S., Schaper, K., Rottenberg, D. & Leahy, R. Magnetic resonance image tissue classification using a partial volume model {2001} NEUROIMAGE
    Vol. {13}({5}), pp. {856-876} 
    article DOI  
    Abstract: We describe a sequence of low-level operations to isolate and classify brain tissue within T1-weighted magnetic resonance images (MRI). Our method first removes nonbrain tissue using a combination of anisotropic diffusion filtering, edge detection, and mathematical morphology. We compensate for image nonuniformities due to magnetic field inhomogeneities by fitting a tricubic B-spline gain field to local estimates of the image nonuniformity spaced throughout the MRI volume. The local estimates are computed by fitting a partial volume tissue measurement model to histograms of neighborhoods about each estimate point. The measurement model uses mean tissue intensity and noise variance values computed from the global image and a multiplicative bias parameter that is estimated for each region during the histogram fit. Voxels in the intensity-normalized image are then classified into six tissue types using a maximum a posteriori classifier. This classifier combines the partial volume tissue measurement model with a Gibbs prior that models the spatial properties of the brain. We validate each stage of our algorithm on real and phantom data. Using data from the 20 normal MRI brain data sets of the Internet Brain Segmentation Repository, our method achieved average kappa indices of kappa = 0.746 +/- 0.114 for gray matter (GM) and kappa = 0.798 +/- 0.089 for white matter (WM) compared to expert labeled data. Our method achieved average kappa indices kappa = 0.893 +/- 0.041 for GM and kappa = 0.928 +/- 0.039 for WM compared to the ground truth labeling on 12 volumes from the Montreal Neurological Institute's BrainWeb phantom. (C) 2001 Academic Press.
    BibTeX:
    @article{Shattuck2001,
      author = {Shattuck, DW and Sandor-Leahy, SR and Schaper, KA and Rottenberg, DA and Leahy, RM},
      title = {Magnetic resonance image tissue classification using a partial volume model},
      journal = {NEUROIMAGE},
      year = {2001},
      volume = {13},
      number = {5},
      pages = {856-876},
      doi = {{10.1006/nimg.2000.0730}}
    }
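
    The kappa index used in this validation is commonly defined, for two labelled voxel sets A and B, as 2|A intersect B| / (|A| + |B|) (the Dice coefficient). A sketch on toy label arrays, assuming numpy is available:

    import numpy as np

    def kappa(a, b, label):
        """Dice-style kappa: 2|A & B| / (|A| + |B|) for one tissue label."""
        A, B = (a == label), (b == label)
        return 2.0 * np.logical_and(A, B).sum() / (A.sum() + B.sum())

    # Toy "volumes": 0 = background, 1 = grey matter, 2 = white matter.
    auto   = np.array([[1, 1, 2], [0, 2, 2], [1, 0, 2]])
    expert = np.array([[1, 2, 2], [0, 2, 2], [1, 0, 1]])
    print(f"GM kappa = {kappa(auto, expert, 1):.3f}")
    print(f"WM kappa = {kappa(auto, expert, 2):.3f}")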
    
    Shectman, S., Landy, S., Oemler, A., Tucker, D., Lin, H., Kirshner, R. & Schechter, P. The Las Campanas Redshift Survey {1996} ASTROPHYSICAL JOURNAL
    Vol. {470}({1, Part 1}), pp. {172-188} 
    article  
    Abstract: The Las Campanas Redshift Survey (LCRS) consists of 26,418 redshifts of galaxies selected from a CCD-based catalog obtained in the R band. The survey covers over 700 deg(2) in six strips, each 1.5 x 80 degrees, three each in the north and south Galactic caps. The median redshift in the survey is about 30,000 km s(-1). Essential features of the galaxy selection and redshift measurement methods are described and tabulated here. These details are important for subsequent analysis of the LCRS data. Two-dimensional representations of the redshift distributions reveal many repetitions of voids, on the scale of about 5000 km s(-1), sharply bounded by large walls of galaxies as seen in nearby surveys. Statistical investigations of the mean galaxy properties and of clustering on the large scale are reported elsewhere. These include studies of the luminosity function, power spectrum in two and three dimensions, correlation function, pairwise velocity distribution, identification of large-scale structures, and a group catalog. The LCRS redshift catalog will be made available to interested investigators at an internet web site and in archival form as an AAS CD-ROM.
    BibTeX:
    @article{Shectman1996,
      author = {Shectman, SA and Landy, SD and Oemler, A and Tucker, DL and Lin, HA and Kirshner, RP and Schechter, PL},
      title = {The Las Campanas Redshift Survey},
      journal = {ASTROPHYSICAL JOURNAL},
      year = {1996},
      volume = {470},
      number = {1, Part 1},
      pages = {172-188}
    }
    
    SHENKER, S. FUNDAMENTAL DESIGN ISSUES FOR THE FUTURE INTERNET {1995} IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
    Vol. {13}({7}), pp. {1176-1188} 
    article  
    Abstract: The Internet has been a startling and dramatic success. Originally designed to link together a small group of researchers, the Internet is now used by many millions of people. However, multimedia applications, with their novel traffic characteristics and service requirements, pose an interesting challenge to the technical foundations of the Internet. In this paper we address some of the fundamental architectural design issues facing the future Internet. In particular, we discuss whether the Internet should adopt a new service model, how this service model should be invoked, and whether this service model should include admission control. These architectural issues are discussed in a nonrigorous manner, through the use of a utility function formulation and some simple models. While we do advocate some design choices over others, the main purpose here is to provide a framework for discussing the various architectural alternatives.
    BibTeX:
    @article{SHENKER1995,
      author = {SHENKER, S},
      title = {FUNDAMENTAL DESIGN ISSUES FOR THE FUTURE INTERNET},
      journal = {IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS},
      year = {1995},
      volume = {13},
      number = {7},
      pages = {1176-1188}
    }
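
    The utility-function formulation can be made concrete with a toy example: an elastic application (diminishing returns) gains from any extra bandwidth, while a real-time application modelled as a step function gains nothing below its threshold, which is one reason a single best-effort service class can be inefficient. The utility shapes and numbers below are illustrative assumptions, not the paper's models.

    import math

    def elastic(b):    # e.g. file transfer: diminishing but positive returns
        return math.log1p(b)

    def realtime(b):   # e.g. audio stream: useless below its bandwidth threshold
        return 1.0 if b >= 0.8 else 0.0

    # One unit of capacity shared by the two flows: an equal best-effort split
    # versus a differentiated allocation that meets the real-time threshold.
    for alloc in [(0.5, 0.5), (0.2, 0.8)]:
        total = elastic(alloc[0]) + realtime(alloc[1])
        print(alloc, round(total, 3))   # (0.2, 0.8) yields higher total utility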
    
    Shim, J., Warkentin, M., Courtney, J., Power, D., Sharda, R. & Carlsson, C. Past, present, and future of decision support technology {2002} DECISION SUPPORT SYSTEMS
    Vol. {33}({2}), pp. {111-126} 
    article  
    Abstract: Since the early 1970s, decision support systems (DSS) technology and applications have evolved significantly. Many technological and organizational developments have exerted an impact on this evolution. DSS once utilized more limited database, modeling, and user interface functionality, but technological innovations have enabled far more powerful DSS functionality. DSS once supported individual decision-makers, but later DSS technologies were applied to workgroups or teams, especially virtual teams. The advent of the Web has enabled inter-organizational decision support systems, and has given rise to numerous new applications of existing technology as well as many new decision support technologies themselves. It seems likely that mobile tools, mobile e-services, and wireless Internet protocols will mark the next major set of developments in DSS. This paper discusses the evolution of DSS technologies and issues related to DSS definition, application, and impact. It then presents four powerful decision support tools, including data warehouses, OLAP, data mining, and Web-based DSS. Issues in the field of collaborative support systems and virtual teams are presented. This paper also describes the state of the art of optimization-based decision support and active decision support for the next millennium. Finally, some implications for the future of the field are discussed. (C) 2002 Published by Elsevier Science B.V.
    BibTeX:
    @article{Shim2002,
      author = {Shim, JP and Warkentin, M and Courtney, JF and Power, DJ and Sharda, R and Carlsson, C},
      title = {Past, present, and future of decision support technology},
      journal = {DECISION SUPPORT SYSTEMS},
      year = {2002},
      volume = {33},
      number = {2},
      pages = {111-126}
    }
    
    Siess, L., Dufour, E. & Forestini, M. An internet server for pre-main sequence tracks of low- and intermediate-mass stars {2000} ASTRONOMY & ASTROPHYSICS
    Vol. {358}({2}), pp. {593-599} 
    article  
    Abstract: We present new grids of pre-main sequence (PMS) tracks for stars in the mass range 0.1 to 7.0 M☉. The computations were performed for four different metallicities (Z=0.01, 0.02, 0.03 and 0.04). A fifth table has been computed for the solar composition (Z=0.02), including a moderate overshooting. We describe the update in the physics of the Grenoble stellar evolution code which concerns mostly changes in the equation of state (EOS) adopting the formalism proposed by Pols et al. (1995) and in the treatment of the boundary condition. Comparisons of our models with other grids demonstrate the validity of this EOS in the domain of very low-mass stars. Finally, we present a new server dedicated to PMS stellar evolution which allows the determination of stellar parameters from observational data, the calculation of isochrones, the retrieval of evolutionary files and the possibility to generate graphic outputs.
    BibTeX:
    @article{Siess2000,
      author = {Siess, L and Dufour, E and Forestini, M},
      title = {An internet server for pre-main sequence tracks of low- and intermediate-mass stars},
      journal = {ASTRONOMY & ASTROPHYSICS},
      year = {2000},
      volume = {358},
      number = {2},
      pages = {593-599}
    }
    
    Silberg, W., Lundberg, G. & Musacchio, R. Assessing, controlling, and assuring the quality of medical information on the Internet - Caveant lector et viewer - Let the reader and viewer beware {1997} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {277}({15}), pp. {1244-1245} 
    article  
    BibTeX:
    @article{Silberg1997,
      author = {Silberg, WM and Lundberg, GD and Musacchio, RA},
      title = {Assessing, controlling, and assuring the quality of medical information on the Internet - Caveant lector et viewer - Let the reader and viewer beware},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {1997},
      volume = {277},
      number = {15},
      pages = {1244-1245}
    }
    
    SILVA, D. & CORNELL, M. A NEW LIBRARY OF STELLAR OPTICAL-SPECTRA {1992} ASTROPHYSICAL JOURNAL SUPPLEMENT SERIES
    Vol. {81}({2}), pp. {865-881} 
    article  
    Abstract: A new digital optical stellar library is presented. It consists of spectra covering 3510-8930 Å at 11 angstrom resolution for 72 different stellar types. These types extend over the spectral classes O-M and luminosity classes I-V. Most spectra are of solar metallicity stars but some metal-rich and metal-poor spectra are included. This new library is quantitatively compared to two previously published libraries. The library has been submitted to the Astronomical Data Center at the NASA Goddard Space Flight Center for convenient distribution. It is also available via anonymous ftp over the Internet.
    BibTeX:
    @article{SILVA1992,
      author = {SILVA, DR and CORNELL, ME},
      title = {A NEW LIBRARY OF STELLAR OPTICAL-SPECTRA},
      journal = {ASTROPHYSICAL JOURNAL SUPPLEMENT SERIES},
      year = {1992},
      volume = {81},
      number = {2},
      pages = {865-881}
    }
    
    Smith, G. & Pell, J. Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials {2003} BRITISH MEDICAL JOURNAL
    Vol. {327}({7429}), pp. {1459-1461} 
    article  
    Abstract: Objectives To determine whether parachutes are effective in preventing major trauma related to gravitational challenge. Design Systematic review of randomised controlled trials. Data sources: Medline, Web of Science, Embase, and the Cochrane Library databases; appropriate internet sites and citation lists. Study selection: Studies showing the effects of using a parachute during free fall. Main outcome measure Death or major trauma, defined as an injury severity score > 15. Results We were unable to identify any randomised controlled trials of parachute intervention. Conclusions As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.
    BibTeX:
    @article{Smith2003,
      author = {Smith, GCS and Pell, JP},
      title = {Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials},
      journal = {BRITISH MEDICAL JOURNAL},
      year = {2003},
      volume = {327},
      number = {7429},
      pages = {1459-1461}
    }
    
    Sole, R. & Valverde, S. Information transfer and phase transitions in a model of internet traffic {2001} PHYSICA A
    Vol. {289}({3-4}), pp. {595-605} 
    article  
    Abstract: In a recent study, Ohira and Sawatari presented a simple model of computer network traffic dynamics. These authors showed that a phase transition point is present separating the low-traffic phase with no congestion from the congestion phase as the packet creation rate increases. We further investigated this model by relaxing the network topology using a random location of routers. It is shown that the model exhibits nontrivial scaling properties close to the critical point, which reproduce some of the observed real Internet features. At criticality, the net shows maximum information transfer and efficiency. It is shown that some of the key properties of this model are shared by highway traffic models, as previously conjectured by some authors. The relevance to Internet dynamics and to the performance of parallel arrays of processors is discussed. (C) 2001 Published by Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Sole2001,
      author = {Sole, RV and Valverde, S},
      title = {Information transfer and phase transitions in a model of internet traffic},
      journal = {PHYSICA A},
      year = {2001},
      volume = {289},
      number = {3-4},
      pages = {595-605}
    }
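
    The congestion transition described here is easy to reproduce in miniature. The sketch below simulates a simplified square-lattice variant of the Ohira-Sawatari model (FIFO routers forwarding one packet per step, packets created with probability LAMBDA per node per step); the topology, parameter values, and routing rule are illustrative assumptions, not the authors' exact setup. Sweeping LAMBDA upward makes the mean queue length diverge past a critical creation rate.
    Example (Python):
      import random

      L = 10            # lattice side: L*L routers (assumed toy topology)
      STEPS = 2000      # simulation length
      LAMBDA = 0.05     # packet creation probability (the control parameter)

      queues = {(x, y): [] for x in range(L) for y in range(L)}

      def step_toward(src, dst):
          # deterministic x-then-y shortest-path routing
          x, y = src
          if x != dst[0]:
              x += 1 if dst[0] > x else -1
          elif y != dst[1]:
              y += 1 if dst[1] > y else -1
          return (x, y)

      total = 0
      for t in range(STEPS):
          for node in queues:                  # packet creation
              if random.random() < LAMBDA:
                  queues[node].append((random.randrange(L), random.randrange(L)))
          moves = []
          for node, q in queues.items():       # each router serves one packet
              if q and q[0] == node:
                  q.pop(0)                     # delivered at its destination
              elif q:
                  moves.append((node, step_toward(node, q[0])))
          for node, nxt in moves:
              queues[nxt].append(queues[node].pop(0))
          total += sum(len(q) for q in queues.values())

      print("mean queue length:", total / (STEPS * L * L))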
    
    Solomon, D., Davey, D., Kurman, R., Moriarty, A., O'Connor, D., Prey, M., Raab, S., Sherman, M., Wilbur, D., Wright, T. & Young, N. The 2001 Bethesda System - Terminology for reporting results of cervical cytology {2002} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {287}({16}), pp. {2114-2119} 
    article  
    Abstract: Objectives The Bethesda 2001 Workshop was convened to evaluate and update the 1991 Bethesda System terminology for reporting the results of cervical cytology. A primary objective was to develop a new approach to broaden participation in the consensus process. Participants Forum groups composed of 6 to 10 individuals were responsible for developing recommendations for discussion at the workshop. Each forum group included at least 1 cytopathologist, cytotechnologist, clinician, and international representative to ensure a broad range of views and interests. More than 400 cytopathologists, cytotechnologists, histopathologists, family practitioners, gynecologists, public health physicians, epidemiologists, patient advocates, and attorneys participated in the workshop, which was convened by the National Cancer Institute and cosponsored by 44 professional societies. More than 20 countries were represented. Evidence Literature review, expert opinion, and input from an Internet bulletin board were all considered in developing recommendations. The strength of evidence of the scientific data was considered of paramount importance. Consensus Process Bethesda 2001 was a year-long iterative review process. An Internet bulletin board was used for discussion of issues and drafts of recommendations. More than 1000 comments were posted to the bulletin board over the course of 6 months. The Bethesda Workshop, held April 30-May 2, 2001, was open to the public. Postworkshop recommendations were posted on the bulletin board for a last round of critical review prior to finalizing the terminology. Conclusions Bethesda 2001 was developed with broad participation in the consensus process. The 2001 Bethesda System terminology reflects important advances in biological understanding of cervical neoplasia and cervical screening technology.
    BibTeX:
    @article{Solomon2002,
      author = {Solomon, D and Davey, D and Kurman, R and Moriarty, A and O'Connor, D and Prey, M and Raab, S and Sherman, M and Wilbur, D and Wright, T and Young, N},
      title = {The 2001 Bethesda System - Terminology for reporting results of cervical cytology},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {2002},
      volume = {287},
      number = {16},
      pages = {2114-2119}
    }
    
    Song, C., Havlin, S. & Makse, H. Self-similarity of complex networks {2005} NATURE
    Vol. {433}({7024}), pp. {392-395} 
    article DOI  
    Abstract: Complex networks have been studied extensively owing to their relevance to many real systems such as the world-wide web, the Internet, energy landscapes and biological and social networks(1-5). A large number of real networks are referred to as `scale-free' because they show a power-law distribution of the number of links per node(1,6,7). However, it is widely believed that complex networks are not invariant or self-similar under a length-scale transformation. This conclusion originates from the `small-world' property of these networks, which implies that the number of nodes increases exponentially with the `diameter' of the network(8-11), rather than the power-law relation expected for a self-similar structure. Here we analyse a variety of real complex networks and find that, on the contrary, they consist of self-repeating patterns on all length scales. This result is achieved by the application of a renormalization procedure that coarse-grains the system into boxes containing nodes within a given `size'. We identify a power-law relation between the number of boxes needed to cover the network and the size of the box, defining a finite self-similar exponent. These fundamental properties help to explain the scale-free nature of complex networks and suggest a common self-organization dynamics.
    BibTeX:
    @article{Song2005,
      author = {Song, CM and Havlin, S and Makse, HA},
      title = {Self-similarity of complex networks},
      journal = {NATURE},
      year = {2005},
      volume = {433},
      number = {7024},
      pages = {392-395},
      doi = {{10.1038/nature03248}}
    }
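
    In practice the renormalization reduces to counting the boxes needed to cover the network at each box size. The fragment below is a crude random-burning approximation of box covering (not the paper's exact greedy coloring), assuming the networkx library and using a Barabasi-Albert graph as a stand-in network; for a self-similar network the counts should follow N_B(l_B) ~ l_B^(-d_B).
    Example (Python):
      import random
      import networkx as nx

      def box_count(G, lB):
          """Cover G with boxes grown from random seeds (radius (lB-1)//2)."""
          uncovered = set(G.nodes())
          n_boxes = 0
          while uncovered:
              seed = random.choice(sorted(uncovered))
              sphere = nx.single_source_shortest_path_length(G, seed, cutoff=(lB - 1) // 2)
              uncovered -= uncovered.intersection(sphere)   # one box removed
              n_boxes += 1
          return n_boxes

      G = nx.barabasi_albert_graph(1000, 2)    # stand-in scale-free network
      for lB in (3, 5, 7, 9):
          print(lB, box_count(G, lB))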
    
    Spek, V., Cuijpers, P., Nyklicek, I., Riper, H., Keyzer, J. & Pop, V. Internet-based cognitive behaviour therapy for symptoms of depression and anxiety: a meta-analysis {2007} PSYCHOLOGICAL MEDICINE
    Vol. {37}({3}), pp. {319-328} 
    article DOI  
    Abstract: Background. We studied to what extent internet-based cognitive behaviour therapy (CBT) programs for symptoms of depression and anxiety are effective. Method. A meta-analysis of 12 randomized controlled trials. Results. The effects of internet-based CBT were compared to control conditions in 13 contrast groups with a total number of 2334 participants. A meta-analysis on treatment contrasts resulted in a moderate to large mean effect size [fixed effects analysis (FEA) d=0.40, mixed effects analysis (MEA) d=0.60] and significant heterogeneity. Therefore, two sets of post hoc subgroup analyses were carried out. Analyses on the type of symptoms revealed that interventions for symptoms of depression had a small mean effect size (FEA d=0.27, MEA d=0.32) and significant heterogeneity. Further analyses showed that one study could be regarded as an outlier. Analyses without this study showed a small mean effect size and moderate, non-significant heterogeneity. Interventions for anxiety had a large mean effect size (FEA and MEA d=0.96) and very low heterogeneity. When examining the second set of subgroups, based on therapist assistance, no significant heterogeneity was found. Interventions with therapist support (n=5) had a large mean effect size, while interventions without therapist support (n=6) had a small mean effect size (FEA d=0.24, MEA d=0.26). Conclusions. In general, effect sizes of internet-based interventions for symptoms of anxiety were larger than effect sizes for depressive symptoms; however, this might be explained by differences in the amount of therapist support.
    BibTeX:
    @article{Spek2007,
      author = {Spek, Viola and Cuijpers, Pim and Nyklicek, Ivan and Riper, Heleen and Keyzer, Jules and Pop, Victor},
      title = {Internet-based cognitive behaviour therapy for symptoms of depression and anxiety: a meta-analysis},
      journal = {PSYCHOLOGICAL MEDICINE},
      year = {2007},
      volume = {37},
      number = {3},
      pages = {319-328},
      doi = {{10.1017/S0033291706008944}}
    }
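
    The fixed-effects pooling reported above is inverse-variance weighting of the per-trial effect sizes. A minimal sketch with hypothetical numbers (not the study's data):
    Example (Python):
      def fixed_effect(ds, vs):
          """Pool effect sizes ds with sampling variances vs."""
          ws = [1.0 / v for v in vs]
          d = sum(w * d_i for w, d_i in zip(ws, ds)) / sum(ws)
          se = (1.0 / sum(ws)) ** 0.5
          return d, se                        # pooled d and its standard error

      ds = [0.30, 0.55, 0.90]                 # hypothetical per-trial d values
      vs = [0.02, 0.03, 0.05]                 # hypothetical sampling variances
      print(fixed_effect(ds, vs))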
    
    Srinivasan, S., Anderson, R. & Ponnavolu, K. Customer loyalty in e-commerce: an exploration of its antecedents and consequences {2002} JOURNAL OF RETAILING
    Vol. {78}({1}), pp. {41-50} 
    article  
    Abstract: This paper investigates the antecedents and consequences of customer loyalty in an online business-to-consumer (B2C) context. We identify eight factors (the 8Cs-customization, contact interactivity, care, community, convenience, cultivation, choice, and character) that potentially impact e-loyalty and develop scales to measure these factors. Data collected from 1,211 online customers demonstrate that all these factors, except convenience, impact e-loyalty. The data also reveal that e-loyalty has an impact on two customer-related outcomes: word-of- mouth promotion and willingness to pay more. (C) 2002 by New York University. All rights reserved.
    BibTeX:
    @article{Srinivasan2002,
      author = {Srinivasan, SS and Anderson, R and Ponnavolu, K},
      title = {Customer loyalty in e-commerce: an exploration of its antecedents and consequences},
      journal = {JOURNAL OF RETAILING},
      year = {2002},
      volume = {78},
      number = {1},
      pages = {41-50}
    }
    
    Srinivasan, V. & Varghese, G. Fast address lookups using controlled prefix expansion {1999} ACM TRANSACTIONS ON COMPUTER SYSTEMS
    Vol. {17}({1}), pp. {1-40} 
    article  
    Abstract: Internet (IP) address lookup is a major bottleneck in high-performance routers. IP address lookup is challenging because it requires a longest matching prefix lookup. It is compounded by increasing routing table sizes, increased traffic, higher-speed links, and the migration to 128-bit IPv6 addresses. We describe how IP lookups and updates can be made faster using a set of transformation techniques. Our main technique, controlled prefix expansion, transforms a set of prefixes into an equivalent set with fewer prefix lengths. In addition, we use optimization techniques based on dynamic programming, and local transformations of data structures to improve cache behavior. When applied to trie search, our techniques provide a range of algorithms (Expanded Tries) whose performance can be tuned. For example, using a processor with 1MB of L2 cache, search of the MaeEast database containing 38000 prefixes can be done in 3 L2 cache accesses. On a 300MHz Pentium II which takes 4 cycles for accessing the first word of the L2 cacheline, this algorithm has a worst-case search time of 180 nsec., a worst-case insert/delete time of 2.5 msec., and an average insert/delete time of 4 usec. Expanded tries provide faster search and faster insert/delete times than earlier lookup algorithms. When applied to Binary Search on Levels, our techniques improve worst-case search times by nearly a factor of 2 (using twice as much storage) for the MaeEast database. Our approach to algorithm design is based on measurements using the VTune tool on a Pentium to obtain dynamic clock cycle counts. Our techniques also apply to similar address lookup problems in other network protocols.
    BibTeX:
    @article{Srinivasan1999,
      author = {Srinivasan, V and Varghese, G},
      title = {Fast address lookups using controlled prefix expansion},
      journal = {ACM TRANSACTIONS ON COMPUTER SYSTEMS},
      year = {1999},
      volume = {17},
      number = {1},
      pages = {1-40}
    }
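
    The central transformation is easy to illustrate: each prefix is expanded to the next allowed length, and expansions never override entries derived from more specific original prefixes, so longest-prefix-match semantics are preserved. A toy sketch with binary-string prefixes and invented route labels (not the paper's code); the allowed lengths must cover the longest prefix.
    Example (Python):
      def expand(prefixes, levels):
          """prefixes: {bit-string: next hop}; levels: sorted allowed lengths."""
          table = {}
          for p in sorted(prefixes, key=len):   # shortest first, so longer
              target = next(l for l in levels if l >= len(p))   # originals win
              extra = target - len(p)
              suffixes = [format(i, '0%db' % extra) for i in range(2 ** extra)] if extra else ['']
              for s in suffixes:
                  table[p + s] = prefixes[p]
          return table

      routes = {'1': 'A', '101': 'B', '10111': 'C'}
      print(expand(routes, levels=[2, 5]))      # 2 prefix lengths instead of 3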
    
    Srivastava, S., John, O., Gosling, S. & Potter, J. Development of personality in early and middle adulthood: Set like plaster or persistent change? {2003} JOURNAL OF PERSONALITY AND SOCIAL PSYCHOLOGY
    Vol. {84}({5}), pp. {1041-1053} 
    article DOI  
    Abstract: Different theories make different predictions about how mean levels of personality traits change in adulthood. The biological view of the Five-factor theory proposes the plaster hypothesis: All personality traits stop changing by age 30. In contrast, contextualist perspectives propose that changes should be more varied and should persist throughout adulthood. This study compared these perspectives in a large (N = 132,515) sample of adults aged 21-60 who completed a Big Five personality measure on the Internet. Conscientiousness and Agreeableness increased throughout early and middle adulthood at varying rates; Neuroticism declined among women but did not change among men. The variety in patterns of change suggests that the Big Five traits are complex phenomena subject to a variety of developmental influences.
    BibTeX:
    @article{Srivastava2003,
      author = {Srivastava, S and John, OP and Gosling, SD and Potter, J},
      title = {Development of personality in early and middle adulthood: Set like plaster or persistent change?},
      journal = {JOURNAL OF PERSONALITY AND SOCIAL PSYCHOLOGY},
      year = {2003},
      volume = {84},
      number = {5},
      pages = {1041-1053},
      doi = {{10.1037/0022-3514.84.5.1041}}
    }
    
    Stanton, J. An empirical assessment of data collection using the Internet {1998} PERSONNEL PSYCHOLOGY
    Vol. {51}({3}), pp. {709-725} 
    article  
    Abstract: Identical questionnaire items were used to gather data from 2 samples of employees. One sample (n = 50) responded to a survey implemented on the World Wide Web. Another sample (n = 181) filled out a paper version of the survey. Analyses of the 2 data sets supported an exploration of the viability of World Wide Web data collection. The World Wide Web data had fewer missing values than the paper and pencil data. A covariance analysis simultaneously conducted in both samples indicated similar covariance structures among the tested variables. The costs and benefits of using access controls to improve sampling are discussed. Four applications that do not require such access controls are discussed.
    BibTeX:
    @article{Stanton1998,
      author = {Stanton, JM},
      title = {An empirical assessment of data collection using the Internet},
      journal = {PERSONNEL PSYCHOLOGY},
      year = {1998},
      volume = {51},
      number = {3},
      pages = {709-725}
    }
    
    Star, S. & Ruhleder, K. Steps toward an ecology of infrastructure: Design and access for large information spaces {1996} INFORMATION SYSTEMS RESEARCH
    Vol. {7}({1}), pp. {111-134} 
    article  
    Abstract: We analyze a large-scale custom software effort, the Worm Community System (WCS), a collaborative system designed for a geographically dispersed community of geneticists. There were complex challenges in creating this infrastructural tool, ranging from simple lack of resources to complex organizational and intellectual communication failures and tradeoffs. Despite high user satisfaction with the system and interface, and extensive user needs assessment, feedback, and analysis, many users experienced difficulties in signing on and use. The study was conducted during a time of unprecedented growth in the Internet and its utilities (1991-1994), and many respondents turned to the World Wide Web for their information exchange. Using Bateson's model of levels of learning, we analyze the levels of infrastructural complexity involved in system access and designer-user communication. We analyze the connection between systems development aimed at supporting specific forms of collaborative knowledge work, local organizational transformation, and large-scale infrastructural change.
    BibTeX:
    @article{Star1996,
      author = {Star, SL and Ruhleder, K},
      title = {Steps toward an ecology of infrastructure: Design and access for large information spaces},
      journal = {INFORMATION SYSTEMS RESEARCH},
      year = {1996},
      volume = {7},
      number = {1},
      pages = {111-134}
    }
    
    Stenseth, N., Ottersen, G., Hurrell, J., Mysterud, A., Lima, M., Chan, K., Yoccoz, N. & Adlandsvik, B. Studying climate effects on ecology through the use of climate indices: the North Atlantic Oscillation, El Nino Southern Oscillation and beyond {2003} PROCEEDINGS OF THE ROYAL SOCIETY OF LONDON SERIES B-BIOLOGICAL SCIENCES
    Vol. {270}({1529}), pp. {2087-2096} 
    article DOI  
    Abstract: Whereas the El Nino Southern Oscillation (ENSO) affects weather and climate variability worldwide, the North Atlantic Oscillation (NAO) represents the dominant climate pattern in the North Atlantic region. Both climate systems have been demonstrated to considerably influence ecological processes. Several other large-scale climate patterns also exist. Although less well known outside the field of climatology, these patterns are also likely to be of ecological interest. We provide an overview of these climate patterns within the context of the ecological effects of climate variability. The application of climate indices by definition reduces complex space and time variability into simple measures, `packages of weather'. The disadvantages of using global climate indices are all related to the fact that another level of problems is added to the ecology-climate interface, namely the link between global climate indices and local climate. We identify issues related to: (i) spatial variation; (ii) seasonality; (iii) non-stationarity; (iv) nonlinearity; and (v) lack of correlation in the relationship between global and local climate. The main advantages of using global climate indices are: (i) biological effects may be related more strongly to global indices than to any single local climate variable; (ii) it helps to avoid problems of model selection; (iii) it opens the possibility for ecologists to make predictions; and (iv) they are typically readily available on the Internet.
    BibTeX:
    @article{Stenseth2003,
      author = {Stenseth, NC and Ottersen, G and Hurrell, JW and Mysterud, A and Lima, M and Chan, KS and Yoccoz, NG and Adlandsvik, B},
      title = {Studying climate effects on ecology through the use of climate indices: the North Atlantic Oscillation, El Nino Southern Oscillation and beyond},
      journal = {PROCEEDINGS OF THE ROYAL SOCIETY OF LONDON SERIES B-BIOLOGICAL SCIENCES},
      year = {2003},
      volume = {270},
      number = {1529},
      pages = {2087-2096},
      doi = {{10.1098/rspb.2003.2415}}
    }
    
    Stenson, P., Ball, E., Mort, M., Phillips, A., Shiel, J., Thomas, N., Abeysinghe, S., Krawczak, M. & Cooper, D. Human gene mutation database (HGMD (R)): 2003 update {2003} HUMAN MUTATION
    Vol. {21}({6}), pp. {577-581} 
    article DOI  
    Abstract: The Human Gene Mutation Database (HGMD) constitutes a comprehensive core collection of data on germline mutations in nuclear genes underlying or associated with human inherited disease (www.hgmd.org). Data catalogued includes: single base-pair substitutions in coding, regulatory and splicing-relevant regions; micro-deletions and micro-insertions; indels; triplet repeat expansions as well as gross deletions; insertions; duplications; and complex rearrangements. Each mutation is entered into HGMD only once in order to avoid confusion between recurrent and identical-by-descent lesions. By March 2003, the database contained in excess of 39,415 different lesions detected in 1,516 different nuclear genes, with new entries currently accumulating at a rate exceeding 5,000 per annum. Since its inception, HGMD has been expanded to include cDNA reference sequences for more than 87% of listed genes, splice junction sequences, disease-associated and functional polymorphisms, as well as links to data present in publicly available online locus-specific mutation databases. Although HGMD has recently entered into a licensing agreement with Celera Genomics (Rockville, MD), mutation data will continue to be made freely available via the Internet. (C) 2003 Wiley-Liss, Inc.
    BibTeX:
    @article{Stenson2003,
      author = {Stenson, PD and Ball, EV and Mort, M and Phillips, AD and Shiel, JA and Thomas, NST and Abeysinghe, S and Krawczak, M and Cooper, DN},
      title = {Human gene mutation database (HGMD (R)): 2003 update},
      journal = {HUMAN MUTATION},
      year = {2003},
      volume = {21},
      number = {6},
      pages = {577-581},
      doi = {{10.1002/humu.10212}}
    }
    
    Stockle, C., Donatelli, M. & Nelson, R. CropSyst, a cropping systems simulation model {2003} EUROPEAN JOURNAL OF AGRONOMY
    Vol. {18}({3-4}), pp. {289-307} 
    article  
    Abstract: CropSyst is a multi-year, multi-crop, daily time step cropping systems simulation model developed to serve as an analytical tool to study the effect of climate, soils, and management on cropping systems productivity and the environment. CropSyst simulates the soil water and nitrogen budgets, crop growth and development, crop yield, residue production and decomposition, soil erosion by water, and salinity. The development of CropSyst started in the early 1990s, evolving to a suite of programs including a cropping systems simulator (CropSyst), a weather generator (ClimGen), GIS-CropSyst cooperator program (ArcCS), a watershed model (CropSyst Watershed), and several miscellaneous utility programs. CropSyst and associated programs can be downloaded free of charge over the Internet. One key feature of CropSyst is the implementation of a generic crop simulator that enables the simulation of both yearly and multi-year crops and crop rotations via a single set of parameters. Simulations can last a fraction of a year to hundreds of years. The model has been evaluated in many world locations by comparing model estimates to data collected in field experiments. CropSyst has been applied to perform risk and economic analyses of scenarios involving different cropping systems, management options, and soil and climatic conditions. An extensive list of references related to model development, evaluation, and application is provided. (C) 2002 Elsevier Science B.V. All rights reserved.
    BibTeX:
    @article{Stockle2003,
      author = {Stockle, CO and Donatelli, M and Nelson, R},
      title = {CropSyst, a cropping systems simulation model},
      journal = {EUROPEAN JOURNAL OF AGRONOMY},
      year = {2003},
      volume = {18},
      number = {3-4},
      pages = {289-307},
      note = {2nd International Symposium on Modeling Cropping Systems, FLORENCE, ITALY, JUL 16-18, 2001}
    }
    
    Stockwell, D. & Peters, D. The GARP modelling system: problems and solutions to automated spatial prediction {1999} INTERNATIONAL JOURNAL OF GEOGRAPHICAL INFORMATION SCIENCE
    Vol. {13}({2}), pp. {143-158} 
    article  
    Abstract: This paper is concerned with the problems and solutions to reliable analysis of arbitrary datasets. Our approach is to describe components of a system called the GARP Modelling System (GMS) which we have developed for automating predictive spatial modelling of the distribution of species of plants and animals. The essence of the system is an underlying generic spatial modelling method which filters out potential sources of errors. The approach is generally applicable, however, as the statistical problems arising in arbitrary spatial data analysis potentially apply to any domain. For ease of development, GMS is integrated with the facilities of existing database and visualization tools, and Internet browsers. The GMS is an example of a class of application which has been very successful for providing spatial data analysis in a simple-to-use way via the Internet.
    BibTeX:
    @article{Stockwell1999,
      author = {Stockwell, D and Peters, D},
      title = {The GARP modelling system: problems and solutions to automated spatial prediction},
      journal = {INTERNATIONAL JOURNAL OF GEOGRAPHICAL INFORMATION SCIENCE},
      year = {1999},
      volume = {13},
      number = {2},
      pages = {143-158}
    }
    
    Stohl, A., Eckhardt, S., Forster, C., James, P. & Spichtinger, N. On the pathways and timescales of intercontinental air pollution transport {2002} JOURNAL OF GEOPHYSICAL RESEARCH-ATMOSPHERES
    Vol. {107}({D23}) 
    article DOI  
    Abstract: This paper presents results of a 1- year simulation of the transport of six passive tracers, released over the continents according to an emission inventory for carbon monoxide (CO). Lagrangian concepts are introduced to derive age spectra of the tracer concentrations on a global grid in order to determine the timescales and pathways of pollution export from the continents. Calculating these age spectra is equivalent to simulating many (quasi continuous) plumes, each starting at a different time, which are subsequently merged. Movies of the tracer dispersion have been made available on an Internet website. It is found that emissions from Asia experience the fastest vertical transport, whereas European emissions have the strongest tendency to remain in the lower troposphere. European emissions are transported primarily into the Arctic and appear to be the major contributor to the Arctic haze problem. Tracers from an upwind continent first arrive over a receptor continent in the upper troposphere, typically after some 4 days. Only later foreign tracers also arrive in the lower troposphere. Assuming a 2- day lifetime, the domestic tracers dominate total tracer columns over all continents except over Australia where foreign tracers account for 20% of the tracer mass. In contrast, for a 20-day lifetime even continents with high domestic emissions receive more than half of their tracer burden from foreign continents. Three special regions were identified where tracers are transported to, and tracer dilution is slow. Future field studies therefore should be deployed in the following regions: (1) In the winter, the Asia tracer accumulates over Indonesia and the Indian Ocean, a region speculated to be a stratospheric fountain. (2) In the summer, the highest concentrations of the Asia tracer are found in the Middle East. (3) In the summer, the highest concentrations of the North America tracer are found in the Mediterranean.
    BibTeX:
    @article{Stohl2002,
      author = {Stohl, A and Eckhardt, S and Forster, C and James, P and Spichtinger, N},
      title = {On the pathways and timescales of intercontinental air pollution transport},
      journal = {JOURNAL OF GEOPHYSICAL RESEARCH-ATMOSPHERES},
      year = {2002},
      volume = {107},
      number = {D23},
      doi = {{10.1029/2001JD001396}}
    }
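
    An age spectrum in this Lagrangian sense is simply the tracer mass found in a grid cell, binned by time since release; summing many such binned plumes is what merging them amounts to. A minimal sketch with hypothetical particle data:
    Example (Python):
      from collections import Counter

      # (age in days, tracer mass) for particles currently in one grid cell
      particles = [(1.5, 2.0), (4.2, 1.0), (4.9, 3.0), (12.0, 0.5)]

      spectrum = Counter()
      for age, mass in particles:
          spectrum[int(age)] += mass           # 1-day age bins
      for day in sorted(spectrum):
          print("day", day, "mass", spectrum[day])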
    
    Stohl, A., Forster, C., Frank, A., Seibert, P. & Wotawa, G. Technical note: The Lagrangian particle dispersion model FLEXPART version 6.2 {2005} ATMOSPHERIC CHEMISTRY AND PHYSICS
    Vol. {5}, pp. {2461-2474} 
    article  
    Abstract: The Lagrangian particle dispersion model FLEXPART was originally (about 8 years ago) designed for calculating the long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident in a nuclear power plant. In the meantime FLEXPART has evolved into a comprehensive tool for atmospheric transport modeling and analysis. Its application fields were extended from air pollution studies to other topics where atmospheric transport plays a role (e.g., exchange between the stratosphere and troposphere, or the global water cycle). It has evolved into a true community model that is now being used by at least 25 groups from 14 different countries and is seeing both operational and research applications. A user manual has been kept up to date over the years and has been distributed via an internet page along with the model's source code. In this note we provide a citeable technical description of FLEXPART's latest version (6.2).
    BibTeX:
    @article{Stohl2005,
      author = {Stohl, A and Forster, C and Frank, A and Seibert, P and Wotawa, G},
      title = {Technical note: The Lagrangian particle dispersion model FLEXPART version 6.2},
      journal = {ATMOSPHERIC CHEMISTRY AND PHYSICS},
      year = {2005},
      volume = {5},
      pages = {2461-2474}
    }
    
    Stoica, I., Morris, R., Liben-Nowell, D., Karger, D., Kaashoek, M., Dabek, F. & Balakrishnan, H. Chord: A scalable peer-to-peer lookup protocol for Internet applications {2003} IEEE-ACM TRANSACTIONS ON NETWORKING
    Vol. {11}({1}), pp. {17-32} 
    article DOI  
    Abstract: A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
    BibTeX:
    @article{Stoica2003,
      author = {Stoica, I and Morris, R and Liben-Nowell, D and Karger, DR and Kaashoek, MF and Dabek, F and Balakrishnan, H},
      title = {Chord: A scalable peer-to-peer lookup protocol for Internet applications},
      journal = {IEEE-ACM TRANSACTIONS ON NETWORKING},
      year = {2003},
      volume = {11},
      number = {1},
      pages = {17-32},
      doi = {{10.1109/TNET.2002.808407}}
    }
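
    The single operation Chord exports, mapping a key onto a node, is consistent hashing on an identifier ring: a key is stored at the first node whose identifier follows it clockwise. The sketch below shows only that mapping; real Chord resolves it in O(log N) messages via finger tables, and the hash choice and ring size here are illustrative assumptions.
    Example (Python):
      import hashlib
      from bisect import bisect_left

      M = 16                                   # identifier bits (ring size 2**M)

      def ident(name):
          return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

      nodes = sorted(ident('node%d' % i) for i in range(8))

      def successor(key_id):
          """First node clockwise from key_id on the ring."""
          i = bisect_left(nodes, key_id)
          return nodes[i % len(nodes)]         # wrap past the highest node

      print(successor(ident('some-data-item')))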
    
    Strand, J., Chiu, A. & Tkach, R. Issues for routing in the optical layer {2001} IEEE COMMUNICATIONS MAGAZINE
    Vol. {39}({2}), pp. {81-87} 
    article  
    Abstract: Optical Layer control planes based on MPLS and other Internet protocols hold great promise because of their proven scalability, ability to support rapid provisioning, and auto-discovery and self-inventory capabilities, and are under intense study in various standards bodies. To date, however, little attention has been paid to aspects of the Optical Layer which differ from those found in data networking. We study three such aspects which impact routing: (1) network elements which are reconfigurable, but in constrained ways; (2) transmission impairments which may make some routes unusable; and (3) diversity. We conclude that if emerging optical technology is to be maximally exploited, heterogeneous technologies with dissimilar routing constraints are likely. Four alternative architectures for dealing with this eventuality are identified, and some trade-offs between centralizing or distributing some aspects of routing are discussed.
    BibTeX:
    @article{Strand2001,
      author = {Strand, J and Chiu, AL and Tkach, R},
      title = {Issues for routing in the optical layer},
      journal = {IEEE COMMUNICATIONS MAGAZINE},
      year = {2001},
      volume = {39},
      number = {2},
      pages = {81-87}
    }
    
    Strecher, V., Shiffman, S. & West, R. Randomized controlled trial of a web-based computer-tailored smoking cessation program as a supplement to nicotine patch therapy {2005} ADDICTION
    Vol. {100}({5}), pp. {682-688} 
    article DOI  
    Abstract: Aim To assess the efficacy of World Wide Web-based tailored behavioral smoking cessation materials among nicotine patch users. Design Two-group randomized controlled trial. Setting World Wide Web in England and Republic of Ireland. Participants A total of 3971 subjects who purchased a particular brand of nicotine patch and logged-on to use a free web-based behavioral support program. Intervention Web-based tailored behavioral smoking cessation materials or web-based non-tailored materials. Measurements Twenty-eight-day continuous abstinence rates were assessed by internet-based survey at 6-week follow-up and 10-week continuous rates at 12-week follow-up. Findings Using three approaches to the analyses of 6- and 12-week outcomes, participants in the tailored condition reported clinically and statistically significantly higher continuous abstinence rates than participants in the non-tailored condition. In our primary analyses using as a denominator all subjects who logged-on to the treatment site at least once, continuous abstinence rates at 6 weeks were 29.0% in the tailored condition versus 23.9% in the non-tailored condition (OR = 1.30; P = 0.0006); at 12 weeks continuous abstinence rates were 22.8% versus 18.1%, respectively (OR = 1.34; P = 0.0006). Moreover, satisfaction with the program was significantly higher in the tailored than in the non-tailored condition. Conclusions The results of this study demonstrate a benefit of the web-based tailored behavioral support materials used in conjunction with nicotine replacement therapy. A web-based program that collects relevant information from users and tailors the intervention to their specific needs had significant advantages over a web-based non-tailored cessation program.
    BibTeX:
    @article{Strecher2005,
      author = {Strecher, VJ and Shiffman, S and West, R},
      title = {Randomized controlled trial of a web-based computer-tailored smoking cessation program as a supplement to nicotine patch therapy},
      journal = {ADDICTION},
      year = {2005},
      volume = {100},
      number = {5},
      pages = {682-688},
      doi = {{10.1111/j.1360-0443.2005.01093.x}}
    }
    
    Strogatz, S. Exploring complex networks {2001} NATURE
    Vol. {410}({6825}), pp. {268-276} 
    article  
    Abstract: The study of networks pervades all of science, from neurobiology to statistical physics. The most basic issues are structural: how does one characterize the wiring diagram of a food web or the Internet or the metabolic network of the bacterium Escherichia coli? Are there any unifying principles underlying their topology? From the perspective of nonlinear dynamics, we would also like to understand how an enormous network of interacting dynamical systems - be they neurons, power stations or lasers - will behave collectively, given their individual dynamics and coupling architecture. Researchers are only now beginning to unravel the structure and dynamics of complex networks.
    BibTeX:
    @article{Strogatz2001,
      author = {Strogatz, SH},
      title = {Exploring complex networks},
      journal = {NATURE},
      year = {2001},
      volume = {410},
      number = {6825},
      pages = {268-276}
    }
    
    Supply, P., Lesjean, S., Savine, E., Kremer, K., van Soolingen, D. & Locht, C. Automated high-throughput genotyping for study of global epidemiology of Mycobacterium tuberculosis based on mycobacterial interspersed repetitive units {2001} JOURNAL OF CLINICAL MICROBIOLOGY
    Vol. {39}({10}), pp. {3563-3571} 
    article  
    Abstract: Large-scale genotyping of Mycobacterium tuberculosis is especially challenging, as the current typing methods are labor-intensive and the results are difficult to compare among laboratories. Here, automated typing based on variable-number tandem repeats (VNTRs) of genetic elements named mycobacterial interspersed repetitive units (MIRUs) in 12 mammalian minisatellite-like loci of M. tuberculosis is presented. This system combines analysis of multiplex PCRs on a fluorescence-based DNA analyzer with computerized automation of the genotyping. Analysis of a blinded reference set of 90 strains from 38 countries (K. Kremer et al., J. Clin. Microbiol. 37:2607-2618, 1999) demonstrated that it is 100% reproducible, sensitive, and specific for M. tuberculosis complex isolates, a performance that has not been achieved by any other typing method tested in the same conditions. MIRU-VNTRs can be used for analysis of the global genetic diversity of M. tuberculosis complex strains at different levels of evolutionary divergence. To fully exploit the portability of this typing system, a website was set up for the analysis of M. tuberculosis MIRU-VNTR genotypes via the Internet. This opens the way for global epidemiological surveillance of tuberculosis and should lead to novel insights into the evolutionary and population genetics of this major pathogen.
    BibTeX:
    @article{Supply2001,
      author = {Supply, P and Lesjean, S and Savine, E and Kremer, K and van Soolingen, D and Locht, C},
      title = {Automated high-throughput genotyping for study of global epidemiology of Mycobacterium tuberculosis based on mycobacterial interspersed repetitive units},
      journal = {JOURNAL OF CLINICAL MICROBIOLOGY},
      year = {2001},
      volume = {39},
      number = {10},
      pages = {3563-3571}
    }
    
    Sutcliffe, G. The TPTP problem library - CNF release v1.2.1 {1998} JOURNAL OF AUTOMATED REASONING
    Vol. {21}({2}), pp. {177-203} 
    article  
    Abstract: This paper provides a detailed description of the CNF part of the TPTP Problem Library for automated theorem-proving systems. The library is available via the Internet and forms a common basis for development and experimentation with automated theorem provers. This paper explains the motivations and reasoning behind the development of the TPTP (thus implicitly explaining the design decisions made) and describes the TPTP contents and organization. It also provides guidelines for obtaining and using the library, summary statistics about release v1.2.1, and an overview of the tptp2X utility program. References for all the sources of TPTP problems are provided.
    BibTeX:
    @article{Sutcliffe1998,
      author = {Sutcliffe, G},
      title = {The TPTP problem library - CNF release v1.2.1},
      journal = {JOURNAL OF AUTOMATED REASONING},
      year = {1998},
      volume = {21},
      number = {2},
      pages = {177-203}
    }
    
    Sycara, K. Multiagent systems {1998} AI MAGAZINE
    Vol. {19}({2}), pp. {79-92} 
    article  
    Abstract: Agent-based systems technology has generated lots of excitement in recent years because of its promise as a new paradigm for conceptualizing, designing, and implementing software systems. This promise is particularly attractive for creating software that operates in environments that are distributed and open, such as the internet. Currently, the great majority of agent-based systems consist of a single agent. However, as the technology matures and addresses increasingly complex applications, the need for systems that consist of multiple agents that communicate in a peer-to-peer fashion is becoming apparent. Central to the design and effective operation of such multiagent systems (MASs) are a core set of issues and research questions that have been studied over the years by the distributed AI community. In this article, I present some of the critical notions in MASs and the research work that has addressed them. I organize these notions around the concept of problem-solving coherence, which I believe is one of the most critical overall characteristics that an MAS should exhibit.
    BibTeX:
    @article{Sycara1998,
      author = {Sycara, KP},
      title = {Multiagent systems},
      journal = {AI MAGAZINE},
      year = {1998},
      volume = {19},
      number = {2},
      pages = {79-92}
    }
    
    Sycara, K., Widoff, S., Klusch, M. & Lu, J. Larks: Dynamic matchmaking among heterogeneous software agents in cyberspace {2002} AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS
    Vol. {5}({2}), pp. {173-203} 
    article  
    Abstract: Service matchmaking among heterogeneous software agents in the Internet is usually done dynamically and must be efficient. There is an obvious trade-off between the quality and efficiency of matchmaking on the Internet. We define a language called Larks for agent advertisements and requests, and present a flexible and efficient matchmaking process that uses Larks. The Larks matchmaking process performs both syntactic and semantic matching, and in addition allows the specification of concepts (local ontologies) via ITL, a concept language. The matching process uses five different filters: context matching, profile comparison, similarity matching, signature matching and constraint matching. Different degrees of partial matching can result from utilizing different combinations of these filters. We briefly report on our implementation of Larks and the matchmaking process in Java. Fielding applications of Larks-based matchmaking in several application domains for systems of information agents is an ongoing effort.
    BibTeX:
    @article{Sycara2002,
      author = {Sycara, K and Widoff, S and Klusch, M and Lu, JG},
      title = {Larks: Dynamic matchmaking among heterogeneous software agents in cyberspace},
      journal = {AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS},
      year = {2002},
      volume = {5},
      number = {2},
      pages = {173-203}
    }
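
    The five filters form a cascade: each stage prunes the candidate advertisements before the next, more expensive stage runs, which is how partial matching falls out of choosing filter subsets. A schematic sketch of that pipeline shape (the filter bodies are invented placeholders, not Larks semantics):
    Example (Python):
      def match(request, ads, filters):
          candidates = ads
          for f in filters:                    # cheap filters first
              candidates = [a for a in candidates if f(request, a)]
          return candidates

      context_match = lambda r, a: r['domain'] == a['domain']
      signature_match = lambda r, a: set(r['inputs']) <= set(a['inputs'])

      ads = [{'domain': 'travel', 'inputs': ['city', 'date'], 'name': 'flights'},
             {'domain': 'finance', 'inputs': ['ticker'], 'name': 'quotes'}]
      request = {'domain': 'travel', 'inputs': ['city']}
      print(match(request, ads, [context_match, signature_match]))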
    
    Tan, W.-T. & Zakhor, A. Real-Time Internet Video Using Error Resilient Scalable Compression and TCP-Friendly Transport Protocol {1999} IEEE TRANSACTIONS ON MULTIMEDIA
    Vol. {1}({2}), pp. {172-186} 
    article  
    Abstract: We introduce a point to point real-time video transmission scheme over the Internet combining a low-delay TCP-friendly transport protocol in conjunction with a novel compression method that is error resilient and bandwidth-scalable. Compressed video is packetized into individually decodable packets of equal expected visual importance. Consequently, relatively constant video quality can be achieved at the receiver under lossy conditions. Furthermore, the packets can be truncated to instantaneously meet the time varying bandwidth imposed by a TCP-friendly transport protocol. As a result, adaptive flows that are friendly to other Internet traffic are produced. Actual Internet experiments together with simulations are used to evaluate the performance of the compression, transport, and the combined schemes.
    BibTeX:
    @article{Tan1999,
      author = {Tan, Wai-Tian and Zakhor, Avideh},
      title = {Real-Time Internet Video Using Error Resilient Scalable Compression and TCP-Friendly Transport Protocol},
      journal = {IEEE TRANSACTIONS ON MULTIMEDIA},
      year = {1999},
      volume = {1},
      number = {2},
      pages = {172-186}
    }
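
    TCP-friendliness is typically enforced by keeping the send rate at or below what a conformant TCP flow would achieve under the same conditions, for which a familiar approximation is T = s*sqrt(3/2)/(RTT*sqrt(p)). A sketch of that constraint with illustrative numbers (the paper's transport protocol is more elaborate than this single formula):
    Example (Python):
      from math import sqrt

      def tcp_friendly_rate(s_bytes, rtt_s, loss):
          """Upper bound on send rate (bytes/s) for packet size s_bytes,
          round-trip time rtt_s (seconds) and loss event rate loss."""
          return s_bytes * sqrt(1.5) / (rtt_s * sqrt(loss))

      print(tcp_friendly_rate(1000, 0.1, 0.01) / 1000, "kB/s")   # ~122 kB/s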
    
    Tank, A., Wijngaard, J., Konnen, G., Bohm, R., Demaree, G., Gocheva, A., Mileta, M., Pashiardis, S., Hejkrlik, L., Kern-Hansen, C., Heino, R., Bessemoulin, P., Muller-Westermeier, G., Tzanakou, M., Szalai, S., Palsdottir, T., Fitzgerald, D., Rubin, S., Capaldo, M., Maugeri, M., Leitass, A., Bukantis, A., Aberfeld, R., Van Engelen, A., Forland, E., Mietus, M., Coelho, F., Mares, C., Razuvaev, V., Nieplova, E., Cegnar, T., Lopez, J., Dahlstrom, B., Moberg, A., Kirchhofer, W., Ceylan, A., Pachaliuk, O., Alexander, L. & Petrovic, P. Daily dataset of 20th-century surface air temperature and precipitation series for the European Climate Assessment {2002} INTERNATIONAL JOURNAL OF CLIMATOLOGY
    Vol. {22}({12}), pp. {1441-1453} 
    article DOI  
    Abstract: We present a dataset of daily resolution climatic time series that has been compiled for the European Climate Assessment (ECA). As of December 2001, this ECA dataset comprises 199 series of minimum, maximum and/or daily mean temperature and 195 series of daily precipitation amount observed at meteorological stations in Europe and the Middle East. Almost all series cover the standard normal period 1961-90, and about 50% extends back to at least 1925. Part of the dataset (90%) is made available for climate research on CDROM and through the Internet (at http://www.knmi.nl/samenw/eca). A comparison of the ECA dataset with existing gridded datasets, having monthly resolution, shows that correlation coefficients between ECA stations and nearest land grid boxes between 1946 and 1999 are higher than 0.8 for 93% of the temperature series and for 51% of the precipitation series. The overall trends in the ECA dataset are of comparable magnitude to those in the gridded datasets. The potential of the ECA dataset for climate studies is demonstrated in two examples. In the first example, it is shown that the winter (October-March) warming in Europe in the 1976-99 period is accompanied by a positive trend in the number of warm-spell days at most stations, but not by a negative trend in the number of cold-spell days. Instead, the number of cold-spell days increases over Europe. In the second example, it is shown for winter precipitation between 1946 and 1999 that positive trends in the mean amount per wet day prevail both in areas that are getting drier and in areas that are getting wetter. Because of its daily resolution, the ECA dataset enables a variety of empirical climate studies, including detailed analyses of changes in the occurrence of extremes in relation to changes in mean temperature and total precipitation. Copyright (C) 2002 Royal Meteorological Society.
    BibTeX:
    @article{Tank2002,
      author = {Tank, AMGK and Wijngaard, JB and Konnen, GP and Bohm, R and Demaree, G and Gocheva, A and Mileta, M and Pashiardis, S and Hejkrlik, L and Kern-Hansen, C and Heino, R and Bessemoulin, P and Muller-Westermeier, G and Tzanakou, M and Szalai, S and Palsdottir, T and Fitzgerald, D and Rubin, S and Capaldo, M and Maugeri, M and Leitass, A and Bukantis, A and Aberfeld, R and Van Engelen, AFV and Forland, E and Mietus, M and Coelho, F and Mares, C and Razuvaev, V and Nieplova, E and Cegnar, T and Lopez, JA and Dahlstrom, B and Moberg, A and Kirchhofer, W and Ceylan, A and Pachaliuk, O and Alexander, LV and Petrovic, P},
      title = {Daily dataset of 20th-century surface air temperature and precipitation series for the European Climate Assessment},
      journal = {INTERNATIONAL JOURNAL OF CLIMATOLOGY},
      year = {2002},
      volume = {22},
      number = {12},
      pages = {1441-1453},
      doi = {{10.1002/joc.773}}
    }
    
    Tate, D., Jackvony, E. & Wing, R. Effects of Internet behavioral counseling on weight loss in adults at risk for type 2 diabetes - A randomized trial {2003} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {289}({14}), pp. {1833-1836} 
    article  
    Abstract: Context Weight loss programs on the Internet appear promising for short-term weight loss but have not been studied for weight loss in individuals at risk of type 2 diabetes; thus, the longer-term efficacy is unknown. Objective To compare the effects of an Internet weight loss program alone vs with the addition of behavioral counseling via e-mail provided for 1 year to individuals at risk of type 2 diabetes. Design, Setting, and Participants A single-center randomized controlled trial conducted from September 2001 to September 2002 in Providence, RI, of 92 overweight adults whose mean (SD) age was 48.5 (9.4) years and body mass index, 33.1 (3.8). Interventions Participants were randomized to a basic Internet (n=46) or to an Internet plus behavioral e-counseling program (n=46). Both groups received 1 face-to-face counseling session and the same core Internet programs and were instructed to submit weekly weights. Participants in e-counseling submitted calorie and exercise information and received weekly e-mail behavioral counseling and feedback from a counselor. Main Outcome Measures Measured weight and waist circumference at 0 and 12 months. Results Intent-to-treat analyses showed the behavioral e-counseling group lost more mean (SD) weight at 12 months than the basic Internet group (-4.4 [6.2] vs -2.0 [5.7] kg; P=.04), and had greater decreases in percentage of initial body weight (4.8% vs 2.2%; P=.03), body mass index (-1.6 [2.2] vs -0.8 [2.1]; P=.03), and waist circumference (-7.2 [7.5] vs -4.4 [5.7] cm; P=.05). Conclusion Adding e-mail counseling to a basic Internet weight loss intervention program significantly improved weight loss in adults at risk of diabetes.
    BibTeX:
    @article{Tate2003,
      author = {Tate, DF and Jackvony, EH and Wing, RR},
      title = {Effects of Internet behavioral counseling on weight loss in adults at risk for type 2 diabetes - A randomized trial},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {2003},
      volume = {289},
      number = {14},
      pages = {1833-1836}
    }
    
    Tate, D., Wing, R. & Winett, R. Using Internet technology to deliver a behavioral weight loss program {2001} JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
    Vol. {285}({9}), pp. {1172-1177} 
    article  
    Abstract: Context Rapid increases in access to the Internet have made it a viable mode for public health intervention. No controlled studies have evaluated this resource for weight loss. Objective To determine whether a structured Internet behavioral weight loss program produces greater initial weight loss and changes in waist circumference than a weight loss education Web site. Design Randomized controlled trial conducted from April to December 1999. Setting and Participants Ninety-one healthy, overweight adult hospital employees aged 18 to 60 years with a body mass index of 25 to 36 kg/m(2). Analyses were performed for the 65 who had complete follow-up data. Interventions Participants were randomly assigned to a 6-month weight loss program of either Internet education (education; n=32 with complete data) or Internet behavior therapy (behavior therapy; n=33 with complete data). All participants were given 1 face-to-face group weight loss session and access to a Web site with organized links to Internet weight loss resources. Participants in the behavior therapy group received additional behavioral procedures, including a sequence of 24 weekly behavioral lessons via e-mail, weekly online submission of self-monitoring diaries with individualized therapist feedback via e-mail, and an online bulletin board. Main Outcome Measures Body weight and waist circumference, measured at 0, 3, and 6 months and compared between the 2 intervention groups. Results Repeated-measures analyses showed that the behavior therapy group lost more weight than the education group (P=.005). The behavior therapy group lost a mean (SD) of 4.0 (2.8) kg by 3 months and 4.1 (4.5) kg by 6 months. Weight loss in the education group was 1.7 (2.7) kg at 3 months and 1.6 (3.3) kg by 6 months. More participants in the behavior therapy than education group achieved the 5% weight loss goal (45% vs 22%; P=.05) by 6 months. Changes in waist circumference were also greater in the behavior therapy group than in the education group at both 3 months (P=.001) and 6 months (P=.005). Conclusions Participants who were given a structured behavioral treatment program with weekly contact and individualized feedback had better weight loss compared with those given links to educational Web sites. Thus, the Internet and e-mail appear to be viable methods for delivery of structured behavioral weight loss programs.
    BibTeX:
    @article{Tate2001,
      author = {Tate, DF and Wing, RR and Winett, RA},
      title = {Using Internet technology to deliver a behavioral weight loss program},
      journal = {JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION},
      year = {2001},
      volume = {285},
      number = {9},
      pages = {1172-1177},
      note = {Annual Meeting of the North-American-Association-for-the-Study-of-Obesity, CHARLESTON, SOUTH CAROLINA, NOV 14-18, 1999}
    }
    
    Taubin, G. & Rossignac, J. Geometric compression through topological surgery {1998} ACM TRANSACTIONS ON GRAPHICS
    Vol. {17}({2}), pp. {84-115} 
    article  
    Abstract: The abundance and importance of complex 3-D data bases in major industry segments, the affordability of interactive 3-D rendering for office and consumer use, and the exploitation of the Internet to distribute and share 3-D data have intensified the need for an effective 3-D geometric compression technique that would significantly reduce the time required to transmit 3-D models over digital communication channels, and the amount of memory or disk space required to store the models. Because the prevalent representation of 3-D models for graphics purposes is polyhedral and because polyhedral models are in general triangulated for rendering, this article introduces a new compressed representation for complex triangulated models and simple, yet efficient, compression and decompression algorithms. In this scheme, vertex positions are quantized within the desired accuracy, a vertex spanning tree is used to predict the position of each vertex from 2, 3, or 4 of its ancestors in the tree, and the correction vectors are entropy encoded. Properties, such as normals, colors, and texture coordinates, are compressed in a similar manner. The connectivity is encoded with no loss of information to an average of less than two bits per triangle. The vertex spanning tree and a small set of jump edges are used to split the model into a simple polygon. A triangle spanning tree and a sequence of marching bits are used to encode the triangulation of the polygon. Our approach improves on Michael Deering's pioneering results by exploiting the geometric coherence of several ancestors in the vertex spanning tree, preserving the connectivity with no loss of information, avoiding vertex repetitions, and using about three times fewer bits for the connectivity. However, since decompression requires random access to all vertices, this method must be modified for hardware rendering with limited onboard memory. Finally, we demonstrate implementation results for a variety of VRML models with up to two orders of magnitude compression.
    BibTeX:
    @article{Taubin1998,
      author = {Taubin, G and Rossignac, J},
      title = {Geometric compression through topological surgery},
      journal = {ACM TRANSACTIONS ON GRAPHICS},
      year = {1998},
      volume = {17},
      number = {2},
      pages = {84-115}
    }
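
    The predictor at the heart of this scheme can be sketched in a few lines of Python; the ancestor weights and quantization step below are illustrative placeholders, not the values derived in the paper:

      import numpy as np

      def encode_vertices(verts, parent, weights=(0.75, 0.25), step=1e-3):
          """verts: (n,3) positions; parent[i] is i's spanning-tree parent (-1 at root)."""
          corrections = []
          decoded = np.zeros_like(verts)
          for i in range(len(verts)):
              pred, anc = np.zeros(3), parent[i]
              for w in weights:                # combine up to len(weights) ancestors
                  if anc < 0:
                      break
                  pred += w * decoded[anc]
                  anc = parent[anc]
              c = np.rint((verts[i] - pred) / step).astype(int)
              corrections.append(c)            # small integers; entropy-coded in practice
              decoded[i] = pred + c * step     # mirror the decoder so it never drifts
          return corrections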
    
    Taylor, J. & Cordes, J. Pulsar distances and the Galactic distribution of free electrons {1993} ASTROPHYSICAL JOURNAL
    Vol. {411}({2, Part 1}), pp. {674-684} 
    article  
    Abstract: We describe a quantitative model for the distribution of free electrons in the Galaxy, with particular emphasis on its utility for estimating pulsar distances from dispersion measures. Contrary to past practice, we abandon the assumption of an axisymmetric Galaxy. Instead, we explicitly incorporate spiral arms, the shapes and locations of which are derived from existing radio and optical observations of H II regions. Additional parameters of the model include the electron densities of "outer" and "inner" axisymmetric components, as well as of the spiral arms; scale lengths for the r- and z-dependences of the axisymmetric features and the width and scale height of the arms; and "fluctuation parameters" used to relate the dispersion and scattering contributions of the outer, inner, and spiral arm components of the model. Because of the large angular size and close proximity of the Gum Nebula, we also explicitly model its contribution to dispersion measures. Values of some of the model parameters have been fixed by appeal to independent astrophysical data of various sorts. The remaining adjustable quantities have been calibrated by reference to three distinct types of information: (1) independently measured distance limits and dispersion measures for 74 pulsars; (2) interstellar scattering measurements for 223 Galactic and extragalactic radio sources, together with their distances or dispersion measures; and (3) the distribution of 553 pulsar dispersion measures with respect to Galactic longitude. We believe that for most known pulsars the new model provides distance estimates accurate to approximately 25% or better. In an Appendix we describe several FORTRAN subroutines that implement the model, and we give instructions for obtaining copies of the code via Internet.
    BibTeX:
    @article{Taylor1993,
      author = {Taylor, JH and Cordes, JM},
      title = {Pulsar distances and the Galactic distribution of free electrons},
      journal = {ASTROPHYSICAL JOURNAL},
      year = {1993},
      volume = {411},
      number = {2, Part 1},
      pages = {674-684}
    }
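
    The quantity the model inverts is the dispersion measure, the line-of-sight integral of the electron density. A toy constant-density stand-in for the paper's spiral-arm model shows the mechanics (the real calculation lives in the authors' FORTRAN subroutines):

      def ne(d_pc):
          return 0.03                        # electron density in cm^-3 (placeholder)

      def distance_from_dm(dm, step=10.0):
          """Walk outward until the accumulated DM (pc cm^-3) is used up."""
          d, acc = 0.0, 0.0
          while acc < dm:
              acc += ne(d) * step
              d += step
          return d                           # distance in pc

      print(distance_from_dm(30.0))          # ~1000 pc at a uniform 0.03 cm^-3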
    
    Teo, T., Lim, V. & Lai, R. Intrinsic and extrinsic motivation in Internet usage {1999} OMEGA-INTERNATIONAL JOURNAL OF MANAGEMENT SCIENCE
    Vol. {27}({1}), pp. {25-37} 
    article  
    Abstract: This study focuses on both intrinsic (i.e., perceived enjoyment) and extrinsic (i.e., perceived usefulness) motivation for the use of the Internet. An electronic Webpage survey was used to collect the data required for this study. A total of 1370 usable responses were obtained. Results indicated that local Internet users used the Internet mainly because they perceived the Internet to be more useful to their job tasks and, secondarily, because it is enjoyable and easy to use. Findings demonstrated that while perceived usefulness had consistently strong effects on all usage dimensions (frequency of Internet usage, daily Internet usage and diversity of Internet usage), perceived ease of use and perceived enjoyment affected each specific usage dimension differently. (C) 1998 Elsevier Science Ltd. All rights reserved.
    BibTeX:
    @article{Teo1999,
      author = {Teo, TSH and Lim, VKG and Lai, RYC},
      title = {Intrinsic and extrinsic motivation in Internet usage},
      journal = {OMEGA-INTERNATIONAL JOURNAL OF MANAGEMENT SCIENCE},
      year = {1999},
      volume = {27},
      number = {1},
      pages = {25-37}
    }
    
    Tetko, I., Gasteiger, J., Todeschini, R., Mauri, A., Livingstone, D., Ertl, P., Palyulin, V., Radchenko, E., Zefirov, N., Makarenko, A., Tanchuk, V. & Prokopenko, V. Virtual computational chemistry laboratory - design and description {2005} JOURNAL OF COMPUTER-AIDED MOLECULAR DESIGN
    Vol. {19}({6}), pp. {453-463} 
    article DOI  
    Abstract: Internet technology offers an excellent opportunity for the development of tools by the cooperative effort of various groups and institutions. We have developed a multi-platform software system, Virtual Computational Chemistry Laboratory, http://www.vcclab.org, allowing the computational chemist to perform a comprehensive series of molecular indices/properties calculations and data analysis. The implemented software is based on a three-tier architecture that is one of the standard technologies to provide client-server services on the Internet. The developed software includes several popular programs, including the indices generation program, DRAGON, a 3D structure generator, CORINA, a program to predict lipophilicity and aqueous solubility of chemicals, ALOGPS and others. All these programs are running at the host institutes located in five countries over Europe. In this article we review the main features and statistics of the developed system that can be used as a prototype for academic and industry models.
    BibTeX:
    @article{Tetko2005,
      author = {Tetko, IV and Gasteiger, J and Todeschini, R and Mauri, A and Livingstone, D and Ertl, P and Palyulin, V and Radchenko, E and Zefirov, NS and Makarenko, AS and Tanchuk, VY and Prokopenko, VV},
      title = {Virtual computational chemistry laboratory - design and description},
      journal = {JOURNAL OF COMPUTER-AIDED MOLECULAR DESIGN},
      year = {2005},
      volume = {19},
      number = {6},
      pages = {453-463},
      doi = {{10.1007/s10822-005-8694-y}}
    }
    
    Thompson, K., Miller, G. & Wilder, R. Wide-area Internet traffic patterns and characteristics {1997} IEEE NETWORK
    Vol. {11}({6}), pp. {10-23} 
    article  
    Abstract: The Internet is rapidly growing in number of users, traffic levels, and topological complexity. At the same time it is increasingly driven by economic competition. These developments render the characterization of network usage and workloads more difficult, and yet more critical. Few recent studies have been published reporting Internet backbone traffic usage and characteristics. At MCI, we have implemented a high-performance, low-cost monitoring system that can capture traffic and perform analyses. We have deployed this monitoring tool on OC-3 trunks within internetMCI's backbone and also within the NSF-sponsored vBNS. This article presents observations on the patterns and characteristics of wide-area Internet traffic, as recorded by MCI's OC-3 traffic monitors. We report on measurements from two OC-3 trunks in MCI's commercial Internet backbone over two time ranges (24-hour and 7-day) in the presence of up to 240,000 flows. We reveal the characteristics of the traffic in terms of packet sizes, flow duration, volume, and percentage composition by protocol and application, as well as patterns seen over the two time scales.
    BibTeX:
    @article{Thompson1997,
      author = {Thompson, K and Miller, GJ and Wilder, R},
      title = {Wide-area Internet traffic patterns and characteristics},
      journal = {IEEE NETWORK},
      year = {1997},
      volume = {11},
      number = {6},
      pages = {10-23}
    }
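
    The percentage-composition figures reported in studies like this come from aggregating flow records; a toy version over invented (protocol, packets, bytes) tuples:

      from collections import Counter

      flows = [("TCP", 120, 90000), ("UDP", 4, 512), ("TCP", 30, 18000)]
      bytes_by_proto = Counter()
      for proto, pkts, nbytes in flows:
          bytes_by_proto[proto] += nbytes
      total = sum(bytes_by_proto.values())
      for proto, b in bytes_by_proto.items():
          print(f"{proto}: {100 * b / total:.1f}% of bytes")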
    
    Tomii, K. & Kanehisa, M. Analysis of amino acid indices and mutation matrices for sequence comparison and structure prediction of proteins {1996} PROTEIN ENGINEERING
    Vol. {9}({1}), pp. {27-36} 
    article  
    Abstract: An amino acid index is a set of 20 numerical values representing any of the different physicochemical and biochemical properties of amino acids. As a follow-up to the previous study, we have increased the size of the database, which currently contains 402 published indices, and re-performed the single-linkage cluster analysis. The results basically confirmed the previous findings. Another important feature of amino acids that can be represented numerically is the similarity between them. Thus, a similarity matrix, also called a mutation matrix, is a set of 20x20 numerical values used for protein sequence alignments and similarity searches. We have collected 42 published matrices, performed hierarchical cluster analyses and identified several clusters corresponding to the nature of the data set and the method used for constructing the mutation matrix. Further, we have tried to reproduce each mutation matrix by the combination of amino acid indices in order to understand which properties of amino acids are reflected most. There was a relationship between the PAM units of Dayhoff's mutation matrix and the volume and hydrophobicity of amino acids. The database of 402 amino acid indices and 42 amino acid mutation matrices is made publicly available on the Internet.
    BibTeX:
    @article{Tomii1996,
      author = {Tomii, K and Kanehisa, M},
      title = {Analysis of amino acid indices and mutation matrices for sequence comparison and structure prediction of proteins},
      journal = {PROTEIN ENGINEERING},
      year = {1996},
      volume = {9},
      number = {1},
      pages = {27-36}
    }
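
    One simple way to ask which amino acid properties a mutation matrix reflects, in the spirit of the reconstruction described above, is to regress its entries on pairwise property differences; the toy alphabet, index values, and matrix below are invented for illustration:

      import numpy as np

      aa = "ACDE"                                            # toy 4-letter alphabet
      hydro = {"A": 1.8, "C": 2.5, "D": -3.5, "E": -3.5}     # hydropathy-like index
      vol = {"A": 88.6, "C": 108.5, "D": 111.1, "E": 138.4}  # volume-like index
      M = np.array([[4, 0, -2, -1], [0, 9, -3, -4],
                    [-2, -3, 6, 2], [-1, -4, 2, 5]])         # toy similarity matrix

      rows, y = [], []
      for i, a in enumerate(aa):
          for j, b in enumerate(aa):
              rows.append([1.0, abs(hydro[a] - hydro[b]), abs(vol[a] - vol[b])])
              y.append(M[i, j])
      coef, *_ = np.linalg.lstsq(np.array(rows), np.array(y, float), rcond=None)
      print(coef)        # intercept plus weights on the two property differences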
    
    Torkzadeh, G. & Dhillon, G. Measuring factors that influence the success of Internet commerce {2002} INFORMATION SYSTEMS RESEARCH
    Vol. {13}({2}), pp. {187-204} 
    article  
    Abstract: Efforts to develop measures of Internet commerce success have been hampered by (1) the rapid development and use of Internet technologies and (2) the lack of conceptual bases necessary to develop success measures. In a recent study, Keeney (1999) proposed two sets of variables, labeled means objectives and fundamental objectives, that influence Internet shopping. Means objectives, he argues, help businesses achieve what is important for their customers: fundamental objectives. Based on Keeney's work, this paper describes the development of two instruments that together measure the factors that influence Internet commerce success. One instrument measures the means objectives that influence online purchase (e.g., Internet vendor trust) and the other measures the fundamental objectives that customers perceive to be important for Internet commerce (e.g., Internet product value). In Phase 1 of the instrument development process, we generated 125 items for means and fundamental objectives. Using a sample of 199 responses from individuals with Internet shopping experience, these constructs were examined for reliability and validity. The Phase 1 results suggested a 4-factor, 21-item instrument to measure means objectives and a 4-factor, 17-item instrument to measure fundamental objectives. In Phase 2 of the instrument development process, we gathered a sample of 421 responses to further explore the 2 instruments. With minor modifications, the Phase 2 data support the 2 models. The Phase 2 results suggest a 5-factor, 21-item instrument that measures means objectives in terms of Internet product choice, online payment, Internet vendor trust, shopping travel, and Internet shipping errors. Results also suggest a 4-factor, 16-item instrument that measures fundamental objectives in terms of Internet shopping convenience, Internet ecology, Internet customer relation, and Internet product value. Evidence of reliability and discriminant, construct, and content validity is presented for the hypothesized measurement models. The paper concludes with discussions on the usefulness of these measures and future research ideas.
    BibTeX:
    @article{Torkzadeh2002,
      author = {Torkzadeh, G and Dhillon, G},
      title = {Measuring factors that influence the success of Internet commerce},
      journal = {INFORMATION SYSTEMS RESEARCH},
      year = {2002},
      volume = {13},
      number = {2},
      pages = {187-204}
    }
    
    Touma, C. & Gotsman, C. Triangle mesh compression {1998} GRAPHICS INTERFACE `98 - PROCEEDINGS, pp. {26-34}  inproceedings  
    Abstract: A novel algorithm for the encoding of orientable manifold triangle mesh geometry is presented. Mesh connectivity is encoded in a lossless manner. Vertex coordinate data is uniformly quantized and then losslessly encoded. The compression ratios achieved by the algorithm are shown to be significantly better than those of currently available algorithms, for both connectivity and coordinate data. Use of our algorithm may lead to significant reduction of bandwidth required for the transmission of VRML files over the Internet.
    BibTeX:
    @inproceedings{Touma1998,
      author = {Touma, C and Gotsman, C},
      title = {Triangle mesh compression},
      booktitle = {GRAPHICS INTERFACE `98 - PROCEEDINGS},
      year = {1998},
      pages = {26-34},
      note = {Graphics Interface Conference, VANCOUVER, CANADA, JUN 18-20, 1998}
    }
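
    The coordinate coding in this family of algorithms rests on parallelogram prediction: a vertex v opposite an edge (a, b) shared with an already-decoded triangle (a, b, c) is predicted as a + b - c, and only the quantized residual is entropy-coded. A minimal sketch, with an illustrative quantization step:

      import numpy as np

      def parallelogram_residual(v, a, b, c, step=1e-3):
          pred = a + b - c                   # complete the parallelogram
          return np.rint((v - pred) / step).astype(int)

      a, b = np.array([0.0, 0, 0]), np.array([1.0, 0, 0])
      c, v = np.array([0.5, 1, 0]), np.array([0.52, -0.97, 0.01])
      print(parallelogram_residual(v, a, b, c))   # small residual -> few bits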
    
    Tsai, A. & Wadden, T. Systematic review: An evaluation of major commercial weight loss programs in the United States {2005} ANNALS OF INTERNAL MEDICINE
    Vol. {142}({1}), pp. {56-66} 
    article  
    Abstract: Background: Each year millions of Americans enroll in commercial and self-help weight loss programs. Health care providers and their obese patients know little about these programs because of the absence of systematic reviews. Purpose: To describe the components, costs, and efficacy of the major commercial and organized self-help weight loss programs in the United States that provide structured in-person or online counseling. Data Sources: Review of company Web sites, telephone discussion with company representatives, and search of the MEDLINE database. Study Selection: Randomized trials at least 12 weeks in duration that enrolled only adults and assessed interventions as they are usually provided to the public, or case series that met these criteria, stated the number of enrollees, and included a follow-up evaluation that lasted 1 year or longer. Data Extraction: Data were extracted on study design, attrition, weight loss, duration of follow-up, and maintenance of weight loss. Data Synthesis: We found studies of eDiets.com, Health Management Resources, Take Off Pounds Sensibly, OPTIFAST, and Weight Watchers. Of 3 randomized, controlled trials of Weight Watchers, the largest reported a loss of 3.2% of initial weight at 2 years. One randomized trial and several case series of medically supervised very-low-calorie diet programs found that patients who completed treatment lost approximately 15% to 25% of initial weight. These programs were associated with high costs, high attrition rates, and a high probability of regaining 50% or more of lost weight in 1 to 2 years. Commercial interventions available over the Internet and organized self-help programs produced minimal weight loss. Limitations: Because many studies did not control for high attrition rates, the reported results are probably a best-case scenario. Conclusions: With the exception of 1 trial of Weight Watchers, the evidence to support the use of the major commercial and self-help weight loss programs is suboptimal. Controlled trials are needed to assess the efficacy and cost-effectiveness of these interventions.
    BibTeX:
    @article{Tsai2005,
      author = {Tsai, AG and Wadden, TA},
      title = {Systematic review: An evaluation of major commercial weight loss programs in the United States},
      journal = {ANNALS OF INTERNAL MEDICINE},
      year = {2005},
      volume = {142},
      number = {1},
      pages = {56-66}
    }
    
    Turenne, C., Tschetter, L., Wolfe, J. & Kabani, A. Necessity of quality-controlled 16S rRNA gene sequence databases: Identifying nontuberculous Mycobacterium species {2001} JOURNAL OF CLINICAL MICROBIOLOGY
    Vol. {39}({10}), pp. {3637-3648} 
    article  
    Abstract: The use of the 16S rRNA gene for identification of nontuberculous mycobacteria (NTM) provides a faster and better ability to accurately identify them in addition to contributing significantly in the discovery of new species. Despite their associated problems, many rely on the use of public sequence databases for sequence comparisons. To best evaluate the taxonomic status of NTM species submitted to our reference laboratory, we have created a 16S rRNA sequence database by sequencing 121 American Type Culture Collection strains encompassing 92 species of mycobacteria, and have also included chosen unique mycobacterial sequences from public sequence repositories. In addition, the Ribosomal Differentiation of Medical Microorganisms (RIDOM) service has made mycobacterial identification by 16S rRNA analysis freely available on the Internet. We have evaluated 122 clinical NTM species using our database, comparing >1,400 bp of the 16S gene, and the RIDOM database, comparing approximately 440 bp. The breakdown of analysis was as follows: 61 strains had a sequence with 100% similarity to the type strain of an established species, 19 strains showed a 1- to 5-bp divergence from an established species, 11 strains had sequences corresponding to uncharacterized strain sequences in public databases, and 31 strains represented unique sequences. Our experience with analysis of the 16S rRNA gene of patient strains has shown that clear-cut results are not the rule. As many clinical, research, and environmental laboratories currently employ 16S-based identification of bacteria, including mycobacteria, a freely available quality-controlled database such as that provided by RIDOM is essential to accurately identify species or detect true sequence variations leading to the discovery of new species.
    BibTeX:
    @article{Turenne2001,
      author = {Turenne, CY and Tschetter, L and Wolfe, J and Kabani, A},
      title = {Necessity of quality-controlled 16S rRNA gene sequence databases: Identifying nontuberculous Mycobacterium species},
      journal = {JOURNAL OF CLINICAL MICROBIOLOGY},
      year = {2001},
      volume = {39},
      number = {10},
      pages = {3637-3648}
    }
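
    The similarity figures behind phrases like "100% similarity to the type strain" reduce to percent identity over an alignment; a toy computation, with the alignment assumed done elsewhere and toy sequences:

      def percent_identity(s1, s2):
          pairs = [(x, y) for x, y in zip(s1, s2) if x != "-" and y != "-"]
          return 100.0 * sum(x == y for x, y in pairs) / len(pairs)

      print(percent_identity("ACGTACGT", "ACGTACCT"))   # 87.5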
    
    Udalski, A., Paczynski, B., Zebrun, K., Szymanski, M., Kubiak, M., Soszynski, I., Szewczyk, O., Wyrzykowski, L. & Pietrzynski, G. The Optical Gravitational Lensing Experiment. Search for planetary and low-luminosity object transits in the Galactic disk. Results of 2001 campaign {2002} ACTA ASTRONOMICA
    Vol. {52}({1}), pp. {1-37} 
    article  
    Abstract: We present results of an extensive photometric search for planetary and low-luminosity object transits in the Galactic disk stars commencing the third phase of the Optical Gravitational Lensing Experiment - OGLE-III. Photometric observations of three fields in the direction of the Galactic center (800 epochs per field) were collected on 32 nights during a time interval of 45 days. Out of the total of 5 million stars monitored, about 52 000 Galactic disk stars with photometry better than 1.5% were analyzed for flat-bottomed eclipses with the depth smaller than 0.08 mag. Altogether 46 stars with transiting low-luminosity objects were detected. For 42 of them multiple transits were observed, a total of 185, allowing orbital period determination. Transits in two objects, OGLE-TR-40 and OGLE-TR-10, with radii ratios of about 0.14 and an estimated companion radius of 1.0-1.5 R-Jup, resemble the well-known planetary transit in HD 209458. The sample was selected by the presence of apparent transits only, with no knowledge of any other properties. Hence, it is very well suited for general study of low-luminosity objects. The transiting objects may be Jupiters, brown dwarfs, or M dwarfs. Future determination of the amplitude of radial velocity changes will establish their masses, and will confirm or refute the reality of the so-called "brown dwarf desert". The low-mass stellar companions will provide new data needed for the poorly known mass-radius relation for the lower main sequence. All photometric data are available to the astronomical community from the OGLE Internet archive.
    BibTeX:
    @article{Udalski2002,
      author = {Udalski, A and Paczynski, B and Zebrun, K and Szymanski, M and Kubiak, M and Soszynski, I and Szewczyk, O and Wyrzykowski, L and Pietrzynski, G},
      title = {The Optical Gravitational Lensing Experiment. Search for planetary and low-luminosity object transits in the Galactic disk. Results of 2001 campaign},
      journal = {ACTA ASTRONOMICA},
      year = {2002},
      volume = {52},
      number = {1},
      pages = {1-37}
    }
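
    The quoted radii ratio translates directly into transit depth: for a dark companion crossing centrally, the fractional flux drop is roughly the square of the ratio of the radii.

      ratio = 0.14
      depth = ratio ** 2
      print(depth)        # ~0.0196, i.e., about a 2% dip (roughly 0.02 mag)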
    
    Udalski, A., Pietrzynski, G., Szymanski, M., Kubiak, M., Zebrun, K., Soszynski, I., Szewczyk, O. & Wyrzykowski, L. The optical gravitational lensing experiment. Additional planetary and low-luminosity object transits from the OGLE 2001 and 2002 observational campaigns {2003} ACTA ASTRONOMICA
    Vol. {53}({2}), pp. {133-149} 
    article  
    Abstract: The photometric data collected by OGLE-III during the 2001 and 2002 observational campaigns aiming at detection of planetary or low-luminosity object transits were corrected for small-scale systematic effects using the data pipeline by Kruszewski and Semeniuk and searched again for low-amplitude transits. Sixteen new objects with small transiting companions, additional to the previously found samples, were discovered. Most of them are small-amplitude cases which remained undetected in the original data. Several new objects seem to be very promising candidates for systems containing substellar objects: extrasolar planets or brown dwarfs. Those include OGLE-TR-122, OGLE-TR-125, OGLE-TR-130, OGLE-TR-131 and a few others. Those objects are particularly worthy of spectroscopic follow-up observations for radial velocity measurements and mass determination. With a well-known photometric orbit, only a few RV measurements should suffice to confirm their actual status. All photometric data for the presented objects are available to the astronomical community from the OGLE Internet archive.
    BibTeX:
    @article{Udalski2003,
      author = {Udalski, A and Pietrzynski, G and Szymanski, M and Kubiak, M and Zebrun, K and Soszynski, I and Szewczyk, O and Wyrzykowski, L},
      title = {The optical gravitational lensing experiment. Additional planetary and low-luminosity object transits from the OGLE 2001 and 2002 observational campaigns},
      journal = {ACTA ASTRONOMICA},
      year = {2003},
      volume = {53},
      number = {2},
      pages = {133-149}
    }
    
    Udalski, A., Soszynski, I., Szymanski, M., Kubiak, M., Pietrzynski, G., Wozniak, P. & Zebrun, K. The optical gravitational lensing experiment. Cepheids in the Magellanic Clouds. IV. Catalog of Cepheids from the large Magellanic Cloud {1999} ACTA ASTRONOMICA
    Vol. {49}({3}), pp. {223-317} 
    article  
    Abstract: We present the Catalog of Cepheids from the LMC. The Catalog contains 1333 objects detected in the 4.5 square degree area of central parts of the LMC. About 3.4 x 10^5 BVI measurements of these stars were collected during the OGLE-II microlensing survey. The Catalog data include period, BVI photometry, astrometry, and the R_21 and phi_21 parameters of the Fourier decomposition of the I-band light curve. The vast majority of objects from the Catalog are the classical Cepheids pulsating in the fundamental or first overtone mode. The remaining objects include Population II Cepheids and red giants with pulsation-like light curves. Tests of completeness performed in overlapping parts of adjacent fields indicate that completeness of the Catalog is very high: >96%. Statistics and distributions of basic parameters of Cepheids are also presented. Finally, we show the light curves of three eclipsing systems containing Cepheids detected among objects of the Catalog. All presented data, including individual BVI observations, are available from the OGLE Internet archive.
    BibTeX:
    @article{Udalski1999,
      author = {Udalski, A and Soszynski, I and Szymanski, M and Kubiak, M and Pietrzynski, G and Wozniak, P and Zebrun, K},
      title = {The optical gravitational lensing experiment. Cepheids in the Magellanic Clouds. IV. Catalog of Cepheids from the large Magellanic Cloud},
      journal = {ACTA ASTRONOMICA},
      year = {1999},
      volume = {49},
      number = {3},
      pages = {223-317}
    }
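
    The R_21 and phi_21 values listed in the Catalog are the standard Fourier-decomposition parameters of the phase-folded light curve:

      I(\phi) = A_0 + \sum_{k=1}^{N} A_k \cos(2\pi k \phi + \phi_k),
      \qquad R_{21} = A_2 / A_1, \qquad \phi_{21} = \phi_2 - 2\phi_1 .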
    
    Udalski, A., Szymanski, M., Kubiak, M., Pietrzynski, G., Wozniak, P. & Zebrun, K. The optical gravitational lensing experiment, BVI maps of dense stellar regions. I. The Small Magellanic Cloud {1998} ACTA ASTRONOMICA
    Vol. {48}({2}), pp. {147-174} 
    article  
    Abstract: We present three color, BVI maps of the Small Magellanic Cloud. The maps contain precise photometric and astrometric data for about 2.2 million stars from the central regions of the SMC bar covering approximately 2.4 square degrees on the sky. Mean brightness of stars is derived from observations collected in the course of the OGLE-II microlensing search from about 130, 30 and 15 measurements in the I, V and B-bands, respectively. Accuracy of the zero points of photometry is about 0.01 mag, and astrometry 0.15 arcsec (with possible systematic error up to 0.7 arcsec). Color-magnitude diagrams of observed fields are also presented. The maps of the SMC are the first from the series of similar maps covering other OGLE fields: LMC, Galactic bulge and Galactic disk. The data are very well suited for many projects, particularly for the SMC which has been neglected photometrically for years. Because of potentially great impact on many astrophysical fields we decided to make the SMC data available to the astronomical community from the OGLE Internet archive.
    BibTeX:
    @article{Udalski1998,
      author = {Udalski, A and Szymanski, M and Kubiak, M and Pietrzynski, G and Wozniak, P and Zebrun, K},
      title = {The optical gravitational lensing experiment, BVI maps of dense stellar regions. I. The Small Magellanic Cloud},
      journal = {ACTA ASTRONOMICA},
      year = {1998},
      volume = {48},
      number = {2},
      pages = {147-174}
    }
    
    Udalski, A., Zebrun, K., Szymanski, M., Kubiak, M., Soszynski, I., Szewczyk, O., Wyrzykowski, L. & Pietrzynski, G. The optical gravitational lensing experiment. Search for planetary and low-luminosity object transits in the galactic disk. Results of 2001 campaign - supplement {2002} ACTA ASTRONOMICA
    Vol. {52}({2}), pp. {115-128} 
    article  
    Abstract: The photometric data collected during the 2001 season OGLE-III planetary/low-luminosity object transit campaign were reanalyzed with the new transit search technique - the BLS method by Kovacs, Zucker and Mazeh. In addition to all transits presented in our original paper, 13 additional objects with transiting low-luminosity companions were discovered. We present here a supplement to our original catalog - the photometric data, light curves and finding charts of all 13 new objects. The model fits to the transit light curves indicate that a few new objects may be Jupiter-sized (R < 1.6 R-Jup). OGLE-TR-56 is a particularly interesting case. Its transit has only 13 mmag depth, short duration and a period of 1.21190 days. The model fit indicates that the companion may be Saturn-sized if the passage were central. Spectroscopic follow-up observations are encouraged for final classification of the transiting objects as planets, brown dwarfs or late M-type dwarf stars. We also provide the most recent ephemerides of the other most promising planetary transits, OGLE-TR-10 and OGLE-TR-40, based on observations collected in June 2002. All photometric data are available to the astronomical community from the OGLE Internet archive.
    BibTeX:
    @article{Udalski2002a,
      author = {Udalski, A and Zebrun, K and Szymanski, M and Kubiak, M and Soszynski, I and Szewczyk, O and Wyrzykowski, L and Pietrzynski, G},
      title = {The optical gravitational lensing experiment. Search for planetary and low-luminosity object transits in the galactic disk. Results of 2001 campaign - supplement},
      journal = {ACTA ASTRONOMICA},
      year = {2002},
      volume = {52},
      number = {2},
      pages = {115-128}
    }
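
    A stripped-down version of the box-fitting idea behind the BLS method cited above: fold the photometry on a trial period and score a box-shaped dip by the out-of-transit minus in-transit mean. The real algorithm scans period, phase and duration on a grid; the injected signal below only loosely mimics the quoted OGLE-TR-56 parameters.

      import numpy as np

      def box_score(t, flux, period, duration):
          phase = (t / period) % 1.0
          in_tr = phase < duration / period
          if in_tr.sum() < 3 or (~in_tr).sum() < 3:
              return 0.0
          return flux[~in_tr].mean() - flux[in_tr].mean()   # estimated depth

      t = np.linspace(0.0, 45.0, 800)              # 45-day campaign, 800 epochs
      flux = 1.0 + 0.005 * np.random.randn(t.size)
      flux[(t / 1.2119) % 1.0 < 0.02] -= 0.012     # ~13 mmag dip at P = 1.2119 d
      print(box_score(t, flux, 1.2119, 0.02 * 1.2119))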
    
    Urban, G., Sultan, F. & Qualls, W. Placing trust at the center of your Internet strategy {2000} SLOAN MANAGEMENT REVIEW
    Vol. {42}({1}), pp. {39+} 
    article  
    Abstract: Consumers make Internet buying decisions on the basis of trust. How much trust your Web site needs to deliver depends on the nature of your products, competitive pressure from new infomediaries and your ability to innovate.
    BibTeX:
    @article{Urban2000,
      author = {Urban, GL and Sultan, F and Qualls, WJ},
      title = {Placing trust at the center of your Internet strategy},
      journal = {SLOAN MANAGEMENT REVIEW},
      year = {2000},
      volume = {42},
      number = {1},
      pages = {39+}
    }
    
    Vazquez, A., Pastor-Satorras, R. & Vespignani, A. Large-scale topological and dynamical properties of the Internet {2002} PHYSICAL REVIEW E
    Vol. {65}({6, Part 2}) 
    article DOI  
    Abstract: We study the large-scale topological and dynamical properties of real Internet maps at the autonomous system level, collected in a 3-yr time interval. We find that the connectivity structure of the Internet presents statistical distributions settled in a well-defined stationary state. The large-scale properties are characterized by a scale-free topology consistent with previous observations. Correlation functions and clustering coefficients exhibit a remarkable structure due to the underlying hierarchical organization of the Internet. The study of the Internet time evolution shows a growth dynamics with aging features typical of recently proposed growing network models. We compare the properties of growing network models with the present real Internet data analysis.
    BibTeX:
    @article{Vazquez2002,
      author = {Vazquez, A and Pastor-Satorras, R and Vespignani, A},
      title = {Large-scale topological and dynamical properties of the Internet},
      journal = {PHYSICAL REVIEW E},
      year = {2002},
      volume = {65},
      number = {6, Part 2},
      doi = {{10.1103/PhysRevE.65.066130}}
    }
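
    Two of the quantities analyzed in the paper, the degree distribution and clustering coefficients, reduce to simple computations on an adjacency structure; the four-node graph below is a toy stand-in for an AS-level map:

      from collections import Counter
      from itertools import combinations

      adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}   # toy undirected graph

      degree_hist = Counter(len(nbrs) for nbrs in adj.values())

      def clustering(node):
          nbrs = list(adj[node])
          if len(nbrs) < 2:
              return 0.0                     # undefined; report 0 for leaves
          links = sum(v in adj[u] for u, v in combinations(nbrs, 2))
          return links / (len(nbrs) * (len(nbrs) - 1) / 2)

      print(degree_hist, [round(clustering(n), 2) for n in adj])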
    
    Wandell, B., Chial, S. & Backus, B. Visualization and measurement of the cortical surface {2000} JOURNAL OF COGNITIVE NEUROSCIENCE
    Vol. {12}({5}), pp. {739-752} 
    article  
    Abstract: Much of the human cortical surface is obscured from view by the complex pattern of folds, making the spatial relationship between different surface locations hard to interpret. Methods for viewing large portions of the brain's surface in a single flattened representation are described. The flattened representation preserves several key spatial relationships between regions on the cortical surface. The principles used in the implementations and evaluations of these implementations using artificial test surfaces are provided. Results of applying the methods to structural magnetic resonance measurements of the human brain are also shown. The implementation details are available in the source code, which is freely available on the Internet.
    BibTeX:
    @article{Wandell2000,
      author = {Wandell, BA and Chial, S and Backus, BT},
      title = {Visualization and measurement of the cortical surface},
      journal = {JOURNAL OF COGNITIVE NEUROSCIENCE},
      year = {2000},
      volume = {12},
      number = {5},
      pages = {739-752}
    }
    
    Wang, W., Liew, S. & Li, V. Solutions to performance problems in VoIP over a 802.11 wireless LAN {2005} IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
    Vol. {54}({1}), pp. {366-384} 
    article DOI  
    Abstract: Voice over Internet Protocol (VoIP) over a wireless local area network (WLAN) is poised to become an important Internet application. However, two major technical problems that stand in the way are: 1) low VoIP capacity in WLAN and 2) unacceptable VoIP performance in the presence of coexisting traffic from other applications. With each VoIP stream typically requiring less than 10 kb/s, an 802.11b WLAN operated at 11 Mb/s could in principle support more than 500 VoIP sessions. In actuality, no more than a few sessions can be supported due to various protocol overheads (for GSM 6.10, it is about 12). This paper proposes and investigates a scheme that can improve the VoIP capacity by close to 100% without changing the standard 802.11 CSMA/CA protocol. In addition, we show that VoIP delay and loss performance in WLAN can be compromised severely in the presence of coexisting transmission-control-protocol (TCP) traffic, even when the number of VoIP sessions is limited to half its potential capacity. A touted advantage of VoIP over traditional telephony is that it enables the creation of novel applications that integrate voice with data. The inability of VoIP and TCP traffic to coexist harmoniously over the WLAN poses a severe challenge to this vision. Fortunately, the problem can be largely solved by simple solutions that require only changes to the medium-access control (MAC) protocol at the access point. Specifically, in our proposed solutions, the MAC protocol at the wireless end stations does not need to be modified, making the solutions more readily deployable over the existing network infrastructure.
    BibTeX:
    @article{Wang2005,
      author = {Wang, W and Liew, SC and Li, VOK},
      title = {Solutions to performance problems in VoIP over a 802.11 wireless LAN},
      journal = {IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY},
      year = {2005},
      volume = {54},
      number = {1},
      pages = {366-384},
      doi = {{10.1109/TVT.2004.838890}}
    }
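
    The capacity gap described above is per-packet overhead arithmetic; the constants below are rough 802.11b figures assumed for this sketch, not numbers taken from the paper:

      DIFS, SIFS = 50e-6, 10e-6          # rough 802.11b MAC timings, seconds
      PREAMBLE = 192e-6                  # long PLCP preamble + header
      ACK = 112 / 2e6                    # 14-byte ACK at the 2 Mb/s basic rate
      RATE = 11e6                        # payload bit rate

      def airtime(payload_bytes, headers=68):   # MAC+IP+UDP+RTP headers, approx.
          bits = 8 * (payload_bytes + headers)
          return DIFS + PREAMBLE + bits / RATE + SIFS + PREAMBLE + ACK

      per_pkt = airtime(33)              # one 20-ms GSM 6.10 frame (33 bytes)
      sessions = 1 / (per_pkt * 2 * 50)  # 50 pkt/s in each direction per session
      print(round(sessions))             # on the order of 15, nowhere near 500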
    
    Wang, X. Complex networks: Topology, dynamics and synchronization {2002} INTERNATIONAL JOURNAL OF BIFURCATION AND CHAOS
    Vol. {12}({5}), pp. {885-916} 
    article  
    Abstract: Dramatic advances in the field of complex networks have been witnessed in the past few years. This paper reviews some important results in this direction of rapidly evolving research, with emphasis on the relationship between the dynamics and the topology of complex networks. Basic quantities and typical examples of various complex networks are described, and main network models are introduced, including regular, random, small-world and scale-free models. The robustness of connectivity and the epidemic dynamics in complex networks are also evaluated. Finally, synchronization in various dynamical networks is discussed according to their regular, small-world and scale-free connections.
    BibTeX:
    @article{Wang2002a,
      author = {Wang, XF},
      title = {Complex networks: Topology, dynamics and synchronization},
      journal = {INTERNATIONAL JOURNAL OF BIFURCATION AND CHAOS},
      year = {2002},
      volume = {12},
      number = {5},
      pages = {885-916},
      note = {1st Asia-Pacific Workshop on Chaos Control and Synchronization, SHANGHAI, PEOPLES R CHINA, JUN 28-29, 2001}
    }
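
    Of the models surveyed, the scale-free one is the hardest to picture from prose alone; a minimal Barabasi-Albert-style growth sketch, with degree-proportional attachment implemented by sampling from a pool of edge endpoints:

      import random

      def ba_edges(n, m=2):
          edges = [(i, j) for i in range(m + 1) for j in range(i)]   # seed clique
          stubs = [v for e in edges for v in e]   # each node appears once per degree
          for new in range(m + 1, n):
              chosen = set()
              while len(chosen) < m:
                  chosen.add(random.choice(stubs))   # preferential attachment
              for tgt in chosen:
                  edges.append((new, tgt))
                  stubs += [new, tgt]
          return edges

      print(len(ba_edges(100)))   # seed edges + m edges per added node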
    
    Wang, X. & Chen, G. Synchronization in scale-free dynamical networks: Robustness and fragility {2002} IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-FUNDAMENTAL THEORY AND APPLICATIONS
    Vol. {49}({1}), pp. {54-62} 
    article  
    Abstract: Recently, it has been demonstrated that many large complex networks display a scale-free feature, that is, their connectivity distributions are in the power-law form. In this paper, we investigate the synchronization phenomenon in scale-free dynamical networks. We show that if the coupling strength of a scale-free dynamical network is greater than a positive threshold, then the network will synchronize no matter how large it is. We show that the synchronizability of a scale-free dynamical network is robust against random removal of nodes, but is fragile to specific removal of the most highly connected nodes.
    BibTeX:
    @article{Wang2002,
      author = {Wang, XF and Chen, GR},
      title = {Synchronization in scale-free dynamical networks: Robustness and fragility},
      journal = {IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-FUNDAMENTAL THEORY AND APPLICATIONS},
      year = {2002},
      volume = {49},
      number = {1},
      pages = {54-62}
    }
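
    In the notation common to this line of work (coupling matrix A = (a_ij) with zero row sums, generic inner coupling matrix Gamma; the symbols here are placeholders, not lifted from the paper), the result takes the form of a threshold on the coupling strength c:

      \dot{x}_i = f(x_i) + c \sum_{j=1}^{N} a_{ij}\, \Gamma x_j,
      \qquad \text{synchronization for } c \ge \bar{c} = \frac{|\bar{d}|}{|\lambda_2|},

    where lambda_2 < 0 is the largest nonzero eigenvalue of A and d-bar bounds the stability of the transverse modes. Removing the most highly connected nodes shrinks |lambda_2| and so raises the required coupling, which is the fragility the paper demonstrates.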
    
    Wang, X. & Chen, G. Pinning control of scale-free dynamical networks {2002} PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS
    Vol. {310}({3-4}), pp. {521-531} 
    article  
    Abstract: Recently, it has been demonstrated that many large complex networks display a scale-free feature, that is, their connectivity distributions have the power-law form. In the present work, control of a scale-free dynamical network by applying local feedback injections to a fraction of network nodes is investigated. Both specific and random pinning schemes are considered. Specific pinning of the most highly connected nodes is shown to require a significantly smaller number of local controllers than the random pinning scheme. The method is applied to an array of Chua's oscillators as an example. (C) 2002 Published by Elsevier Science B.V.
    BibTeX:
    @article{Wang2002b,
      author = {Wang, XF and Chen, GR},
      title = {Pinning control of scale-free dynamical networks},
      journal = {PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS},
      year = {2002},
      volume = {310},
      number = {3-4},
      pages = {521-531}
    }
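
    A sketch of the two pinning schemes compared above; the gain k, target state s and pinned fraction are illustrative parameters, and the local feedback takes the usual form u_i = -ck(x_i - s):

      import numpy as np

      def pinned_nodes(adj, fraction=0.05, scheme="specific", rng=None):
          degrees = adj.sum(axis=1)
          n_pin = max(1, int(fraction * len(degrees)))
          if scheme == "specific":                    # highest-degree nodes first
              return np.argsort(degrees)[::-1][:n_pin]
          rng = rng or np.random.default_rng()
          return rng.choice(len(degrees), n_pin, replace=False)

      def control_input(x, s, pins, c=1.0, k=10.0):
          u = np.zeros_like(x)
          u[pins] = -c * k * (x[pins] - s)            # feedback on pinned nodes only
          return u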
    
    Wang, Y., Doherty, J. & Van Dyck, R. A wavelet-based watermarking algorithm for ownership verification of digital images {2002} IEEE TRANSACTIONS ON IMAGE PROCESSING
    Vol. {11}({2}), pp. {77-88} 
    article  
    Abstract: In recent years, access to multimedia data has become much easier due to the rapid growth of the Internet. While this is usually considered an improvement of everyday life, it also makes unauthorized copying and distributing of multimedia data much easier, therefore presenting a challenge in the field of copyright protection. Digital watermarking, which is inserting copyright information into the data, has been proposed to solve the problem. In this paper, we first discuss the features that a practical digital watermarking system for ownership verification requires. Besides perceptual invisibility and robustness, we claim that the private control of the watermark is also very important. Second, we present a novel wavelet-based watermarking algorithm. Experimental results and analysis are then given to demonstrate that the proposed algorithm is effective and can be used in a practical system.
    BibTeX:
    @article{Wang2002c,
      author = {Wang, YW and Doherty, JF and Van Dyck, RE},
      title = {A wavelet-based watermarking algorithm for ownership verification of digital images},
      journal = {IEEE TRANSACTIONS ON IMAGE PROCESSING},
      year = {2002},
      volume = {11},
      number = {2},
      pages = {77-88}
    }
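
    A generic additive wavelet-domain embedding, not the authors' exact algorithm: one Haar level along the rows, with a key-seeded pseudo-random watermark added to the detail band (the private key supplies the private control the paper emphasizes); the strength alpha is an invented parameter:

      import numpy as np

      def haar_embed(img, alpha=2.0, seed=42):
          f = img.astype(float)
          a = (f[0::2, :] + f[1::2, :]) / 2      # one-level Haar: row-pair averages
          d = (f[0::2, :] - f[1::2, :]) / 2      # ... and row-pair details
          w = np.random.default_rng(seed).standard_normal(d.shape)
          d = d + alpha * w                      # additive, key-seeded mark
          out = np.empty_like(f)
          out[0::2, :], out[1::2, :] = a + d, a - d   # inverse transform
          return out                             # detect by correlating details with w

      marked = haar_embed(np.zeros((8, 8)))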
    
    Wang, Y., Reibman, A. & Lin, S. Multiple description coding for video delivery {2005} PROCEEDINGS OF THE IEEE
    Vol. {93}({1}), pp. {57-70} 
    article DOI  
    Abstract: Multiple description coding (MDC) is an effective means to combat bursty packet losses in the Internet and wireless networks. MDC is especially promising for video applications where retransmission is unacceptable or infeasible. When combined with multiple path transport (MPT), MDC enables traffic dispersion and hence reduces network congestion. This paper describes principles in designing MD video coders employing temporal prediction and presents several predictor structures that differ in their tradeoffs between mismatch-induced distortion and coding efficiency. The paper also discusses example video communication systems integrating MDC and MPT.
    BibTeX:
    @article{Wang2005a,
      author = {Wang, Y and Reibman, AR and Lin, SN},
      title = {Multiple description coding for video delivery},
      journal = {PROCEEDINGS OF THE IEEE},
      year = {2005},
      volume = {93},
      number = {1},
      pages = {57-70},
      doi = {{10.1109/JPROC.2004.839618}}
    }
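
    The simplest multiple-description construction for video is temporal: odd and even frames form two descriptions sent over different paths, so either description alone still yields half-rate video. The paper's coders are far more refined, but the principle reduces to:

      def split_descriptions(frames):
          return frames[0::2], frames[1::2]      # two independently decodable halves

      def merge(d0, d1):
          out = []
          for a, b in zip(d0, d1):
              out += [a, b]
          return out + list(d0[len(d1):])        # d0 may hold one extra frame

      d0, d1 = split_descriptions(list(range(10)))
      assert merge(d0, d1) == list(range(10))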
    
    Wang, Y. & Zhu, Q. Error control and concealment for video communication: A review {1998} PROCEEDINGS OF THE IEEE
    Vol. {86}({5}), pp. {974-997} 
    article  
    Abstract: The problem of error control and concealment in video communication is becoming increasingly important because of the growing interest in video delivery over unreliable channels such as wireless networks and the Internet. This paper reviews the techniques that have been developed for error control and concealment in the past 10-15 years. These techniques are described in three categories according to the roles that the encoder and decoder play in the underlying approaches. Forward error concealment includes methods that add redundancy at the source end to enhance error resilience of the coded bit streams. Error concealment by postprocessing refers to operations at the decoder to recover the damaged areas based on characteristics of image and video signals. Last, interactive error concealment covers techniques that are dependent on a dialogue between the source and destination. Both current research activities and practice in international standards are covered.
    BibTeX:
    @article{Wang1998,
      author = {Wang, Y and Zhu, QF},
      title = {Error control and concealment for video communication: A review},
      journal = {PROCEEDINGS OF THE IEEE},
      year = {1998},
      volume = {86},
      number = {5},
      pages = {974-997}
    }
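
    The simplest decoder-side (postprocessing) technique in the review's taxonomy is zero-motion temporal concealment: a damaged block is replaced by the co-located block of the previous decoded frame. A sketch, with an invented block size:

      import numpy as np

      def conceal(curr, prev, top_left, size=16):
          y, x = top_left
          curr[y:y + size, x:x + size] = prev[y:y + size, x:x + size]
          return curr                       # refinements use motion-compensated copies

      frame = conceal(np.zeros((64, 64)), np.ones((64, 64)), (16, 32))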
    
    Ware, J. & Kosinski, M. Interpreting SF-36 summary health measures: A response {2001} QUALITY OF LIFE RESEARCH