This paper examines the productivity of the public sector across the states of the US. Because states are heterogeneous in the public services they provide, this heterogeneity could affect productivity; indeed, there could be convergence among the states. The services provided by the public sector have come under increased scrutiny with the ongoing process of reform in recent years. Unlike the private sector, the public sector operates in the absence of contestable markets and of the information and incentives such markets provide; performance information, particularly measures of comparative performance, has therefore been used to gauge the productivity of the public service sector. The research methodology marries exploratory techniques (i.e. Kohonen clustering) and empirical techniques (a panel model) via the Cobb-Douglas production function. Given the homogeneity across states in the use of a common currency, the nature of the convergence process in state public sectors throughout the United States can be readily identified.
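For reference, a minimal sketch of the Cobb-Douglas production function on which the methodology rests, in the log-linear form typically estimated in panel models; the state index i, year index t, and the factor labels are illustrative notation, not taken from the paper:

\[ Y_{it} = A_{it}\, K_{it}^{\alpha} L_{it}^{\beta} \quad\Longleftrightarrow\quad \ln Y_{it} = \ln A_{it} + \alpha \ln K_{it} + \beta \ln L_{it}, \]

where \(Y\) denotes public-sector output, \(K\) and \(L\) denote capital and labour inputs, and \(A\) captures total factor productivity, the object whose convergence across states is of interest.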
Very often features come with their own vectorial descriptions which provide detailed information about their properties. We refer to these vectorial descriptions as feature side-information. The feature side-information is most often ignored or used for feature selection prior to model fitting. In this paper, we propose a framework that allows for the incorporation of feature side-information during the learning of very general model families. We control the structures of the learned models so that they reflect features’ similarities as these are defined on the basis of the side-information. We perform experiments on a number of benchmark datasets which show significant predictive performance gains, over a number of baselines, as a result of the exploitation of the side-information.
Traditional linear methods for forecasting multivariate time series are not able to satisfactorily model the non-linear dependencies that may exist in non-Gaussian series. We build on the theory of learning vector-valued functions in the reproducing kernel Hilbert space and develop a method for learning prediction functions that accommodate such non-linearities. The method not only learns the predictive function but also the matrix-valued kernel underlying the function search space directly from the data. Our approach is based on learning multiple matrix-valued kernels, each composed of a set of input kernels and a set of output kernels learned in the cone of positive semi-definite matrices. In addition to superior predictive performance in the presence of strong non-linearities, our method also recovers the hidden dynamic relationships between the series and thus offers a new alternative to existing graphical Granger techniques.
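For orientation, a minimal sketch of prediction with a matrix-valued kernel in a vector-valued reproducing kernel Hilbert space, in generic notation that is not taken from the paper: the predictor and the kernel can be written as

\[ f(x) = \sum_{t=1}^{T} K(x, x_t)\, c_t, \qquad K(x, x') = \sum_{m=1}^{M} k_m(x, x')\, B_m, \qquad B_m \succeq 0, \]

where the \(k_m\) are scalar input kernels, the \(B_m\) are output matrices constrained to the cone of positive semi-definite matrices, and the coefficient vectors \(c_t\) are obtained by regularized least squares over the observed series.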
Very often features come with their own vectorial descriptions which provide detailed information about their properties. We refer to these vectorial descriptions as feature side-information. In the standard learning scenario, the input is represented as a vector of features and the feature side-information is most often ignored or used only for feature selection prior to model fitting. We believe that feature side-information, which carries information about the intrinsic properties of features, can help improve model prediction if used appropriately during the learning process. In this paper, we propose a framework that allows for the incorporation of feature side-information during the learning of very general model families in order to improve predictive performance. We control the structures of the learned models so that they reflect features’ similarities as these are defined on the basis of the side-information. We perform experiments on a number of benchmark datasets which show significant predictive performance gains, over a number of baselines, as a result of the exploitation of the side-information.
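One plausible instantiation of such structural control, shown here purely for illustration and not taken from the paper, is a graph-style regularizer that ties together the parameters of features whose side-information vectors \(z_j\) are similar:

\[ \min_{W}\; \mathcal{L}(W) + \lambda \sum_{j, j'} S_{jj'}\, \lVert w_j - w_{j'} \rVert^2, \qquad S_{jj'} = \kappa(z_j, z_{j'}), \]

where \(\mathcal{L}\) is the model's loss, \(w_j\) is the parameter (block) attached to feature \(j\), and \(S\) is a feature-similarity matrix computed from the side-information with some kernel \(\kappa\).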
Today, molecular biology databases are the cornerstone of knowledge sharing for the life and health sciences. The curation and maintenance of these resources are labour intensive. Although text mining is gaining impetus among curators, its integration in the curation workflow has not yet been widely adopted. The Swiss Institute of Bioinformatics Text Mining and CALIPHO groups joined forces to design a new curation support system named neXtA5. In this report, we explore the integration of novel triage services to support the curation of two types of biological data: protein–protein interactions (PPIs) and post-translational modifications (PTMs). The recognition of PPIs and PTMs poses a special challenge, as it requires not only the identification of biological entities (proteins or residues) but also that of particular relationships (e.g. binding or position). These relationships cannot be described with onto-terminological descriptors such as the Gene Ontology for molecular functions, which makes the triage task more challenging. Prioritizing papers for these tasks thus requires the development of different approaches. In this report, we propose a new method to prioritize articles containing information specific to PPIs and PTMs. The new resources (RESTful APIs, a semantically annotated MEDLINE library) enrich the neXtA5 platform. We tuned the article prioritization model on a set of 100 proteins previously annotated by the CALIPHO group. The effectiveness of the triage service was tested with a dataset of 200 annotated proteins. We defined two sets of descriptors to support automatic triage: the first to enrich for papers with PPI data, and the second for PTMs. All occurrences of these descriptors were marked up in MEDLINE and indexed, thus constituting a semantically annotated version of MEDLINE. These annotations were then used to estimate the relevance of a particular article with respect to the chosen annotation type. This relevance score was combined with a local vector-space search engine to generate a ranked list of PMIDs. We also evaluated a query refinement strategy, which adds specific keywords (such as ‘binds’ or ‘interacts’) to the original query. Compared to PubMed, the search effectiveness of the neXtA5 triage service is improved by 190% for the prioritization of papers with PPI information and by 260% for papers with PTM information. Combining advanced retrieval and query refinement strategies with automatically enriched MEDLINE contents is effective in improving triage in complex curation tasks such as the curation of PPIs and PTMs.
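As an illustrative sketch only (the weighting scheme and notation are ours, not the authors'), the final ranking can be viewed as combining the descriptor-based relevance score with the vector-space similarity between a query \(q\) and an article \(d\):

\[ \mathrm{score}(d, q) = \lambda\, s_{\mathrm{desc}}(d) + (1 - \lambda)\, \cos(\mathbf{v}_q, \mathbf{v}_d), \qquad 0 \le \lambda \le 1, \]

where \(s_{\mathrm{desc}}(d)\) weights the PPI- or PTM-specific descriptors marked up in the article and \(\mathbf{v}_q, \mathbf{v}_d\) are the vector-space representations used by the local search engine.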
This paper analyzes the demand for recreation in Swiss forests using the individual travel cost method. We apply a two-step approach, i.e., a hurdle zero-truncated negative binomial model, that accounts both for the large number of non-visitors arising from the off-site phone survey and for over-dispersion. Given the national scale of the survey, we group forest zones to assess consumer surpluses and travel cost elasticities for relatively homogeneous forest types. We find that forest recreation activities are travel-cost inelastic and show that recreation in Swiss forests provides large benefits to the population. The most populated area is associated with greater consumer surpluses, but the lack of recreational infrastructure may cause a lower recreational benefit in some zones. For these zones, recreational benefits may be lower than the costs of maintenance. More efficient management would require either improving recreational infrastructure, thus increasing benefits, or switching the forest status from recreational to biodiversity forest, hence decreasing management costs.
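For reference, a minimal sketch of the hurdle zero-truncated negative binomial specification in generic notation (not taken from the paper): a binary participation model governs whether any visit occurs, and positive visit counts follow a zero-truncated negative binomial,

\[ \Pr(y_i = 0) = 1 - p_i, \qquad \Pr(y_i = k) = p_i\, \frac{\mathrm{NB}(k \mid \mu_i, \alpha)}{1 - \mathrm{NB}(0 \mid \mu_i, \alpha)}, \quad k \ge 1. \]

In such count-data travel cost models, the consumer surplus per trip is commonly obtained as \(-1/\beta_{tc}\), where \(\beta_{tc}\) is the estimated travel cost coefficient, and the travel cost elasticity follows from the same coefficient.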
Inspection is essential to ensure that all products are perfect, even though it is not always possible to inspect every product after production, as with certain special product types such as plastic joints for water pipes. To this end, this paper develops an inventory model with a lot-inspection policy. With lot inspection, not every product needs to be verified, yet the retailer can still judge the quality of the products during inspection. If the retailer finds products of imperfect quality, the products are sent back to the supplier. Because inspection is carried out at the lot level, misclassification errors (Type-I and Type-II errors) are introduced to model the problem. Two possible cases for returning defective products are discussed: in the first, defective lots are immediately withdrawn from the system and sent back to the supplier at the retailer's expense; in the second, the retailer returns the defective products upon receiving the next lot from the supplier, at the supplier's expense, as in the food industry or the hygiene-product industry. The model is solved analytically, and the results indicate that the optimal order size and sample size are intrinsically linked and jointly maximize the total profit. Numerical examples, graphical representations, and a sensitivity analysis are given to illustrate the model. The results suggest that returning defective products under the first case is more profitable than under the second.
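As an illustrative expression (the notation is ours and is not taken from the paper): if a lot of size \(y\) contains a defective fraction \(p\), and inspection commits a Type-I error with probability \(e_1\) (a good item classified as defective) and a Type-II error with probability \(e_2\) (a defective item accepted as good), the expected number of items returned to the supplier is

\[ \mathbb{E}[\text{returned}] = y\,\big[(1 - p)\, e_1 + p\,(1 - e_2)\big], \]

and the expected profit trades this return flow, together with the cost of defectives that reach customers through Type-II errors, against ordering, inspection, and holding costs.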
Accelerators assist early-stage ventures by offering networking opportunities, access to funding, and training. We map business accelerators into five categories: Independent Accelerators (IA), Corporate Accelerators (CA), Hybrid Accelerators (HA), University Accelerators (UA), and Government Accelerators (GA). We argue that accelerators can be described by two main acceleration models (Model 1 and Model 2), providing accelerators with valuable information on how to position themselves strategically in the ecosystem. We identify a list of accelerators’ ‘characterizing’ variables that allows the five accelerator categories to be differentiated and the various acceleration models to be described. Empirical evidence on the two acceleration models is provided through a case study of eight Swiss acceleration programs.
Economic theory assumes that willingness to pay (WTP) increases with the quantity of the good consumed. This implies that there should be a scope effect in contingent valuation studies. However, in previous issues of Ecological Economics, several authors criticized the contingent valuation (CV) method for the absence or the inadequacy of such an effect. In this paper, we contribute to this ongoing debate by proposing to systematically apply several statistical distribution assumptions for WTP in order to test for scope effects and check their plausibility, following Whitehead’s (2016) recent recommendations. We apply this approach to data from a Swiss case study assessing the WTP for an increased surface of forest reserves. We find that both mean WTP and scope effects are sensitive to the statistical distribution assumption. Regarding plausibility, scope elasticities provide mixed results and also depend on the assumed statistical distribution of WTP. For small-sample CV studies, a non-parametric analysis, a spike model or an open-ended format can thus be better suited to reveal scope effects than the classical parametric dichotomous choice analysis. We thus recommend systematically applying several statistical distribution assumptions for WTP to test for scope effects and their plausibility.
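For concreteness, the scope elasticity used to judge plausibility relates the relative change in WTP to the relative change in the quantity of the good (here, the surface of forest reserves); in arc form, under generic notation not taken from the paper,

\[ \varepsilon_{\text{scope}} = \frac{(\mathrm{WTP}_2 - \mathrm{WTP}_1)/\overline{\mathrm{WTP}}}{(Q_2 - Q_1)/\overline{Q}}, \]

where \(\overline{\mathrm{WTP}}\) and \(\overline{Q}\) are the midpoints of the two WTP estimates and the two quantities being compared; mean WTP itself depends on the assumed distribution, e.g. \(\alpha/\beta\) for a dichotomous choice model with a linear logit utility difference \(\alpha - \beta t\).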