Management & leadership

Research data management in Switzerland: national efforts to guarantee the sustainability of research outputs

Description: 

In this article, the authors report on an ongoing Data Life-Cycle Management (DLCM) national project carried out in Switzerland, with a major focus on long-term preservation. Based on an extensive document analysis as well as semi-structured interviews, the project aims to provide national services responding to the most pressing DLCM needs of researchers, which include guidelines for establishing a data management plan, active data management solutions, long-term preservation storage options, training, and a single point of access and contact for support. In addition to presenting the different working axes of the project, the authors describe a strategic management and lean startup template for developing new business models, which is key to building viable services.

Mondialisation, progrès technique et dépréciation du capital humain : l'impact sur les politiques de formation

Description: 

Human capital, and the policies aimed at creating and preserving it, are becoming increasingly important in industrialized societies. This article surveys the recent economic literature in this field. The major challenge facing the labour market today is the acceleration of technical progress, whose effects differ according to workers' skill levels. Two theories are put forward to analyse these effects. On the one hand, technical progress is seen as biased in favour of skilled workers and unfavourable to the less skilled. On the other hand, following the recently observed polarization of the labour market, one may instead think that technology mainly substitutes for jobs in the middle of the skill distribution. Several economic policy recommendations follow from these observations, the main one being to give workers the means to be flexible and versatile, which calls for a relatively general education rather than one that is too specific and confined to a single sector or occupation.

Value for money in H1N1 influenza: a systematic review of the cost-effectiveness of pandemic interventions

Description: 

Background: The 2009 A/H1N1 influenza pandemic generated additional data and triggered new studies that opened debate over the optimal strategy for handling a pandemic. The lessons-learned documents from the World Health Organization show the need for a cost estimate of the pandemic response during the risk-assessment phase. Several years after the crisis, what conclusions can we draw from this field of research?
Objective: The main objective of this article is to analyse the studies that present cost-effectiveness or cost-benefit analyses of A/H1N1 pandemic interventions since 2009 and to identify which measures seem most cost-effective.
Methods: We reviewed 18 academic articles that provide cost-effectiveness or cost-benefit analyses of A/H1N1 pandemic interventions since 2009. Our review converts the studies' results into a cost-utility measure (cost per disability-adjusted life-year or quality-adjusted life-year) and presents the contexts of severity and fatality.
Results: The existing studies suggest that hospital quarantine, vaccination, and use of the antiviral stockpile are highly cost-effective, even for mild pandemics. However, school closures, antiviral treatments, and social distancing may not qualify as efficient measures for a virus like 2009's H1N1 and a willingness-to-pay threshold of $45,000 per disability-adjusted life-year. Such interventions may become cost-effective for severe crises.
Conclusions: This study helps shed light on the cost-utility of various interventions and may support decision making, among other criteria, for future pandemics. Nonetheless, these results should be interpreted with care, as they may not apply to a specific crisis or country, and a dedicated cost-effectiveness assessment should be conducted at the time.
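
As a rough illustration of the cost-utility yardstick used in this review, the minimal Python sketch below (with purely hypothetical cost and DALY figures) compares an intervention's incremental cost-effectiveness ratio against the $45,000-per-DALY willingness-to-pay threshold mentioned above.

```python
# Illustrative sketch only: hypothetical numbers, not taken from the reviewed studies.
WTP_THRESHOLD = 45_000  # USD per disability-adjusted life-year (DALY) averted


def icer(extra_cost, dalys_averted):
    """Incremental cost-effectiveness ratio: extra cost per DALY averted."""
    return extra_cost / dalys_averted


def is_cost_effective(extra_cost, dalys_averted, threshold=WTP_THRESHOLD):
    """An intervention is deemed cost-effective if its ICER is below the threshold."""
    return icer(extra_cost, dalys_averted) <= threshold


# Hypothetical intervention: costs an extra $2.1M and averts 120 DALYs.
print(icer(2_100_000, 120))               # 17500.0 USD per DALY
print(is_cost_effective(2_100_000, 120))  # True: below the $45,000 threshold
```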

neXtA5: accelerating annotation of articles via automated approaches in neXtProt

Description: 

The rapid increase in the number of published articles poses a challenge for curated databases to remain up to date. To help the scientific community and database curators deal with this issue, we have developed an application, neXtA5, which prioritizes the literature for specific curation requirements. Our system, neXtA5, is a curation service composed of three main elements. The first component is a named-entity recognition module, which annotates MEDLINE along some predefined axes. This report focuses on three axes: Diseases and the Molecular Function and Biological Process sub-ontologies of the Gene Ontology (GO). The automatic annotations are then stored in a local database, BioMed, for each annotation axis. Additional entities such as species and chemical compounds are also identified. The second component is an existing search engine, which retrieves the most relevant MEDLINE records for any given query. The third component uses the content of BioMed to generate an axis-specific ranking, which takes into account the density of named entities as stored in the BioMed database. The two ranked lists are ultimately merged using a linear combination, which has been specifically tuned to support the annotation of each axis. The fine-tuning of the coefficients is formally reported for each axis-driven search. Compared with PubMed, which is the system used by most curators, the improvement is as follows: +231% for Diseases, +236% for Molecular Function and +3153% for Biological Process when measuring the precision of the top-returned PMID (P0 or mean reciprocal rank). The current search methods significantly improve the search effectiveness of curators for three important curation axes. Further experiments are being performed to extend the curation types, in particular protein–protein interactions, which require specific relationship-extraction capabilities. In parallel, user-friendly interfaces powered by a set of JSON web services are currently being implemented into the neXtProt annotation pipeline.
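
As a minimal sketch of the final ranking step described above (not the actual neXtA5 implementation), the Python snippet below merges a search-engine ranking with an axis-specific entity-density ranking through a linear combination; the function name, the PMIDs and the weight alpha are assumptions for illustration, since the tuned coefficients are reported per axis in the article.

```python
# Minimal sketch of merging two ranked lists with a linear combination.
# All identifiers, scores and the weight alpha below are illustrative.

def merge_rankings(search_scores, density_scores, alpha):
    """Rank PMIDs by alpha * search score + (1 - alpha) * entity-density score."""
    pmids = set(search_scores) | set(density_scores)
    combined = {
        pmid: alpha * search_scores.get(pmid, 0.0)
              + (1.0 - alpha) * density_scores.get(pmid, 0.0)
        for pmid in pmids
    }
    return sorted(combined, key=combined.get, reverse=True)


# Hypothetical usage: alpha would be tuned separately for each axis (e.g. Diseases).
search = {"PMID:1": 0.9, "PMID:2": 0.7, "PMID:3": 0.4}   # search-engine relevance
density = {"PMID:1": 0.1, "PMID:2": 0.8, "PMID:3": 0.9}  # named-entity density
print(merge_rankings(search, density, alpha=0.6))        # ['PMID:2', 'PMID:3', 'PMID:1']
```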

Factorizing LambdaMART for cold start recommendations

Description: 

Recommendation systems often rely on point-wise loss metrics such as the mean squared error. However, in real recommendation settings only a few items are presented to a user. This observation has recently encouraged the use of rank-based metrics. LambdaMART is the state-of-the-art learning-to-rank algorithm that relies on such a metric. Motivated by the fact that very often the users' and items' descriptions as well as the preference behavior can be well summarized by a small number of hidden factors, we propose a novel algorithm, LambdaMART matrix factorization (LambdaMART-MF), that learns latent representations of users and items using gradient boosted trees. The algorithm factorizes LambdaMART by defining relevance scores as the inner product of the learned representations of the users and items. We regularise the learned latent representations so that they reflect the user and item manifolds as these are defined by their original feature-based descriptors and the preference behavior. We also propose to use a weighted variant of NDCG to reduce the penalty for similar items with large rating discrepancies. We experiment on two very different recommendation datasets, meta-mining and movies-users, and evaluate the performance of LambdaMART-MF, with and without regularisation, in the cold start setting as well as in the simpler matrix completion setting. The experiments show that the factorization of LambdaMART brings significant performance improvements both in the cold start and the matrix completion settings. The incorporation of regularisation seems to have a smaller performance impact.
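
The short Python sketch below illustrates, with assumed dimensions and made-up factors, the two scoring ingredients just described: relevance computed as the inner product of user and item latent vectors, and an NDCG-style metric whose per-position weights could encode the weighted variant; it is not the authors' gradient-boosted implementation.

```python
import numpy as np

# Illustrative sketch: the latent factors would normally be learned by the
# gradient boosted trees of LambdaMART-MF; here they are fixed by hand.

def relevance_scores(user_factors, item_factors):
    """Score every item for every user as the inner product of latent vectors."""
    return user_factors @ item_factors.T


def dcg(relevances, weights=None):
    """Discounted cumulative gain; optional per-position weights allow an
    NDCG variant that softens the penalty for similar items."""
    if weights is None:
        weights = np.ones_like(relevances, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, len(relevances) + 2))
    return float(np.sum(weights * (2.0 ** relevances - 1.0) * discounts))


def ndcg(ranked_relevances, weights=None):
    ideal = dcg(np.sort(ranked_relevances)[::-1])
    return dcg(ranked_relevances, weights) / ideal if ideal > 0 else 0.0


# Hypothetical usage: 3 users, 4 items, rank-2 latent factors.
U = np.array([[0.5, 1.0], [1.2, 0.1], [0.3, 0.9]])
V = np.array([[1.0, 0.2], [0.4, 1.1], [0.9, 0.9], [0.1, 0.5]])
scores = relevance_scores(U, V)            # shape (3, 4)
ranking = np.argsort(-scores[0])           # predicted item ranking for user 0
true_ratings = np.array([3.0, 1.0, 2.0, 0.0])
print(ndcg(true_ratings[ranking]))         # NDCG of that ranking
```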

The SIB Swiss Institute of Bioinformatics' resources: focus on curated databases

Description: 

The SIB Swiss Institute of Bioinformatics (www.isb-sib.ch) provides world-class bioinformatics databases, software tools, services and training to the international life science community in academia and industry. These solutions allow life scientists to turn the exponentially growing amount of data into knowledge. Here, we provide an overview of SIB's resources and competence areas, with a strong focus on curated databases and SIB's most popular and widely used resources. In particular, SIB's bioinformatics resource portal ExPASy features over 150 resources, including UniProtKB/Swiss-Prot, ENZYME, PROSITE, neXtProt, STRING, UniCarbKB, SugarBindDB, SwissRegulon, EPD, arrayMap, Bgee, SWISS-MODEL Repository, OMA, OrthoDB and other databases, which are briefly described in this article.

Effectiveness, earmarking and labeling: testing the acceptability of carbon taxes with survey data

Description: 

This paper analyzes the drivers of the acceptability of carbon taxes using survey data and a randomized labeling treatment. Based on a sample of more than 300 individuals, it assesses the effect on acceptability of specific policy designs and of individuals' perceptions of the advantages and disadvantages of carbon taxes. We find that the lack of perception of primary and ancillary benefits is one of the main barriers to the acceptability of carbon taxes. In addition, policy design matters for acceptability; in particular, earmarking fiscal revenues for environmental purposes can lead to greater support. We also find an effect of labeling, comparing the wording "climate contribution" with "carbon tax". We argue that proper policy design coupled with effective communication on the effects of carbon taxes may lead to a substantial improvement in acceptability.

Long-term care: is there crowding out of informal care, private insurance as well as saving?

Description: 

Publicly provided long-term care (LTC) insurance with means-tested benefits is suspected to crowd out either private LTC insurance (Brown and Finkelstein 2008. The Interaction of Public and Private Insurance: Medicaid and the Long-Term Care Insurance Market. American Economic Review 98(3):1083–102), private saving (Gruber and Yelowitz 1999. Public Health Insurance and Private Saving. Journal of Political Economy 107(6):1249–74; Sloan and Norton 1997. Adverse Selection, Bequests, Crowding Out, and Private Demand for Insurance: Evidence from the Long-term Care Insurance Market. Journal of Risk and Uncertainty 15:201–19), or informal care (Pauly 1990. The Rational Non-purchase of Long-term Care Insurance. Journal of Political Economy 95:153–68; Zweifel and Strüwe 1998. Long-term Care Insurance in a Two-generation Model. Journal of Risk and Insurance 65(1):13–32). This contribution predicts crowding-out effects for both private LTC insurance and informal care on the one hand and private saving and informal care on the other. These effects result from the interaction of a parent who decides about private LTC insurance before retirement and the amount of saving in retirement and a caregiver who decides about effort devoted to informal care. Some of the predictions are tested using a recent survey from China.

Transitions vers le post-obligatoire : différences de genre entre groupes ethniques par rapport à la norme scolaire

Description: 

In this article, we study the assimilation of girls and boys of foreign nationality in the Swiss school system, focusing on the transition at the end of compulsory schooling. We empirically examine the effects of the migration wave, age at arrival, and cohort on gender differences, using an administrative database from the canton of Geneva covering fifteen student cohorts (1993-2007). Our results show substantial gaps between the different subgroups of the school population, independently of parents' socioeconomic status. In particular, the earlier migrant children enter the school system, the more their gender behaviour at the time of the transition depends on their ethnic background; a result consistent with the idea of assimilation characterized by selective acculturation.

Deep Question Answering for protein annotation

Description: 

Biomedical professionals have access to a huge amount of literature, but when they use a search engine, they often have to deal with too many documents to efficiently find the appropriate information in a reasonable time. In this perspective, question-answering (QA) engines are designed to display answers that are automatically extracted from the retrieved documents. Standard QA engines in the literature process a user question, then retrieve relevant documents and finally extract some possible answers from these documents using various named-entity recognition processes. In our study, we try to answer complex genomics questions, which can be adequately answered only using Gene Ontology (GO) concepts. Such complex answers cannot be found using state-of-the-art dictionary- and redundancy-based QA engines. We compare the effectiveness of two dictionary-based classifiers for extracting correct GO answers from a large set of 100 retrieved abstracts per question. In the same way, we also investigate the power of GOCat, a GO supervised classifier. GOCat exploits the GOA database to propose GO concepts that were annotated by curators for similar abstracts. This approach is called deep QA, as it adds an original classification step and exploits curated biological data to infer answers that are not explicitly mentioned in the retrieved documents. We show that for complex answers such as protein functional descriptions, the redundancy phenomenon has a limited effect. Similarly, usual dictionary-based approaches are relatively ineffective. In contrast, we demonstrate how existing curated data, beyond information extraction, can be exploited by a supervised classifier, such as GOCat, to massively improve both the quantity and the quality of the answers, with a +100% improvement in both recall and precision.
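
As a minimal illustration of how answer quality is reported here (this is not the study's evaluation code), the sketch below computes the precision and recall of a set of proposed GO concepts against curated gold annotations; the GO identifiers are arbitrary examples.

```python
# Illustrative sketch: GO identifiers below are arbitrary examples.

def precision_recall(proposed, gold):
    """Precision = correct proposals / all proposals; recall = correct / all gold."""
    correct = len(proposed & gold)
    precision = correct / len(proposed) if proposed else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall


# Hypothetical question: GO concepts proposed by a classifier vs. curated answers.
proposed = {"GO:0006915", "GO:0008284", "GO:0045087"}
gold = {"GO:0006915", "GO:0045087", "GO:0002376"}
print(precision_recall(proposed, gold))  # (0.666..., 0.666...)
```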
