Haute Ecole de Gestion de Genève

To offer ebooks in libraries: a way to break down physical barriers to knowledge and culture

Description: 

As institutions of public service, libraries play a major role in providing democratic and egalitarian access to information and culture. Their missions are defined in several manifestos and codes of ethics (UNESCO, 1994; IFLA, 2012; IFLA, 2014). In this context, the integration of digital resources into library collections has created new difficulties, due to:
- the complexity of organising these contents and making them easily available;
- visibility and promotion issues.
These matter not only because the mission of libraries covers access to every kind of resource, digital or not, but also because such access is one of the patrons' expectations. Based on the results of a research project conducted in collaboration with several French and Swiss libraries (Epron, Pouchot, Dillaerts and Prinz, 2014; Pouchot, Vieux and Peregrina, 2015), this poster sets out solutions to better integrate ebooks into libraries' offer and to optimise communication about this kind of resource. Our suggestions fall into two kinds of recommendations: on the one hand, those dealing with content access; on the other hand, those regarding communication. First, patrons may have difficulty identifying, finding, accessing and reading ebooks. Their needs and wishes may concern the devices as well as the selection and provision of content. We encourage libraries to:
- supply patrons with preloaded reading devices;
- offer personalised access to ebooks;
- propose downloadable lists of ebooks.
For example, specific contents can be selected according to topics such as civic engagement or social development. Then, given that ebooks have appeared only recently in library collections and that these resources are intangible, this offer is often little known by patrons. Furthermore, users do not always have sufficient technical and informational knowledge to access and read ebooks.
Thus, it is necessary to inform them about:
- the simple fact that this digital offer exists;
- the scope of the offer;
- the technical aspects and constraints linked to the use of these digital documents (e.g. formats, devices, access protocols…);
- the support provided by their library (help, training, workshops…).
Several actions can be undertaken to develop the potential and use of ebooks:
- deliver appropriate and accurate information about ebooks by developing new services based on information literacy and advice on the use of ebooks and digital reading devices;
- train patrons and encourage them to self-study in this field;
- communicate efficiently to highlight ebooks.
In this way, libraries can offer wide access to knowledge, regardless of the medium, especially since digital contents break down physical barriers and can reach people with disabilities or far from (digital) reading (the elderly, prisoners…).

Space-time local embeddings

Description: 

Space-time is a profound concept in physics, and it has been shown to be useful for dimensionality reduction. We present basic definitions together with some counter-intuitive properties. We give theoretical propositions showing that space-time is a more powerful representation than Euclidean space. We apply this concept to manifold learning to preserve local information. Empirical results on non-metric datasets show that more information can be preserved in space-time.
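As a rough illustration of the underlying idea, the "distance" in a space-time embedding is a pseudo-Euclidean interval in which time-like coordinates enter with a negative sign. The following minimal sketch (the split into space-like and time-like axes is our own illustrative choice, not the paper's exact construction) shows how such an interval can be zero or negative, which is what lets it encode similarity structure that no Euclidean embedding can reproduce:

```python
import numpy as np

def spacetime_sq_interval(x, y, n_time=1):
    """Squared pseudo-Euclidean interval between embedded points x and y.

    The last `n_time` coordinates are treated as time-like and enter with
    a negative sign, so the squared 'distance' can be zero or negative --
    unlike in Euclidean space, where it is always non-negative.
    """
    d = x - y
    space = np.sum(d[:-n_time] ** 2)   # space-like contribution
    time = np.sum(d[-n_time:] ** 2)    # time-like contribution
    return space - time

# Points sharing the same time-like coordinate behave Euclidean-like...
a = np.array([0.0, 0.0, 1.0])
b = np.array([3.0, 4.0, 1.0])
print(spacetime_sq_interval(a, b))  # 25.0

# ...while a purely time-like separation yields a negative interval.
c = np.array([0.0, 0.0, 0.0])
d = np.array([0.0, 0.0, 2.0])
print(spacetime_sq_interval(c, d))  # -4.0
```

A negative interval lets two points be "closer than identical" to a third, which is exactly the kind of non-metric relation the abstract refers to.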

Functional learning of time-series models preserving Granger-causality structures

Description: 

We develop a functional learning approach to modelling systems of time series which preserves the ability of standard linear time-series models (VARs) to uncover the Granger-causality links between the series of the system while allowing for richer functional relationships. We propose a framework for learning multiple output-kernels associated with multiple input-kernels over a structured input space, and outline an algorithm for learning the kernels simultaneously with the model parameters under various forms of regularization, including non-smooth sparsity-inducing norms. We present results of synthetic experiments illustrating the benefits of the described approach.

Using mobile data and strategic tourism flows: pilot study Monitour in Switzerland

Accounting and financial professions in Swiss public administrations: what are the profiles and users' satisfaction when using enterprise resource planning (ERP) systems?

Description: 

This study aims to examine the impact of an Integrated Financial System (ERP system) implementation on accountant profiles in Swiss public administrations. ERP systems have been widely researched and are described as a form of change driver. However, the types of changes, and more specifically the consequences for accountants' profiles, have not been studied and are therefore the core of our investigation. The methodology used in this study was based on a large survey allowing for statistical analysis, focused on the knowledge and skill sets required of accountants working with an ERP system. In this survey, we brought to light new information about the current skill sets needed by accountants using an ERP system, with the aim of improving the work of this profession and thereby enhancing the performance of finance and accounting staff and the quality of information supplied by public administrations. The results allowed us to design the profile of an accountant working with an ERP system in the public sector. In particular, the study examined knowledge and skill sets, as well as educational background and professional experience. Moreover, the criteria that impacted ERP system users' satisfaction were identified; these findings have practical implications for public sector CFOs in particular. Finally, we highlighted the crucial need for continuing education in accounting and the necessity to reconsider and adapt the job descriptions of accounting and finance staff when they work with an ERP system.

Exploiting incoming and outgoing citations for improving Information Retrieval in the TREC 2015 Clinical Decision Support Track

Description: 

We investigated two strategies for improving Information Retrieval using incoming and outgoing citations. We first started from settings that worked well the previous year and established a baseline. Then, we tried to rerank this run. The incoming-citations strategy was to compute the number of incoming citations in PubMed Central and to boost the scores of the most cited articles. The outgoing-citations strategy was to promote the references of the retrieved documents. Unfortunately, no significant improvement over the baseline was observed.
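A minimal sketch of the incoming-citations reranking idea, with a hypothetical log-scaled boost (the weight `alpha` and the exact mixing formula are our assumptions; the abstract only states that the most cited articles were boosted):

```python
import math

def rerank_by_citations(run, citation_counts, alpha=0.1):
    """Boost baseline retrieval scores by incoming-citation counts.

    `run` is a list of (doc_id, score) pairs from the baseline run;
    `citation_counts` maps doc_id -> number of incoming citations
    (e.g. counted over PubMed Central). A log1p transform keeps a few
    heavily cited papers from dominating the ranking outright.
    """
    boosted = [
        (doc, score * (1.0 + alpha * math.log1p(citation_counts.get(doc, 0))))
        for doc, score in run
    ]
    return sorted(boosted, key=lambda pair: pair[1], reverse=True)

baseline = [("d1", 2.0), ("d2", 1.9)]
cites = {"d2": 50}
# d2 overtakes d1 thanks to its 50 incoming citations.
print(rerank_by_citations(baseline, cites))
```

The outgoing-citations strategy would work symmetrically, promoting documents that appear in the reference lists of the baseline's top results.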

Document retrieval metrics for program understanding

Description: 

The need for domain knowledge representation for program comprehension is now widely accepted in the program comprehension community. The so-called "concept assignment problem" represents the challenge to locate domain concepts in the source code of programs. The vast majority of attempts to solve it are based on static source code search for clues to domain concepts. In contrast, our approach is based on dynamic analysis using information retrieval (IR) metrics. First we explain how we modeled the domain concepts and their role in program comprehension. Next we present how some of the popular IR metrics could be adapted to the "concept assignment problem" and the way we implemented the search engine. Then we present our own metric and the performance of these metrics to retrieve domain concepts in source code. The contribution of the paper is to show how the IR metrics could be applied to the "concept assignment problem" when the "documents" to retrieve are domain concepts structured in an ontology.
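To make the adaptation concrete, here is a minimal sketch of one classic IR metric, TF-IDF weighting with cosine similarity, applied as if each domain concept were a "document" scored against terms collected from a dynamic trace. The term-overlap setup and all names are illustrative, not the paper's actual search engine or its own metric:

```python
import math
from collections import Counter

def cosine_tfidf(query_terms, doc_terms, df, n_docs):
    """Cosine similarity over TF-IDF vectors.

    `query_terms` are terms observed in an execution trace, `doc_terms`
    the terms describing one domain concept in the ontology, `df` maps
    term -> number of concepts mentioning it, and `n_docs` is the total
    number of concepts (the 'collection' size).
    """
    def tfidf(terms):
        tf = Counter(terms)
        return {t: tf[t] * math.log((1 + n_docs) / (1 + df.get(t, 0)))
                for t in tf}
    q, d = tfidf(query_terms), tfidf(doc_terms)
    dot = sum(q[t] * d.get(t, 0.0) for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

df = {"account": 1, "transfer": 1, "render": 1}
trace = ["account", "transfer", "transfer"]
print(cosine_tfidf(trace, ["account", "transfer"], df, 2))  # high
print(cosine_tfidf(trace, ["render"], df, 2))               # 0.0
```

Ranking every concept of the ontology by this score against a trace is the retrieval step the abstract describes.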

Instance-based learning for tweet monitoring and categorization

Description: 

The CLEF RepLab 2014 Track was the occasion to investigate the robustness of instance-based learning in a complete system for tweet monitoring and categorization. The algorithm we implemented was a k-Nearest Neighbors classifier. Across the two domains (automotive and banking) and languages (English and Spanish), the experiments showed that the categorizer was not affected by the choice of representation: even with all training tweets merged into one single Knowledge Base (KB), the observed performances were close to those obtained with dedicated KBs. Interestingly, English training data added to the sparse Spanish data were useful for Spanish categorization (+14% accuracy for automotive, +26% for banking). Yet, performances suffered from over-prediction of the most prevalent category. The algorithm showed the defects of its virtues: it was very robust, but not easy to improve. The BiTeM/SIBtex tools for tweet monitoring are available on the DrugsListener Project page of the BiTeM website (http://bitem.hesge.ch/).
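A minimal sketch of the instance-based scheme, assuming a simple token-overlap similarity (the system's actual tweet representation is not detailed in the abstract): each test tweet is matched against the Knowledge Base and the k nearest training tweets vote on the category. The plain majority vote also makes the reported over-prediction of the prevalent category easy to see:

```python
from collections import Counter

def knn_categorize(tweet_tokens, kb, k=3):
    """Instance-based (k-NN) tweet categorization.

    `kb` is the Knowledge Base: a list of (tokens, category) pairs of
    training tweets. Neighbours are ranked by token overlap with the
    test tweet, and the k nearest vote on the category -- so if one
    category dominates the KB, it also dominates most neighbourhoods.
    """
    scored = sorted(
        kb,
        key=lambda item: len(set(tweet_tokens) & set(item[0])),
        reverse=True,
    )
    votes = Counter(category for _, category in scored[:k])
    return votes.most_common(1)[0][0]

kb = [
    ({"engine", "recall"}, "quality"),
    ({"engine", "noise"}, "quality"),
    ({"bank", "fees"}, "service"),
]
print(knn_categorize({"engine", "recall", "issue"}, kb))  # quality
```

Merging the English and Spanish KBs, as in the abstract, simply means concatenating the two lists of labelled instances before running this lookup.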

Learning coherent Granger-causality in panel vector autoregressive models

Description: 

We consider the problem of forecasting multiple time series across multiple cross-sections based solely on the past observations of the series. We propose to use a panel vector autoregressive model to capture the inter-dependencies on the past values of the multiple series. We restrict the panel vector autoregressive model to exclude cross-sectional relationships and propose a method to learn models with sparse Granger-causality structures that are coherent across the panel sections. The method extends the concepts of group variable selection and support union recovery to the panel setting by extending the group lasso penalty (Yuan & Lin, 2006) to matrix-output regression with a 3d tensor of model parameters.
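The coherence requirement can be sketched via the proximal (group soft-thresholding) step of a group-lasso solver, assuming a coefficient tensor indexed by (section, target series, predictor); grouping one predictor's coefficients across all sections is what forces the Granger-causality pattern to be shared. This is an illustrative sketch of the penalty's effect, not the paper's algorithm:

```python
import numpy as np

def group_soft_threshold(B, lam):
    """Proximal step for a group-lasso penalty on a 3d coefficient tensor.

    B has shape (sections, targets, predictors). Each group collects one
    predictor's coefficients for one target across ALL panel sections, so
    a Granger link is either zeroed out everywhere or kept everywhere --
    a sparsity pattern coherent across the panel.
    """
    sections, targets, predictors = B.shape
    out = np.zeros_like(B)
    for t in range(targets):
        for p in range(predictors):
            group = B[:, t, p]             # one link, across sections
            norm = np.linalg.norm(group)
            if norm > lam:                 # shrink surviving groups...
                out[:, t, p] = (1.0 - lam / norm) * group
    return out                             # ...zero out the weak ones

B = np.zeros((2, 1, 2))
B[:, 0, 0] = [3.0, 4.0]   # strong link, group norm 5
B[:, 0, 1] = [0.1, 0.1]   # weak link, killed in every section
print(group_soft_threshold(B, lam=1.0))
```

In a full solver this step would alternate with gradient steps on the least-squares fit of each section's VAR.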

Information geometry and minimum description length networks

Description: 

We study parametric unsupervised mixture learning. We measure the loss of intrinsic information from the observations to complex mixture models, and then to simple mixture models. We present a geometric picture in which all these representations are regarded as free points in the space of probability distributions. Based on the minimum description length principle, we derive a simple geometric criterion to learn all these models together. We present a new learning machine, together with supporting theory, algorithms, and simulations.


The portal for Swiss economic information

© 2016 Infonet Economy
