Among the proposed opportunistic content sharing services, Floating Content (FC) is of special interest for the vehicular environment, not only for cellular traffic offloading, but also as a natural communication paradigm for location-based context-aware vehicular applications. Existing results on the performance of vehicular FC have focused on content persistence, without addressing the key issues of how effectively content is replicated and made available, and of the conditions that enable acceptable FC performance in the vehicular environment. This work presents a first analytical model of FC performance in vehicular networks in urban settings. It is based on a variation of the random waypoint (RWP) mobility model, and it does not require a model of road grid geometry for its parametrization. We validate our model extensively through numerical simulations on real-world traces, showing its accuracy on a variety of mobility patterns and traffic conditions. Through analysis and simulations, we show the feasibility of the FC paradigm in realistic urban settings over a wide range of traffic conditions.
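To make the mobility assumption concrete, the following is a minimal sketch of the classical random waypoint model on a square area. It is not the paper's exact variation of RWP, and all parameters (area size, speed range, node count, time step) are illustrative assumptions.

```python
import numpy as np

# Minimal random waypoint (RWP) mobility simulation, without pause times.
# All parameters below are illustrative assumptions, not values from the paper.
AREA = 1000.0        # side of the square area in metres
SPEED = (5.0, 15.0)  # min/max node speed in m/s
N_NODES = 50
DT = 1.0             # time step in seconds

rng = np.random.default_rng(0)
pos = rng.uniform(0, AREA, size=(N_NODES, 2))    # current positions
dest = rng.uniform(0, AREA, size=(N_NODES, 2))   # current waypoints
speed = rng.uniform(*SPEED, size=N_NODES)

def step(pos, dest, speed):
    """Advance every node one time step towards its waypoint."""
    vec = dest - pos
    dist = np.linalg.norm(vec, axis=1)
    arrived = dist < speed * DT
    # Nodes that reach their waypoint snap to it, then draw a new waypoint and speed.
    pos[arrived] = dest[arrived]
    dest[arrived] = rng.uniform(0, AREA, size=(arrived.sum(), 2))
    speed[arrived] = rng.uniform(*SPEED, size=arrived.sum())
    # Remaining nodes move along the straight line to their waypoint.
    moving = ~arrived
    pos[moving] += vec[moving] / dist[moving, None] * speed[moving, None] * DT
    return pos, dest, speed

for _ in range(3600):  # simulate one hour of mobility
    pos, dest, speed = step(pos, dest, speed)
```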
The processes of radiomics consist of image-based personalized tumor phenotyping for precision medicine. They complement slow, costly and invasive molecular analysis of tumoral tissue. Whereas the relevance of a large variety of quantitative imaging biomarkers has been demonstrated for various cancer types, most studies were based on 2D image analysis of relatively small patient cohorts. In this work, we propose an online tool for automatically extracting 3D state-of-the-art quantitative imaging features from large batches of patients. The developed platform is called QuantImage and can be accessed from any web browser. Its use is straightforward and can be further parameterized for refined analyses. It relies on a robust 3D processing pipeline allowing normalization across patients and imaging protocols. The user can simply drag-and-drop a large zip file containing all image data for a batch of patients and the platform returns a spreadsheet with the set of quantitative features extracted for each patient. It is expected to enable high-throughput reproducible research and the validation of radiomics imaging parameters to shape the future of non-invasive personalized medicine.
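As an illustration of the kind of per-patient 3D feature extraction such a platform automates, the sketch below computes a few first-order intensity statistics inside a 3D mask with SimpleITK, NumPy and SciPy, and writes one spreadsheet row per patient. The file layout and feature set are assumptions made for illustration and are not QuantImage's actual pipeline.

```python
import csv
import glob

import numpy as np
import SimpleITK as sitk
from scipy import stats

def first_order_features(image_path, mask_path):
    """Compute a few 3D first-order intensity features inside a mask.

    The real platform extracts a much larger, standardized feature set
    with normalization across patients and protocols; this is only a sketch.
    """
    img = sitk.GetArrayFromImage(sitk.ReadImage(image_path)).astype(float)
    mask = sitk.GetArrayFromImage(sitk.ReadImage(mask_path)) > 0
    voxels = img[mask]
    return {
        "mean": voxels.mean(),
        "std": voxels.std(),
        "skewness": stats.skew(voxels),
        "kurtosis": stats.kurtosis(voxels),
        "energy": float(np.sum(voxels ** 2)),
    }

# Hypothetical batch layout: one CT volume and one mask per patient.
rows = []
for img_file in sorted(glob.glob("batch/*_ct.nii.gz")):
    mask_file = img_file.replace("_ct", "_mask")
    rows.append({"patient": img_file, **first_order_features(img_file, mask_file)})

with open("features.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
```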
Object detection and recognition algorithms usually require large, annotated training sets. The creation of such datasets requires expensive manual annotation. Eye tracking can help in the annotation procedure. Humans use vision constantly to explore the environment and plan motor actions, such as grasping an object. In this paper we investigate the possibility of semi-automatically training object recognition using eye tracking, accelerometer, and scene camera data, learning from the natural hand-eye coordination of humans. Our approach involves three steps. First, sensor data are recorded using eye tracking glasses in combination with accelerometers and surface electromyography, as usually applied when controlling prosthetic hands. Second, a set of patches is extracted automatically from the scene camera data while grasping an object. Third, a convolutional neural network is trained and tested using the extracted patches. Results show that the parameters of eye-hand coordination can be used to train an object recognition system semi-automatically. With proper sensors, these can be exploited to fine-tune a convolutional neural network for object detection and recognition. This approach opens interesting options for training computer vision and multi-modal data integration systems and lays the foundations for future applications in robotics. In particular, this work targets the improvement of prosthetic hands by recognizing the objects that a person may wish to use. However, the approach can easily be generalized.
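As a sketch of the third step, the snippet below fine-tunes the classification head of a pretrained CNN on a folder of extracted patches using PyTorch and torchvision. The directory layout, architecture choice and hyperparameters are illustrative assumptions, not the configuration used in the paper.

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Hypothetical directory of patches extracted around grasped objects,
# one sub-folder per object class (e.g. patches/cup, patches/bottle, ...).
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
data = datasets.ImageFolder("patches", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Fine-tune only the classification head of a pretrained network.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(data.classes))

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```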
Techniques originating from the Internet of Things (IoT) and Cyber-Physical Systems (CPS) areas have extensively been applied to develop intelligent and pervasive systems such as assistive monitoring, feedback in telerehabilitation, energy management, and negotiation. These application domains share three major characteristics: intelligence, autonomy, and real-time behavior. Multi-Agent Systems (MAS) are one of the major technological paradigms used to implement such systems. However, they mainly address the first two characteristics and fail to comply with strict timing constraints. Timing compliance is crucial for safety-critical applications operating in domains such as healthcare and automotive. The main reasons for this lack of real-time satisfiability in MAS originate from current theories, standards, and technological implementations. In particular, internal agent schedulers, communication middlewares, and negotiation protocols have been identified as co-factors inhibiting real-time compliance. This paper provides an analysis of such MAS components and paves the way for achieving MAS compliance with strict timing constraints, thus fostering reliability and predictability.
Electronic Data Capture (EDC) software solutions are progressively being adopted for conducting clinical trials and studies, carried out by biomedical, pharmaceutical and health-care research teams. In this paper we present the MedRed Ontology, whose goal is to represent the metadata of these studies, using well-established standards, and reusing related vocabularies to describe essential aspects, such as validation rules, composability, or provenance. The paper describes the design principles behind the ontology and how it relates to existing models and formats used in the industry. We also reuse well-known vocabularies and W3C recommendations. Furthermore, we have validated the ontology with existing clinical studies in the context of the MedRed project, as well as a collection of metadata of well-known studies. Finally, we have made the ontology available publicly following best practices and vocabulary sharing guidelines.
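As a hedged illustration of how such study metadata can be expressed while reusing a W3C vocabulary (PROV-O) for provenance, the sketch below builds a tiny RDF graph with rdflib. The namespace, class and property names are invented for illustration; the actual MedRed Ontology terms may differ.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import PROV, RDF, RDFS

# Hypothetical namespace and terms standing in for the MedRed Ontology.
MEDRED = Namespace("http://example.org/medred#")

g = Graph()
g.bind("medred", MEDRED)
g.bind("prov", PROV)

study = URIRef("http://example.org/studies/hypertension-2017")
g.add((study, RDF.type, MEDRED.Study))
g.add((study, RDFS.label, Literal("Hypertension follow-up study")))

item = URIRef("http://example.org/studies/hypertension-2017/item/bp")
g.add((item, RDF.type, MEDRED.Item))
g.add((item, RDFS.label, Literal("Systolic blood pressure (mmHg)")))
g.add((item, MEDRED.isPartOf, study))
# Provenance of the captured data, reusing the W3C PROV-O vocabulary.
g.add((item, PROV.wasAttributedTo, URIRef("http://example.org/staff/nurse-01")))

print(g.serialize(format="turtle"))
```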
The Snowden documents have revealed that intelligence agencies conduct large-scale digital surveillance by exploiting vulnerabilities in the hardware and software of communication infrastructures. These vulnerabilities have been characterized as “weaknesses,” “flaws,” “bugs,” and “backdoors.” Some of these result from errors in the design or implementation of systems, others from unanticipated uses of intended features. A particularly subtle kind of vulnerability arises from the manipulation of technical standards to render communication infrastructures susceptible to surveillance. Technical standards have a powerful influence on our digital environment: They shape the conditions under which digital citizenship is exercised. The Snowden revelations brought to the forefront the role of intelligence agencies in the standards-making process, lending new urgency to the debate over the adequacy and legitimacy of the current mechanisms used for negotiating standards. This article explores how influence is exercised in the production of standards and the implications this has for their trustworthiness and integrity.
Training deep convolutional neural networks for classification in medical tasks is often difficult due to the lack of annotated data samples. Deep convolutional networks (CNNs) have been successfully used as an automatic detection tool to support the grading of diabetic retinopathy (DR) and macular edema. Nevertheless, the manual annotation of exudates in eye fundus images used to classify the grade of DR is very time consuming and repetitive for clinical personnel. Active learning algorithms seek to reduce the labeling effort in training machine learning models. This work presents a label-efficient CNN model using expected gradient length, an active learning criterion that selects the most informative patches and images, converging earlier and to a better local optimum than the usual stochastic gradient descent (SGD) strategy. Our method also generates useful masks for prediction and segments regions of interest.
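The sketch below illustrates pool-based expected-gradient-length scoring, approximated in closed form for the final linear layer of a softmax classifier. The model, feature extractor and data loader are assumed to exist and this is not the paper's exact implementation.

```python
import torch

@torch.no_grad()
def egl_scores(model, feature_extractor, unlabeled_loader):
    """Expected gradient length restricted to the final linear layer.

    For a softmax classifier, the gradient of the cross-entropy loss with
    respect to the last-layer weights under a hypothetical label y is the
    outer product phi(x) (p - e_y), so its Frobenius norm factorizes as
    ||phi(x)|| * ||p - e_y||. The EGL score is the expectation of this norm
    under the model's own predictive distribution p.
    """
    scores = []
    for x, _ in unlabeled_loader:
        phi = feature_extractor(x)                  # penultimate-layer features
        p = torch.softmax(model(x), dim=1)          # predictive distribution
        eye = torch.eye(p.shape[1])
        diff = p.unsqueeze(1) - eye.unsqueeze(0)    # (batch, labels y, classes)
        grad_norm = phi.norm(dim=1, keepdim=True) * diff.norm(dim=2)
        scores.append((p * grad_norm).sum(dim=1))   # expectation over y
    return torch.cat(scores)

# The highest-scoring samples would then be sent for annotation, e.g.:
# query_idx = egl_scores(model, feature_extractor, pool_loader).topk(k=64).indices
```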
Therapeutic Drug Monitoring (TDM) is a key concept in precision medicine. The goal of TDM is to avoid therapeutic failure or toxic effects of a drug due to insufficient or excessive circulating concentration exposure related to between-patient variability in the drug's disposition. We present TUCUXI, an intelligent system for TDM. By making use of embedded mathematical models, the software computes maximum likelihood individual predictions of drug concentrations from population pharmacokinetic data, based on the patient's parameters and previously observed concentrations. TUCUXI was developed to be used in medical practice, to assist clinicians in making dosage adjustment decisions for optimizing drug concentration levels. This software is currently being tested in a University Hospital. In this paper we focus on the process of software integration in the clinical workflow. The modular architecture of the software allows us to plug in a module enabling data aggregation for research purposes. This feature is important for developing new mathematical models for drugs, and thus for improving TDM. Finally we discuss ethical issues related to the use of an automated decision support system in clinical practice, in particular when it allows data aggregation for research purposes.
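To make the individual prediction step concrete, the following is a minimal sketch of Bayesian (maximum a posteriori) fitting of a one-compartment model to a patient's observed concentrations, the kind of estimation TDM tools typically perform. The drug, population values and observations are invented for illustration and are not TUCUXI's embedded models.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative one-compartment model with a single IV bolus dose; the
# population values and variabilities below are made-up numbers.
DOSE = 500.0                      # mg
POP = {"CL": 5.0, "V": 40.0}      # population clearance (L/h) and volume (L)
OMEGA = {"CL": 0.3, "V": 0.2}     # between-patient variability (log-scale SD)
SIGMA = 0.15                      # residual error (log-scale SD)

def concentration(t, cl, v):
    """Predicted concentration after a single IV bolus."""
    return DOSE / v * np.exp(-cl / v * t)

def neg_log_posterior(log_params, t_obs, c_obs):
    cl, v = np.exp(log_params)
    residual = (np.log(c_obs) - np.log(concentration(t_obs, cl, v))) / SIGMA
    prior = [(np.log(cl) - np.log(POP["CL"])) / OMEGA["CL"],
             (np.log(v) - np.log(POP["V"])) / OMEGA["V"]]
    return 0.5 * (np.sum(residual ** 2) + np.sum(np.square(prior)))

# Two observed concentrations for one patient (hypothetical values).
t_obs = np.array([2.0, 12.0])     # hours after the dose
c_obs = np.array([9.5, 3.1])      # mg/L

fit = minimize(neg_log_posterior, x0=np.log([POP["CL"], POP["V"]]),
               args=(t_obs, c_obs))
cl_map, v_map = np.exp(fit.x)
print(f"individual CL = {cl_map:.2f} L/h, V = {v_map:.1f} L")
```

The fitted individual parameters can then be used with the same model to predict future concentrations under candidate dosage regimens.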
Recent papers in the field of radiomics showed strong evidence that novel image biomarkers based on structural tissue properties have the potential to complement and even surpass invasive and costly biopsy-based molecular assays in certain clinical contexts. To date, very few translations of these research results to clinical practice have been carried out. In addition, a majority of the identified imaging biomarkers are perceived as black boxes by end-users, hindering their acceptance in clinical and research environments. We present a suite of plugins called Quantitative Feature Explore (QFExplore) for the open-access cloud-based ePAD platform, enabling the exploration and validation of new imaging biomarkers in a clinical environment. The plugins include the extraction, visualization and comparison of intensity- and texture-based quantitative imaging features, regional division of regions of interest to reveal tissue diversity, as well as the construction, use and sharing of user-personalized statistical machine learning models. No software installation is required and the platform can be accessed through any web browser. The relevance of the developed tools is demonstrated in the context of various clinical use-cases. The software is available online.
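As an illustration of building a simple statistical model from exported quantitative features, the sketch below cross-validates a logistic regression pipeline with scikit-learn. The file name, column names and model choice are hypothetical and not necessarily those offered by QFExplore.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical export: one row per region of interest with intensity/texture
# features and a binary outcome column.
df = pd.read_csv("features.csv")
X = df.drop(columns=["patient", "outcome"])
y = df["outcome"]

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} ± {auc.std():.2f}")
```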
The way tourists organize their travels is evolving day after day. While in the past information was mostly obtained with the support of tour operators, nowadays it is mainly obtained through the Internet. On the Internet it is possible, among many other options, to learn about the attractions a place offers and to make reservations. This new touristic paradigm brings attractive opportunities, such as saving money and discovering otherwise unexploited information shared by tourists who have already visited the targeted places. However, it comes with a major shortcoming. Information on the Internet can be overwhelming, and it can lead to a lot of time spent on planning a trip. Moreover, the Internet contains outdated and incorrect information, which can lead to incorrect travel planning. These issues become even more pronounced when dealing with niche domains, such as cultural heritage, where the information is not pervasive. This paper presents the Cityzen platform for planning cultural-heritage-focused travels. The platform aims to provide a semantic-web-oriented data model that acts as a central repository and that can be used by tourists to plan their sightseeing accurately and efficiently. It deals with incorrect and incomplete information by using the social web to actively engage users in information management while they explore the provided information.