Recall the research objectives:
Explore the technical, economic, social, and ethical issues raised by the design of ubiquitous, context-aware, adaptive, human-centric computer ecosystems
We propose to explore three dimensions of design:
- The functional design of these ecosystems: what are they supposed to do;
- The user experience design: how to seamlessly integrate the functionality of these systems with the work- and life-flows of the humans at their center; and
- The computer engineering design: designing scalable, secure, resilient, adaptive, and resource-efficient infrastructure to support the deployment and operation of such ecosystems.
The figure above shows the most salient influences/inputs for each design discipline; we illustrate this for the case of functional design. Functional design, which is concerned with what an application must do, is influenced, first and foremost, by domain objectives: what is the application supposed to accomplish? A connected health application may aim to prevent health-related accidents, or to induce positive behavioral health changes, whereas an omnichannel commerce application may aim to enhance customer experience or increase customer brand loyalty. Functional design is also influenced by the available technology: the IoT makes it possible to monitor conditions of interest in real time and in the field, and thus makes it possible to conceive of such applications. Finally, domain knowledge is a prerequisite for any design activity. The remainder of this section addresses these different design dimensions. For each design dimension, we discuss the major underlying research issues, identify the LATECE members exploring those issues, and showcase some of the relevant research projects.
The goal of functional design is to use technology to instrument person-centric processes towards achieving a worthwhile domain goal, be it business, social, or health related. Examples of person-centric processes include:
• Being/living: one of the most promising applications is connected health, where a connected ecosystem, consisting of sensors, relays, and fog and cloud computing infrastructures, monitors persons as they go about their daily lives, to detect/prevent emergencies, to adjust medication, or to help subjects adopt positive behavioral changes (a more active lifestyle, healthier eating habits, etc.).
• Moving: beyond autonomous vehicles, which exploit data about their environment to move around, we can think of applications that monitor drivers’ actions, reactions to their environment, and interactions with their vehicles. At a larger scale, we can also think of ecosystems for monitoring, managing, and optimizing transportation systems.
• Consuming: this ranges from simple personalized online shopping experiences to immersive omnichannel customer experiences, where a context-aware application uses a constantly evolving user profile to present the customer with the most appropriate information, at the most opportune time, in the least intrusive fashion.
• Playing: Webster’s defines play as the “conduct, course or action of a game”, where a “game” is defined as a “field of gainful activity”, “an activity engaged in for diversion or amusement”, or “a physical or mental competition conducted according to rules with the participants in direct opposition to each other”. A connected, context-aware ecosystem can extend the physical and mental capacities of the player, enable play using immersive technologies, assess our reactions to various play situations, monitor our adherence to the rules of the game, or suggest better ways to ‘play’; it has applications to gaming and other fun endeavours, for learning and profit.
• Manufacturing: while manufacturing is not a naturally human-centric process per se, it involves humans in various subprocesses, and connected ecosystems can take humans out of many loops, potentially leading to lower risks, lower error rates, and higher productivity.
When we ‘design’—specify the requirements for—context-aware, adaptive, human-centric applications, we need to address several issues:
• Technical issues: we need to model the processes that we wish to instrument, and this involves two key issues: a) model content, which embodies knowledge of the application domain, and b) modeling methodology, which deals with ways to construct such models. In turn, modeling methodology involves two related challenges: a) developing/designing modeling processes that help us elicit the essence of the domain processes from various knowledge sources, and b) adapting/adopting modeling languages that help express the salient features of such models.
• Economic issues: study the business models underlying these applications, and assess the value created for the various stakeholders. This goes well beyond the popular dictum “if it is free, you are the product”. An insurance application that monitors my driving may reduce my risk of accidents, but also increase my premiums. A health monitoring application may help me avoid health emergencies, but render me uninsurable, come policy renewal time. It is important that users be aware of the value they bring, and ensure that they get a commensurate return.
• Social issues: researching the social impact—and utility—of these applications. A connected health application that helps people at high risk of developing NCDs (noncommunicable diseases) adopt healthier behaviors is definitely a societal plus; an application that tells you about the specials on your favorite junk foods, or about the location of the breathalyzer checkpoints on your itinerary home, maybe not so much.
• Ethical issues: given the amount of data being collected by these applications, and the way this data drives the various algorithms and inferences, most such applications raise issues of fairness and privacy. Research in areas as different as facial recognition and pharmacological studies has shown how experimental data can inherently favor one group at the expense of another. The data collected by these applications also raises a number of privacy issues. Just think of all the privacy issues raised by seemingly innocuous—and definitely socially useful—exposure risk assessment applications for COVID-19 (misleadingly called contact tracing applications).
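To make the privacy side of these issues concrete, here is a minimal sketch of one standard technique, differential privacy; all names and data are hypothetical, and this illustrates the general idea rather than any specific algorithm developed by our members. A count query over location records is perturbed with Laplace noise, so the published result reveals little about any single individual:

```python
import math
import random

def dp_count(records, predicate, epsilon=0.5):
    """Differentially private count query.

    Returns the true count plus Laplace noise with scale 1/epsilon,
    calibrated to the sensitivity of a counting query (which is 1:
    adding or removing one person changes the count by at most 1).
    Smaller epsilon -> more noise -> stronger privacy guarantee.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Inverse-CDF sampling from a Laplace(0, 1/epsilon) distribution.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return true_count + noise

# Hypothetical location records, for illustration only.
visits = [{"zone": "A"}, {"zone": "A"}, {"zone": "B"}]
noisy = dp_count(visits, lambda r: r["zone"] == "A", epsilon=1.0)
print(noisy)  # true count is 2; the published value is 2 plus noise
```

The design tension is visible even in this toy: a smaller epsilon better protects individuals but degrades the utility of the published statistic, which is precisely the trade-off between legitimate and illegitimate inferences discussed above.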
Our researchers are exploring many of these issues:
• Technical issues: Bacon (Kinesiology, Concordia), Balbinot (Marketing, UQAM), Boukadoum (Informatique, UQAM), Lavoie (Psychologie, UQAM), and Mili (Informatique, UQAM) have been working on various connected health applications, including the Canada-wide, fully deployed, and highly successful behavioral health change ACCELERATION program (Bacon & Lavoie, https://acceleration4health.ca), and budding pilot projects in health monitoring and fall prevention (Balbinot, Boukadoum, Mili). Mili has been working on the functional design of omnichannel customer experience applications that rely on a cognitive model of the purchasing process. Leshob (Management, UQAM) and Mili have been developing a business process modeling methodology (process and languages) that supports precise yet abstract modeling of business processes, focuses on their business rationale, and supports their formal specialization.
• Economic issues: The rise of multinational content delivery platforms such as iTunes, YouTube, Netflix, and Spotify has changed the entertainment industry and its business models. While it has made content more widely available, it has also led to a homogenization of cultural content, often at the expense of local or specialty content. Michèle Rioux (Political Science, UQAM) has been working on devising discoverability metrics for Québec cultural products (books, movies, songs) on the multinational platforms, with the triple purpose of: 1) providing an accurate picture of the ‘presence’ of Québec cultural content on such platforms, 2) providing artists with tools to better promote their productions on such platforms, and 3) informing regulatory bodies, such as the CRTC, about the best strategies to defend and promote Québec—and Canadian—content.
• Ethical issues: Bonenfant (Communications, UQAM), Gambs (Informatique, UQAM), and Killijian (Informatique, UQAM) work on various issues dealing with data privacy. Bonenfant, who holds a Tier 2 Canada Research Chair on “gaming communities and big data”, combines concepts, approaches, and methods from communication studies, AI, and data science to study online gaming communities; part of her research deals with the epistemological and ethical issues of using data science/machine learning to gain insights into communities. Gambs, who holds a Tier 2 Canada Research Chair on “Privacy-Preserving and Ethical Analysis of Big Data”, works, among other things, with Killijian to develop data representations and algorithms to codify (e.g., anonymize) and process data collected from people in a way that: 1) respects the individuals’ privacy, 2) supports the kind of legitimate inferences for which the data was collected, and 3) prevents illegitimate inferences, or ones that can threaten the individuals’ privacy. Gambs and Killijian are working with the city of Montréal, within the context of its open-data initiative, to properly ‘anonymize’ the collected location data. Gambs presented a brief to the Québec National Assembly during the public hearings on the Alerte COVID application.
User Experience Design (UXD) is concerned with “supporting user behavior through findability, usability, usefulness, desirability, credibility, accessibility, and value”, defined as follows:
• Findability implies being able to find the product and its content. Findability depends on information architecture and relies on information retrieval technology.
• Usability, according to Wikipedia, refers to the extent to which a product or tool “enable[s] its users to perform the tasks safely, effectively, and efficiently while enjoying the experience”.
• Usefulness means that the product or service has a purpose for the user—which can be hedonic. It often comes down to identifying the problem that the product or service solves—and whose problem!
• Desirability relates to branding, image, aesthetics, and emotional design. To make a product or service desirable, we need to properly characterize the product/service and know the user/consumer.
• Credibility relates to truthfulness, information reliability, and trust. Confidence-building measures include security, privacy, accuracy and fairness.
• Accessibility refers to providing a user experience (UX) that is accessible to users with different abilities.
• Value refers to the extent to which the product or service delivers value to the business and the user.
User Experience Design is a quintessentially multidisciplinary activity, whose contours are in constant evolution. As illustrated in Figure 4 by the variety of qualities, it extends well beyond UI or user interaction design, and combines results from cognitive psychology, social psychology, service design, security, algorithms (e.g., machine learning), political economy, and others.
Our researchers have been working on user experience design within the context of various applications:
• Bacon (Kinesiology, Concordia), Lavoie (Psychologie, UQAM), and recently, Mili (Informatique, UQAM) have been working on UXD within the context of behavioral health change applications.
• Bacon, Balbinot (Marketing, UQAM), Boukadoum (Informatique, UQAM), Lavoie, and Mili, have been working on UXD issues within the context of connected health applications, collaborating with other researchers from Sciences de l’Activité Physique at UQAM (e.g. Christian Duval, also affiliated with Institut de Gériatrie de Montréal).
• Balbinot, Lavoie, Mili, and Tomiuk (Management, UQAM) have been working on UXD within the context of omnichannel customer experience management.
• Bonenfant (Communications, UQAM) has been studying UXD for gaming, and gaming communities, using state-of-the-art UX equipment.
• Guéhéneuc (Computer Science, Concordia) and Moha (TI et Génie logiciel, ÉTS) have been exploring UXD issues within the context of software engineering research and education, with applications for model and program comprehension.
• Privat (Informatique, UQAM) and Tomiuk (Management, UQAM) have been researching UXD issues within the context of e-learning.
We should also mention that, out of the recently awarded $1.8 million CFI Innovation Grant (official announcement upcoming), over $800 K will be used to acquire state-of-the-art UX equipment, including about $425 K for a stationary UX lab, with high-precision equipment for UX experiments in a controlled environment, and $395 K for a mobile UX lab, to be deployed in real time in the field.
Generally speaking, the computer engineering design of our smart process applications (SPAs) raises four classes of problems:
• Device integration and operation: different applications require different types of sensors and actuators, using different protocols, different data sampling requirements, and different resource consumption profiles. Care must be taken to make sure that the devices are functional and resilient.
• Infrastructure issues: cloud deployment of applications raises a number of challenges related to: 1) resource allocation between the different layers of the infrastructure (e.g., cloud versus fog computing), 2) resource allocation within the context of containerized applications, 3) security, in general and within the context of multi-tenant clouds, and 4) infrastructure fault tolerance and resilience.
• Software architecture: different smart process applications (SPAs) have different quality and architectural requirements. A health monitoring—or a combat support—application has different requirements from those of a weather monitoring or an immersive shopping application, in terms of the nature and volume of data flows, timing constraints, criticality, etc. We need to establish architectural patterns for SPAs and cloud patterns, and explore deployment, evolution, and migration scenarios.
• Machine learning: adaptability is a key feature of SPAs (see Figure 1), and it relies on machine learning to develop and refine models of the user and their environment. When a SPA first goes into operation, it needs to be ‘bootstrapped’ with a priori models based on existing domain knowledge, which can later be refined. A health monitoring application needs some a priori ‘understanding’/‘knowledge’ of expected values for the various biometrics, and cannot be given the time to learn on its own which readings indicate life-threatening situations and which do not! Similarly, a customer-experience management application should be able to recommend products to twenty-somethings before it has had the chance to build a purchase history. Thus, putting machine learning to work in a real-time, real-life setting requires novel learning and inference strategies that combine a priori knowledge with incremental, data-driven learning. With regard to inference, we need robust strategies so that a SPA can respond, in an emergency, even when some links in the processing chain (sensor → relay → cloud → relay → sensor/actuator) fail.
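The combination of a priori knowledge with incremental, data-driven learning can be sketched as follows; this is a deliberately simplified toy with hypothetical values, not the design of any of our projects. A heart-rate monitor is seeded with population-level thresholds (domain knowledge), and refines a per-user baseline online as readings arrive:

```python
class BootstrappedMonitor:
    """Toy heart-rate monitor: starts from a priori population
    statistics and refines a per-user baseline incrementally,
    using Welford's online mean/variance update."""

    def __init__(self, prior_mean=72.0, prior_sd=12.0, prior_weight=30):
        # A priori domain knowledge: plausible resting heart rate.
        # prior_weight = how many 'virtual' observations the prior is worth.
        self.n = prior_weight
        self.mean = prior_mean
        self.m2 = (prior_sd ** 2) * prior_weight
        # Hard safety limits that are never learned away.
        self.hard_low, self.hard_high = 30.0, 200.0

    def update(self, reading):
        """Incorporate one reading; return an alert level."""
        if not (self.hard_low <= reading <= self.hard_high):
            return "emergency"          # act immediately; never wait to learn
        # Welford's incremental update of the running mean and variance.
        self.n += 1
        delta = reading - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (reading - self.mean)
        sd = (self.m2 / self.n) ** 0.5
        if abs(reading - self.mean) > 3 * sd:
            return "warning"            # unusual for *this* user
        return "ok"

monitor = BootstrappedMonitor()
print(monitor.update(75))    # → "ok": near the prior baseline
print(monitor.update(210))   # → "emergency": outside hard safety limits
```

The key point the sketch illustrates: the prior makes the system useful from day one, the incremental update personalizes it over time, and the hard limits encode domain knowledge that no amount of data should override.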
Our researchers have been working on all of these areas:
• Device integration and operation: Ajib, Boukadoum, Driouch, and Elbiaze, all from Informatique, UQAM, have been working on various aspects of device integration and operation. Ajib, Driouch, and Elbiaze have been working on various resource allocation algorithms within the context of resource-starved ad hoc networks, e.g., vehicular networks. Boukadoum, with dual expertise in hardware and software (and AI), has been working on developing accurate and efficient biomedical sensors.
• Infrastructure issues: Ajib, Boukadoum, Elbiaze, Gambs, Killijian, Mcheick, Privat, and Tremblay—all from Informatique UQAM, except for Mcheick, from Informatique, UQTR—have been working on various infrastructure-related problems, including load balancing (Ajib, Elbiaze), resource allocation problems for devices (Ajib, Boukadoum, Driouch), embedded systems (Privat), container platforms (Elbiaze), infrastructures for context-aware applications (Mcheick), distributed and parallel computing (Mcheick, Tremblay), and privacy and security (Gambs, Killijian).
• Software architecture: Elboussaidi and Moha (TI et Génie logiciel, ÉTS), Guéhéneuc (Computer Science, Concordia), and Lounis, Mili, Mosser, and Privat—all from Informatique, UQAM—have been working on various architectural aspects of IoT applications, in general, and SPAs, in particular. Guéhéneuc holds a Tier 1 Canada Research Chair on Software Engineering for IoT. Elboussaidi, Guéhéneuc, Mili, Moha, and Privat have been working on the migration of legacy applications to IoT-enabled, service-oriented applications within the context of FRQNT-funded team projects. Elboussaidi, Guéhéneuc, and Mili have been studying open-source IoT frameworks, and are in the process of mapping out the architectural design space for IoT applications. Mosser has been working on composition problems within the context of ultra-large systems, or systems of systems. Mosser and Privat are working on ways of weaving and composing security requirements.
• Machine learning: seven LATECE members have been working on various aspects of machine learning. Bouguessa (Informatique, UQAM) and Boukadoum are experts in deep learning and have been applying deep neural networks to a variety of problems in natural language processing, the analysis of time series for predictive maintenance, social network analysis, and video image recognition (for autonomous vehicles). Boukadoum and Lounis have been applying ML to software engineering problems, in general, and software quality, in particular. Gambs and Meurs (Informatique, UQAM) have been applying machine learning techniques to natural language processing within the context of law and mental health. Gambs and Missaoui (Informatique, UQO) have been working on explainable AI from different angles: Gambs works on developing explainable algorithms that approximate black-box AI algorithms, with a view towards assessing fairness, while Missaoui has been working on symbolic machine learning algorithms (formal concept analysis), with applications to web mining, information classification and retrieval, etc. Lemire (Science & Technologie, Téluq) is a world-renowned expert on optimizing the performance of various algorithms used in ‘big data’ processing by taking advantage of hardware-level features of processor architectures.
We should note that the recently awarded $1.8 million CFI Innovation Grant mentioned earlier includes over $750 K of equipment to support our research on the computer engineering design of SPAs, including $388 K for an IoT Edge/Fog platform, and close to $370 K for a two-site (UQAM, Téluq) cloud platform.