Motivation and Goals

Systems we build are ultimately evaluated based on the value they deliver to their users and stakeholders. To increase this value, systems are subject to fast-paced evolution, driven by unpredictable markets, complex and changing customer requirements, pressure for shorter time-to-market, and rapidly advancing information technologies.

To address this situation, agile practices advocate flexibility, efficiency, and speed. Continuous software engineering refers to the organisational capability to develop, release, and learn from software in rapid parallel cycles, typically hours, days, or a few weeks. This includes determining new functionality to build, evolving and refactoring the architecture, developing the functionality, validating it, and releasing it to customers. One needs to relate the changes performed on a system to their effect on the metrics of interest, keep the changes with positive effects, and discard the rest. In the case of complex systems with humans in the loop, such a relation is difficult to infer a priori; a solution is then to observe and experiment with systems in production environments, e.g. with continuous experimentation.

Reaching this goal requires crosscutting research which spans from process and organisational aspects of software engineering to the individual phases of the software engineering lifecycle, and finally to live experimentation that evaluates different system alternatives based on user feedback. With the proliferation of data analysis and machine learning techniques and flexible approaches to rapid deployment, experimentation can be used in further domains (e.g. embedded systems); it can also be automated and used for runtime adaptation. These new concepts call for synergy between software engineers and data scientists.

RCoSE/DDrEE'19 brings together academics and practitioners with the following overall goals:
  • to identify problems in the adoption and use of continuous software engineering and data-driven decisions
  • to discuss new ideas that apply successful and established concepts to other domains and use cases
  • to build a community of software engineers and data scientists working on a common research agenda


Workshop structure and planned outcomes

The full-day workshop will open with a keynote talk. The presenter of each accepted paper will then have approximately 20 minutes for presentation and Q&A. We will aim to stimulate discussion of the identified challenges and proposed solutions. Breakout groups will discuss the general topics of the workshop's contributions.

As a follow-up to the workshop, and to consolidate its results, we plan to publish a report of the workshop's outcomes in ACM SIGSOFT Software Engineering Notes.


Session 1 (9:00-10:30)

Welcome, workshop scope and goals

Workshop organizers

Keynote from Jeffrey Wong (Netflix): Mathematical Engineering in an Experimentation Platform's Measurement Ecosystem [slides]

The Experimentation Platform (XP) at Netflix manages and analyzes hundreds of experiments to improve product, operations, and marketing. For example, the XP helps to drive decisions in the UI, marketing effectiveness, and artwork selection. The measurement ecosystem within XP is a collection of backend statistical libraries and engineering systems that provide rich measurement on the effects of experiments. To be highly scalable and adaptive to different types of experiments, Mathematical Engineering for the measurement ecosystem carefully engineers for scalability and generalizability of causal inference algorithms. We will walk through how data is pulled into the measurement ecosystem, the mathematical framing of the causal effects problem, and the challenging numerical problems we have to solve for analysis.
Bio: Jeffrey Wong is a Senior Modeling Architect for Netflix's Experimentation Platform. His work has been at the intersection of statistics and numerical computing, with applications in optimal decision making for Netflix, such as incrementality in marketing. At Netflix he leads mathematical engineering and the development of scalable causal inference libraries which are used in the experimentation platform and as a research tool for causal inference methodology. In the past he was a Senior Research Scientist for Netflix working on causal machine learning and policy algorithms.

Data-driven Insights from Vulnerability Discovery Metrics [slides]

Nuthan Munaiah and Andrew Meneely
Rochester Institute of Technology
Coffee Break (10:30-11:00)

Session 2 (11:00-12:30)

Supporting the Developer Experience with Production Metrics [blog post]

Robert Chatley
Imperial College London

Continuous Thinking Aloud [pre-print]

Jan Ole Johanssen, Lara Marie Reimer and Bernd Bruegge
Technical University Munich

Hypotheses Engineering: first essential steps of experiment-driven software development [slides]

Jorge Melegati, Xiaofeng Wang and Pekka Abrahamsson
Free University of Bozen-Bolzano; University of Jyvaskyla

An Architectural Framework for Quality-driven Adaptive Continuous Experimentation [slides]

Miguel Jiménez, Luis F. Rivera, Norha M. Villegas, Gabriel Tamura, Hausi A. Müller and Nelly Bencomo
University of Victoria; Universidad ICESI; Aston University
Lunch Break (12:30-14:00)

Session 3 (14:00-15:30)

GLT: Edge Gateway ELT for Data-driven Intelligence Placement [slides]

Vasileios Theodorou and Nikos Diamantopoulos
Intracom Telecom; Independent

Breakout group session #1

Coffee Break (15:30-16:00)

Session 4 (16:00-17:30)

Breakout group session #2

Submission and Important Dates

The workshop invites three types of submissions:
  • Full research papers and experience reports, presenting original and evaluated research. Maximum length: 7 pages incl. references.
  • Position papers, presenting promising initial results from work-in-progress approaches, or research challenges, experiences, or roadmaps related to the theme of the workshop. Maximum length: 4 pages incl. references.
  • Industrial abstracts, describing challenges or success stories from practice. Maximum length: 2 pages incl. references.
Submitted papers must conform to the IEEE Conference Proceedings Formatting Guidelines available at

Please submit your paper using the following link:

Submitted papers will be reviewed by at least 3 members of the PC and judged on their relevance to the workshop scope and the quality and originality of their results. Accepted papers will be published in the ICSE 2019 Companion volume by IEEE.

Important Dates