A testing framework for executable domain-specific languages

Title:
A testing framework for executable domain-specific languages
Contributors:
Sunye, Gerson, Mottu, Jean-Marie, Bousse, Erwan
Publisher Information:
Zenodo
Publication Year:
2023
Collection:
Zenodo
Document Type:
Academic journal; text
Language:
English
DOI:
10.5281/zenodo.8195446
Rights:
Creative Commons Attribution 4.0 International ; cc-by-4.0 ; https://creativecommons.org/licenses/by/4.0/legalcode
Accession Number:
edsbas.38CFC779
Database:
BASE

Further Information

The continuous growth of software complexity raises the need for effective complexity management. Model-Driven Engineering (MDE) is a development paradigm that addresses this need by separating concerns through models. A model is a specific abstraction of a system that can be defined by a Domain-Specific Language (DSL). A DSL with execution facilities, referred to as an Executable DSL (xDSL), enhances the value of modeling by enabling dynamic Verification & Validation (V&V) techniques. Testing is the most prevalent dynamic V&V technique in software engineering. While many testing frameworks exist for general-purpose programming languages, providing testing facilities for any given xDSL remains a costly and challenging task. In this thesis, we propose a generic testing framework for executable DSLs. Given an xDSL, the framework provides a testing language that supports the use of xDSL-specific concepts in the definition of test cases. This enables the xDSL’s users, namely domain experts, to write test cases for their models. The written test cases can then be executed on the models to produce test results. To further support domain experts in testing models efficiently, the framework offers three supplementary services: (i) test quality measurement, to ensure that the written test cases are good enough; (ii) test debugging, to localize faults in the model under test when a test fails; and (iii) automatic test improvement, to strengthen the ability of the written test cases to detect regression faults.
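To make the idea of xDSL-level test cases more concrete, the following is a minimal, hypothetical sketch; it is not the thesis's actual testing language or tooling. It embeds a toy state-machine xDSL interpreter in Java and a test case that stimulates a model with domain-level events (rather than general-purpose code) and then asserts on the resulting runtime state. All class, method, state, and event names are invented for illustration.

```java
// Hypothetical illustration only: a toy executable state-machine "xDSL" and a
// test case expressed in terms of its domain concepts (states and events).
import java.util.HashMap;
import java.util.Map;

public class StateMachineTestSketch {

    /** Minimal executable model: states linked by event-labelled transitions. */
    static class StateMachine {
        private final Map<String, Map<String, String>> transitions = new HashMap<>();
        private String current;

        StateMachine(String initialState) {
            this.current = initialState;
        }

        void addTransition(String from, String event, String to) {
            transitions.computeIfAbsent(from, k -> new HashMap<>()).put(event, to);
        }

        /** Execution semantics: consume an event and move to the target state if one is defined. */
        void fire(String event) {
            Map<String, String> outgoing = transitions.get(current);
            if (outgoing != null && outgoing.containsKey(event)) {
                current = outgoing.get(event);
            }
        }

        String currentState() {
            return current;
        }
    }

    public static void main(String[] args) {
        // Model under test: a traffic light expressed in the toy xDSL.
        StateMachine light = new StateMachine("Red");
        light.addTransition("Red", "go", "Green");
        light.addTransition("Green", "caution", "Yellow");
        light.addTransition("Yellow", "stop", "Red");

        // Test case: stimulate the model with domain-level events,
        // then assert on the expected runtime state.
        light.fire("go");
        light.fire("caution");

        String expected = "Yellow";
        if (!expected.equals(light.currentState())) {
            throw new AssertionError("Expected " + expected + " but was " + light.currentState());
        }
        System.out.println("Test passed: model reached state " + light.currentState());
    }
}
```

In the framework described by the abstract, the equivalent test would be written in a testing language generated for the xDSL, so the domain expert manipulates only model-level concepts; the embedded interpreter above merely stands in for the xDSL's execution semantics in this self-contained sketch.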