Model transformation languages under a magnifying glass: a controlled experiment with Xtend, ATL, and QVT

Regina Hebig, Thorsten Berger, Christoph Seidl, John Kook Pedersen, Andrzej Wasowski

Publication: Conference article in proceedings or book/report chapter › Conference contribution in proceedings › Research › peer review

Abstract

In Model-Driven Software Development, models are automatically processed to support the creation, build, and execution of systems. A large variety of dedicated model-transformation languages exists, promising to efficiently realize the automated processing of models. To investigate the actual benefit of using such specialized languages, we performed a large-scale controlled experiment in which 78 subjects solved 231 individual tasks using three languages. The experiment sheds light on commonalities and differences between model transformation languages (ATL, QVT-O) and on the benefits of using them in common development tasks (comprehension, change, and creation) compared against a modern general-purpose language (Xtend). Our results show no statistically significant benefit of using a dedicated transformation language over a modern general-purpose language. However, we were able to identify several aspects of transformation programming where domain-specific transformation languages do appear to help, including copying objects, identifying context, and conditioning the computation on types.
Original language: English
Title: Proceedings of the 2018 ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2018, Lake Buena Vista, FL, USA, November 04-09, 2018
Number of pages: 11
Publisher: Association for Computing Machinery
Publication date: 2018
Pages: 445-455
ISBN (Print): 978-1-4503-5573-5
DOI:
Status: Published - 2018
