Model transformation languages under a magnifying glass: a controlled experiment with Xtend, ATL, and QVT

Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › Peer-reviewed

Standard

Model transformation languages under a magnifying glass: a controlled experiment with Xtend, ATL, and QVT. / Hebig, Regina; Berger, Thorsten; Seidl, Christoph; Kook Pedersen, John; Wasowski, Andrzej.

Proceedings of the 2018 ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2018, Lake Buena Vista, FL, USA, November 04-09, 2018. Association for Computing Machinery, 2018. p. 445-455.

Harvard

Hebig, R, Berger, T, Seidl, C, Kook Pedersen, J & Wasowski, A 2018, Model transformation languages under a magnifying glass: a controlled experiment with Xtend, ATL, and QVT. in Proceedings of the 2018 ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2018, Lake Buena Vista, FL, USA, November 04-09, 2018. Association for Computing Machinery, pp. 445-455. https://doi.org/10.1145/3236024.3236046

APA

Hebig, R., Berger, T., Seidl, C., Kook Pedersen, J., & Wasowski, A. (2018). Model transformation languages under a magnifying glass: a controlled experiment with Xtend, ATL, and QVT. In Proceedings of the 2018 ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2018, Lake Buena Vista, FL, USA, November 04-09, 2018 (pp. 445-455). Association for Computing Machinery. https://doi.org/10.1145/3236024.3236046

Vancouver

Hebig R, Berger T, Seidl C, Kook Pedersen J, Wasowski A. Model transformation languages under a magnifying glass: a controlled experiment with Xtend, ATL, and QVT. In Proceedings of the 2018 ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2018, Lake Buena Vista, FL, USA, November 04-09, 2018. Association for Computing Machinery. 2018. p. 445-455 https://doi.org/10.1145/3236024.3236046

Author

Hebig, Regina ; Berger, Thorsten ; Seidl, Christoph ; Kook Pedersen, John ; Wasowski, Andrzej. / Model transformation languages under a magnifying glass: a controlled experiment with Xtend, ATL, and QVT. Proceedings of the 2018 ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2018, Lake Buena Vista, FL, USA, November 04-09, 2018. Association for Computing Machinery, 2018. pp. 445-455

BibTeX

@inproceedings{168e395e91c54d3782769477539eab85,
title = "Model transformation languages under a magnifying glass: a controlled experiment with Xtend, ATL, and QVT",
abstract = "In Model-Driven Software Development, models are automatically processed to support the creation, build, and execution of systems. A large variety of dedicated model-transformation languages exists, promising to efficiently realize the automated processing of models. To investigate the actual benefit of using such specialized languages, we performed a large-scale controlled experiment in which over 78 subjects solve 231 individual tasks using three languages. The experiment sheds light on commonalities and differences between model transformation languages (ATL, QVT-O) and on benefits of using them in common development tasks (comprehension, change, and creation) against a modern general-purpose language (Xtend). Our results show no statistically significant benefit of using a dedicated transformation language over a modern general-purpose language. However, we were able to identify several aspects of transformation programming where domain-specific transformation languages do appear to help, including copying objects, context identification, and conditioning the computation on types.",
author = "Regina Hebig and Thorsten Berger and Christoph Seidl and {Kook Pedersen}, John and Andrzej Wasowski",
year = "2018",
doi = "10.1145/3236024.3236046",
language = "English",
isbn = "978-1-4503-5573-5",
pages = "445--455",
booktitle = "Proceedings of the 2018 ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2018, Lake Buena Vista, FL, USA, November 04-09, 2018",
publisher = "Association for Computing Machinery",
address = "United States",

}

RIS

TY - GEN

T1 - Model transformation languages under a magnifying glass: a controlled experiment with Xtend, ATL, and QVT

AU - Hebig, Regina

AU - Berger, Thorsten

AU - Seidl, Christoph

AU - Kook Pedersen, John

AU - Wasowski, Andrzej

PY - 2018

Y1 - 2018

N2 - In Model-Driven Software Development, models are automatically processed to support the creation, build, and execution of systems. A large variety of dedicated model-transformation languages exists, promising to efficiently realize the automated processing of models. To investigate the actual benefit of using such specialized languages, we performed a large-scale controlled experiment in which over 78 subjects solve 231 individual tasks using three languages. The experiment sheds light on commonalities and differences between model transformation languages (ATL, QVT-O) and on benefits of using them in common development tasks (comprehension, change, and creation) against a modern general-purpose language (Xtend). Our results show no statistically significant benefit of using a dedicated transformation language over a modern general-purpose language. However, we were able to identify several aspects of transformation programming where domain-specific transformation languages do appear to help, including copying objects, context identification, and conditioning the computation on types.

AB - In Model-Driven Software Development, models are automatically processed to support the creation, build, and execution of systems. A large variety of dedicated model-transformation languages exists, promising to efficiently realize the automated processing of models. To investigate the actual benefit of using such specialized languages, we performed a large-scale controlled experiment in which over 78 subjects solve 231 individual tasks using three languages. The experiment sheds light on commonalities and differences between model transformation languages (ATL, QVT-O) and on benefits of using them in common development tasks (comprehension, change, and creation) against a modern general-purpose language (Xtend). Our results show no statistically significant benefit of using a dedicated transformation language over a modern general-purpose language. However, we were able to identify several aspects of transformation programming where domain-specific transformation languages do appear to help, including copying objects, context identification, and conditioning the computation on types.

U2 - 10.1145/3236024.3236046

DO - 10.1145/3236024.3236046

M3 - Article in proceedings

SN - 978-1-4503-5573-5

SP - 445

EP - 455

BT - Proceedings of the 2018 ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2018, Lake Buena Vista, FL, USA, November 04-09, 2018

PB - Association for Computing Machinery

ER -
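
Illustration

As an illustration of the abstract's point about "conditioning the computation on types", the sketch below shows the kind of explicit type dispatch a general-purpose language typically needs when transforming a model: each source element kind must be tested and handled by hand, whereas dedicated transformation languages such as ATL or QVT-O select the matching rule from the source element's metaclass automatically. This is a minimal, hypothetical example (Java 17+); the Section/Paragraph metamodel and all names are assumptions for illustration only and are not taken from the paper's experiment material.

    // Minimal, hypothetical source metamodel: a document tree with two node kinds.
    sealed interface Node permits Section, Paragraph {}
    record Section(String title, java.util.List<Node> children) implements Node {}
    record Paragraph(String text) implements Node {}

    public class ToHtml {
        // In a general-purpose language the dispatch on the source element's type
        // is written out explicitly; a transformation DSL would instead match a
        // per-metaclass rule against each source element.
        static String transform(Node node) {
            if (node instanceof Section s) {
                StringBuilder out = new StringBuilder("<h1>" + s.title() + "</h1>\n");
                for (Node child : s.children()) {
                    out.append(transform(child));   // recurse into contained elements
                }
                return out.toString();
            } else if (node instanceof Paragraph p) {
                return "<p>" + p.text() + "</p>\n";
            }
            throw new IllegalArgumentException("unknown node kind: " + node);
        }

        public static void main(String[] args) {
            Node doc = new Section("Example",
                    java.util.List.of(new Paragraph("Hello"), new Paragraph("World")));
            System.out.print(transform(doc));
        }
    }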
