Organizations increasingly use Artificial Intelligence (AI) to achieve their goals. However, the use of AI has also led to negative side effects that harm people. The work presented in this thesis focuses on reaping the benefits of AI while preventing harm by presenting theoretical and practical approaches to the responsible management of AI. The thesis answers the research question: How can organizations ensure responsible use of artificial intelligence? Five papers contribute to answering this question.

The first paper asks the research question: How can an organization exploit inscrutable AI systems in a safe and socially responsible manner? We answer this question with an exploratory case study in the Danish Business Authority. The paper makes two key contributions: it introduces the concept of sociotechnical envelopment and shows how it enables organizations to manage the tradeoff between predictive power and explainability in AI.

The second paper asks the research question: How can organizations reconcile the growing demands for explanations of how AI-based algorithmic decisions are made with their desire to leverage AI to maximize business performance? The paper is part of a double issue with the first paper, sharing a similar foundation but differentiating itself by targeting a practitioner audience. The paper contributes a framework with six dimensions and four recommendations for explaining the behavior of black-box AI systems.

The third paper asks the research question: How do we ensure that machine learning (ML) models meet and maintain quality standards regarding interpretability and responsibility in a governmental setting? We address this question using the action design research method. The paper introduces the action design research project in the Danish Business Authority and the first version of the design artifact, the X-RAI framework, including its four sub-frameworks.
The fourth paper asks the research question: How should procedures be designed to assess the risks associated with a new AI system? The paper uses action design research, focuses on the first artifact of the X-RAI framework, the Artificial Intelligence Risk Assessment (AIRA) tool, and provides five design principles.

The fifth paper asks the research question: How to plan for successful evaluation of AI systems in production? The paper uses action design research and focuses on the second artifact of the X-RAI framework, the Evaluation Plan. The paper identifies five challenges in evaluating AI and prescribes five design principles to address them.