HOW DOES MICROSOFT ENSURE RESPONSIBLE DEVELOPMENT AND APPLICATION OF AI IN THE AI FOR GOOD PROGRAM?

Microsoft launched the AI for Good initiative in 2017 with the goal of using AI technology to help address some of humanity’s greatest challenges. As one of the leading developers of AI, Microsoft recognizes it has an important responsibility to ensure this powerful technology is developed and applied responsibly and for the benefit of all.

At the core of Microsoft’s approach is a commitment to developing AI using a human-centered design philosophy. This means all AI projects undertaken as part of AI for Good are guided by principles of transparency, fairness and accountability. Ethics reviews are integrated into the design, development and testing processes from the earliest stages to help identify and mitigate any risks or potential for harm, bias or unintended consequences.

A multi-disciplinary team of engineers, data scientists, sociologists and ethicists works closely together on all AI for Good initiatives. Their goal is to develop AI solutions that augment, rather than replace, human capabilities and decision making. Input from external experts and potential end users is also sought to shape the technology's design and ensure it addresses real needs. For example, when developing AI for healthcare, Microsoft works with medical professionals, patients and advocacy groups to identify real challenges and ensure any tools developed are clinically valid and easy for non-technical people to understand and use safely.

Once an AI model or technology is developed, rigorous testing is conducted to evaluate its performance, accuracy, fairness and resilience. Data used to train models is also carefully analyzed to check for biases or gaps. Microsoft believes transparency into how its AI systems work is important for maintaining user trust. To help achieve this, explanations of model decisions are provided in non-technical language so users understand the rationale behind predictions or recommendations.
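To make the idea of a fairness evaluation concrete, here is a minimal, hypothetical sketch of one common check: comparing a model's positive-prediction rate across demographic groups. The function names and threshold are illustrative assumptions, not Microsoft's actual test suite.

```python
# Hypothetical per-group fairness check: compare how often a model
# predicts the positive class for each demographic group. Names and
# data are illustrative only.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive (1) predictions per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {gap:.2f}")  # → 0.50
```

A large gap does not prove unfairness on its own, but it flags the model for closer review of its training data and decision logic, which is the role such metrics play in a testing pipeline.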

Microsoft further ensures responsible oversight of AI systems by integrating privacy and security measures from the start. Data is handled in compliance with regulations such as GDPR and used only for the specified purpose with user consent. Access to data and models is restricted, and systems are designed to protect against attacks or attempts to manipulate outputs.
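The purpose-limitation idea above can be sketched in code: each record carries the purposes its owner consented to, and a pipeline filters out anything lacking consent for the task at hand. All names here are hypothetical, and real GDPR compliance involves far more than this check.

```python
# Illustrative purpose-limitation filter, assuming each record stores
# the purposes its owner consented to. Hypothetical names throughout.
from dataclasses import dataclass, field

@dataclass
class Record:
    owner: str
    consented_purposes: frozenset = field(default_factory=frozenset)

def filter_for_purpose(records, purpose):
    """Keep only records whose owner consented to the given purpose."""
    return [r for r in records if purpose in r.consented_purposes]

records = [
    Record("alice", frozenset({"research", "model_training"})),
    Record("bob", frozenset({"research"})),
]
usable = filter_for_purpose(records, "model_training")
print([r.owner for r in usable])  # → ['alice']
```

Enforcing consent at the data-access layer, rather than trusting downstream code to remember it, is one way such restrictions are built in "from the start."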

A cornerstone of Microsoft's approach is ongoing monitoring of AI systems even after deployment. This allows Microsoft to continually evaluate performance for biases that may emerge over time due to changes in data or other factors. If issues are discovered, techniques such as retraining on updated data or revising the model can help address them. Microsoft also invests in open-source responsible-AI tooling, such as the Fairlearn toolkit it helped create, to evaluate systems for unfair treatment or harm and improve oversight capabilities over the long run.
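Post-deployment monitoring of the kind described above can be sketched as a simple drift check: compare accuracy in a recent window of logged predictions against the accuracy measured at launch, and raise an alert when it falls too far. The window size, threshold, and function names are illustrative assumptions, not a documented Microsoft policy.

```python
# Minimal post-deployment drift check, assuming predictions and
# ground-truth outcomes are logged in rolling windows. The tolerance
# value is an illustrative assumption.

def accuracy(preds, labels):
    """Fraction of predictions matching ground-truth labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def drift_alert(baseline_acc, window_preds, window_labels, tolerance=0.05):
    """Flag the window if accuracy drops more than `tolerance` below baseline."""
    current = accuracy(window_preds, window_labels)
    return current < baseline_acc - tolerance, current

alert, acc = drift_alert(0.90,
                         [1, 0, 1, 0, 1, 1, 0, 0, 1, 1],   # recent predictions
                         [1, 1, 1, 0, 0, 1, 0, 1, 1, 1])   # actual outcomes
print(alert, acc)  # → True 0.7
```

The same pattern extends naturally to the fairness case: tracking a per-group metric over time can surface biases that were absent at launch but emerge as the incoming data shifts.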

Feedback mechanisms allow end users, partners and oversight boards to report any concerns regarding an AI system to Microsoft for investigation. Concerns are taken seriously and dealt with transparently. If issues cannot be sufficiently addressed, systems may be taken offline until the problem is resolved.

To ensure AI for Good initiatives have measurable positive impact, key performance indicators are established during project planning. Regular progress reporting against goals keeps teams accountable. Microsoft also supports working with independent third parties to evaluate impact where appropriate using methods like randomized controlled trials.

Where possible, Microsoft aims to openly share learnings from AI for Good projects so others can benefit or build upon the work. Case studies, research papers and data are made available under open licenses when it does not compromise user privacy or intellectual property. Microsoft is also collaborating with partners across industry, civil society and government on issues like model card templates to help standardize ‘nutrition labels’ for AI and advance responsible innovation.
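The 'nutrition label' idea can be illustrated with a toy model card: structured documentation of a model's intended use, training data, evaluation results, and limitations. The fields and values below are a hypothetical sketch, not an official Microsoft or industry-standard schema.

```python
# Hypothetical model card sketch. Field names follow the general
# "model card" idea of structured model documentation; all values
# are illustrative placeholders.
import json

model_card = {
    "model_name": "example-triage-classifier",
    "intended_use": "Illustrative example only",
    "training_data": "Synthetic placeholder data",
    "evaluation": {
        "accuracy": 0.91,                        # placeholder metric
        "demographic_parity_difference": 0.03,   # placeholder metric
    },
    "limitations": "Not validated for clinical or production use",
}

print(json.dumps(model_card, indent=2))
```

Publishing such a card alongside a model gives downstream users a compact, comparable summary of what the system was built for and where it should not be trusted.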

Microsoft brings a multi-faceted approach rooted in human-centric values to help ensure AI developed and applied through its AI for Good initiatives delivers real benefits to people and society in a way that is lawful, ethical and trustworthy. Through a focus on transparency, oversight, accountability and collaboration, Microsoft strives to serve as a leader in developing AI responsibly for the benefit of all. Ongoing efforts aim to help address important challenges through technology, while mitigating risk and avoiding potential downsides.