We deliver tools to our employees, customers, and partners for developing and using AI responsibly, accurately, and ethically.
We believe the benefits of AI should be accessible to everyone. But it is not enough to deliver only the technological capabilities of AI – we also have an important responsibility to ensure that AI is safe and inclusive for all. We take that responsibility seriously and are committed to providing our employees, customers, and partners with the tools they need to develop and use AI safely, accurately, and ethically.
To safeguard human rights and protect the data entrusted to us, we work with human rights experts, and we educate and empower our customers and partners and share our research with them.
To create AI accountability, we seek stakeholder feedback, take guidance from our Ethical Use Advisory Council, and convene our own data science review board.
We strive for model explainability and clear usage terms, and we ensure customers control their own data and models.
Accessible AI promotes growth and increased employment, and benefits society as a whole.
AI should respect the values of all those impacted, not just those of its creators. To achieve this, we test models with diverse data sets, seek to understand their impact, and build inclusive teams.
We explore the ethics of AI and bring awareness of key issues to our employees, customers, and the public alike.
As an Architect of Ethical AI Practice at Salesforce, Kathy develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. She collaborates and partners with external AI and ethics experts to continuously evolve Salesforce policies, practices, and products. She received her MS in Engineering Psychology and BS in Applied Psychology from the Georgia Institute of Technology. The second edition of her book, "Understanding Your Users," was published in May 2015.
Architect, Ethical AI Practice
At Salesforce, Yoav helps instantiate, embed, and scale industry-leading best practices for the responsible development, use, and deployment of AI. Prior to joining this team, Yoav worked at Omidyar Network, where he led the Responsible Computer Science Challenge and helped develop EthicalOS, a risk mitigation toolkit for product managers. Before that, he brought to bear his undergraduate studies in Religious Studies and Political Science as a leader of mission-driven, social impact organizations.
Principal, Ethical AI Practice
The Office of Ethical & Humane Use promotes the ethical and humane use of our technology.
Learn to catch bias in data and design ethical AI systems at your company.