Datasets are how A.I. engineers teach machines about life and the world around us.

We are working to crowdsource ethical wisdom from people all over the world with different perspectives.

The EthicsNet team is working to empower people to infuse their personal sense of values into Artificial Intelligence, teaching it through examples, much as one would teach a very young child.

We want to make it easy to collect examples of prosocial (and perhaps antisocial) behaviour, to create datasets of behavioural norms that best describe the creeds of specific demographics, and to specify those universal values that we all share.
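As a rough illustration, one possible shape for a single annotation record is sketched below; the field names and schema are hypothetical assumptions for the sake of example, not EthicsNet's actual data format.

```ts
// Hypothetical sketch of a single annotation record; field names are
// illustrative only, not EthicsNet's actual schema.
interface EthicsAnnotation {
  id: string;                               // unique record identifier
  sourceUrl: string;                        // page where the example was found
  mediaType: "text" | "image" | "video";    // kind of content tagged
  excerpt: string;                          // the tagged content, or a link to it
  label: "prosocial" | "antisocial";        // the annotator's judgement
  demographic?: string;                     // optional self-reported annotator group
  timestamp: string;                        // ISO 8601 time of annotation
}

// Example record for a prosocial text snippet.
const example: EthicsAnnotation = {
  id: "a1b2c3",
  sourceUrl: "https://example.com/article",
  mediaType: "text",
  excerpt: "A stranger returned a lost wallet untouched.",
  label: "prosocial",
  timestamp: "2018-07-01T12:00:00Z",
};
```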

When we have compiled enough responses to make something useful, we will release it to the public to aid research and cultural understanding within our global community.


EthicsNet Annotation Extension

We have created a browser extension (Google Chrome only, for the moment) to make it convenient to annotate examples on the fly, whilst going about one's normal browsing.

See an apt example? Right-click and tag it in seconds! Tag text, pictures, and video too.
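Under the hood, a right-click annotator of this kind can be built on the Chrome context-menu API. The sketch below is a minimal illustration only; the menu item, handler, and collection endpoint are assumptions, not the extension's actual code.

```ts
// Minimal sketch of a right-click annotator using the Chrome contextMenus
// API (needs the "contextMenus" permission; types from @types/chrome).
chrome.contextMenus.create({
  id: "ethicsnet-tag",
  title: "Tag as prosocial example",
  contexts: ["selection", "image", "video"], // text, pictures, and video
});

chrome.contextMenus.onClicked.addListener((info) => {
  if (info.menuItemId !== "ethicsnet-tag") return;
  const payload = {
    // Selected text for text tags; media source URL for images and video.
    content: info.selectionText ?? info.srcUrl,
    pageUrl: info.pageUrl,
    label: "prosocial",
  };
  // Hypothetical collection endpoint, not EthicsNet's real API.
  fetch("https://annotations.example.org/tag", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
});
```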

Please install it, and give us your feedback!


Legacy Dilemma Annotation Tool

Our very first prototype ethical annotation tool was built largely upon the concepts of ethical deliberation outlined in GenEth by Michael and Susan Anderson. Our initial aim was to create a streamlined and web-friendly method for online collaboration in this process.

Whilst we believe that this way of deconstructing ethical dilemmas can be useful, our focus is at present shifting towards creating a machine-vision-friendly set of prosocial behaviour examples, which should be easier to apply directly to the latest machine learning techniques.

The general instructions are as follows:

  1. Think of an issue: a quandary that a synthetic intelligence may one day have to face, whether in the near or distant future. Remember, inspiration can come at any time and from anywhere!

  2. Next, identify the possible actions the agent can take. Should the contract be nullified? Should your personal assistant continue to respect your privacy? Will it be necessary for a home security system to use force? When should it do these things and, more importantly, why?

  3. What sort of criteria should the machine use to make sense of the problem? What ethical injunctions should it take into account while weighing its options?

  4. Now it’s time to start fleshing out your scenario with specific cases. These will help illustrate how, in practice, the machine will weigh the values it has been given when making its decisions. After exploring different cases you may end up rethinking what you did in step 3! (One rough way to encode such a scenario is sketched after this list.)
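To make the four steps concrete, here is one hedged way such a deliberation could be encoded, loosely inspired by GenEth's use of duties with degrees of satisfaction and violation. All names and scores below are illustrative assumptions, not GenEth's actual format.

```ts
// Hedged sketch of the deliberation structure in steps 1-4; all names
// and values are illustrative, not GenEth's actual representation.
interface Case {
  situation: string;              // step 4: a concrete instance of the dilemma
  // For each action, one integer per duty: positive values satisfy the
  // duty, negative values violate it (a rough GenEth-style encoding).
  dutyScores: Record<string, number[]>;
  preferredAction: string;        // the annotator's ethical judgement
}

interface Dilemma {
  description: string;            // step 1: the quandary itself
  actions: string[];              // step 2: options open to the agent
  duties: string[];               // step 3: ethical criteria to weigh
  cases: Case[];                  // step 4: illustrative cases
}

const privacyDilemma: Dilemma = {
  description: "Should a personal assistant disclose its user's location?",
  actions: ["disclose", "withhold"],
  duties: ["respect privacy", "prevent harm"],
  cases: [
    {
      situation: "A caller claims a medical emergency involving the user.",
      dutyScores: { disclose: [-1, 2], withhold: [1, -2] },
      preferredAction: "disclose",
    },
  ],
};
```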