Complex Systems and Artificial Intelligence: Ethical Decision Making with AI

Tuesday, November 28, 2017 - 19:00
450 Lexington Avenue
New York, NY

As the field of AI advances, researchers have started to probe the capabilities of their systems to test what questions AI can answer. While the useful questions of sentiment and classification are already considered fairly abstract, researchers have begun asking questions that go a step beyond. Ethical decision making is something humans themselves cannot even define effectively, yet AI systems have been built to approach the problem. At this meeting we will discuss the implications of this while closely following the MIT white paper "A Voting-Based System for Ethical Decision Making." The paper considers the decisions a self-driving car would make in extreme circumstances, where the ethical choice is determined by the results of online polls taken by random people. Along with this paper, we will discuss systems that claim to identify potential terrorists or criminals, and even someone's sexual orientation, from their face alone. Is this a groundbreaking application of AI, or just an embellished resurgence of the archaic pseudoscience of physiognomy? This will be an open discussion, so feel free to come in and state your stance!
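To make the voting idea concrete before the discussion, here is a minimal Python sketch of the general notion of deciding by poll: it simply tallies respondents' preferred outcomes for a hypothetical dilemma and picks the plurality winner. The outcome labels are made up for illustration, and this is not the paper's actual algorithm (which learns a preference model per voter and aggregates those models rather than counting raw votes).

```python
from collections import Counter

def decide(ballots):
    """Return the outcome chosen by the most respondents (plurality vote)."""
    tally = Counter(ballots)
    outcome, votes = tally.most_common(1)[0]
    return outcome, votes, len(ballots)

if __name__ == "__main__":
    # Hypothetical poll: what should the car do if its brakes fail?
    # Each entry is one respondent's preferred action (labels are invented).
    ballots = [
        "swerve_into_barrier",   # risks the passenger
        "stay_in_lane",          # risks the pedestrians
        "swerve_into_barrier",
        "swerve_into_barrier",
        "stay_in_lane",
    ]
    outcome, votes, total = decide(ballots)
    print(f"Chosen action: {outcome} ({votes}/{total} votes)")
```

Whether a plurality of anonymous poll-takers is an acceptable basis for a life-or-death decision is exactly the kind of question we'll put on the table.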

Links:

The White Paper: A Voting-Based System for Ethical Decision Making - https://arxiv.org/pdf/1709.06692.pdf

Facing Facts: Artificial Intelligence and the Resurgence of Physiognomy - https://undark.org/article/facing-facts-artificial-intelligence/

Sources: https://arxiv.org/abs/1612.04158 , https://osf.io/zn79k/ , https://www.ncbi.nlm.nih.gov/pubmed/28176027 , https://www.glaad.org/blog/glaad-and-hrc-call-stanford-university-responsible-media-debunk-dangerous-flawed-report

AI Gaydar Study Gets Another Look -
https://www.insidehighered.com/news/2017/09/13/prominent-journal-accepted-controversial-study-ai-gaydar-reviewing-ethics-work

Sources: https://www.gsb.stanford.edu/faculty-research/publications/deep-neural-networks-are-more-accurate-humans-detecting-sexual

Criminal machine learning -
http://callingbullshit.org/case_studies/case_study_criminal_machine_learning.html

Sources: http://onlinelibrary.wiley.com/doi/10.1002/bsl.939/abstract

How Adversarial Attacks Work -
http://blog.ycombinator.com/how-adversarial-attacks-work/

Google's AI Wizard Unveils a New Twist On Neural Networks -
https://www.wired.com/story/googles-ai-wizard-unveils-a-new-twist-on-neural-networks/

AI Experts Make a Terrifying Film Calling for a Ban on Killer Robots - http://bigthink.com/paul-ratner/watch-slaughterbots-a-terrifying-new-film-warning-about-killer-robots

Video:

https://www.youtube.com/watch?v=9CO6M2HsoIA&t

When algorithms discriminate: Robotics, AI and ethics -
http://www.aljazeera.com/programmes/talktojazeera/2017/11/algorithms-discriminate-robotics-ai-ethics-171117133216779.html


Please go to Room 4A, on the 4th floor of the WeWork building. Don't hesitate to post in the group or message me if you're not sure where to go. Use the 450 Lexington entrance that has "WeWork" written on the door or on a sign above it. If you arrive late, I won't be able to check my phone to guide you to the location, so please arrive on time.

Location

Event Time: 
Tuesday, November 28, 2017 - 19:00
Event Day: 
Tuesday, November 28
Address: 
450 Lexington Avenue
City/State: 
New York, NY
Organizer: 
Complex Systems and Artificial Intelligence