Teaching A.I. Systems to Behave Themselves [NYTimes]
At OpenAI, Mr. Amodei and his colleague Paul Christiano are developing algorithms that can not only learn tasks through hours of trial and error, but also receive regular guidance from human teachers along the way. With a few clicks here and there, the researchers now have a way of showing the autonomous system that it needs to win points in Coast Runners while also moving toward the finish line. They believe that these kinds of algorithms — a blend of human and machine instruction — can help keep automated systems safe.
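To make the idea concrete, here is a toy, hypothetical sketch of the general technique the article describes: fit a reward model from human preference comparisons between pairs of trajectories (Bradley-Terry style), so an agent can be trained against learned human judgments rather than the raw game score alone. The feature encoding, learning rate, and the simulated "human clicks" below are all made up for illustration; this is not OpenAI's actual code.

```python
import numpy as np

# Hypothetical sketch: learn a reward model from human preference labels
# over pairs of trajectories, in the spirit of the human-feedback approach
# described above (not OpenAI's implementation).

rng = np.random.default_rng(0)

def features(trajectory):
    # Toy feature vector, e.g. [points scored, progress toward finish line]
    return np.asarray(trajectory, dtype=float)

def preference_prob(w, traj_a, traj_b):
    # Bradley-Terry model: probability the human prefers A over B
    ra, rb = w @ features(traj_a), w @ features(traj_b)
    return 1.0 / (1.0 + np.exp(rb - ra))

def update(w, traj_a, traj_b, human_prefers_a, lr=0.1):
    # Gradient step on the cross-entropy between the model's preference
    # probability and the human's label.
    p = preference_prob(w, traj_a, traj_b)
    grad = (p - float(human_prefers_a)) * (features(traj_a) - features(traj_b))
    return w - lr * grad

# Toy data: each trajectory is [points, progress]; the simulated "human
# teacher" prefers progress toward the finish line over farming points.
w = np.zeros(2)
for _ in range(500):
    a = rng.uniform(0, 1, size=2)
    b = rng.uniform(0, 1, size=2)
    human_prefers_a = a[1] > b[1]  # simulated human clicks
    w = update(w, a, b, human_prefers_a)

print("learned reward weights:", w)  # the weight on progress should dominate
```

In a real system the reward model would score states for an RL algorithm in place of (or alongside) the game's own score, which is how the "few clicks here and there" end up steering the agent toward the finish line.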
But Mr. Goodfellow and others have shown that hackers can alter images so that a neural network will believe they include things that aren't really there. Just by changing a few pixels in a photo of an elephant, for example, they could fool the neural network into thinking it depicts a car. That becomes problematic when neural networks are used in security cameras. Simply by making a few marks on yo...
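To give a sense of how small these pixel changes can be, here is a minimal sketch of a fast-gradient-sign-style perturbation, the family of attacks associated with Goodfellow's work. The toy classifier and random "photo" are stand-ins, and the epsilon budget is an arbitrary choice; it illustrates the mechanism rather than reproducing any published experiment.

```python
import torch
import torch.nn as nn

# Minimal sketch of a fast-gradient-sign-style attack: nudge each pixel a
# tiny amount in the direction that increases the classifier's loss.
# The model and image here are stand-ins, not a real classifier or photo.

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "elephant" photo
true_label = torch.tensor([3])                        # stand-in class index

loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

epsilon = 0.03  # per-pixel budget, small enough to be barely visible
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("prediction before:", model(image).argmax(dim=1).item())
print("prediction after: ", model(adversarial).argmax(dim=1).item())
```

The unsettling part is that the perturbed image looks unchanged to a person, which is exactly why a sticker or a few marks in the physical world can have the same effect on a security camera.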
How Tech Giants Are Devising Real Ethics for Artificial Intelligence
Five of the world’s largest tech companies are trying to create a standard of ethics around the creation of artificial intelligence. Even though there is a role for government to pass regulations and applicable laws, policies often lag behind the fast-paced nature of technology. In my opinion, this will be critical to the widespread adoption of artificial intelligence.
We've seen a lot of people talk about the sort of tough decisions AI-powered machines will make in the future. One of the areas in which this has come up a lot is self-driving cars and the decisions they will have to make, especially when fatalities are involved. It brings into question the way we assign value to human life and what the right course of ethical action is.
This platform poses these same questions to people to see how they would react. Some scenarios clearly reveal how we weigh factors such as gender, intelligence, age, etc. It's definitely very controversial.
The decision-fatigue point is super important and definitely critical to the design of any such study. The decisions made from one question to the next are generally not independent, and if ...
Woah! I think this is a great initiative to log human behavior in complex situations. And, as the OP pointed out, the data could then be used to model and mimic human decision making.
That said, the role of a regulator for AI agents can...
This experiment doesn't really seem to be about self-driving cars; rather, it sheds light on the difficult nature of making these moral decisions. When humans themselves can't make these decisions...