Overview

Security is organized around threats: speculative scenarios that describe possible security outcomes. Speculative scenarios are also used widely in design. This initiative takes inspiration from speculative practices in design to develop new approaches to identifying security issues, drawing designers into the work of security in the process.

mlfailures
When machine learning (ML) models misfire, people can get hurt. But if you have an algorithm in front of you, how do you know what can go wrong? This project—mlfailures—produces labs that help students identify bias through hands-on, real-world problems in Python. The labs are accompanied by a Twitter account, @mlfailures, which tweets out relevant issues from the news. Our goal is to train the next generation of students to identify, discuss, and address the risks posed by machine learning algorithms.
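To give a flavor of the kind of exercise the labs cover, here is a minimal sketch of one common bias audit: comparing a classifier's false positive rate across demographic groups. The data, group names, and predictions below are entirely hypothetical, invented for illustration; the actual lab materials use real-world datasets.

```python
# Hypothetical audit: does the model make false alarms more often
# for one group than another? (Illustrative data, not from the labs.)

def false_positive_rate(y_true, y_pred):
    """Fraction of actual negatives (label 0) that the model flagged positive."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(p for _, p in negatives) / len(negatives)

# (group, true_label, predicted_label) — hypothetical records.
records = [
    ("A", 0, 0), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 0, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]

rates = {}
for group in sorted({g for g, _, _ in records}):
    y_true = [t for g, t, _ in records if g == group]
    y_pred = [p for g, _, p in records if g == group]
    rates[group] = false_positive_rate(y_true, y_pred)

# A large gap between groups is one signal of biased behavior.
gap = max(rates.values()) - min(rates.values())
print(rates)
print(f"FPR gap: {gap:.2f}")
```

A gap near zero suggests the model's false alarms are evenly distributed; a large gap is a prompt to dig into why one group bears more of the error burden — exactly the kind of discussion the labs aim to provoke.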

Current Project Team

Nick Merrill

Director

Inderpal Kaur

Research Assistant

Samuel Greenberg

Research Assistant

Contact information
Nick Merrill - ffff at berkeley dot edu
Security Games
Our security games project combines the strengths of existing security practices with human-centered design. We are developing improvisational role-playing games that help stakeholders imagine the human perpetrators of attacks. These games aim to create more specific, human-centered ways of representing security threats, and to draw new kinds of people into the practice of security: designers, activists, project managers, and more.

Current Project Team

Nick Merrill

Director

Kyra Baffo

Research Assistant

Past Team Members

Joanne Ma

Research Assistant

Contact information
Nick Merrill - ffff at berkeley dot edu