[Summary: I've been meeting with people and doing my reading. Does anyone have suggestions about how people's minds change and how structural inequalities play out in deliberative settings?]
Another week of wondering about the possibilities of populist process has passed...
I've been following up the pointers I mentioned last time. Much of the time from Thursday evening to Sunday evening I was at the unconference on horizontal organizing in San Francisco, which I mentioned. It was interesting and challenging in several ways. I met fascinating people with stimulating new ideas, and I also enjoyed the unconference format, which directly encourages all the people present to take responsibility for determining what's going to happen: I found that personally challenging and useful, as it pulled me out of my habitual orientation, which can include waiting to see what's going to happen and whether it's going to include something I want or not.
I'm still thinking over the conversation that happened there on the last day, when a lot of people from Occupy SF were present and talking about the Occupy movement in general. It confirmed my perception that most of the interesting decision-making tasks that Occupy faces are far too deep, divided, and long-term to be addressed by the usual consensus-process meetings with proposals, concerns, amendments, and blocks. They require some kind of slow, ongoing, inclusive deliberation. Of course, these conversations happen all the time, informally and in print, but I'm concerned that there's no structure to hold them and help them happen in a positive, effective way, so I'm wondering what could fill that need. I also think that Occupy's assemblies and other meetings often draw a lot of people who feel a strong need to be heard in their suffering and acknowledged as valid people with something to contribute - and rightly so - and it would be very good to have processes that directly address those needs, reducing the demands on the deliberation process, which generally can't meet them.
I had hoped to move fairly quickly from reading to developing something new, but I'm going to continue reading and having conversations for now. I'm learning a lot, and I think it's probably the best approach right now.
I'll close out this message with updates on the four forks in the road I identified last time.
On modeling the simple case of collective deliberation with fixed preferences as a constraint satisfaction problem: I'm intrigued by distributed constraint satisfaction algorithms - Yokoo and Hirayama 2000, "Algorithms for Distributed Constraint Satisfaction: A Review" - in which a cluster of computers works together to solve constraint problems with millions of variables. They construct "plan fragments" and pass each other little messages called "nogoods". A "nogood" is a particular thing that's been ruled out: for instance, if we're looking for a congenial pizza, and I won't eat pineapple with garlic and you won't eat clams without garlic, at some point we may figure out that it's not worth considering clams and pineapple together. So we can pass on that news to everyone else as a "nogood". I want to delve into this in more detail and see whether there are interesting things people can do with that framework, as an alternative or supplement to the ideas of proposals, concerns, votes, etc.
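To make the pizza example concrete, here's a toy sketch of my own (not Yokoo and Hirayama's actual distributed algorithm): each ingredient is a yes/no variable, the two eating preferences are constraints, and a partial choice is a "nogood" if no way of filling in the remaining variables can satisfy everyone.

```python
from itertools import product

# Two preferences from the pizza example, as constraints on an assignment
# of True/False to the ingredients "pineapple", "clams", and "garlic":
constraints = [
    lambda a: not (a["pineapple"] and a["garlic"]),  # I won't eat pineapple with garlic
    lambda a: not (a["clams"] and not a["garlic"]),  # you won't eat clams without garlic
]

VARIABLES = ("pineapple", "clams", "garlic")

def is_nogood(partial):
    """A partial assignment is a nogood if no completion satisfies all constraints."""
    free = [v for v in VARIABLES if v not in partial]
    for values in product([True, False], repeat=len(free)):
        full = dict(partial, **dict(zip(free, values)))
        if all(c(full) for c in constraints):
            return False  # found a completion that works, so not a nogood
    return True

print(is_nogood({"clams": True, "pineapple": True}))  # True: rule this combination out
print(is_nogood({"clams": True}))                     # False: clams alone are still fine
```

Clams force garlic and pineapple forbids it, so {clams, pineapple} is unsatisfiable and can be broadcast as a nogood; the point of the distributed algorithms is to discover such facts locally and share them, rather than testing all completions centrally as this brute-force sketch does.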
On strategy and self-interest: two main sources so far.
Landa and Meirowitz, 2009, "Game Theory, Information, and Deliberative Democracy". Given a particular context and process for deliberation, when do participating agents have incentives to share information fully and truthfully, and when do they gain by dissembling or withholding? For instance, when different agents want different outcomes (some are for invading Iraq and some against), each agent's incentive is to say whatever will influence people to vote the way they want, whether or not it's true. If everyone wants the same thing, on the other hand - to stop a beloved building from collapsing into the sea, say - then all may have an interest in sharing what they know truthfully. Several more complex scenarios are also discussed.
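The contrast between those two cases can be sketched in a few lines. This is my own toy cheap-talk illustration, not Landa and Meirowitz's model: a sender observes the true state and sends a report, the group acts on the report, and we ask which report maximizes the sender's payoff.

```python
STATES = ["good_idea", "bad_idea"]

def best_report(payoff, true_state):
    """The report that maximizes the sender's payoff, given the group acts on it."""
    return max(STATES, key=lambda report: payoff(true_state, report))

def aligned(state, report):
    """Aligned sender: happy only when the group's action matches the true state."""
    return 1 if report == state else 0

def partisan(state, report):
    """Partisan sender: wants the 'good_idea' action (e.g. invade) no matter what."""
    return 1 if report == "good_idea" else 0

for state in STATES:
    print(state, "->", best_report(aligned, state), best_report(partisan, state))
```

The aligned sender's best report is always the truth; the partisan sender's best report is "good_idea" regardless of the true state, which is exactly why listeners in the divided case can't take reports at face value.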
Vannucci and Singer, 2010, Come Hell or High Water: A Handbook on Collective Process Gone Awry. Discusses lots of ways in which people subvert the process of making good decisions together, from refusing to take time to teach other people how to do certain bookkeeping tasks to conspiring to get someone banned and blacklisted from the group. Many of these behaviors can arise in multiple ways - a person might be doing it unintentionally, unable to imagine another way or out of an unacknowledged emotional need, or might be doing it on purpose to make themselves powerful at the expense of the group. The authors strongly recommend insisting on fair process based in well-defined egalitarian principles even when some people are vociferously opposed.
Vannucci and Singer's book also touches on problems of racial, gender, and other inequities in collective groups, and recommends rising to the challenge of listening to people who are different and making room for those differences, without getting into excessive hand-wringing about particular isms. I think this can go a long way, but something more is needed. I've had good conversations with Beth Simpson and Kennan Salinero, who gave me good reading suggestions that I'm going to follow up.
And on how people's approaches and preferences change during the encounter with others, I have a few threads to follow. Kennan suggests that change doesn't come from deliberation, but from a context shift arising from direct experience of the other person, and suggests a book by the Heath brothers, which I'll look into. Jonathan Dushoff suggests that one theoretical way in might be in the observation that people are often willing to accept a proposal if most other people want it, to support "the will of the group". This is a sort of threshold effect, which connects to things he and I have been studying in other contexts (http://leeworden.net/lw/thresholds-1, http://leeworden.net/lw/node/90).
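The threshold effect Jonathan describes can be sketched in a Granovetter-style cascade model (my own toy illustration, not from his work): each person has a personal threshold, and supports a proposal once the fraction of supporters reaches it.

```python
def cascade(thresholds, initial_supporters):
    """Iterate until support stabilizes; return the final set of supporters."""
    n = len(thresholds)
    supporters = set(initial_supporters)
    while True:
        frac = len(supporters) / n
        new = {i for i, t in enumerate(thresholds)
               if i in supporters or t <= frac}
        if new == supporters:
            return supporters
        supporters = new

# Ten people: one enthusiast (threshold 0), the rest follow at evenly spaced levels.
thresholds = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
final = cascade(thresholds, initial_supporters={0})
print(len(final))  # the whole group joins, one threshold at a time
```

With evenly spaced thresholds one enthusiast tips the whole group, but a gap in the thresholds stalls the cascade partway - e.g. `cascade([0.0, 0.5, 0.5], {0})` stops with a single supporter - which is the kind of sensitivity that makes threshold dynamics interesting.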
I think I want other ways into this question. Some possibilities seem to include Quine's models of networks of core and peripheral beliefs, Lakoff's ideas about framing, and what I wrote before about Marshall Rosenberg.
I welcome suggestions about changing minds and isms (and everything else).