msr faculty summit: bill griswold on 'community on the go'


Bill Griswold's presentation is actually entitled "Community on the Go: The Quest for Mobile 2.0". I'm trying to ignore the "2.0" buzzword usage.

He claims web 2.0 is "democracy on the internet," or "democracy made possible on the internet," which I find a bit of a stretch. The audience here, I have to remember, is very traditional CS faculty--not, for the most part, people who live in today's web services world. So he's giving a pretty basic overview of "look! my mom can comment on my photos" and "this is what we call a 'mashup'." What are the dangers, he asks..."noise, misinformation, you can't escape your past." Those are no different from physical space, he says (I'd disagree, particularly on the last item).

I suspect I'm not going to learn much here, but this might be a good presentation to point some of my less web-savvy colleagues towards.

Mobile 2.0 is Web 2.0 everywhere, all the time. However, it's in a "divided attention" context. What kind of democracy do you get in a divided attention context, he asks. (Well, our students are already dividing their attention, even at a larger screen. It's a reality, not a possibility.)

The web experience is incomplete, he says. (Andy Phelps is typing on his iPhone next to me right now, and raises his eyebrows, flashing me the quite-complete Safari browser he's looking at.)

He talks about hardware shortcomings, but the issues he mentions (processor speed, graphics capability, short battery life) are mostly short-term problems.

Says Twitter is "Mobile 1.5". Makes some claims that I wonder about..."most Twitters come from the web", for example. That's exactly the kind of research I'd like to do. Most people generalize about Twitter based on their own and their social network's use of the tool.

Where we could really be in mobile 2.0 is "augmenting the real-world commons." Adds the idea of "microtasking," "proactivity" (where your phone lets you know that you should be paying attention to something), "context awareness," "in situ computing," and "public displays."

Added dangers--trying to sip from a fire hose, the stalking aspect of context awareness. (Leaves out corporate "stalking" and data gathering; do we really want Google, or AT&T, to be able to track our every move?)

In the microtasking discussion, he makes some claims about Flickr (compared to Dropshots, which he demos) that I think are incorrect--for example, that Dropshots organizes photos by date but Flickr only gives you a linear flow. In fact, Flickr has many ways to view photos, many pivot points (including day, date, month, year, tag, etc.).

Talks about "in situ computing" in the context of his active classroom work. Allowed students to ask questions anonymously in a backchannel (one of the best quotes I've heard about backchannels was his line that it "prioritizes the question, not the questioner"). Love, love, love what he's doing with the use of mobiles and SMS in this context. Great stuff. They've got an "ActiveCampus Explorer" for the mobile that looks very cool. Campus map, local chat/messaging, etc.

Discusses some context aware apps they've been working on--for example, location-based reminders. (e.g. When I get home, remind me to call my Mom.) What if you could leave annotations related to a place ("this restaurant has great hot cocoa") for a specific person, or a group of people connected to you--and they'd see it only if/when they went to that place, or searched for notes related to that place. This is the kind of mobile app I'm really fascinated by. They studied this, and found that people used location as a proxy for other concepts (busy/not busy). Found that it calmed people--"it was a relief knowing I would be reminded." Interesting, and relates to the whole David Allen "open loop" concept.
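The core of a location-based reminder like the ones he describes is just a geofence check: compare each location fix against the reminder's anchor point. A minimal sketch of that idea (the coordinates, radius, and function names are my own illustration, not anything from his system):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def due_reminders(reminders, lat, lon, radius_m=100):
    """Return reminders whose anchor location is within radius_m of this fix."""
    return [r for r in reminders
            if haversine_m(lat, lon, r["lat"], r["lon"]) <= radius_m]

reminders = [
    {"text": "Call Mom", "lat": 32.8801, "lon": -117.2340},        # "home"
    {"text": "Return library book", "lat": 32.7157, "lon": -117.1611},
]
# A fix ~25 m from the first anchor triggers only that reminder.
hits = due_reminders(reminders, 32.8803, -117.2341)
```

The same check, run against annotations left by people connected to you, would give the "notes attached to a place" behavior described above.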

Proactivity--augmenting peripheral perception. They used something called "PeopleTones", which played a friend's unique sound when s/he was nearby. The first problem is detecting proximity reliably using cell-tower triangulation. The second is conveying the alert unobtrusively; they used a short personal sound clip and vibration. Two novel vibration encoding algorithms were used ("think microMorse code"). Nature sounds were not effective for identifying who it was, but music was very effective (whether the music was chosen by the user or the target). Even though they expected music to be more disruptive, users found it more helpful than annoying. Even if they didn't act on the knowledge, they "liked to know" that someone was nearby. (This is important. Ambient presence. He notes, and I agree, that this is also the appeal of Twitter.)
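A PeopleTones-style cue comes down to smoothing the noisy tower-derived distance estimate and firing the friend's clip once per approach, not on every fix. A toy sketch of that logic (the class, thresholds, and cue names are my own assumptions, not their implementation):

```python
from collections import deque
from statistics import median

class ProximityAlerter:
    """Toy sketch of a PeopleTones-style cue: smooth noisy distance
    estimates (e.g. from cell-tower triangulation) and fire a friend's
    unique cue once when they come into range; re-arm when they leave."""

    def __init__(self, cue, enter_m=200, exit_m=400, window=5):
        self.cue = cue            # friend's personal sound/vibration id
        self.enter_m = enter_m    # fire below this smoothed distance
        self.exit_m = exit_m      # re-arm above this (hysteresis)
        self.recent = deque(maxlen=window)
        self.armed = True

    def update(self, distance_m):
        """Feed one (noisy) distance estimate; return the cue id if fired."""
        self.recent.append(distance_m)
        d = median(self.recent)   # median damps triangulation noise
        if self.armed and d <= self.enter_m:
            self.armed = False
            return self.cue       # play short clip + vibrate here
        if not self.armed and d >= self.exit_m:
            self.armed = True     # friend has left; allow a future alert
        return None

alerter = ProximityAlerter(cue="melissa_theme")
fired = [alerter.update(d) for d in [900, 500, 300, 150, 120, 110]]
# Only the last estimate pushes the smoothed distance into range.
```

The enter/exit gap (hysteresis) is what keeps a friend hovering near the boundary from triggering the cue over and over.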

"When I was going to Bob's birthday, I know who was there when I pulled up because of the ringtones."

"I could tell if Melissa was home when I passed by Claremont." (Hmmm...that definitely brings up the stalking question!)

Mashup idea that came out of this: Mashup PeopleTones with Place-Its ("pounce" on someone you need to talk to about something).

(Note to talk to Kevin Li about this later this week; he's an intern at MSR this summer.)

Moves on to the topic of "community-based context awareness"--what if everyone carried a carbon dioxide sensor that could measure atmospheric conditions and report them to a central server, so that aggregate information was available? (Hmmm...this seems like it would be awfully easy to game if you were an unethical industry person. Couldn't you spam inaccurate information intentionally?)
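One obvious defense against exactly that kind of spamming is robust aggregation: range-check readings and publish the median per region rather than the mean, so a handful of spoofed reports can't drag the published figure around. A hedged sketch (the plausibility bounds and numbers are illustrative, not from the talk):

```python
from statistics import median

def aggregate_readings(readings, lo=300.0, hi=5000.0):
    """Aggregate crowd-sourced CO2 readings (ppm) for one map cell.
    Discard physically implausible values, then take the median so
    a few spoofed reports can't move the published number much."""
    plausible = [r for r in readings if lo <= r <= hi]
    if not plausible:
        return None
    return median(plausible)

# Honest readings near 420 ppm, plus three spoofed low reports:
readings = [415, 422, 418, 430, 1, 1, 425, 2]
# The naive mean of all eight values is ~264 ppm; the range-check
# plus median yields 422 ppm, ignoring the spam.
result = aggregate_readings(readings)
```

A determined attacker with many colluding devices could still shift a median, so real deployments would also need per-device reputation or cross-validation against neighbors; this only raises the bar.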

Also discusses "RealityFlythrough"--multiple cameras viewing the same scene, stitched together into a single immersive coherent view (basically video Photosynth, it sounds like). Really interesting idea/demo.


This page contains a single entry published on July 16, 2007 2:07 PM.
