Posts tagged ‘social networks’

The sociological breakthrough of Google+

My last Facebook update said:

Too busy playing on Google+ to check FB

And that was five days ago.

The truth is that I’ve long been too busy doing anything to check Facebook. I’ve secretly, and increasingly, loathed Facebook since I joined it, which was relatively early (beginning of 2007, I believe), because my beat at The Economist back then was Silicon Valley, and it was simply part of my job to be fiddling with stuff like this. (I’m not the only one loathing FB, apparently.)

Ah, 2007. That seems like a distant era now. I still recall meeting Mark Zuckerberg, who was not yet used to meeting anybody, much less the heads of state and glitterati that surround him now, and who was awkward even by the standards of Silicon Valley’s skewed autism spectrum. (Here is the profile I wrote about him soon after that meeting.)

So anyway, I was and remained “on Facebook”, the way one just is. How could I not be? But I was almost entirely passive (observing incoming updates without sending outgoing ones). And I was proud of my wife, who is savvy in such matters and simply said ‘I’ll sit this one out’. She never signed up.

Why this skepticism?

Because Facebook is fundamentally (=unalterably) indiscreet.

And it is fundamentally indiscreet because it is architecturally indiscrete. (Forgive me that word play.) Meaning: you cannot distinguish easily between different degrees of intimacy among the people in your social graph. The various relationships are not discrete, not separate.

Mark’s vision (as he told it to me back then, and as I described it in my early profile) was to be a “mapmaker” (like the heroic explorers of the Renaissance) of human connections. To him that was an algorithmic challenge. I always knew that his premise was unsound sociologically.

Tell me: In real life, how often do you walk up to somebody and request to be “friends”, then begin “sharing” pictures of your naked baby?

How wonderfully warm and fuzzy do you feel when somebody (oh yes, wasn’t he on my soccer team 30 years ago? Or perhaps I vomited on him at that keg party in 1989?) stops you on the street, asks to be “friends”, then shares his baby pictures with you?

Mark has been asking us all to do exactly this sort of thing. I thought it was strange back then, and I said so in our pages. (The picture at the top of this post is from that old piece.) But — did I mention? — that was in 2007. A different era, as I said.

Facebook then put us all on a roller coaster of “privacy” policies. (We’ve discussed some of them on this blog.) It got more and more confusing, and simultaneously boring. Who wants to put in the time to learn what Mark is up to now?

Plus: the page started to look like Times Square in the 1970s. (Remember, aesthetics really, really matter to me.)

So now we have Google+. It has not even officially been launched yet, but seems to have passed 18 million users today. We all thought that sheer fatigue would keep us from filling out yet another profile. But lo, everyone I know is already there, and we’re playing happily. Even my wife is trying it out.

Google+’s crucial innovation (among many others existing or planned) is Circles. You can make as many of them as you like. They can contain 1 person, 2 people, the Dunbar number, or the entire web. Because there are things you want to share with just one person, or with 2, or with lots, or with everybody (as on WordPress).

Ergo: Discrete → discreet

You also don’t have to ask anybody to be your “friend”. Nor do you have to reply to anybody’s “friend request”. You simply put people into the discrete/discreet spheres they already inhabit in your life.
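To make the Circles model concrete, here is a minimal sketch of the idea in Python. The circle names and members are invented for illustration; this is the sharing logic as I understand it, not Google’s actual implementation:

```python
# Illustrative sketch of the Circles model: each circle is a named set of
# people, and a post's audience is the union of whichever circles you pick.
circles = {
    "family": {"anna", "ben"},
    "colleagues": {"carol", "dave", "ben"},
    "acquaintances": {"erin", "frank"},
}

def audience(selected_circles):
    """Return the set of people who can see a post shared to these circles."""
    people = set()
    for name in selected_circles:
        people |= circles[name]
    return people

# The naked-baby photo goes to family only; a work link goes wider.
print(sorted(audience(["family"])))                # ['anna', 'ben']
print(sorted(audience(["family", "colleagues"])))  # ['anna', 'ben', 'carol', 'dave']
```

The point of the design is that one person can sit in several circles at once (Ben above is both family and colleague), which is exactly how relationships work offline.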

Quite a few of us — Nick Bilton at the NYT, for example — seem to be optimistic that this is the beginning of a good trajectory. (Nothing new should be evaluated by what it is today. What matters is what it will become tomorrow.)

Now, if you had asked me which company I considered least likely to come up with such a sociologically simple and elegant solution, I might well have answered: Google.

Its founders and honchos worship algorithms more than Mark Zuckerberg does. (I used to exploit this geekiness as “color” in my profiles of Google from that era.) Google then seemed to live down to our worst fears by making several seriously awkward attempts at “social” (called Buzz and Wave and so forth).

But these calamities seem to have been blessings. Google seems to have been humbled into honesty and introspection. It then seems to have done the unthinkable and consulted not only engineers but … sociologists (yuck). And now it has come back with … this.

Pew and me, “imagining the internet”

The Pew Internet & American Life Project invited me to participate in the next iteration of their serial “expert” reports on the future evolution of the Internet.

The questions themselves were interesting and telling, and I thought I might share them with you and let you know how I answered. (I look forward to finding out what all the other participants said when “Future of the Internet” is published by Cambria Press.)

The questions were “tension pairs” of alternative scenarios around the following themes:

  • Human intelligence
  • Reading and writing skills
  • Social and human relationships
  • The Internet’s “end-to-end principle”
  • Desktop versus cloud computing
  • The next takeoff technologies

Human intelligence

Here is one tension pair (their words):

By 2020, people’s use of the internet has enhanced human intelligence; as people are allowed unprecedented access to more information, they become smarter and make better choices. Nicholas Carr was wrong: Google does not make us stupid.

Or:

By 2020, people’s use of the internet has not enhanced human intelligence and it could even be lowering the IQs of most people who use it a lot. Nicholas Carr was right: Google makes us stupid.

I chose alternative 1 and elaborated (my words):

What the internet (here subsumed tongue-in-cheek under “Google”) does is to support some parts of human intelligence, such as analysis, by replacing other parts, such as memory. Thus, people will be more intelligent about, say, the logistics of moving around a geography because “Google” will remember the facts and relationships of various locations on their behalf. People will be better able to compare the revolutions of 1848 and 1789 because “Google” will remind them of all the details as needed. This is the continuation ad infinitum of the process launched by abacuses and calculators: we have become more “stupid” by losing our arithmetic skills but more intelligent at evaluating numbers.

Reading skills

Here is another tension pair (their words):

By 2020, it will be clear that the internet has enhanced and improved reading, writing, and the rendering of knowledge.

Or:

By 2020, it will be clear that the internet has diminished and endangered reading, writing, and the intelligent rendering of knowledge.

Here I chose alternative 2 and elaborated (my words):

We are currently transitioning from reading mainly on paper to reading mainly on screens. As we do so, most of us read more, in terms of quantity (word count), but also more promiscuously and in shorter intervals and with less dedication. As these habits take root, they corrupt our willingness to commit to long texts, as found in books or essays. We will be less patient and less able to concentrate on long-form texts. This will result in a resurgence of short-form texts and story-telling, in “Haiku-culture” replacing “book-culture”.

Friendship and intimacy

Here is another tension pair:

In 2020, when I look at the big picture and consider my personal friendships, marriage and other relationships, I see that the internet has mostly been a negative force on my social world. And this will only grow more true in the future.

Or:

In 2020, when I look at the big picture and consider my personal friendships, marriage and other relationships, I see that the internet has mostly been a positive force on my social world. And this will only grow more true in the future.

And again I chose alternative 2, but said:

The question presents a false dichotomy: Technology has no impact whatsoever in the long term on human relationships. What it does is to facilitate some aspects of them for a time (thoughts with letters, speech with telephony, updates with social networks, nearness-awareness with geo-location, etc) at the expense of outrunning the etiquette and courtesy protocols of the previous generation (disturbance during dinner time with telephony, privacy and discretion with social networks and geo-location, et cetera). Over time, etiquette catches up (or evolves), but efficiency advances elsewhere. But throughout, people remain responsible for their human connections–ie, the commitments in time and trust they make to others and their expectations of reciprocity.

Privacy and “sharing”

One more tension pair:

By 2020, members of Generation Y (today’s “digital natives”) will continue to be ambient broadcasters who disclose a great deal of personal information in order to stay connected and take advantage of social, economic, and political opportunities. Even as they mature, have families, and take on more significant responsibilities, their enthusiasm for widespread information sharing will carry forward.

Or:

By 2020, members of Generation Y (today’s “digital natives”) will have “grown out” of much of their use of social networks, multiplayer online games and other time-consuming, transparency-engendering online tools. As they age and find new interests and commitments, their enthusiasm for widespread information sharing will abate.

And again, I chose alternative 2 and elaborated:

The human maturation process does not change because of a new technology. Starting before we left the savannahs, the young members of Homo “Sapiens” have over-shared in order to make themselves socially interesting to the group and to potential mates, only to discover the enormous risks involved when shared information reaches malicious individuals or a group at large, at which point they have re-learned the discretion of their parents. Thus sharing on the internet will continue on its present trajectory: more will be shared by the young than the old, and as people mature they will share more banal and less intimate information.

The other topics didn’t interest me quite as much, although I gave my opinions. Regarding the question of “cloud computing” versus PC-based computing, I made my thinking quite clear when Apple’s support team gave me ample (in terms of time) opportunity to ponder it.

Can’t wait to hear what you guys think.


More primatology

The folks at 850 KOA, a Colorado news-radio channel, called me up this morning to chat more about friendship and human group size in the age of Facebook. Here is our conversation.

(If you’re just arriving, this is about a piece I wrote in The Economist called Primates on Facebook. I blogged about the back story here.)

Update: Cameron Marlow, whom I described as Facebook’s “in-house sociologist”, has now posted his back story with lots, lots more data and detail and analysis.


Primates on Facebook

I got 500 friends on Facebook

A lot of bloggers are picking up my piece in today’s issue of The Economist on the possible bio-sociological conclusions to be drawn from Facebook data. A few examples are here, here, here, here and here.

The point of the piece: to add a tiny bit to the research debates about human group size.

Just a few words on the backstory of this article: I got the idea in December while chatting to Sheryl Sandberg, the COO (“chief operating officer”) of Facebook. We were just shooting the breeze when I thought of the Dunbar Number, one of my favorite talking points, and a conceit that I’ve used before. Robin Dunbar, an anthropologist, hypothesized that primates form groups to the extent that their brains can compute the many relationships among group members. I think what I like about it is the originality of extrapolating from the ratio of neocortex size to group size among other primates to Homo Sapiens.
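Dunbar’s extrapolation is, at bottom, a log-log regression: fit a line through log(group size) against log(neocortex ratio) for other primates, then read off the prediction at the human ratio. Here is a sketch of that procedure; the primate data points and the human ratio of 4.1 are rough stand-ins for illustration, not Dunbar’s published numbers:

```python
import math

# Illustrative (neocortex ratio, mean group size) pairs for other primates.
data = [(2.0, 14.0), (2.6, 30.0), (3.2, 55.0), (3.8, 95.0)]

xs = [math.log10(r) for r, _ in data]
ys = [math.log10(g) for _, g in data]

# Ordinary least-squares fit of log(group size) on log(neocortex ratio).
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Extrapolate to an approximate human neocortex ratio.
human_ratio = 4.1
predicted = 10 ** (intercept + slope * math.log10(human_ratio))
print(round(predicted))  # roughly Dunbar-number scale, on the order of 100-150
```

With real primate data the fitted line lands near Dunbar’s famous figure of about 150; the sketch just shows the shape of the argument.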

Anyway, we all went off over the holidays, and when we came back from the break, Larry Yu at Facebook put me in touch with Cameron Marlow. He had got some complicated PhD title that amounted to, in his words, “computational sociology” and is now the “in-house sociologist” at Facebook. (Since Mr Crotchety and I think alike–great minds?–he serendipitously emailed me an article about the rise of computational social scientists that same week. If you don’t know Mr Crotchety, you haven’t been reading The Hannibal Blog enough.)

What I wanted from Facebook was numbers that might advance the debate on human group size. It was really difficult. Marlow and his team came back with one set of charts that I could not decipher without help. “Simpler”, I said, which will not surprise regular readers. Eventually, they produced a chart that I thought was simple enough.

Here it is: economist-median-network-size (Incomprehensibly, WordPress does not allow me to embed a PDF chart into my blog, so please click through.)

I like it. It shows three diverging lines: the blue for the number of people we track passively (by clicking on their profiles or status updates, say); the green for the number of people to whom we respond (by commenting on a status update, say) but who don’t respond back; and the red for the number of people with whom we communicate two-way (by chatting, emailing, exchanging wall posts, etc).

The conclusion: the more active or intimate the network, the smaller and more stable, no matter how many “friends” you have on Facebook. I wrote my piece around that chart.
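The three layers in the chart can be derived from raw interaction events in a straightforward way. This is a hedged sketch of that classification; the event records and labels are invented for illustration, since Facebook’s actual pipeline is not public:

```python
# (source, target, kind) events: 'view' = passive click on a profile,
# 'comment' = a response, 'message' = part of an exchange.
events = [
    ("me", "anna", "message"), ("anna", "me", "message"),  # mutual exchange
    ("me", "ben", "comment"),                              # I respond, ben doesn't
    ("me", "carol", "view"), ("me", "dave", "view"),       # passive tracking only
]

outgoing, incoming, viewed = set(), set(), set()
for src, dst, kind in events:
    if src == "me" and kind == "view":
        viewed.add(dst)
    elif src == "me":
        outgoing.add(dst)
    elif dst == "me":
        incoming.add(src)

two_way = outgoing & incoming            # red line: mutual communication
one_way = outgoing - incoming            # green line: one-way responses
passive = viewed | outgoing | incoming   # blue line: everyone tracked at all

print(len(passive), len(one_way), len(two_way))  # 4 1 1
```

Run over real data, the same set arithmetic would produce the three diverging lines: the passive set grows with your friend count, while the two-way set stays small.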

To my surprise, my editor then took an extreme interest in gender differences. So I went back to Facebook and had Marlow produce another chart, this one: active-relationship-size-by-gender

It was still simple, but broke out the same information by gender. Yes, just as you thought, women are more social than men.

To my surprise, the editor didn’t go with any of the charts. What can you do? So I went back to Facebook to get actual numbers. (It was like getting blood out of a stone at this point.) And so we ended up putting those numbers in the text.

Long story short, the piece has no chart but it still gets the point across: No matter what technology you use, we still seem to interact with the same small number of people as ever before. Hence the rubric (subtitle):

Even online, the neocortex is the limit

Anyway, if you’re interested look at the charts and see if that gives you more ideas.
