Global North vs Global South: Haves and Have Whats?

I have worked in what I consider the nonprofit sector for almost fifteen years. My current employer is a research and communication center within a university, so some might argue I'm in the academic sector now. However, my program (and my work) is funded by USAID and operates in the global public health sphere, which makes me feel like I work for a non-governmental organization. But that's not the nomenclature problem I want to talk about. The problem is with the terminology surrounding the global distribution of wealth, power, and certain kinds of economic development.

I’m not denying that there are inequalities in play—countries that give or receive aid, export more than they import, have or don’t have certain kinds of industry and infrastructure, or are above or below the global gross domestic product per capita average. But I think it’s a false dichotomy, and the nomenclature around it is deeply unsatisfactory.

Right now the in-vogue term for countries that (for lack of a better term) I shall call the “economic-industrial-have-nots” is “the Global South”, or just the South. These countries, and the people who live there, are called Southern. Communication and cooperation between them is called “South-South”. This makes my teeth hurt, because of geography. Here’s a map from Wikipedia of the countries above and below the average GDP per capita line.

[Map: countries above and below the average GDP per capita, via Wikipedia]

Yep, a lot of the blue (more-money-than-average) countries are in the northern hemisphere–which by the way includes nearly all of Asia and about half of Africa (I’m not sure, because my brain has been warped by the Mercator projection). There are a lot of blue countries in the southern hemisphere, too. Imprecision bothers me.

I don't object to having gotten rid of the term "third world countries"–I don't hear it any more from people in my professional space. "Developing countries" was in vogue for a while, which seemed better, but then as the director of my project noted the other day, "It's not like a country crosses some magical line and doesn't have any more progress to make." Some people were using "emerging markets" for a while (and might still be), but I find that pretty insulting–as though people in the international development sector are there solely for the purpose of selling people things. (I'm not saying that isn't *a* reason. But it's not the only reason. And it's certainly not my primary reason for doing the work I do.)

I think I’m also irritated because the “rich/poor”, “industrial/agrarian”, “democracy/dictatorship” dichotomies deal in such a narrow sphere of human value. They all split the countries up and attempt to name them as two groups by reducing people to dollars, or voters, or oppressed masses. I think it’s too simple.

And yes, I recognize that my discomfort with the nomenclature is a First World Problem, and I’m having all kinds of guilt about my carbon footprint and disproportionate consumption of all sorts of resources. But here’s a totally different map–a scale, not a dichotomy:

[Map: the Happy Planet Index]

This is the Happy Planet Index map. Despite the name, it's mostly a measure of ecological footprint.

Surprisingly, I'm having trouble finding a map of happiness, or fulfillment, or peace, or connection, or time with family, or any of the other things that, to me as a person, count toward my quality of life.

So, I’m on the lookout for an evolution in the nomenclature. I’ll keep you posted.

Update! Sept. 25, 2013: No evolution in nomenclature, but a new report on global happiness from the Sustainable Development Solutions Network (SDSN), a nice post about it on Columbia University’s Earth Institute website, and a digital publication version. Sadly, still no map.

Update 2! Oct. 15, 2021: Nomenclature!! I just heard the term “Majority World” for the first time. It’s a great phrase, although I’m struggling with some imprecise nuances. I’m also annoyed that I haven’t heard it before, despite it having been coined no later than 2009 (the publication date on the article I link to above). Diffusion of innovations is an interesting thing.

Dubious Milestone: 1,000 spam comments

I would like to thank all the marketers of Cialis, knockoff designer handbags (especially Louis Vuitton), and SEO "services" for their interest in my blog. I'm touched. But since I have better things to do than moderate spam comments (for example, anything else I ever do), I have closed comments.

While I’m strongly in favor of a participatory atmosphere and an open exchange of ideas, this blog has received over 1,000 spam comments, and zero real comments. If you’re a real human who actually wants to talk to me (without me buying anything from you or accepting membership in your malware-enslaved spam-spewing botnet), you’ll find ways to contact me via the About page.


Why I Became a Knowledge Manager

In 2003, my then once-and-future-boss Piers Bocock gave me a new title, “Knowledge Manager.” It felt comfortable and satisfying. Up until that point, my career had been fairly accidental and unintentional, driven mostly by other people wanting me to work for them, more than by my own professional ambitions. It wasn’t until years later that I realized the groundwork for my knowledge-manager-ness had been laid in the 1970s.

When I was seven years old and living in England, the BBC aired James Burke’s series Connections, which is about the history of technological change. At the end of the last episode of the first series, Burke gives a monologue about the importance of computers. During much of the monologue, the camera is focused on his face, as though he is speaking directly and personally to the viewer.

He talks about computers, power, and the process of “helping people toward knowledge” as keys to the future. Remember, this was produced when computers were just emerging from the military/financial/corporate sphere. The Apple II and the TRS-80 were released the year before the program aired, but in 1980 when I was in 5th grade and we got an Apple II, I was the only kid in my class with access to a “home computer”. So his prescience here is remarkable.

When I re-watched the series in my mid-30s, I realized I had absorbed his advice whole-heartedly. Listening to that closing monologue, I felt like he was reciting something I had memorized years ago, something which had become a core piece of my personal value system, though I had no conscious memory of where it had come from.

The words of James Burke, from Connections, Episode 10: “Yesterday, Tomorrow, and You” (http://www.youtube.com/watch?v=kv3pBAlisVA; my slightly edited transcript below starts at 40:56 of the video.)

If Part One of the specialization of knowledge happened in the 15th century when Johann Gutenberg came up with the printing press and helped scientists to talk their own kind of gibberish to each other on the printed page, easier than they’d ever done it before, then this [the computer] is Part Two. Only this is no book that you can leaf through and get a rough idea of what it’s talking about.  This is the future. Because if you tell a computer everything you know about something, it will juggle the mix and come up with a prediction: Do this, and you’ll get that. 

And if you have information and a computer, you too can look into the future—and that is power. Commercial power, political power, power to change things. You want some of that power? Easy. Go get yourself a PhD. Otherwise, the way things have become, forget it.

… But never mind the machinery. What about the stuff this lot uses, the raw material that will change our future in ways you will never believe—information. Not the facts, it’s too late for that. What you do with the facts. Because there you’re into probability theory, choosing one of the alternate futures and actually making it happen. And how does the man in the street get involved in that game? He doesn’t.   

So when the next major change comes out of the computers, double-checked and pre-packaged, it looks increasingly like you’ve only got two options open to you. 

(1) Do nothing. Stick your thumb in your mouth. Switch your mind to neutral. 

(2) Do what people have done for centuries when machines did things they didn’t want: Overreact. Strike out. Sabotage the machines for good. Do you want that? [Somewhat overwrought montage of smashing and exploding technology.] But once you start, can you stop? Is our technology so interconnected that when you destroy one machine, you automatically trigger total destruction of the entire life-support system?  

Well, that’s no better a solution than any of the others, is it? So, in the end, have we learned anything from this look [the entire 10 episode series] at why the world turned out the way it did  that’s of any use for us, in our future? Something, I think. That the key to why things change is the key to everything: How easy is it for knowledge to spread? And that in the past, the people who made change happen were the people who had that knowledge—whether they were craftsmen or kings. 

Today, the people who make things change, the people who have that knowledge, are the scientists and the technologists who are the true driving force of humanity … [I cut out a bit here about art/politics – SP]

Scientific knowledge is hard to take [compared to the products of human emotion–art/literature/politics], because it removes the reassuring crutches of opinion and ideology, and leaves only what is demonstrably true about the world. And the reason why so many people may be thinking about throwing away those crutches is because thanks to science and technology they have begun to know that they don't know so much. And if they are to have more say in what happens to their lives, more freedom to develop their abilities to the full, they have to be helped toward that knowledge that they know exists, and that they don't possess.

And by “helped toward that knowledge”, I don’t mean “Give everyone a computer and say ‘Help yourself’.” Where would you even start?  

No, I mean: Try to find ways to translate the knowledge, and to teach us to ask the right questions. See, we are on the edge of a revolution in communications technology that is going to make that more possible than ever before. Or, if it [the translation/helping] is not done, to cause an explosion of knowledge that will leave those of us who don’t have access to it as powerless as if we were deaf, dumb, and blind. And I don’t think most people want that. 

 So, what do we do about it? I don’t know. But maybe a good start would be to recognize within yourself the ability to understand anything, because that ability is there, as long as it’s explained clearly enough. And then go and ask for explanations. And if you’re thinking right now “What do I ask for?”, ask yourself if there’s anything in your life that you want changed. That’s where to start. 

[Screen shot from 47:42 of "Yesterday, Tomorrow, and You": James Burke's comforting smile in the face of a daunting future]

At the end of this very serious, furrowed-brow, issues-of-importance monologue, he smiles, very slightly. It is an avuncular and gentle and kindly smile, with a hint of knowing in it. I remembered the smile. He was smiling at me, through a television in London in 1978. He was speaking directly to me, and I was listening.

Jargon du Jour

Some days, I live in a world of hurt. Some meetings, conference calls, and academic papers are too much for me to take. My name is Simone, and I am jargon-sensitive.

Firecrackers. Bio-break. Grasstops. Fireballs[1]. Realtime. Synergy. Deep Dive. Gamechangers, and their ancestral Paradigm Shifts. Leverage. Sustainable. Animert.[2]

Words like these hurt me. When I say them, I feel dirty. (I say them anyway, sometimes, because other people in my professional space expect to hear them.) When I hear them, it’s worse than nails-on-a-chalkboard; it’s like stepping barefoot on a tiny piece of glass.

A few months ago I was exposed to the word "exnovation", and it literally made my palms sweat with linguistic consternation. It was used in the sense of "to improve something by removing outdated or superfluous features"—a process I applaud. It's the word I have a problem with. Let's take a look at the pieces (with some help from the Online Etymology Dictionary):

innovate (v.) 1540s, "introduce as new," from L. innovatus, pp. of innovare "to renew, restore; to change," from in- "into" (see in- (2)) + novus "new" (see new). Meaning "make changes in something established" is from 1590s. Related: Innovated; innovating.

in- (2) Element meaning "into, in, on, upon" (also im-, il-, ir- by assimilation of -n- with following consonant), from L. in- "in" (see in). In O.Fr. this often became en-, which usually was respelled in English to conform with Latin, but not always, which accounts for pairs like enquire/inquire. There was a native form, which in W.Saxon usually appeared as on- (cf. O.E. onliehtan "to enlighten"), and some verbs survived into M.E. (cf. inwrite "to inscribe"), but all now seem to be extinct. Not related to in- (1) "not," which also was a common prefix in Latin: to the Romans impressus could mean "pressed" or "unpressed."

ex- Prefix, in English meaning mainly "out of, from," but also "upwards, completely, deprive of, without," and "former;" from L. ex "out of, from within," from PIE *eghs "out" (cf. Gaul. ex-, O.Ir. ess-, O.C.S. izu, Rus. iz). In some cases also from Greek cognate ex, ek. PIE *eghs had comparative form *eks-tero and superlative *eks-t(e)r-emo-.

So, following these pieces, we see that “innovation” means “the process of imbuing something with newness”, and thus can parse “exnovation” as “to remove or expel newness”. I don’t think that is what the coiner (apparently A Sandeep, or so his blog states) meant. I think he meant “edit” or “improve” or “iterate”, all of which are words that cause me no central nervous system distress.

I don’t mind people coining new words for new ideas (like “meme”, which will probably get a post of its own eventually). In the case of “exnovate”, my objection is to the disregard for venerable prefixes.

[1] “Fireball” hits my jargon-nerve only when used to mean “fans” or “champions”—that is, “People who are on our side, and are vocal about it, and may attract more people to our cause”. I like the word “fireball” when it denotes big globs of flaming pitch, dragon-breath, explosions, meteors, or cinnamon-flavored jawbreakers.

[2] Honorable Mention: After years of it being common parlance, "webinar" is now just below my pain threshold, thanks in part to the comparative horror of "eSeminar".


How to Write a Useful Bug Report

I work in the space between front-end web people (who write content and talk to web users) and back-end web people (who write code and build web applications). I help wonks[1] communicate with geeks[2]. A fundamental element of success in this pursuit is the Useful Bug Report. (There’s a bit of a rant coming up, so if you want to skip straight to the how-to, here’s your link.)

The tracking system we use for bugs/feature requests/etc. works well. It isn’t perfect.[3] It leaves room for gaps in wonk/geek communication. For example, the instructions for the “Description” field say:

Put as many details as possible in the Description, including links to the pages involved, usernames/emails, and your browser version.

These instructions aren’t very helpful to a wonk who is trying to write a report that will be useful to a geek. Minute detail does not necessarily produce good bug reports. On the other hand, neither does an “I was on our website and something is broken—I clicked something, and it looked weird” approach.

I just finished writing up a more practical “How To” sheet for the wonk side of my team. I would have thought this kind of information would already be widely available, but a quick Google search for “how to write a good bug report”  led me to:

  • Specific bug reporting systems which require you to log in before giving you any information (boo), and
  • Some “how to write clear bug reports” tips from bloggers who don’t write clearly (by my standards) and/or are condescending.

Not terribly helpful. I shall now rush in to fill this information gap.

~~~

How To Write a Useful Bug Report

The way you describe a bug to developers can make a big difference to how quickly they can resolve an issue. The better the bug report, the less initial troubleshooting/diagnosis the developers need to go through.

Your “Description” field [in our internal system] should contain a little narrative that includes the answers to the following:

1. Where were you? (Website / product or application / URL / section of page)

2. Were you logged in, and if so, with what permission levels? (e.g., not logged in, logged in as user or admin)

3. What operating system and browser were you running? Include version numbers if you can (e.g., Windows XP 2002 SP 3 + Firefox 10.2.0). It can be illuminating to check a bug in multiple operating systems and browsers, and then add something like this to your description: “First encountered this on my work desktop setup, where I’m running Firefox 10.2.0 over Windows XP 2002 Service Pack 3. Tested in Chrome 17.0.963.56 m and Internet Explorer 8.0.6001.18702; same results. I also tested it from home where I’m running a (really old) Mac, OS 10.5, and Firefox 9ish; same results, only slower.”

4. Describe the bug:
a. What did you click, or what action did you take right before the bug made itself known (e.g., added something new to cart, clicked “Submit”)?
b. What did you expect to happen when you took that action?
c. What actually happened?
d. What does “fixed” look like? That is, do you want the fix to reflect the “what did you expect” state above, or something else—like hiding the broken thing, or reverting to an earlier build?
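Put together, a useful description might read something like this (a hypothetical example; the site, URL, and error details are invented for illustration): "I was adding a publication to my cart on our resource library (http://example.org/library/pub-123), logged in as a regular user. I'm running Firefox 10.2.0 over Windows XP 2002 SP 3; I saw the same behavior in Chrome 17. I clicked 'Add to cart' and expected the cart count in the header to go up by one; instead I got a blank white page and had to hit the Back button to recover. For me, 'fixed' means the item ends up in the cart and I stay on the publication page."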

This should give [our in-house development team] enough to go on to make a good start at diagnosing the issue, and save a bunch of back-and-forth emails or ticket comments.

~~~

Remember, dear reader: The above is a basic cheat sheet I wrote for my own team. Your mileage may vary. Other things like bandwidth/connection type, other programs you were running (like adblockers or portals like Blackboard), whether you were streaming Pandora at the time, etc. can also be helpful diagnostics. This isn’t the One True Way. It’s my way, today.

[1] Wonk: I use this term to mean "Expert; having a deep and detailed understanding of a particular content area which is little known by the general public." Etymological sources differ; some suggest it is drawn from "know" spelled backwards, as in "to know your material backwards and forwards."

[2] Geek: I use this term to mean "A person who is proficient in digital technologies; who knows how to write at least one type of computer code and/or is comfortable installing, troubleshooting, or repairing computer and networking hardware." As a person who identifies as a wonk and a geek (as well as a nerd and occasionally a dork), I use these terms with love and respect.

[3] None of the bug/feature tracking systems I have used are perfect. They all have quirks, or leave something out that you wish they put in, or have more features than you'll ever use.

I have a huge crush on MailChimp

On Wednesday, I found myself in need of sage advice to help manage expectations and soothe frustrations with the process of switching bulk email providers. I spent a couple hours late on Wednesday night noodling around on the MailChimp site and thinking about how much I love them, and then thinking that the Top Ten Totally Sweet Things About MailChimp (Today) would make a cheerful and possibly useful post.

First, a little back-story: Almost everything I know about bulk email best practices I learned from MailChimp. Circa 2002, I needed to convince my executive director to get permission from people before sending them bulk email, and to update our list once in a while. I was new, and green, and practically not a geek at all [1]. My advice did not sway her. MailChimp's materials did.

Google mystery, continued

Well, that was short-lived. Whatever the algorithm change was that bumped this site up (or LinkedIn, Facebook, and Google+ down), it has apparently reverted to what it was before. Again, the whale blows bubbles, and I am a perplexed bystander.

Google mystery

Google rankings are not part of my core expertise. I know a bit about how the system works—enough to know that “Can’t you just call Google and tell them to move us higher?” is a ridiculous thing to say. I like watching the rankings change, from an almost oracular perspective. I like the intellectual/synaptic sensation of trying to derive a law from observations of a complex phenomenon, even though I know I’ll never figure out all the variables from the outside.

Every once in a while, I Google my own name.[1] Partly this is a mildly paranoid due-diligence process; partly it's to see what my "Google resume" looks like; and partly it's to try, in a lackadaisical way, to gain some insight into the mysteries of Google itself. (The search-algorithm mysteries, not the mysteries of the greater Google enterprise.) Something changed today.

At long last, a product

The new multi-contributor blog for the Knowledge for Health (K4Health) Project launched today. This is the first major visible-to-the-public result of my team’s work since I started at K4Health at the end of August. There are still some tweaks to be made and kinks to iron out; I don’t think there’s a WYSIWYG CMS interface in existence that doesn’t have inconvenient idiosyncrasies. But overall, it’s an immensely satisfying product, born out of an immensely satisfying process. I learned some valuable lessons about web fonts and font smoothing, coordinating quality assurance teams who are geographically dispersed (i.e., not all working in the same time zone), the complexities of legacy development environments, and the myriad tiny pieces of promotion that follow after a product goes live. Looking forward to a restful winter break.