What I am going to try is using keywords from the most controversial topic of my generation: the Israeli-Palestinian issue. However, there is a risk: what I've noticed is that anyone who stakes out a position, no matter which side that position favours (if any), will be the subject of vicious attacks. This issue is unique in that regard; it seems to annihilate any trace of sanity in some people.
I am going to try to be the first person to speak about this issue in a way that keeps me impervious to attacks. Here I go!
1. Israel is often mispronounced “is-real” (it should be pronounced “isss-ra’el”). This doesn’t mean, however, that it isn’t real. People have noticed it.
2. “Palestine” could have been a term used to refer to friends of Albert Einstein.
That’s 2 keywords down, and I’m still untouchable. How many more can I get away with? Let’s see:
3. Zionism is what is practiced by someone who likes to keep his eye on something (get it? “he has hiS EYE ON it”). Please get your laughter under control before proceeding.
4. PLO… now this is a true story. In grade school, teachers would write “PLO” on the chalkboard when they didn’t want the contents to be wiped off. PLO in this case stood for “Please leave on.” Either that or there was widespread support for the Palestine Liberation Organization among the schoolteachers in my very hick hometown.
5. Hamas sounds like it could be made up of two words that would be offensive to Muslims (and Jews and vegetarians too).
I’m on a roll! Just one more now:
6. Benjamin Netanyahu – did his family found an internet search engine? Or maybe his last name is Hebrew for “Nathan – celebrate!” (netan – yahu!)
OK, I think I’ve exhausted your attention span. I still think I’m safe from vitriol (especially since almost no one reads this blog). But let’s see what comes up in the comments.
Tuesday, August 24, 2010
Sunday, August 15, 2010
Who are the poorest in this town?
Suppose you have come up with a way of identifying the poor people in a poor country (where income is hard to verify) in a transparent way (one reason for doing this would be social assistance targeting). How would you go about doing this? Here are a couple of options, as I see them:
1. Suppose you have household survey data that contains variables for easily observable characteristics, such as the household’s house, its assets, and characteristics of the household itself (such as the education of the household head, or location), as well as actual consumption (adjusted for regional price differences). You could try Proxy Means Testing: that is, you regress actual consumption on a set of variables (e.g. gender of household head; age of household head; age of household head squared; presence of a car, air conditioner, etc.) to arrive at a set of betas, or weights, which you would then use to predict consumption for out-of-sample households. Those with predicted consumption below a certain cutoff can be defined as poor. For this example, let us define as poor those in the bottom quintile of predicted consumption (a sketch of this scoring step follows the two options below).
One problem with this approach is that if the regression does not explain much of the variation in the data, then many errors of inclusion (non-poor being deemed poor) and errors of exclusion (poor being deemed non-poor) will result. Ideally, your PMT model would be limited to 10-14 variables, and this would limit the explanatory power of the model; the consequence is that the inclusion and exclusion errors would be significant. One mitigating fact, though, is that most of the errors of inclusion consist of those in the second-lowest quintile of consumption. In a very poor country, these people are still quite poor.
2. One idea suggested to me was to use open public meetings to determine who the poorest people are. The idea is that you’d gather some people in the village, and ask them: who are the poorest 20 percent in this village? The extent to which this would work might depend on the power dynamics in the village. The advantage is that it could allow local knowledge to inform the determination of who is in the poorest quintile.
Some modifications to this approach are worth considering. For example, one could approach several people in the village separately and ask each to pick out those they think are in the poorest quintile. If there is significant overlap in the selections, it would be reasonable to conclude that the households in the overlap are the poorest in the village.
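To make the scoring step in option 1 concrete, here is a minimal sketch in Perl (in practice I would estimate the betas in Stata first; all of the variable names and weights below are invented purely for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical PMT weights (betas), as if estimated from the household
# survey. Variable names and values are invented for illustration.
my %beta = (
    intercept      => 8.50,
    head_is_female => -0.15,
    head_age       => 0.020,
    head_age_sq    => -0.0002,
    has_car        => 0.40,
    has_aircon     => 0.35,
);

# Out-of-sample households, described only by easily observable characteristics.
my @households = (
    { id => 1, head_is_female => 1, head_age => 52, has_car => 0, has_aircon => 0 },
    { id => 2, head_is_female => 0, head_age => 35, has_car => 1, has_aircon => 0 },
    { id => 3, head_is_female => 0, head_age => 44, has_car => 1, has_aircon => 1 },
    { id => 4, head_is_female => 1, head_age => 61, has_car => 0, has_aircon => 0 },
    { id => 5, head_is_female => 0, head_age => 29, has_car => 0, has_aircon => 1 },
);

# Predicted consumption = intercept + sum of (beta * characteristic).
for my $hh (@households) {
    $hh->{predicted} = $beta{intercept}
        + $beta{head_is_female} * $hh->{head_is_female}
        + $beta{head_age}       * $hh->{head_age}
        + $beta{head_age_sq}    * $hh->{head_age} ** 2
        + $beta{has_car}        * $hh->{has_car}
        + $beta{has_aircon}     * $hh->{has_aircon};
}

# Rank by predicted consumption and flag the bottom quintile as poor.
my @ranked = sort { $a->{predicted} <=> $b->{predicted} } @households;
my $n_poor = int(@ranked / 5) || 1;   # bottom 20 percent, at least one household
for my $i (0 .. $#ranked) {
    printf "household %d: predicted consumption %.3f => %s\n",
        $ranked[$i]{id}, $ranked[$i]{predicted},
        $i < $n_poor ? "poor" : "non-poor";
}
```

With real data, the betas would come from the survey regression and the poverty cutoff from the full consumption distribution, rather than from the small scored list itself.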
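And here is a toy version of the overlap idea in option 2, with made-up informants and household ids. For simplicity, "significant overlap" is taken to mean households nominated by every informant; a looser threshold (say, two-thirds of informants) would work the same way:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical nominations: each informant lists the households (by id)
# they consider to be in the poorest quintile.
my %nominations = (
    informant_A => [qw( hh01 hh04 hh07 hh09 )],
    informant_B => [qw( hh01 hh04 hh08 hh09 )],
    informant_C => [qw( hh01 hh03 hh04 hh09 )],
);

# Count how many informants nominated each household.
my %votes;
for my $list (values %nominations) {
    $votes{$_}++ for @$list;
}

# Keep only households nominated by every informant (the overlap).
my $n = keys %nominations;
my @overlap = sort grep { $votes{$_} == $n } keys %votes;

print "nominated by all $n informants: @overlap\n";
# prints: nominated by all 3 informants: hh01 hh04 hh09
```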
I went to a small farming village in a dirt-poor Central Asian country a few days ago. I spoke with a couple of farmers and asked them how, in general, one would identify the poorest people in the village. It seems that the answer is that ‘you just know.’ One farmer took me to see a lady who, in his opinion, is the poorest person in his village.
He was right – you just know. This lady’s husband left her to become a non-remitting labor migrant. She has a disabled daughter and at least one young son. She has a house (everyone has property) with a decent-sized yard, but the kitchen and the other rooms tell the story of this family’s well-being. To get by, this lady gathers stalks, makes brooms out of them, and sells them for $0.75 each.
While there, I saw that others in her village grow crops in their yards right outside their houses (unlike the farmers, they are not forced to allocate 60 percent of their land to growing cotton). I was wondering if there is potential for this lady to do so as well. I could provide a small loan to pay for inputs (labor, seeds, fertilizer, etc.), and the farmer could provide technical advice. I can see how small social enterprises like this get started. My only concern is that the funds might be diverted to other needs (such as buying medicine and food, which would be understandable) rather than being spent solely on farming, which would not bring in revenue for some time. I’ll have to make sure I have a plan for this for my next trip out there.
Sunday, August 8, 2010
What I've been reading lately
1. McMafia: A Journey Through the Global Criminal Underworld (Vintage) by Misha Glenny. Recommend. It’s about organized crime groups in different areas of the world, how they formed, and what they do.
2. Getting Health Reform Right: A Guide to Improving Performance and Equity by Marc Roberts, William Hsiao, Peter Berman, and Michael Reich.
It is a super BORING read; they should have included more anecdotes and linked the lessons to them. Still, the content is useful.
3. Liar's Poker by Michael Lewis. Highly recommend. He is hilarious and has a great writing style.
4. Blink: The Power of Thinking Without Thinking and Outliers: The Story of Success by Malcolm Gladwell. Highly recommend. I want to re-read Blink, in fact.
Programming tips - translating docx and xlsx files using Google Translate
UPDATE: FORGET THIS POST! I wrote an MS Excel macro to do this, which you can add as an "add-in" to MS Excel. You can find instructions on how to do this, and how to obtain the .xlam file, here.
(This is basically a note to myself reminding me how to do something that I may need to do again)
For my job I (sometimes, many times) have to do programming. Mostly I use Stata, but I also use Perl and sometimes VB to get tasks done. I often get stuck, and then spend many hours searching Google for how to do something. If I'm still stuck, I'll ask one of the programmers in the programming group in another unit at work.
Sometimes I want to translate a docx file or an xlsx file using Google Translate while preserving the formatting exactly. Right now Google doesn't support the XML-based docx and xlsx formats for upload for translation. I don't understand why, since it is super-easy to do. I spent less than a week writing code in Perl to enable this. Here are the steps:
1. Change the docx or xlsx file extension to .zip.
2. Unzip the file. This will create a folder containing directories and files.
3. Find the document.xml file in the word directory (for xlsx files, the text lives under the xl directory, e.g. in sharedStrings.xml). You may wish to do the same with the footnotes and endnotes files for Word documents.
4. My code pulls the contents of this file out for translation, replacing them with placeholders and putting the extracted contents in a separate file.
5. Upload this contents file to Google Translate (translate.google.com/toolkit).
6. Download the translation.
7. I also have code to replace the placeholders with the translated content, putting the results in a new file.
8. Replace the document.xml file with the file from the previous step.
9. Zip the files back up. Here is where I was having lots of problems.
When I was zipping the files back up, I would zip the containing folder. This is the wrong way to do it - Word/Excel can't open the result. The _rels and docProps directories, the word directory, and the [Content_Types].xml file have to be at the root of the archive, not inside some containing folder. So I instead tried selecting the _rels, docProps, and word directories and the [Content_Types].xml file directly and zipping them, and after changing the extension to docx, Word was able to open the result as a Word document (Word still had to repair the document, but it could. If I use YemuZip to zip the files, then Word doesn't have to repair the document to open it.)
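For the curious, here is a rough sketch of the core of the approach in Perl. This is not the exact code I would email you: it uses the Archive::Zip module to edit word/document.xml inside the archive in place, which conveniently sidesteps the re-zipping problem described above, and the ###n### placeholder scheme and command-line interface are just for illustration. Encoding and error handling are glossed over, so work on a copy of your document:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Archive::Zip qw( :ERROR_CODES );

# usage: translate_docx.pl file.docx extract|inject segments.txt
# 'extract' swaps the document text for ###n### placeholders and writes the
# text segments (one per line) to segments.txt for upload to Google Translate;
# 'inject' substitutes the translated lines back in. Work on a copy: the
# docx file is modified in place.
die "usage: $0 file.docx extract|inject segments.txt\n" unless @ARGV == 3;
my ($docx, $mode, $textfile) = @ARGV;

my $zip = Archive::Zip->new();
die "cannot read $docx\n" unless $zip->read($docx) == AZ_OK;

# Editing the member inside the archive keeps word/document.xml, _rels,
# docProps, and [Content_Types].xml rooted exactly where Word expects them.
my $xml = $zip->contents('word/document.xml');
defined $xml or die "no word/document.xml in $docx\n";

if ($mode eq 'extract') {
    # Pull the text out of each <w:t> run and replace it with a placeholder.
    my @segments;
    $xml =~ s{(<w:t[^>]*>)(.*?)(</w:t>)}{
        push @segments, $2;
        $1 . '###' . $#segments . '###' . $3;
    }ges;
    open my $fh, '>', $textfile or die "cannot write $textfile: $!\n";
    print {$fh} "$_\n" for @segments;   # UTF-8 bytes pass through untouched
    close $fh;
} else {
    # Read the translated lines back and substitute them for the placeholders.
    open my $fh, '<', $textfile or die "cannot read $textfile: $!\n";
    chomp(my @translated = <$fh>);
    close $fh;
    $xml =~ s{###(\d+)###}{$translated[$1]}g;
}

$zip->contents('word/document.xml', $xml);
$zip->overwrite() == AZ_OK or die "cannot write $docx\n";
```

An xlsx file would be handled the same way, except that the text to swap out lives in xl/sharedStrings.xml, inside <t> elements rather than <w:t> elements.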
The result is a Word or Excel file that has been translated while preserving the original formatting exactly. Email shafique.jamal@gmail.com if you want the code. (It's in Perl.)
Monday, August 2, 2010