
How to survive the economics job market

Posted by David Smerdon on Sep 5, 2017 in Economics, Non-chess

As I’ve previously mentioned, last year I went to the economics job market. The job market is an incredible annual event that sees thousands of newly-minted PhDs and young researchers from around the world pitted against each other in a global battle for careers at hundreds of universities, government agencies, international NGOs, banks, start-ups and other multinationals such as Facebook and Microsoft. It’s huge. It’s stressful. And, for a young economist about to leave the safety of the student nest, it’s incredibly confusing.

One of the things that really struck me on the market is that not every candidate is on an equal footing when it comes to understanding the process. I found myself playing catch-up once the market opened, whereas some of my (largely US-based) peers had been trained by their universities for years before they even applied.

So I’ve put together some tips for surviving the market for this and future years’ candidates, based on my experiences. There are plenty of very good advice articles online, especially:

These articles are invaluable, not least to help ease the stress of what can be an especially trying experience for a young researcher. To these, I’m adding some advice from my recent experience as a Europe-based applicant. I’ll be giving a seminar to prospective job-market candidates from my alma mater on Wednesday, and I’ve expanded the presentation so that the slides can stand alone. Hopefully some readers find them useful. Comments/suggestions are very welcome.

(Note: If your browser doesn’t let you see the embedded pdf, there’s a download link below.)

Download (PDF, Unknown)


 

Perverse incentives

Posted by David Smerdon on Aug 12, 2017 in Economics, Non-chess

One of the cool things about studying economics is that it teaches a new and exciting way of thinking about things. This means we can look at all sorts of topics in a whole new light, spot problems that we never knew were even there, and come up with efficient solutions. And THAT means every now and then we come across an everyday, routine situation and say, “Err, what?”

This has happened to me a few times recently, all related to the problem of incentives. One of the neat areas in the economist’s toolkit is contract theory, which came to public prominence last year when two of its champions were awarded the Nobel prize. Contracts and incentives govern most of the relationships in our everyday lives, both formal (such as with our employer, our bank, the government…) and informal (neighbours, strangers, and of course, partners). And it’s absolutely critical to get the incentives right. Normally, they’ve sorted themselves out over the years, either through social norms or legal obligations. But every now and then, we come across situations where the incentives just don’t line up.

One example from earlier this week was when I was investigating customs clearance when moving back to Australia. Shipped goods are inspected by customs officials, and – if they decide the goods don’t meet the test – they can charge you for treatment or destruction of your property. That is, the officials, and only the officials, get to decide whether or not you have to pay them more money. Now in this case, we could assume (probably correctly) that the officials are acting as unbiased members of a government organisation in which there’s no corruption going on, and so the inspection decisions are made on the merits alone. But part of designing good contracts is reducing the potential risks of corruption developing in the future, and here the incentives don’t really line up.

Not that this will affect my shipping decisions. But fast-forward to yesterday when I started investigating buyer’s agents, a whole industry I previously didn’t even know existed. Basically, real estate agents are employed by would-be home buyers to search for property on their behalf, negotiate the price and execute the sale. Sounds good in principle, especially for the busy worker who doesn’t have the time or know-how to search themselves. But the fee structure is truly baffling. Buyer’s agents charge a (quite sizeable) percentage of the sale price – say, 3% of the final amount you pay for the house. That means when they’re searching for property and negotiating with the seller, they make more money if they get a worse deal for you. The incentives are almost exactly aligned in the wrong direction. How does this possibly make sense?!

Granted, I’m new to this field, but the more I’ve searched online the surer I am that the incentive system in this industry is really perverse (to use the economic term). So then I started to think about what sort of contract would actually align the incentives of agent and buyer. There are a couple of better alternatives to the status quo, though it turns out this problem is not as simple as it appears.

A fixed-fee model is the easiest, and a minority of agents do use this. But this means there aren’t any monetary incentives for the agent to really work hard to get you the best deal. (There are some reputational benefits of course, but that’s another story.) For the buyer who’s risk-averse and just wants to avoid getting screwed over, however, let’s see if we can do better. First, the agent searches for properties within the buyer’s strict criteria (price range, location, size etc). Then, once the buyer has settled on one she likes, she agrees to pay, in total, the listed property price. The agent’s fee is then whatever amount below the asking price the agent can negotiate. E.g. imagine the listed price is $500,000 and the agent negotiates it down to $490,000. The buyer pays $490,000 to the seller and $10,000 to the agent. The point is that the buyer knows exactly what her total cost will be before the negotiation takes place, reducing the uncertainty, as well as reducing the risk of the agent acting against the buyer’s interests.
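The fee arithmetic of the two models can be sketched in a few lines (the function names and the 3% rate are just for illustration):

```python
def percentage_fee(sale_price, rate=0.03):
    """Status-quo model: the agent earns a share of the final price,
    so a *worse* deal for the buyer pays the agent more."""
    return sale_price * rate

def negotiation_fee(list_price, sale_price):
    """Proposed model: the buyer commits to paying the list price in
    total; the agent keeps whatever discount they negotiate."""
    return list_price - sale_price

# The worked example: listed at $500,000, negotiated down to $490,000.
print(percentage_fee(490_000))             # 14700.0 -- and this *rises* if the buyer pays more
print(negotiation_fee(500_000, 490_000))   # 10000 -- rises only as the buyer's deal improves
```

Under the second function, every extra dollar the agent negotiates off the price goes straight into the agent’s pocket, which is exactly the alignment the post is after.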

It’s not a perfect solution, though. This contract might influence the agent’s selection of properties to present to the buyer; maybe we’ll only get to see properties that the agent thinks are overpriced (and therefore easily negotiated downwards). Can we do better?

The solution I came up with is complicated, but we can outsource most of it to a computer (in theory, anyway). First, the buyer comes up with a list of strict criteria for the perfect property. E.g. up to $700,000, within 4km of the office, near to parks and schools, at least 3 bedrooms, living space of at least 150 square metres, blah blah. Then there’s a second list of “optional extras” that the buyer likes: a garden would be nice, a pool would be amazing, more than one bathroom or more than one car park sounds good, open-plan kitchen, close to cafes, possible to ride a bike to work, and more blah blah. We feed our strict and desired preferences into our fancy computer software, which spits out a score function. Then, every potential home, for a given price, is given a score. If a house just meets the bare minimum criteria with no optional extras, the score is 0, and the agent gets a minimal flat fee. For every point higher in the score, the agent gets paid more. E.g. if the price is negotiated down to $5,000 less than our maximum price, the score goes up, the agent gets paid an extra $2,000, and everybody’s happy. If the location is closer to the office, the score goes up, proportional to the distance or travel time. Extra bedroom? Score goes up. Outdoor pool? You beauty!
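For what it’s worth, here’s a toy sketch of what that score function might look like. Every criterion, weight and dollar figure below is invented for illustration; a real contract would pin these down up front.

```python
BASE_FEE = 5_000        # flat fee for a property that just meets the minimum
FEE_PER_POINT = 2_000   # agent's bonus per score point

def score(home, max_price=700_000, max_km_to_office=4.0):
    """Return None if any strict criterion fails; otherwise a score >= 0."""
    # Strict criteria: any failure disqualifies the property outright.
    if (home["price"] > max_price or home["km_to_office"] > max_km_to_office
            or home["bedrooms"] < 3 or home["sqm"] < 150):
        return None
    points = 0.0
    points += (max_price - home["price"]) / 5_000   # one point per $5,000 under budget
    points += max_km_to_office - home["km_to_office"]  # one point per km closer to work
    points += home["bedrooms"] - 3                  # extra bedrooms
    points += 2.0 if home.get("pool") else 0.0      # outdoor pool? You beauty!
    return points

def agent_fee(home):
    s = score(home)
    return None if s is None else BASE_FEE + FEE_PER_POINT * s

home = {"price": 680_000, "km_to_office": 3.0, "bedrooms": 4, "sqm": 160, "pool": True}
print(agent_fee(home))  # 21000.0 -- base fee plus 8 points' worth of bonuses
```

A nicer, cheaper, closer property scores higher and pays the agent more, so the agent’s payoff moves with the buyer’s, which is the whole point.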

The nicer the property and the better the deal is to the buyer, the more the agent earns. The key point is: it’s in the agent’s interests to try harder for the buyer. Incentives are aligned. Everybody wins.

So… Anyone know a good programmer?

 

Chess, age and Roger Federer

Posted by David Smerdon on Jul 22, 2017 in Chess, Economics

In the provocatively titled “Can Anand Be The Federer Of Chess?“, Chess.com’s Mike Klein brings up the question of whether chess players can really keep performing at the top level as they age. Proponents of this view typically note that chess is not subject to intense physical stresses, that it depends a lot more on past experience than other competitive sports do, and typically end curtly with “Korchnoi.”

All plausible arguments. On the other hand, opponents like to point out that scientists typically ‘peak’ before they turn 30, judging by the age when the key work for Nobel prizes was conducted (though this age may be increasing). This is supported by a famous Einstein quip:

“A person who has not made his great contribution to science before the age of thirty will never do so.”

All well and good, but it’s perhaps not fair to compare chess to either physical sports or science, as our passion only partly overlaps with either comparison. It’s probably better to look at how chess rankings and ages have changed over time. And here, there’s undeniably been a trend towards the world’s elite becoming younger. Hardly surprising, really, given the exponential increase in the availability of chess materials and resources. The more easily chess knowledge can be picked up by younger players, the less older players can rely on their experience to their advantage. And that’s the key point: If younger players can more easily (and quickly) pick up the knowledge that took older players years to acquire, then the days of chess veterans dominating the charts are numbered.

A quick aside: Of the world’s top 15 as of the time of writing, only two are over 40: Former World Champions Vladimir Kramnik and Vishy Anand. Is it a surprise that Kramnik is known as a meticulous opening preparer and innovator, while Anand was one of the very first professional chess players to use computers for chess training? Of the remaining 13 players, three are in their 30s, nine are in their 20s, and the future World Champion Wei Yi is still young enough to play in junior events…

There’s a reason I highlighted some special characteristics about Kramnik and Anand. Statistics that look at the average rating of different age cohorts often make a fundamental error that econometricians like to call ‘selection bias’. The players who drop out of professional chess as they get older are not random, but instead are exactly the types who are ‘feeling their age’ with regard to their chess performance. So when we look at the average of older players who are still competing, we’ll actually be calculating the average of the better players, so we’ll always think age is having less of an effect than it actually is.
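A quick toy simulation makes the bias concrete (every number below is invented purely for illustration): if the players who decline fastest are the ones who quit, then averaging only the still-active players overstates how well ageing players hold up.

```python
import random

random.seed(1)

# Toy model: everyone starts at the same rating and declines at an
# individual rate; players who fall below a cutoff quit professional chess.
def survivor_bias(n=100_000, start=2500, years=10, cutoff=2450):
    survivors = []
    for _ in range(n):
        decline = random.uniform(0, 10)   # rating points lost per year
        strength = start - decline * years
        if strength >= cutoff:            # fast decliners drop out of the sample
            survivors.append(strength)
    true_avg = start - 5 * years          # the cohort's true mean (decline averages 5/yr)
    observed_avg = sum(survivors) / len(survivors)
    return true_avg, observed_avg

true_avg, observed_avg = survivor_bias()
# The average over *still-competing* players sits well above the true
# cohort average, so age looks less harmful than it really is.
print(true_avg, round(observed_avg))
```

In this toy version the survivors’ average comes out roughly 25 points above the cohort’s true average, which is exactly the flattering illusion described above.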

This seems like a pretty basic thing to keep in mind, but it’s shocking how often this mistake is made in research. Anyway, some economists have recently shown that once we take care of this bias, the results show that chess players’ performance indeed decreases steadily as we get older. Moreover, this decline happens much earlier than we used to think. They found that the average peak age for chess performance is a remarkably young 21.6 years, with a median around 24 years. One of the most remarkable findings from their study is that the average chess player’s level at 40 will be similar to when he or she was 15 (obviously, this assumes the player was already playing tournaments at 15). I find this very hard to believe from an intuitive standpoint, and I haven’t yet had time to go through their paper in close detail. But from a quick read, it seems like the results are solid.

“Seriously, show some respect.”

As an ageing player myself, I should find this depressing (especially to think that in eight years’ time, I’ll get beaten by my 15-year-old self). But taking a step back, the age we live in is the most exciting of all time to be a chess lover, as I explained in a recent interview. The exponential increase in chess knowledge, playing opportunities and tournament coverage (all thanks to computers) has brought the joy of the game to hundreds of millions. And if that means I have to get beaten up by a couple of kids every now and then, do I really have a right to complain?

Then again, back in my day…

 

Why I didn’t write

Posted by David Smerdon on Jun 19, 2017 in Economics, Non-chess

I didn’t plan to stop writing. It just sort of happened; I had a lot of fires to put out in my work, and they kept coming, and after a while it all became too easy to forget about this site. I feel a bit guilty – it was seven years of regular writing without a break – but there you have it.

The main reason for the break – the biggest ‘fire’ – was the job market for economists. The job market is really an amazing concept that only economists could think is a good idea. This is a heavily centralised process that allows (or forces) thousands of newly-minted PhD graduates and other junior academics from every country to compete for hundreds of economist jobs around the world. The astute visitor to this site (if I still have any) may have noticed the ‘Economics’ tab in the website header, which directs to my professional site – an essential for the job market. Besides the regular details that you’d expect on a professional site, it also features a cheesy video CV that was surprisingly popular with potential employers. Filming and directing credits to Sabina, naturally…

 

 

The main event in the job market is the annual American Economic Association meeting, which for candidates is basically a massive stress-fest with a lifetime’s worth of interviews crammed into a couple of days. I must admit, there was a moment in early January, where I was literally running on the ice-covered streets of Chicago in my suit, sweating little icicles in -20 degrees C with the icy Lake Michigan wind in my face, desperately trying to make my fifth interview for the day on time, and looking up at the giant, menacing letters on Trump Tower… when I began to wonder if it was all worth it. (My interviewers were also late.)

Almost all economists interested in the world of academia will have to go through this baptism of fire once in their lives. For me, despite my job in Milan and finishing my thesis in Amsterdam, the market proved to be a full-time occupation from November to February. To survive, I had to make sacrifices: chess tournaments (including online!), book reviews, commentary and banter blitz, weekends, Christmas… and blogging. There was light at the end of the tunnel, though. At the end of the wash, I found an exciting position at the University of Queensland, back in my home city, which I’ll begin later this year.

I have a lot more to say about the job market, but I’ll save that for a later (but not too much later!) post. In particular, a lot of PhD students in economics really don’t have a good feel for what the market entails, let alone how best to prepare for it, so I’m going to try and provide some advice. Even though November is still a few months away, early preparation is key – something that I wish I’d known a year ago.

But for now, and particularly for those outside the strange world of academia, I can highly recommend this entertaining description of the job market that pretty much sums it up: Speed Dating for Economists.

 

Arbitrary achievements

Posted by David Smerdon on May 27, 2016 in Economics, Non-chess

How many seconds have you been alive in your life?

Seriously, take a guess. Just pick the closest number that feels right. What did you think? One million? Ten million? A hundred million?!

This question is hard. As humans, we’re not used to calculating or even guessing big numbers. We’re not programmed for it; after all, it wouldn’t have been much use to our ancestors. Really big numbers, really little numbers, and probabilities: these are things at which humans, quite frankly, are rubbish.

Behavioural economists and psychologists use this as an explanation for why many people take part in lotteries. Their models might show that it’s mathematically rational to take part in the lottery if the first prize is $100 million but not if it’s under $80 million, for example. While the math works, personally I doubt many people are thinking this way when they buy a ticket – “Oo, I’ll only win $80 million; might wait til it gets a bit higher…”. Actually I think the real reason many people take part is not because they’re ignorant that it’s irrational (this fact gets shoved down our throats in high school math class), but rather because there’s some extra enjoyment from being part of something, some big social event, that connects us in an abstract way.

But before I digress too far, let’s get back to the question at hand. If you guessed 1 million, or even 10 million, I’m afraid you passed that milestone long before your first birthday. And unless you’re an extremely bright three-year-old reading this, 100 million was also off. It turns out that 1 billion is quite a close ballpark estimate for the number of seconds in one’s life, a milestone which a person hits before their 32nd birthday.

(Incidentally, one of my friends guessed a trillion, which would make him former chums with the first homo sapiens around 30,000 BC.)

I brought up this topic because, as many of you know, I hate birthdays. But I love symbolism, and silly math. I’m the sort of person who, on my friend’s recent 27th birthday, wished her “a long and happy life well beyond your next cubic birthday.” And so it was that, having bugged my mum to dig up the timestamp on my birth certificate, I was (I presume) one of the few people consciously aware of the milestone when I ticked on to my one billionth second on earth.

(Want to work out when’s your billionth second, or your own arbitrary milestones? You can find a calculator here.)

Somehow, breaking time down into its smallest practical unit adds a weird perspective on things. As in, we can physically note the passing of time if we count the seconds – you are getting older now, and now, and now. Depressing. A cheerier question is: What was the most memorable second in your life to date? Not moment, or event (though it’s likely part of one), but second. What was the scariest? The happiest? Can you remember your angriest second? Which of your seconds had the most impact on another person’s life?

Perhaps I’m just in a philosophical mood. After all, I hit the big ten-digits yesterday. Unfortunately, the moment was during a seminar at work so I couldn’t whoop for joy or interrupt the invited speaker to pronounce my new-found ancientry. (Cool word, huh? You learn these things when you get to my age.)

But I look forward to discussing all of these questions over coffee in half an hour, when I am forcing my colleagues to celebrate the landmark with me. I’ve copied the invitation email below.

 

From: David Smerdon

To: CREED mailing list

Subject: Cookies

Abstract:  There will be some cookies (of dubious quality, but free consumption) available in the kitchen at 11:00.

Keywords:  Minimum effort; Public goods game; Free-riding; Reference dependence; Hyperbolic discounting

At the beginning of Alexander’s seminar yesterday afternoon, I must confess I was watching the clock. Only briefly, mind you; I was watching it until exactly 16:05:40, and then I turned back to the speaker (“What about guns?”, you may recall I asked, in a desperate attempt to cover my distraction).

Why this exact time? Well, as many of you know, I have limited enthusiasm for birthdays, and I abhor my own. But at this moment, I passed a milestone that we each get to achieve only once in our lives: I had been alive for one billion seconds.

Unfortunately, as I discovered last night, with great wisdom does not come great baking prowess, and my efforts to replicate the Anzac biscuits of last month were a bit of a disaster. They look like the earwax of a giant with dandruff. But I offer them to you anyway, along with an invitation to a short coffee break at 11:00.

Now I know some of you will question this achievement. You may want to ask how I exactly know the precise second I was born. You may also protest that the issue is more a philosophical one about when life begins, or quip with glee that one billion is itself quite arbitrary – “After all, we get to achieve each new second only once in our lives!” 

You, my friend, will not get a cookie.

– David

 

When I’m not playing chess…

Posted by David Smerdon on Apr 16, 2016 in Economics, Non-chess

Recently someone asked me what I did “when you’re not playing chess.” I found the question quite comical because I’m not playing chess the vast majority of the time. Still, occasionally I get mistaken for a professional player (albeit a weak one).

Those who’ve read my blog before won’t be surprised to read that I’m a researcher. I’m currently finishing a PhD in economics, with a focus on social and psychological topics. Recently I got the chance to present my current project at the General Sir John Monash symposium, held in Oxford. My work’s about finding the best ways to resettle refugees smoothly and efficiently into the community.

The presentation was pecha kucha style, which was weird but fun: 20 slides, 20 seconds each, no control over the speed. The organisers have made the presentations available online, so if you want a quick glimpse at what I do when I’m “not playing chess”, check out the video below.

 

Do women play less beautiful chess? A rebuttal

Posted by David Smerdon on Feb 28, 2016 in Chess, Economics, Gender, Non-chess

I generally try to avoid the Chessbase news site, as experience has demonstrated that reading its articles generally leads to me hitting my own head more than is considered healthy. But this morning I stumbled across what on the surface seemed an incredible article. Azlan Iqbal, a senior lecturer at the Universiti Tenaga Nasional in Malaysia, wrote an article claiming to have found evidence that women play less beautiful chess than men. He recently presented his scientific findings, based on his own advanced computer software, at the reputable International Congress on Interdisciplinary Behavior and Social Science.

 

Readers will know that I’ve previously weighed in on the “gender in chess” debate (see here, and in a more academic sense here). But I like to keep an open mind about things, especially if they are backed by scientific evidence, and so I made myself a coffee and sat down to dissect the groundbreaking research of “Azlan Iqbal, PhD”, as he himself writes under the title.

 

Despite my general rule of distrust for anything written by someone who feels the need to write “PhD” after their name, the fact that his paper was accepted at an international conference was heartening, and Iqbal also provided the slides from the conference and the academic paper for reference. The Chessbase article summarized the main findings, which seemed to conclusively demonstrate that women play less aesthetically than men in his exhaustive analysis of Chessbase’s “Big Database 2015”. It seems somehow absurd on the surface that this result could even be measured, let alone whether it has any truth, but I pressed on, eager to see the real analysis. As the coffee slowly made its way into my system, I decided to start with the conference slides and then move on to the more technical scientific article.

 

The introduction of the presentation starts with the smiling photos of Magnus Carlsen and Mariya Muzychuk, together with their Elo ratings and the comment that “Statistically, a player rated 2882 has an 88% chance of defeating a player rated 2530 in a game.” Of course, every chess rating system only gives the expected score in a game, and says nothing at all about the chances of winning. Not a great start, but an easy mistake to make and so, excuses made, I moved on. The next slide started with the bold statement “Research suggests that men are better at chess than women.” Ugh! As I (and many others) wrote about extensively, this is certainly not the academic consensus. But everyone’s entitled to their own opinion – even though in this case it was hardly framed as one. I quickly moved on, and – ah! – the next slides have actual chess diagrams in them! Iqbal presents a simple example of a famous mate-in-three:

 

[Screenshot: the mate-in-three puzzle]

 

Seen this before? Of course; it’s a beautiful and famous chess puzzle. Unfortunately, Iqbal’s next slide, purporting to show the solution, begins “1.Nxh6+”. This doesn’t exactly inspire confidence. Naturally we can excuse this as a simple double-typo, although the little errors by now were beginning to accrue.
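Coming back to that ratings slide for a moment: the standard Elo formula yields an expected score, a blend of wins, draws and losses in which a draw counts half, not a probability of winning. A quick sanity check of the quoted numbers:

```python
def expected_score(rating_a, rating_b):
    """Standard Elo expected score for A against B: a number in (0, 1)
    where a draw counts as half a point -- not a win probability."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# The slide's pairing: 2882 vs 2530.
print(round(expected_score(2882, 2530), 2))  # 0.88 -- an expected *score*, not an 88% win chance
```

So the “88%” on the slide is right as an expected score; calling it a “chance of defeating” the opponent is the error.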

 

I hastily moved on to the real analysis. Iqbal describes his methodology as follows: He wanted to compare all mate-in-three sequences by men and women in the Chessbase database of games, ranking them with his patented software ‘Chesthetica’ for aestheticism. Now you might immediately be struck by one obvious question here, as I was. What evidence is there that executed mate-in-threes can reflect general beauty in playing chess? Unfortunately, the only justification given is that three-move mates give the most consistent testing results from his software. The natural follow-up question is then to ask: how do we know Chesthetica is really measuring chess beauty? Ah, but here Iqbal preemptively counters with that often-used and curiously vague ‘get out of jail free’ card: Chesthetica has been “experimentally validated”!

 

Confused? Never fear; now we get to the real data. Of the 6.3 million games in Big Database 2015, Iqbal extracted a sample of 1069 games by women and 115 games by men. Wait, what? Less than 1200 games out of over six million, and only 115 games by men? What’s going on?! There’s nothing in the slides to explain this inconceivably small sample, so I finally delved in to the full academic paper. And that’s when things got strange.

 

The first incomprehensible feature of the data collection is that Iqbal extracted only the games where White checkmated Black. This shortcut immediately threw out half the sample. The only reason I can possibly think of for this is that he didn’t want to have to modify his Chesthetica software to be able to flip the colours when it analyzed the Black-checkmating-White games – although given that Iqbal’s profession is computer science, this seems highly unlikely. I honestly have no idea why half the games would be discarded in this way, especially as Iqbal goes on to make the excuse many times in his paper that the analysis suffers from too few suitable games.

 

But how is it possible that he ended up with fewer male games? Well, the second baffling component is that the sample was split by gender using an incredibly rudimentary method: by filtering for tournaments with “women” or “men” in the game data. And surprise surprise, there were very few men-only events. I have to say that this seems like an astonishingly lazy way to filter the data. Why not just cross-reference the sample against any standard database of female players? Or hey, even just sort manually over a day or two? After all, I guess this is what Iqbal next had to do anyway, because he goes on to write that his team “managed to identify enough additional games between males to bring the 115 set to 1,069 as well.”

 

I found the term ‘managed’ a bit comical, seeing as he would have had literally tens of thousands of candidate games to choose from. How did they select the games? Were they random? And why limit this to exactly 1,069? Any basic statistical comparison can handle uneven sample sizes, and in science, ‘more data is better’ practically always holds. It is, to say the least, very strange to limit one’s sample to an identical match (and highly unlikely that this came about by chance).

 

The eagle-eyed observer, however, will have noticed an even stranger term in Iqbal’s last sentence: “between males”. And indeed, closer inspection reveals that the database includes games by males only against other males, and games by females only against other females. Why?! Is Iqbal testing whether women play more beautifully against other women, perhaps as an extension of the famous Maass, d’Ettole and Cadinu paper of 2008? Well, no, and in any case, this would still require a sample of checkmates by women against male players.

 

I can think of no sensible explanation for this restriction, except that perhaps this was what came out of the primitive “women” and “men” tournament search. The result of this piece of academic lethargy is a bit more serious than just reducing the size of the data sample, as in the above cases. It adds an extra potential bias to the data, which is most likely a serious one given that – as Iqbal himself quotes in the paper – research has shown that women play differently against men than they do against other women.

 

By now I was on to my second coffee and getting slightly worried: I hadn’t yet reached the main results of the analysis and already the data set was (a) unnecessarily small and (b) most likely corrupt. With more than a degree of trepidation, I turned to the slide with the chief experimental results, and breathed a sigh of relief:

 

[Screenshot: slide with the chief experimental results]

 

“Elo & Age Independent”! Yes! That was a huge relief to see; after all, the average Elo, a crucial component to aesthetic chess, is most certainly different between the male and female samples, and there’s almost certainly an age difference as well (although how relevant age is to chess beauty is debatable). But the fact that these factors had been excluded was absolutely necessary for the results to have any worth at all.

 

But I did begin to wonder how Iqbal had done this. After all, it would have required a reasonable (though not infeasible) amount of work to extract these variables from the Chessbase dataset, and all evidence so far had suggested that he was against excessive effort if it could be avoided. I turned back to the academic paper to find out the details, took a sip of coffee, and almost spat it out as I read in black and white:

 

“There was no filtering based on age or playing strength as this study is concerned more with gender differences and aesthetic quality of play…” 

 

Well.

 

At this point, I considered whether I should even bother to read the rest of the article. It was of course possible that Iqbal had run multiple econometric regressions to try to control for the influence of age and Elo. But this would have run into all sorts of technical problems, such as the relationships between gender, Elo and age, as well as what we call ‘endogeneity’ – for example, one would have to prove that trying to play ‘beautiful chess’ in all your games doesn’t affect your rating. There are econometric techniques to deal at least in part with many of these concerns. None are mentioned. In fact, the explanation about Elo and age independence is curiously missing entirely from the scientific paper.

 

Not to worry; we shall persevere! I continued reading. Iqbal’s next result is to show that checkmates by strong players (Elo 2500 and above) are statistically more beautiful than average, according to his software. I doubt this surprises anyone. A major problem with this analysis is that checkmates that actually appear on the board are far more likely to occur in games at much weaker levels. It is well-ingrained chess etiquette for GMs to resign before checkmate is delivered, especially if it is forced (and forced checkmates are the only types Iqbal considers – don’t get me started about this).

 

So when might a forced checkmate actually be seen in a GM game? You guessed it: when it’s exceptionally beautiful. That’s the only time chess etiquette dictates that a player shouldn’t resign but allow the mate to be played out, if he or she wants. So this means that testing the relationship between Elo and checkmating beauty is inherently, inseparably flawed. GMs allow other GMs to deliver mate only when the checkmates are already beautiful – unless of course it happens during blitz, but no sensible study would include those games.

 

Well…it turns out Iqbal’s sample does include blitz games. And rapid, and exhibition games, and also – wait for it – games from simultaneous exhibitions.

 

I could go on, but you are probably already at the limits of your endurance. Allow me, though, to leave you with just a few pearls of wisdom buried within the paper’s discussion section. Iqbal is obviously proud of his main finding that females play less beautifully than males, as he extrapolates this to an insight into the psychological preferences of women, writing:

 

“Do the results then imply that women have less artistic appreciation of the game? Perhaps.”

 

He also suggests some keen intuition into the depths of – you’ll like this – the psychology of computers.

 

“This suggests that computers, regardless of their playing strength or ‘experience’ (if any), …perhaps [have] just no conscious or unconscious appreciation of art…”

 

All I can say to such shrewd perceptions is: thank God we have men.

 

It is not all bad news for females. Iqbal does, in a rare concession for the paper, offer the following caveat to his analysis:

 

“Logically, it would also follow that there are likely domains where women fare better aesthetically than men.” 

 

One or two do come to mind.

 

Reading over what I’ve written above, I feel a little guilty for the harsh and dismissive way I’ve criticized Iqbal’s work. So let me conclude on a positive note: In general, I am optimistic about and supportive of scientific efforts to use chess as a tool to analyze different questions. There have been several interesting academic works in recent years that have done this, and I genuinely think that Iqbal’s Chesthetica software has a role to play in the future of chess research. But such research has to be conducted in a thorough, industrious and attentive manner, especially if it purports to make lofty claims in areas such as gender. If not, the methodology invites harsh criticism or, even worse, outright dismissal.

 

I finished my second coffee just as I came to the concluding paragraphs of Iqbal’s paper. And here, finally, I agreed wholeheartedly with one of his generalized statements, and so it’s a good note on which to finish this rebuttal:

 

“In general, what we have demonstrated should not be taken too seriously…”

 

With that, I shut down my computer.

 

 

 

Nakamura and McShane’s big mistake

Posted by David Smerdon on Oct 12, 2015 in Chess, Economics

I love following this Millionaire Chess tournament. It’s really quite a spectacle: untitled players can pick up tens of thousands of dollars, one top player gets to go home with $100,000, and there’s even the bizarre “Win a million” lottery to keep us interested. And with so many ‘novelties’, it’s not surprising that there’s always the potential for controversy.

The biggest of these happened yesterday in the final round of qualification for Millionaire Monday. The main organiser GM Maurice Ashley was visibly irate when discussing the nine-move draw between the top seed Hikaru Nakamura and English GM Luke McShane. (You can see all the interviews about the draw controversy here.) He called short draws “a stain on our game”. Poor Hikaru and Luke suffered a fair bit of backlash in the chat channels and on Twitter for their performance, although they handled their interviews extremely well (particularly Luke, one of the real gentlemen of chess).

Hikaru also spoke well, aside from two notable exceptions. In the interview you’ll hear these two sentences: “the risk wasn’t worth the reward, frankly”, followed later by “I don’t think I did anything wrong”. At this point, the economist in me was highly dubious, although I have no doubt that both Hikaru and Luke actually believed this to be true.

I don’t know Hikaru personally, but in recent years I’ve been impressed by his interviews, and particularly how gracious and appreciative he is about being able to play chess for a living. And in games where the result is clearly not prearranged and either player would have to make a concession to avoid the repetition, I have no moral problems with a draw – so in this, the players are right. But in my opinion, one of the greatest innovations of Millionaire Chess is that its unique prize structure should naturally prevent boring draws. This is because the risk really is worth the reward in most cases.

So at this point, I did what any math/chess geek would do: I wrote down the problem 🙂 And without going into too many details, it turns out that the short draw was almost certainly the wrong decision for the players to make for themselves. Even under some very tolerant assumptions, the expected payoff from playing on, for either player, was greater than the expected payoff from accepting the repetition.

In my analysis, I had to make a bunch of assumptions, although I think they’re all pretty reasonable. I took into account that by playing on, the players would most likely have a very long game that would sap their energy somewhat (Luke had had a couple of really tiring previous games, while Hikaru said he had been feeling unwell). This would decrease their performance in the tie-breaks (if they occurred) and the rest of the event. I also assumed that whoever chose to avoid the repetition would have to make a concession that would decrease their chances in the game from what they were at the outset. I assumed that, all else being equal, Hikaru’s chances in the tie-breaks and the final four were above those of an average competitor, while Luke’s were average after a short draw, and a bit lower if he played on. Finally, as it turned out, almost the maximum number of players on 4.5 points who could reach a tie-break on 5.5 points did so, which really made Hikaru’s and Luke’s decision look silly – but they couldn’t have known that when they took the draw. So I relaxed this assumption a bit so that a normal number (five out of eight potentials) reached the ‘tie-break score’ of 5.5.

The analysis is a lot more complicated than this, but you can already get a rough idea of things by checking out the prize list. It’s incredibly top-heavy, and so under almost any realistic assumptions, a player in their shoes would want to maximise their chances of making the final four, above all else. If Luke played on, his chances of beating Hikaru were slim – but they were still much higher than his chances of making it through a tie-break with seven other players, including Hikaru. And for Hikaru himself, despite being one of the best rapid players out there, the odds still pointed to the same decision.

(For those interested: my final numbers suggested that Luke’s expected payoff was roughly $4,000 higher from playing on, while for Nakamura, avoiding the repetition was worth about $8,000 in expectation.)
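To give a flavour of how such a calculation works – without pretending to reproduce my actual model – here is a toy decision-tree sketch in Python. Every probability and dollar figure in it is a hypothetical stand-in, not a number from my analysis:

```python
# Illustrative decision-tree for the draw-vs-play-on choice.
# All probabilities and prizes are HYPOTHETICAL stand-ins.

def expected_payoff(branches):
    """Expected value over a list of (probability, payoff) branches."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * v for p, v in branches)

FINAL_FOUR = 40_000   # hypothetical expected value of reaching the final four
MISS = 4_000          # hypothetical consolation value of missing out

# Take the draw: reach the final four only via an 8-player tie-break
# (hypothetical 1-in-8 chance for an average qualifier).
draw = expected_payoff([(1/8, FINAL_FOUR), (7/8, MISS)])

# Play on: a slim chance (say 15%) of winning and qualifying directly;
# otherwise you fall back to roughly the same tie-break odds.
play_on = expected_payoff([
    (0.15, FINAL_FOUR),
    (0.85 * 1/8, FINAL_FOUR),
    (0.85 * 7/8, MISS),
])

print(f"draw:    ${draw:,.0f}")      # $8,500
print(f"play on: ${play_on:,.0f}")   # $13,225
```

Because the prize fund is so top-heavy, even a small chance of qualifying directly dominates the tie-break lottery – which is the intuition behind the numbers above.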

Of course, ‘in expectation’ is such an economist thing to say; probabilities are one thing, but only one outcome can actually occur in real life. As it happened, Nakamura made it through the tie-breaks (though not without some very bumpy moments!), and so it looks like the decision paid off. But that’s not the right way to think about it. It’s like winning your first ever spin of roulette: just because you got paid doesn’t mean you made the right decision. I would definitely advise Hikaru in future to do these sorts of calculations (or better yet, get someone else to!) before crucial money clashes.

(Luke, on the other hand, is not a professional chess player and probably doesn’t care that much about the money. While he didn’t make it through the tie-breaks, he’s still had a good tournament and has good chances of picking up a big consolation prize in the rest of the open. But still, from a purely academic perspective, the decision-making was dubious!)

Of course, this was mainly just an academic exercise for a bit of fun (although professional players may want to take note – I’m open for consultation!). But there is one policy implication, and here I’m specifically talking to Maurice and organisers like him. The lesson is: Don’t be discouraged! The Millionaire Chess team have done exactly the right thing in their structure to promote fighting chess. It’s hardly their fault if the players haven’t yet worked out how to act in their own best interests. But this will happen through experience (and maybe through posts like this…), so there’s no need to panic.

For the time being, I’m going to sit back, relax and watch the final fight – in which, typically, I expect Hikaru to defy the odds, win the tournament and thereby blow a big, fat raspberry at my analysis.

 

Is Tony Abbott a Misogynist? Part II: Comments, Clarifications and Corrections

Posted by David Smerdon on Oct 3, 2013 in Economics, Gender, Non-chess, Politics

Well, my attempts to break down the Tony Abbott gender issue into simple, undisputable mathematics proved anything but uncontroversial. On my Facebook page, the comments came thick and fast, quickly turning my wall into a rigorous debating forum. The post received so much interest that for a couple of days in a row last week, it came up as one of the top Google hits for searches such as “Tony Abbott gender bias one woman cabinet” (although, as Roger Emerson cheekily pointed out, it was also the top Google hit for “David Smerdon misogynist”).

There was quite a lot of support for my post, but of course, those aren’t the most interesting comments…everyone loves controversy! Criticisms largely came in three categories: emotive, philosophical and mathematical. A brief summary follows, after which I offer a small correction to the statistical analysis.

Emotive

A small percentage of the comments fall into what can generously be termed “emotive” arguments – in other words, arguments based more on emotion than substance.

Some comments implied, directly or indirectly, that I was supporting or defending Tony Abbott’s sexism, with one commenter going so far as to dispute my own claims to promoting gender equality. Nothing could be further from the truth. I have for many years been active in my pursuit of gender equality, both in Australia and abroad, and I certainly would not call myself an Abbott supporter in any form. In fact, the main reason I imposed significance criteria ex-ante was to ensure that my own biases against Abbott didn’t massage the statistical results to suggest a gender bias that wasn’t there.

Furthermore, one of the driving factors for wanting to check the statistics is that I believe public criticisms in the media from feminists must not misrepresent the facts if their wielders truly want to further their cause. Gender equality is an emotion-charged topic and often provokes fiery reactions, but how can the voice of equality be trusted if it is found to have misrepresented, deceived or used false facts in the past? Proponents for change have to be especially careful in this regard, and I don’t think being factually careful is worthy of vilification from members of one’s own cause. As I said in response to one such comment, “Just call me Galileo.”

A further emotive criticism was that the analysis was irrelevant because it wouldn’t be understood by “about 99% of the population.” If this were true, I shudder to think about the future of science (as well as chess, making soufflés and speaking Swahili, come to mention it).

Philosophical

Elements of rational debate were employed by quite a few commentators seeking to invalidate the analysis. Two of my acquaintances pointed out that a prior belief that Tony Abbott is sexist would not be refuted by a statistical significance between 5% and 10%, and therefore his sexism could not be disproved. These and other comments led me to realise that the title of the article was a bit misleading. In the end, the analysis measures gender bias in the appointments; I make no comment on Abbott’s sexism per se, which is more of a personality assessment. However, the point about prior beliefs applies equally to gender bias in the Cabinet. If one believes beforehand that Abbott would choose a biased Cabinet, then the statistics do not disprove this belief, and thus it can be continued.

Quite a few others focussed on outside factors that could have confounded the analysis, such as ministerial and geographical quotas in the Liberal-National coalition – quotas I must admit I wasn’t aware of.  Coupled with heterogeneity (in other words, other differences) in pre-selection, electorates, merit and experience of the candidates, etc, it was claimed that the analysis was not valid and did not disprove the gender bias.

My response to both of these criticisms is the same. Confounding factors only serve to reinforce that one cannot claim a gender bias in the appointments from the facts. These arguments use what is known as “Burden of Proof Reversal”, a cardinal sin in rational debate, but unfortunately a commonly employed one in politics and the mainstream media. The approach claims that something is true simply by asserting that it cannot be proved false. Preaching that the earth is flat in an age without astronomical tools is one example.

However, in science as well as law, the proof should be on the claimant. If a man is accused of murder, the onus is on the accusers (or their representatives) to supply the evidence to support the claim. Such ‘presumption of innocence’ should also apply to public vilification in the media. In this case, the claim is that Tony Abbott exhibited gender bias in the Cabinet appointments, and the evidence supplied is the ratio of women to men – nothing more (this was the case in, for example, media reports by the ABC, the Australian, the Huffington Post and Adelaide Now, among others). The fuzzier the evidence, then, the weaker the claim, and therefore more shame to these media outlets, in my opinion. My statistical analysis is meant as a rational attempt to clarify the factual ‘evidence’ supporting the claims, and my conclusion is that, even with confounding variables excluded, the numbers don’t stack up.

Mathematical

Finally we turn to the real embarrassment of this addendum to the original post: my mathematics was wrong! My thanks to Dave Mitchell and Melissa Hogan (two of my friends studying at the Australian National University, whom I met through ju-jitsu, of all things) for pointing out the error. [EDIT: Since writing this, Barry Cox has also made the same mathematical point in the comments.] While the mistake skewed the analysis so that the chances of gender bias seemed lower than they actually are, the effect is small enough that there still isn’t quite enough evidence to support the claim against Tony Abbott – although it’s now pretty close.

Before we get into the technical aspects, the problem can be summed up succinctly as follows: I chose an approximation to the true probability that wasn’t appropriate, and it systematically biased the analysis against there being a gender bias. Oops. The story of the error is a little bit amusing. I came up with the idea to do this analysis while sitting in my local café with nothing more than a pen, some napkins and my archaic mobile phone. When I started, I quickly realised that calculating the true probability, as you’ll see below, involves multiplying such huge numbers that I had no chance to work things out. Adopting a binomial approximation, on the other hand, meant that I only needed to calculate a couple of powers (e.g. 0.781^18), which my phone-calculator was capable of handling. I never bothered to check things afterwards, much to my shame.

In fact, as both Dave and Mel commented, the binomial approximation can only be used in what is known as ‘sampling with replacement’. In other words, by using this approximation I was essentially posing the question, “If there is a 22% chance of choosing a woman and I choose one person at random, and repeat this exercise 18 times, what are the chances of picking no more than one woman?”

Sounds reasonable – but it’s not quite correct. Melissa and her colleague backed up their criticism by running Monte Carlo computer simulations that showed the chances of randomly selecting not more than one woman in the Cabinet are roughly 5.5%. At first I have to confess I was suspicious of this claim, but I should have known better – Mel is one smart cookie. I ran my own simulations and got the same result (and about 6.3% with Bronwyn Bishop removed). Annoyed with myself, I did the proper calculation analytically, with a computer handling the heavy arithmetic, and, lo and behold, I got 5.49% and 6.29% respectively. D’oh.
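For anyone who wants to replicate the check, here is a minimal sketch of such a Monte Carlo simulation (with my assumptions spelled out: 114 candidates, 25 of them women, 18 Cabinet picks, sampling without replacement):

```python
import random

random.seed(42)  # reproducible runs

# 114 candidates: 25 women ("F") and 89 men ("M")
candidates = ["F"] * 25 + ["M"] * 89

TRIALS = 200_000
hits = sum(
    random.sample(candidates, 18).count("F") <= 1  # at most one woman drawn
    for _ in range(TRIALS)
)
print(f"simulated P(at most 1 woman): {hits / TRIALS:.4f}")  # ≈ 0.055
```

Note that `random.sample` draws without replacement, which is exactly the ‘names out of a hat’ experiment; replacing it with 18 independent draws would recreate the binomial mistake.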

This is still outside the stipulated 5% level, though it’s obviously much closer. Given the burden of proof on those claiming a bias, and given that three chief excluded variables – experience, a male-heavy Nationals party and the higher weighting of Julie Bishop’s portfolio – all systematically seem to move the analysis away from a significant gender bias, I think the main result still holds. Some commenters claimed, correctly, that a proper data set could be constructed to include most of these variables and thus get more accurate results. However, given I’m not paid for my writings and also the heat I’ve taken for the analysis to date, I’m probably not going to do it…but we’ll see.

Barry Cox also made the point that it could be argued Tony Abbott had little choice but to appoint his chief Cabinet ministers, and in fact could only exercise choice in the more minor positions. That rules out Julie Bishop, Warren Truss, Joe Hockey and whoever else one deems a ‘forced appointment’ from the analysis. Barry shows that the revised analysis would then show above a 95% probability of gender bias unless one assumes Tony Abbott had a say in fewer than eleven Cabinet positions (or, as I showed, in all 18 of them). This is a really interesting result in my opinion. However, my analysis has steered clear of political arguments for the most part, and so I have continued to assume that Tony Abbott chose his entire Cabinet; but this stream of analysis is definitely worthy of more attention.

Here is the correct graph of the probabilities for each possible number of female Cabinet members.

 

The correct distribution

For comparison, here is the graph I previously supplied.

 

The old, incorrect distribution, which places too much emphasis on the 'tails'

 

As you can see, they’re pretty similar, but if you look closely you’ll notice the correct graph is weighted higher in the middle section and lower at the edges – so there is a marginally higher chance of more than one woman in the Cabinet.

What follows is the mathematical derivation of the correct probabilities. Feel free to skip it if it looks horrifying or hypnagogic.

(Note: In what follows, $latex n\choose k$ is the symbol for the so-called binomial coefficient. It is sometimes written as nCk or as $latex \frac{n!}{k!(n-k)!}$, which can be written in full as $latex \frac{n*(n-1)*(n-2)*\dots*2*1}{k*(k-1)*\dots*2*1*(n-k)*(n-k-1)*\dots*2*1}$. Looks scary, but a lot of numbers cancel out from the top and the bottom. For example, $latex 7\choose 3$ can be calculated as $latex \frac{7*6*5*4*3*2*1}{3*2*1*4*3*2*1}$, which simplifies to $latex \frac{7*6*5}{3*2}=35$.)
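(If you’d rather not cancel by hand, Python’s standard library computes binomial coefficients directly; here’s a quick check of the worked example:)

```python
from math import comb, factorial

# comb(n, k) is "n choose k" = n! / (k! * (n-k)!)
assert comb(7, 3) == factorial(7) // (factorial(3) * factorial(4))
print(comb(7, 3))  # 35
```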

$latex Pr(No\quad more\quad than\quad 1\quad woman)$

$latex =Pr(0\quad women)+Pr(1\quad woman)$

 

$latex =\frac{(\text{\#ways 18 men can be chosen})+(\text{\# ways 17 men and 1 woman can be chosen)}}{{\text{Total \#ways 18 Cabinet members can be chosen from 114 candidates}}}$

Then, breaking it down:

$latex Pr(0\quad women)=\frac{\binom {89} {18}}{\binom {114} {18}}$

$latex \qquad =\frac{89*88*\dots*73*72}{114*113*\dots*98*97}$

$latex \qquad \approx 0.0076$

 

$latex Pr(1\quad woman)=\frac{\binom {89} {17} *\binom {25} 1}{\binom {114} {18}}$

$latex =\frac{25*18*89*88*\dots*73}{114*113*\dots*98*97}$

$latex \approx 0.0473$

Therefore,

$latex Pr(No\quad more\quad than\quad 1\quad woman)$

$latex \approx 0.0076+0.0473\approx 0.0549$

 

$latex \approx 5.49\%.$
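The whole derivation collapses to a few lines of Python if we let `math.comb` do the heavy lifting – a sketch, with the candidate counts taken from the analysis above:

```python
from math import comb

def p_at_most_one_woman(n_men, n_women, picks=18):
    """Exact (hypergeometric) probability of selecting at most one woman."""
    total = comb(n_men + n_women, picks)
    p0 = comb(n_men, picks) / total                         # no women
    p1 = comb(n_women, 1) * comb(n_men, picks - 1) / total  # exactly one
    return p0 + p1

print(f"{p_at_most_one_woman(89, 25):.4f}")  # ≈ 0.0549
print(f"{p_at_most_one_woman(89, 24):.4f}")  # Bronwyn Bishop excluded: ≈ 0.0629
```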

 

 

Is Tony Abbott A Misogynist? A Statistical Analysis

Posted by David Smerdon on Sep 27, 2013 in Economics, Gender, Non-chess, Politics

[EDIT: Make sure you don’t miss Part II: Comments, Clarifications and Corrections for an update on the analysis.]

 

Like many Australians, I was dismayed to read that the newly elected Prime Minister of Australia, Tony Abbott, had appointed an incredibly male-heavy Ministry to the Parliament of Australia. Most news reports in the mainstream media, both at home and abroad, slammed the announcement by levelling a fairly routine string of sexist labels at our new head of government, the most common being “Misogynist”. However, I was a little surprised by the lack of any quantitative evidence suggesting that the appointments were based on sexism over, say, statistical chance, so I decided to do a rudimentary check myself. Below you’ll find the results of a basic statistical analysis to answer the question:

Is there a gender bias in Tony Abbott’s new Cabinet?

I should point out that this is hardly the first time Tony Abbott has been called this in his life. Throughout his political career, Abbott has regularly been called insensitive to gender equality and the concerns of women, as well as possessing views on gender issues more likely found among Australian males half a century ago. However, to me, none of those reports have been especially convincing, either. As a feminist as well as someone who strongly opposes a lot of Abbott’s policies (particularly with regard to climate change and refugee policy), I was looking forward to the opportunity to finally analyse some ‘hard’ data in coming to a conclusion about our new chief. After reading the initial reports that the new Cabinet contained only one woman out of 19 spots, I felt pretty confident. In the words of Australian of the Year Ita Buttrose, “You can’t have that kind of parliament in 2013. It’s unacceptable.”  How could the data suggest anything other than that the man is a raving chauvinistic pig?

However, it turns out that things are not so simple. For starters, the Australian media has a reputation for being (a) incredibly biased, and (b) terrible at statistics. First, a lot of reports link to the following graph, taken from the Australian Labor Party website:

The most obvious question that comes to my mind is: Why aren’t the values given as percentages? Of course, this doesn’t matter if all the cabinets are the same size…but a quick check shows that this is indeed not the case. For example, India’s cabinet (made up of ‘Union Members’) has 33 spots. My second concern was about the choice of countries, which seemed incredibly arbitrary. The ALP chose to compare Australia to such countries as Rwanda, Liberia and Egypt, but excluded the United Kingdom (our closest parliamentary sibling), most of the G20 countries, and in fact ALL of Europe! Show this graph to anyone with even the vaguest of quantitative training and they’ll start screaming “Data mining! Data mining!” before you can blink.

Comparing ourselves to other countries is a bit fishy in any case. If every country always did this, no women would ever have been elected to high office in any country, ever. No, what I really want to know is whether the appointment of one single female (Julie Bishop) to Abbott’s new Cabinet could have come about by chance, or whether it suggests deliberate sexism. To ensure that my own biases didn’t interfere with the analysis, I established a threshold before I got into the numbers. In any sort of quantitative research, the standard measure is to be at least 95% confident of something in order to draw a conclusion (formally, ‘reject a hypothesis’). I therefore decided that Tony Abbott could be considered guilty of gender bias in his appointments if it could be shown that we could be 95% sure the male/female ratio did not come about by chance. To be perfectly clear, I decided beforehand (ex ante) that the analysis would conclude that Tony Abbott’s appointments:

  • were gender-biased if the chances of them being random were less than 5%; or
  • were random, and the media reports should be condemned for factual inaccuracy, if the chances of them being random were greater than 10%; or
  • could not convincingly be shown to be gender-biased if the chances were between 5% and 10%.

So let’s set up the analysis. Now, Abbott was of course elected Prime Minister before he chose his own Cabinet, so we should exclude him from the list – the relevant statistic is then “One woman out of 18 spots”. Not all of the seats had been officially declared by the time the Cabinet was announced, but according to the Liberal Party website, Abbott had a total of 114 Members and Senators to choose from to fill these 18 spots. Of these candidates, 89 (78.1%) are male and 25 (21.9%) are female. (Note that this excludes the appointment of Bronwyn Bishop as the Speaker of the House of Representatives, called “the most important position in Parliament” by Australia’s premier newspaper The Australian. If she is excluded from the list, the percentage of female candidates falls slightly to 21.2%.)

Further, let’s assume that each female candidate is as qualified as each male candidate to serve in Cabinet. Now, this has been a contentious issue in the media, with a lot of the justifications given for the male-dominated appointments revolving around the issue of ‘merit’. Former Liberal Senator and Ambassador to Italy Amanda Vanstone is quoted as saying, “I’d rather have good government, than have more women in the cabinet for the sake of it.” However, let’s ignore merit arguments and focus on the numbers. From a statistical perspective, the question then becomes:

“Assuming all candidates are equally likely to be picked, what is the chance that Tony Abbott appointed no more than one woman (5.6%) to the Cabinet?”

First, note that if we take the ratio of females from the list of candidates and apply it directly to the 18 Cabinet positions, we would expect roughly four women to be appointed (25/114 × 18 ≈ 3.95). However, we would expect exactly four women to be selected only around 20% of the time. We can model the random likelihood of any number of women being selected by what is known as a ‘binomial distribution’. Basically, if Tony Abbott were to put all 114 candidates’ names into a hat, take out 18 at random, and repeat this 100 times, the graph below tells us how many times we would expect each possible gender division to occur.

Therefore, the chances of no more than one woman being appointed – that is, the probability of appointing zero or one woman – look to be around 7%. Indeed, calculations bear this out (‘P’ stands for ‘Probability’ in what follows):

P(No more than one woman)

= P(0 women) + P(1 woman)

= (0.781)^18 + 18*0.219*(0.781)^17

= 0.012 + 0.059

= 0.07

= 7%
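These binomial figures are easy to check in a few lines of Python – a sketch using the 21.9% female share from above:

```python
from math import comb

p_w = 0.219  # share of women among the 114 candidates

def binom_pmf(k, n=18, p=p_w):
    """Probability of exactly k women among n independent 'draws'."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Around four women is the single most likely outcome...
print(f"P(exactly 4 women) = {binom_pmf(4):.2f}")                 # ≈ 0.22
# ...but one woman or none still happens about 7% of the time.
print(f"P(at most 1 woman) = {binom_pmf(0) + binom_pmf(1):.2f}")  # ≈ 0.07
```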

So the answer falls within 5% and 10%, leading us to conclude that the actual Cabinet appointments do not convincingly suggest gender bias.

Still, you might think that finding only a 7% chance that a Cabinet with one woman was randomly selected is still something to think about. This may be true, but taking into account a few other factors dilutes the strength of the result even further. Excluding the new Speaker of the House, Bronwyn Bishop, from the initial sample raises the probability of randomly selecting no more than one woman to 8%.

Furthermore, the one woman who did make it into Abbott’s Cabinet, Julie Bishop, has been appointed Deputy Leader of the Liberal Party as well as taking on the esteemed Minister for Foreign Affairs portfolio. Along with Warren Truss (Deputy Prime Minister) and Joe Hockey (Treasurer), she thus holds one of the three chief roles in Tony Abbott’s leadership team. One woman out of these three key positions is technically something of an overrepresentation, given the candidates available, and so our result weakens further if we weight the spots accordingly. For example, just for argument’s sake, assume that being appointed to one of these roles is twice as important as any other position in the Cabinet. That is, assume a woman earns one ‘point’ for each normal Cabinet position and two ‘points’ for one of these chief positions. Then the current Cabinet earns two points through its women (or woman, in this case). The chance of the Cabinet earning no more than two points with a random selection of the candidates is then a whopping 17%. Don’t be scared of the formulas…

P(No more than two points earned by women)

= P(0 points) + P(1 point) + P(2 points)

= (0.781)^18 + 15*0.219*(0.781)^17 + 15*7*(0.219)^2*(0.781)^15

= 0.01 + 0.05 + 0.11

= 0.17

= 17%

Even less convincingly, when I use this weighted approach in conjunction with excluding Bronwyn Bishop from the list of candidates, the chance that the current parliamentary Cabinet could occur randomly without gender bias rises to 18%. Statistically, such numbers give us no grounds to claim any sort of gender effect at all.

There are a couple of little caveats I’d like to point out before we jump to any conclusions. This very basic statistical analysis makes a lot of assumptions which may or may not be justified. For one, the men and women in our list of candidates may not be equally capable of serving in the Cabinet after all. What if, all else being equal, older politicians are on average better suited to the Cabinet than younger ones? This could be relevant because the male and female candidates’ average ages might differ. Judging from the photos on the Liberal Party website, it seems to me that the men are on average older than the women, but of course I should actually get the ages and then compute some sort of weighting scheme if I want to really work out the effect. My intuition tells me, however, that including this feature would produce less evidence of sexism in the results.

Secondly, my analysis assumes that Tony Abbott selected all Cabinet positions simultaneously. Of course, it’s more likely that he selected the most important positions first and then worked down the order. I’m not sure how this would change my results; intuitively it shouldn’t make much of a difference, except that Julie Bishop’s position again takes on a little more precedence.

Finally, I’ve assumed that Tony Abbott was essentially just given a list of elected candidates and told to choose a Cabinet. That is, I assume Tony Abbott had no say in selecting the Liberal Party nominees for the electoral seats, which may have led to the gender bias in the candidates in the first place. But that’s a topic for another project.

In the end, then (if you’ve managed to read this far), it does seem that the emotive journalistic style of the Australian media has again got something to answer for in its vilification of Tony Abbott on this issue. I’m not saying our new Prime Minister is taint-free on matters of gender policy – far from it, but my own opinions shouldn’t weigh into it. So here it is, finally: The bottom line, from a basic statistical analysis.

We cannot conclude there is any gender bias in Tony Abbott’s appointment of his Cabinet.

 

 

 

 

Copyright © 2017 davidsmerdon.com All rights reserved. Theme by Laptop Geek.