Somewhere else, part 133

Some writings worth reading,

This year the idea that statistics is important for big data has exploded into the popular media. Here are a few examples, starting with the Lazer et al. paper in Science that got the ball rolling on this idea: “The parable of Google Flu: traps in big data analysis,” “Big data: are we making a big mistake?,” “Google Flu Trends: the limits of big data,” and “Eight (No, Nine!) Problems with Big Data.” All of these articles warn about issues that statisticians have been thinking about for a very long time: sampling populations, confounders, multiple testing, bias, and overfitting. In the rush to take advantage of the hype around big data, these ideas were ignored or not given sufficient attention. One reason is that when you actually take the time to do an analysis right, with careful attention to all the sources of variation in the data, it is almost a law that you will have to make smaller claims than you could if you just shoved your data into a machine learning algorithm and reported whatever came out the other side. The prime example in the press is Google Flu Trends. Google Flu Trends was originally developed as a machine learning algorithm for predicting the number of flu cases based on Google search terms. While the underlying data management and machine learning algorithms were correct, a misunderstanding about the uncertainties in the data collection and modeling process has led to highly inaccurate estimates over time. A statistician would have thought carefully about the sampling process, identified time series components in the spatial trend, investigated why the search terms were predictive and tried to understand the likely reason that Google Flu Trends was working. As we have seen, lack of expertise in statistics has led to fundamental errors in both genomic science and economics. In the first case a team of scientists led by Anil Potti created an algorithm for predicting the response to chemotherapy. This solution was widely praised in both the scientific and popular press. Unfortunately the researchers did not correctly account for all the sources of variation in the data set, misapplied statistical methods and ignored major data integrity problems. The lead author and the editors who handled this paper didn’t have the necessary statistical expertise, which led to major consequences and cancelled clinical trials. Similarly, two economists, Reinhart and Rogoff, published a paper claiming that GDP growth was slowed by high governmental debt. Later it was discovered that there was an error in an Excel spreadsheet they used to perform the analysis. But more importantly, the choice of weights they used in their regression model was questioned as being unrealistic and leading to dramatically different conclusions than the authors espoused publicly. The primary failing was a lack of sensitivity analysis of data analytic assumptions that any well-trained applied statistician would have performed. Statistical thinking has also been conspicuously absent from major public big data efforts so far. Here are some examples: White House Big Data Partners Workshop – 0/19 statisticians; National Academy of Sciences Big Data Workshop – 2/13 speakers statisticians; Moore Foundation Data Science Environments – 0/3 directors from a statistical background, 1/25 speakers at the OSTP event about the environments was a statistician; original group that proposed NIH BD2K – 0/18 participants statisticians; Big Data rollout from the White House – 0/4 thought leaders statisticians, 0/n participants statisticians.
[to be continued…]
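An aside on the economics example in the excerpt above: the failing it describes, skipping a sensitivity analysis of the weighting scheme, is easy to illustrate. The sketch below is mine, not Reinhart and Rogoff’s model, and it runs on simulated, made-up data; it simply re-fits the same regression of growth on debt under two defensible weightings and reports the slope estimate from each, the kind of check an applied statistician would run before publishing.

# Illustrative sketch only: a generic sensitivity check on a weighting choice,
# using simulated (made-up) data, not the actual Reinhart-Rogoff data or model.
set.seed(1)
years   <- sample(5:15, 20, replace = TRUE)   # unbalanced panel: 5 to 15 years per country
country <- rep(1:20, times = years)           # hypothetical country identifiers
n       <- length(country)
debt    <- runif(n, 20, 120)                  # debt-to-GDP ratio (%)
growth  <- 3 - 0.01 * debt + rnorm(n, sd = 2) # simulated GDP growth

# Scheme A: every country-year observation weighted equally
fit_equal <- lm(growth ~ debt)

# Scheme B: every country weighted equally (down-weighting countries with more observations)
w_country   <- 1 / ave(rep(1, n), country, FUN = sum)
fit_country <- lm(growth ~ debt, weights = w_country)

# The sensitivity report: does the estimated slope (its size, sign and standard error)
# survive a defensible change in the weighting scheme?
rbind(equal_weights   = coef(summary(fit_equal))["debt", ],
      country_weights = coef(summary(fit_country))["debt", ])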

Despite the rivers of ink that have flowed regarding the recent Heartbleed vulnerability, I believe the developer community has not addressed the right problem. Developers have fixated on a debate about one of open source’s most touted advantages: with many eyes looking at the code, is open source able to correct bugs faster than closed-source projects? But this discussion misses the central issue, which in my view is not technical, but monetary. The OpenSSL team, whose project was the home of the Heartbleed vulnerability, discussed with remarkable candor how much the lack of funding from the product’s users has limited their development work and, by extension, their ability to find and remediate such defects. It turns out that major users of OpenSSL, such as Cisco and Google, had incorporated the software into their important products, but sent little or no funds to the developers. Faced with this embarrassing revelation, the companies quickly got together, pooled some money, and assembled a committee that agreed to dispense funds to worthy projects, starting with OpenSSL. This is a hurried patch — one that will temporarily relieve the problem, but not address its root cause. The root cause is a fundamental conflict at the heart of open source: the opposing forces of building community vs. deriving a sustainable level of revenue from an open-source project. The tension between these forces is most acutely felt when choosing a license for the project. Projects that have a greater interest in fostering use of the software, or that don’t care much about monetization, choose the “business-friendly” licenses (such as the Apache Software License, MIT, BSD), which impose nothing but the most minor responsibilities on the user or, more correctly, the licensee. [to be continued…]

On the evening of Jan. 27, Kareem Serageldin walked out of his Times Square apartment with his brother and an old Yale roommate and took off on the four-hour drive to Philipsburg, a small town smack in the middle of Pennsylvania. Despite once earning nearly $7 million a year as an executive at Credit Suisse, Serageldin, who is 41, had always lived fairly modestly. A previous apartment, overlooking Victoria Station in London, struck his friends as a grown-up dorm room; Serageldin lived with bachelor-pad furniture and little of it — his central piece was a night stand overflowing with economics books, prospectuses and earnings reports. In the years since, his apartments served as places where he would log five or six hours of sleep before going back to work, creating and trading complex financial instruments. One friend called him an “investment-banking monk.” Serageldin’s life was about to become more ascetic. Two months earlier, he sat in a Lower Manhattan courtroom adjusting and readjusting his tie as he waited for a judge to deliver his prison sentence. During the worst of the financial crisis, according to prosecutors, Serageldin had approved the concealment of hundreds of millions in losses in Credit Suisse’s mortgage-backed securities portfolio. But on that November morning, the judge seemed almost torn. Serageldin lied about the value of his bank’s securities — that was a crime, of course — but other bankers behaved far worse. Serageldin’s former employer, for one, had revised its past financial statements to account for $2.7 billion that should have been reported. Lehman Brothers, AIG, Citigroup, Countrywide and many others had also admitted that they were in much worse shape than they initially allowed. Merrill Lynch, in particular, announced a loss of nearly $8 billion three weeks after claiming it was $4.5 billion. Serageldin’s conduct was, in the judge’s words, “a small piece of an overall evil climate within the bank and with many other banks.” Nevertheless, after a brief pause, he eased down his gavel and sentenced Serageldin, an Egyptian-born trader who grew up in the barren pinelands of Michigan’s Upper Peninsula, to 30 months in jail. Serageldin would begin serving his time at Moshannon Valley Correctional Center, in Philipsburg, where he would earn the distinction of being the only Wall Street executive sent to jail for his part in the financial crisis. American financial history has generally unfolded as a series of booms followed by busts followed by crackdowns. After the crash of 1929, the Pecora Hearings seized upon public outrage, and the head of the New York Stock Exchange landed in prison. After the savings-and-loan scandals of the 1980s, 1,100 people were prosecuted, including top executives at many of the largest failed banks. In the ’90s and early aughts, when the bursting of the Nasdaq bubble revealed widespread corporate accounting scandals, top executives from WorldCom, Enron, Qwest and Tyco, among others, went to prison. The credit crisis of 2008 dwarfed those busts, and it was only to be expected that a similar round of crackdowns would ensue. In 2009, the Obama administration appointed Lanny Breuer to lead the Justice Department’s criminal division. Breuer quickly focused on professionalizing the operation, introducing the rigor of a prestigious firm like Covington & Burling, where he had spent much of his career. 
He recruited elite lawyers from corporate firms, and the Breu Crew, as they would later be known, were repeatedly urged by Breuer to “take it to the next level.” [to be continued…]

With terabytes of data at hand, every business is trying to figure out the best way to understand information about their customers and themselves. But simply using Excel pivot tables to analyze such quantities of information is absurd, so many companies use the commercially available tool SAS to cull business intelligence. But SAS is no match for the open-source language that pioneering data scientists use in academia, which is simply known as R. The R programming language leans closer to the cutting edge of data science, giving businesses the latest data analysis tools. The problem: with loose standards and scores of diverse contributors, it is shaky ground for business. Will that ever change? At least one company thinks R is ready for commercial prime time. What Red Hat is to Linux and Cloudera is to Hadoop, Revolution Analytics is to the R language in the commercial world. Several years ago, David Smith, chief community officer at Revolution Analytics, noticed that a lot of academics and students used R but saw less usage in industry. “At the time, there was no company there to support R, provide expertise around R, or provide any kind of commercial backing for R. So that’s how Revolution Analytics was founded,” says Smith. To call Smith an R enthusiast is an understatement. He is a co-author of the programming manual An Introduction to R that comes with the open-source R distribution. And he has a team of like-minded R evangelists working with him, who keep any mention of R in the business world on their radar, while also publishing R-related news on the company’s blog and giving educational workshops to other companies. He is an example of a curious breed of creative entrepreneur that only exists in the tech sector: someone doing great work on a free, open-source resource, and in so doing, creating a commercial opportunity for themselves on the flip side. “I always look out for journal articles where R is used. I hear back from customers. And whenever a good visualization is used, there’s a good chance that it was created in R, so I can always trace back to the author. I’m always on social media, so whenever I see a reference to R, I usually shake down to [the team],” Smith says. [to be continued…]
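To make the pivot-table comparison in the excerpt above concrete, here is a minimal sketch in base R with an entirely made-up sales table (the data frame, column names and figures are hypothetical): the first call reproduces the kind of summary an Excel pivot table gives you, and the second goes a small step further by fitting a per-region trend, the sort of analysis that is awkward in a spreadsheet but routine in R.

# Hypothetical data: quarterly revenue by region (numbers invented for illustration)
sales <- data.frame(
  region  = rep(c("North", "South", "West"), each = 4),
  quarter = rep(paste0("Q", 1:4), times = 3),
  revenue = c(120, 135, 150, 160, 90, 95, 110, 105, 200, 210, 190, 230)
)

# The pivot-table equivalent: total revenue cross-tabulated by region and quarter
pivot <- xtabs(revenue ~ region + quarter, data = sales)
print(pivot)

# A step beyond the spreadsheet: a quick linear trend of revenue across quarters, per region
trend <- by(sales, sales$region,
            function(d) coef(lm(revenue ~ as.integer(factor(quarter)), data = d))[2])
print(trend)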

and in French,

Did I miss something?


One thought on “Somewhere else, part 133”

  1. Dear Arthur,

    Thank you for recommending our blog post about “sharing is caring” from labfolder. We noticed that the link is broken (our blog url structure has changed) –> would you be able to fix it to http://labfolder.com/blog/sharing-is-caring/ ?

    Many thanks from all of us at labfolder, and keep up the great work!

    P.S. We actually wrote a new blog post about open data access – you might like it! https://www.labfolder.com/blog/why-we-need-open-data-access/
