Once you start looking for software glitches, you realize that they are happening all over the place! So I started batching these things up and thought I would share them each Friday. Sort of a lighthearted end to the week, if you will. Here goes, then:
In Burbank, CA, the Burbank Leader reports that nearly 35,000 Charter Communications cable TV customers lost service for six hours when the company upgraded its digital network control system software. Craig Watson, VP of Communications at Charter, offered an insightful comment: “This kind of problem is quite rare and was obviously not expected and, in this case should not have happened.” Well, yes.
Meanwhile, in Columbia, MO, access to the Columbia Tribune’s web site was interrupted when their ISP, CenturyTel, installed another software upgrade that also knocked out service to many of its other customers.
To Miami, FL, now, where SunPass toll pass users have been overcharged owing to a software glitch. It seems that drivers whose transponders were scanned in certain lanes at the toll plaza suddenly found their account balances dropping excessively. SunPass officials said that “fewer than 2,000 drivers were affected by this problem.” That’s alright then.
In Beaver, PA, software problems caused a delay to the start of a murder trial when the list of jurors was sorted alphabetically instead of being randomly listed, according to the Beaver County Times. I’m not making this up!
Now to Warren, OH, where there were more billing problems for Trumbull County, which was overbilled some $260,000 for electricity supplied by Ohio Edison, dating back as far as 2003. The problem was caused by an incompatibility between the meters used by the county and new software that was installed in ’03. According to Ohio Edison, other customers with the same type of meter had already run into the problem, and it had been fixed for them.
Billing problems are a common theme here at Glitch Watch, and this week residents in Kerrville, TX were sent the wrong tax bills thanks to software problems. One of the taxpayers said he wasn’t worried when he received the bill: “I knew we didn’t owe anything,” he said, “it’s just a computer issue.” I guess after a while, people get used to software glitches.
I was reading the Google Reader blog today and noticed a sense of openness there about a number of outages they have suffered recently. That got me thinking about the difference between the sugar-coated brochure-ware of most corporate web sites and the more open, blog-centric face that companies can present to the world. Whatever else you say about Google, you couldn’t say that their web site(s) fall into the “sugar-coated brochure-ware” category.
And so I cast a critical eye over our own web site. Hmm, definitely some sweetness there, blog notwithstanding. Of course we all want to put our best foot forward, but I think there’s something coming over the horizon, amid all the hoopla around Web 2.0 and social networking, that means customers and prospects will come to expect a more open, honest relationship with companies - and the people behind them. That’s not to say your web site shouldn’t look nice, of course. But people now expect to be able to connect with some of the personalities behind a company, in ways that would have been unthinkable only a few years ago.

It’s fascinating to me to see how many business people are opening Facebook accounts, uploading a bunch of personal information, and then finding that their business contacts find them there. That, to me, is symptomatic of a breakdown of the wall between our personal and business lives, and it seems inevitable. (If you want to friend me on Facebook, go ahead - you can find me here.)
We’ll be reworking our web site over the coming months as we lead up to a major product release we have coming up in January. As we work through that, I’ll be keeping some of these ideas in mind, and trying to strike the same delicate balance between professionalism and openness that works so well at Google.
Michael Feathers’ recent blog post Prosthetic Goals and Metrics That Matter, which references Brian Marick’s paper on code coverage misuse, seemed to cast a negative light on code coverage tools. I was glad to see that in a subsequent comment he clarified his statement: it is setting coverage numbers as an organizational goal, rather than simply the use of coverage tools, that he objects to.
Feathers states: “You can’t measure quality with [code] coverage.” I don’t necessarily agree with this. Just setting a threshold of, say, 90% as an incentive marker is, I agree, not a good idea, because it encourages developers to look for the ‘easy’ tests that push their numbers up. However, with a good program of test reviews in place, setting goals can be motivational for developers and can enhance the quality of the application, especially if thresholds are set low to start with and gradually changed over the life of the project.
Note I said “changed” rather than just “increased.” If a major design change is applied, a lot of code and tests may be removed, which will throw off coverage numbers.
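To make the idea concrete, here is a minimal sketch of what such a moving goal might look like as an automated build step. Everything in it is an assumption for illustration - the baseline file name, the two-point tolerance, and the idea of passing the current figure in on the command line - and in practice the number would come straight from whatever coverage tool the team already runs.

```java
// CoverageRatchet.java - hypothetical names and file layout throughout.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Locale;

public class CoverageRatchet {
    // Allow a small dip, so that deleting dead code along with its tests
    // (which legitimately lowers the number) does not fail the build.
    private static final double TOLERANCE = 2.0;

    public static void main(String[] args) throws IOException {
        // The current figure would come from your coverage tool's report.
        double current = Double.parseDouble(args[0]);

        Path baselineFile = Path.of("coverage-baseline.txt");
        double baseline = Double.parseDouble(Files.readString(baselineFile).trim());

        if (current < baseline - TOLERANCE) {
            System.err.printf("Coverage %.1f%% is more than %.1f points below the %.1f%% baseline%n",
                    current, TOLERANCE, baseline);
            System.exit(1); // fail the build
        }
        if (current > baseline) {
            // Ratchet: record the new high-water mark so the goal rises gradually.
            Files.writeString(baselineFile, String.format(Locale.ROOT, "%.1f", current));
        }
        System.out.printf("Coverage %.1f%% is acceptable against a baseline of %.1f%%%n",
                current, baseline);
    }
}
```

The point is not this particular script but the policy it encodes: the goal tracks the project as it changes, rather than being a fixed number handed down from above.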
Is comparing these numbers a bad thing? I don’t believe so. It means developers who perhaps were not testing before are now gaining experience in writing tests, and a safety net is being put in place for more of the code (I agree that testing simple getters and setters is a waste of time, though). If your team includes experienced developers, and peer/team lead reviews are occurring (which should be the case), any persistent gaming of these numbers by individuals will be recognized over time.
Using a code coverage tool increases the quality of the application by showing not only what has been tested, but also what has not. On many consulting engagements, I have used a code coverage tool to point to areas of the code that have not yet been tested. I regularly find that similar patterns present themselves within these applications - a lack of testing around exception handling is a frequent one - and the tool I use identifies the owner of the code as well. That lets me quickly point out obvious, high-priority candidates for testing, and discuss suitable training for individuals who are consistently leaving these areas untested.
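To make the exception handling pattern concrete, here is a deliberately simple sketch. The class, the test, and the choice of JUnit are all assumptions made for illustration; the shape of the gap is the point. Run a coverage tool over this pair and the catch block comes back red, because no test ever hands the loader a missing file.

```java
// ConfigLoader.java - a hypothetical example class.
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ConfigLoader {
    public String load(Path path) {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            // This branch is never exercised by the test below,
            // so a coverage tool will flag it as untested.
            throw new UncheckedIOException("Could not read config: " + path, e);
        }
    }
}

// ConfigLoaderTest.java - only the happy path is exercised.
import static org.junit.Assert.assertEquals;

import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.Test;

public class ConfigLoaderTest {
    @Test
    public void loadsAnExistingFile() throws Exception {
        Path tmp = Files.createTempFile("config", ".txt");
        Files.writeString(tmp, "key=value");
        assertEquals("key=value", new ConfigLoader().load(tmp));
    }
    // Missing: a test that passes a nonexistent path and asserts that
    // UncheckedIOException is thrown - exactly the gap the report makes visible.
}
```

Once the report has pointed at the gap, writing the missing test is usually a few minutes’ work.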
I believe we are at a time in the software industry when quality issues are being discussed more openly, and motivation to look into these quality areas needs to be encouraged. I believe it is the responsibility of management, and of experienced technicians who have seen the lack of quality on prior projects (not to mention the results), to motivate - and yes, maybe give incentives to - teams that are proactive in implementing quality procedures. However, a good review process to stop any gaming of these numbers is crucial for success.
I would argue that any tool that can be incorporated into an automated build process (and used by individuals) and that, used properly, enhances the quality of a developer’s work - and therefore of any application they are contributing to - is a Good Thing.
Hundreds of patients treated at an Arizona hospital were sent incorrect bills earlier this month after a billing software upgrade, the Boston Globe reports. In all, 587 patients were affected, with the highest incorrect bill coming in at $49m. Fortunately, it sounds like most of the bills were so large that the mistake was obvious.
In other billing news, more than 1,000 Hazleton, PA taxpayers received incorrect real estate bills owing to an error in the assessment software, which should have applied homestead discounts but did not.
Meanwhile, over in Macon, GA, software problems caused a minor delay in posting local election results. A 90-minute delay in posting the results is hardly a huge deal. Potentially more serious was the fact that one precinct’s votes were excluded from the results posted on the board of elections’ web site, although voters were assured that the overall results were not affected.
Elections supervisor Elaine Carr is hopeful the problems can be fixed before November 2008…
Here at Enerjy, we are putting a lot of effort into improving the way that code metrics are presented. We think that the key (or at least a major key) to improving code quality lies in identifying and tracking code metrics. We also think that metrics are only useful if (a) you choose the right metrics to start with, and (b) they are presented in a meaningful, engaging, easy-to-understand format. With that last point in mind, I often spend time looking for good examples of well-executed data visualization techniques.
Meaningful, engaging, easy-to-understand. That’s what we will be striving for in our next major product release, which we are planning for early 2008.
A couple of weeks or so after launch, I thought it might be fun to scout around and find out whether or not iPhone users are experiencing significant software problems with the device. I haven’t traded my Blackberry for an iPhone (yet), but a couple of folks in the office have them and are all glassy-eyed and in love.
It’s a slick device, there’s no question about it, and the software appears to be well executed. Of course there have been some early glitches, but these all seem relatively minor (and shrink even more in comparison to some of the problems experienced with the software on the Nokia N95 - probably the iPhone’s closest competitor).
The iPhone story that interested me the most, though, was Bubba Murarka’s tale of his service experience with Apple. Here’s someone who clearly likes the product, but the whole experience is let down by the support model. I had a similar experience when I returned my malfunctioning MacBook 17 days after I purchased it. If the problem had arisen within 14 days, the Apple “Genius” happily told me, they would have replaced the device with no questions asked. But because it was now 17 days old, they would have to repair it. I won’t bore you with the details of the story, other than to say that ultimately I was left with no laptop for more than two weeks.
Ultimately, it sounds like the iPhone launch has been a success. Apple deserves to do well, in my opinion, on the basis of their products, which are well designed and well executed. But I hope that the Genius Bar support model, along with some lame warranty policies, is not their undoing.
As if things for Enron’s ex-employees weren’t bad enough already, MSNBC reports that more than 20,000 ex-employees who finally received compensation payments have been underpaid or overpaid owing to a software glitch. Well, to be fair, it’s more of a data glitch: the program used by consultants Hewitt Associates to calculate the $22m in payments that were messed up was simply picking up the wrong stock price…
In other news, the Richmond Times-Dispatch reports that a software upgrade knocked out cable TV service to an unknown number of Comcast subscribers in Richmond, VA this week.
Meanwhile, San Jose residents have been complaining about delays at 94 traffic intersections caused by software problems. Apparently light rail cars arriving at intersections cause the traffic lights to be thrown out of sequence, causing frustration for drivers who then have to wait an extra 3 minutes at the intersection. The most shocking thing about this story to me? The cost of the software upgrade to fix the problem: a cool $1.6m!
A few years ago, I researched a photovoltaic solar generating system for my house. Fed up with rising energy costs, and with a relatively large, south-facing roof - not to mention the global warming considerations - the whole thing made sense to me. At the time, the Commonwealth of Massachusetts was offering to subsidize roughly half the cost of installation, but after I crunched the numbers, I worked out that it would take about 20 years to recoup the roughly $20,000 of upfront costs. Although I knew it would make sense in the long term, I just wasn’t experiencing enough pain in the here and now to make it happen.
The same problem applies to the price of gas. When I first moved to the U.S. in 1996, gas prices were running at less than a dollar a gallon. Now they are over three times that much. Because that change has happened in small increments, one or two cents at a time, there is never enough pain to cause behavioral change - it’s like the oft-quoted (but apparently mythical) story of the boiled frog. And so we continue, for the most part, to drive gas guzzlers.
And so it goes with software quality initiatives. Intellectually, we know that we should use static analysis to catch bugs early. We know we should unit test our code before it gets anywhere near QA, and we should be measuring the coverage of those tests. Yet still we continue with our old habits, because there is no immediacy to the pain of bug fixing. Like filling the tank with $3 gas, we don’t like it, but we feel like we have to do it.
So, why even bother talking about software quality? Why don’t we just give up on this cause and go sell ice cream instead? Because, when you talk about an issue for long enough, slowly, little by little, the world starts to change. Hybrid sales have slowly grown over the past three years. You can now buy a Chevy Suburban that runs on E85 fuel (if you can find somewhere to fill up). And maybe, just maybe, I’ll run the numbers on those solar panels again.
Trent Kroeger is a research student at the University of South Australia. He’s working on a PhD on the theory and application of software engineering process quality. Essentially, he has set out to answer the question: what makes a “good” software engineering process? Surprisingly, according to Kroeger, very little rigorous research has been undertaken to answer that question. And he is blogging about his research project in a well-written, accessible way. Some interesting snippets from the blog have already caught my eye. In particular, Kroeger has identified four key limitations in the research done to date in the area of software engineering process improvement. Heard this before?
- The results of empirical software engineering process research are not routinely and systematically used to inform theory, i.e. there is no feedback loop.
- Much theoretical software engineering process research is based on assumptions that do not reflect industrial software engineering practice.
- No unifying framework exists to systematically drive the advancement of software engineering process research.
- There is little consensus on what constitutes software engineering process quality, both in general and within the context of a given business environment.
Kroeger will attempt to address the last of these limitations in his research, in which he will tackle three key questions:
- What are the characteristics of software process quality, as determined by interviews with software professionals and academics?
- Which software engineering activities can be positively impacted by a better understanding of software process quality?
- How can methods for software engineering process development, assessment and improvement be developed and enhanced?
Although the results of the research will themselves be interesting, it will be equally interesting to watch the project unfold through Kroeger’s blog.