Fred Brooks's law that "adding manpower to a late software project makes it later" is one most of us have tried to prove wrong... and failed!

At Agile 2008 I saw an interesting session, "Breaking Brooks's Law," from Menlo Innovations, a Michigan-based Java development company. They claimed to disprove this law and demonstrated the working environment and techniques that allowed them to do so. Although the presentation was only 45 minutes, we were in the room for almost two hours asking questions to determine how robust their techniques were, and to gain more insight into the conditions their developers work under.

Menlo's results are based on a three-year project in which the customer had a deadline to demonstrate at a show. More features were required for the show than were currently in the plan, so rather than re-prioritize, Menlo decided to add more developers to attempt to complete the work. They managed to complete the project on time with all of the added functionality.

The environment at Menlo is quite unique. All developers are co-located in the same large room (no offices or cubes) and pair program 100% of the time; they follow strict XP practices. A scheduling team determines which projects developers work on and who they pair with on a weekly basis, so developers work with different team members, and possibly on different projects, every week. Also, as part of the contract, the customer comes to Menlo every week to prioritize the work for the next sprint.

These techniques may appear somewhat draconian (100% pairing, for example). I managed to catch up with the team and interview them to discuss this project further, along with bug rates, staff attrition rates, and how project managers can push the message of pairing to senior managers and directors (see video).

I thoroughly enjoyed talking with the team from Menlo, and they invite anyone passing by to stop in and take a look at how they operate.
They also have an interview process in which a large number of candidates perform a series of tasks, including pair programming; with an appointment, you can observe this too. A detailed paper about their techniques and contact details are here.
To follow on from the successful launch of our free Eclipse plugin, today sees the official launch of our Service Offerings for Java development teams.

Over the last five years we have been helping development teams increase the quality of their applications. From these experiences we have formalized a collection of offerings that can help improve code quality further. In summary, our services consist of:

- An Internal Quality Code Report
- Continuous Integration Services
- Context-Specific Standards for the free Enerjy Software plugin

More details can be found on the Services page of our website.
The first page of the preface of this book made me wince! Not because the book is bad, far from it: the immediacy of Scott's insight into the pain of software development can only come from someone who has been there and experienced the trials and tribulations of project failure (more than once).

I was expecting this to be yet another book on design patterns, but it really isn't. This book attempts to look deeper into questions that cannot be easily answered, and suggests a road map to evolve the profession of software development. It concentrates on the practices, principles and disciplines that developers should follow when creating software, especially when thinking about how to implement features. It covers a wide range of practices, including analysis, refactoring and testing, and looks at how existing patterns should influence our design decisions.

The appendix includes some very good examples of common design patterns. Different styles are applied to each pattern to teach, or remind us, what type of problem each pattern is used for: UML diagrams, procedural code alternatives, non-software analogies and basic OO implementation code are included for each pattern.

Since so many of us have to deal with legacy code bases, it's always helpful when a book like this addresses that issue. Scott mentions hearing comments such as "this code is too hard to unit test," "unit testing takes too much time" and "too many permutations to unit test." He explains how these all point to design issues, and that leads into a great chapter discussing refactoring.

Why should we refactor if the behavior does not change?
This and other similar questions are covered too, explaining the concept of technical debt and the frequency of developer burnout: "Decaying, hard to maintain software will disable a development team faster than anything I know."

I would thoroughly recommend this book to any developer, however experienced or inexperienced, who wants to understand more about design patterns and how thinking in a design-driven manner can evolve our profession.

I caught up with Scott at SD West to ask him a few questions about his book.
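The "too hard to unit test" complaints mentioned above usually trace back to tangled responsibilities. As a hypothetical illustration (not taken from the book; the class and method names are invented), consider a method that mixes a calculation with output. Extracting the pure calculation makes it trivially testable:

```java
// Hypothetical example: logic tangled with I/O is hard to unit test.
// Extracting the pure calculation makes it easy to assert on.
public class InvoiceFormatter {

    // Before: printing and calculating in one method (nothing to assert on).
    public static void printTotal(double[] lineItems) {
        System.out.println("Total: " + total(lineItems));
    }

    // After: the calculation stands alone and can be unit tested directly.
    public static double total(double[] lineItems) {
        double sum = 0.0;
        for (double item : lineItems) {
            sum += item;
        }
        return sum;
    }

    public static void main(String[] args) {
        printTotal(new double[] {1.5, 2.5});
    }
}
```

The refactoring is mechanical, but it is exactly the kind of design change that testability complaints point towards.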
Last week I was at SD West in Santa Clara, and on Tuesday night I had the privilege of attending Stelligent's round table event. Around 40 people were there, including industry celebrities such as Alistair Cockburn, Neal Ford, Scott Ambler and Jeffrey Fredrick. An hour was spent on the subject "What is Agile?", an agreed definition for which eluded everyone (even with this august group in the room).

Neal Ford asked people to define agile in two words: "have fun", "speedy delivery" and "driving quality" were popular answers. Someone commented: "selling the quality helps get agile projects underway." Andy Glover quickly responded, "quality doesn't sell! Speed does!"

This is an interesting point. At first, the idea of focusing on speed may seem to violate many agile principles, but the speed comes from injecting quality into the development process. Teams new to agile techniques will initially be slower at producing artifacts for the customer, because agile often requires a change in culture, which takes time to adapt to. However, the quality built in will ensure that, over time, releases become quicker, as the foundation of the software is much more robust. Many times we have seen products released too soon because time to market was considered the most important factor; over time, code has been 'layered' onto this brittle foundation, causing costs to be incurred later and necessitating refactoring (or even completely re-writing) these products.

The two words I would use to describe agile are "better communication". This is something I believe is a constant in any team using agile practices for any length of time.
Whether it is pair programming in XP or stand-up meetings in Scrum, communication between developers improves, with the result that (a) most team members are aware of what issues are cropping up in other parts of the project; (b) they have a better understanding of the whole system and the business benefits of what they are producing; and (c) more synergy is created and morale within the team is almost always improved.

By the way, the 2008 Jolt Awards were presented at the conference, and our congratulations go to Stelligent: Continuous Integration: Improving Software Quality and Reducing Risk, by Paul Duvall (CTO) with Andy Glover (President) and Steve Matyas, won the Best Technical Book category.
A few weeks ago, I gave a presentation on quality and metrics at the Phoenix Java Users' Group. The presentation covered how source code metrics can be used to drive quality initiatives in development teams. I also demonstrated a three-stage implementation of metrics tools (static code analysis, code coverage and dependency analysis) that can help developers root out buggy code fast.

One question from the audience was: "As a developer, how can I respond to a manager who wants all the features, excellent quality and everything completed by the required deadline, when I know it is not possible and almost all our budget has been used?"

That question reminded me that Bob Martin, CEO of Object Mentor, was asked the same thing last year at Agile 2007. Bob clearly felt passionate about the subject, indicating that developers need to be more professional and responsible, and that a worthwhile manager should know better than to even make the request in the first place.

So, when I caught up with Bob at the SD West conference in Santa Clara, CA last week, I thought it would be interesting to ask him to comment on this very question for Enerjy.tv.
I picked this up from the 37Signals blog. I really like the sentiment of this short post by Ryan Norbauer, which promotes the idea that programmer happiness is a significant factor in the quality of the code that is written, and that code is meant to be read by humans first and by machines second. It certainly seems that taking the opposite approach, i.e. trying to apply manufacturing principles to software development by building it in large, offshore factories, is failing to deliver the long-term productivity gains that were once dreamed of.

Norbauer says: "There is increasing sentiment in the software world that we should be happy to take performance hits if it means the process of software development can be made more sustainable, pleasant, and simple."

Well, maybe. I just wonder how such lofty ideals can possibly be made to work in the real world. Unless we're talking about open source development, inevitably there will be pressure from "above" to get that release out faster, squeeze more features in, and/or improve response times. Still, it doesn't hurt to dream.
Lean programming has been a popular topic at conferences over the last couple of years, largely thanks to the experience and work of Mary and Tom Poppendieck. Lean programming has its roots in lean manufacturing, a management system focused on reducing waste and empowering workers to improve processes themselves. Lean manufacturing is largely based on the work of W. Edwards Deming, the statistician who revolutionized the culture and operations of many businesses by focusing on driving quality through the whole of the organization. Deming's work was adopted by, and improved the results of, many companies, most notably Ford, Toyota and Bell Labs (AT&T).

I just finished reading Dr. Deming: The American Who Taught the Japanese About Quality, written by Rafael Aguayo, who studied under Deming in the '80s. The book is not, as I initially thought, a biography of Deming (although there is a short appendix on Deming's life). It is a well-written explanation of Deming's 14-point management system, littered with numerous examples covering a multitude of organizations and industries. Interesting points made in the book include:

- Without profound knowledge, making corrections via a feedback system is just tampering and can lead to disastrous results.
- Who is responsible for quality? 90% of the things we define as 'quality' are out of the workers' hands: training budgets, deadlines, design acceptance, tool budgets and selection. These are all management issues, yet the worker is the one often blamed for poor quality. Does that sound familiar?
- Cooperation with your competitor in R&D. In Japan, R&D costs are lower because groups from different organizations are brought together to work on the technology, sharing ideas.
Once they have the technology figured out, competition in the marketplace is fierce, concentrating on features, price and performance. This last point may seem strange to many western development managers, but we have a great example in the Eclipse IDE project: Eclipse was created by numerous groups of people for a common cause, and companies such as IBM and Borland then compete in the marketplace with Eclipse-based products such as RAD and JBuilder.

Having spent some time working as a sales manager myself, one part of the book I found tough to buy into was the suggestion of eliminating sales quotas/targets. Aguayo presents no alternative to replace these metrics, and there may be a good reason for that: there isn't one.

Although written almost 20 years ago, this book is suitable for anyone wishing to learn more about how to change management techniques to focus on quality throughout the business. Although it is not software-industry-specific, it provides some useful background for understanding many of the concepts of lean programming. Some of Deming's management points and Aguayo's examples may seem contradictory or even irrelevant in many development managers' eyes, especially 'stable systems' and 'removal of inspections'. This is something I will blog about some more in the next few weeks.
Last week, an article on sqazone reported on the results of an independent study commissioned by Forrester Consulting into large development organizations. The conclusion, in a nutshell, was that “the cost and complexity of metrics collection, and the reliance on superficial metrics – conspire to deter application development organizations from attempting to improve their metrics programs.”
This is a sad but true observation on an area we have been evangelizing about for a few years now. Implementing a metrics and measurement program is not easy, and interpreting the data and feeding it back into the SDLC in a meaningful way is harder still.
Coincidentally, I asked several speakers at the Agile Development Practices conference in Orlando, FL, why they thought organizations were slow to create or adopt a formalized metrics program. Here are their thoughts.
David Worthington’s recent article in SD Times is based on research results from Forrester’s “Problem Resolution Survey Results and Analysis,” and makes for interesting reading. The article states that “the biggest time-sink in the application production life cycle [receives] the least regard from development managers.” The time-sink to which Worthington is referring? Investigating and resolving application problems.
A couple of other gems from the article:
“The respondents spend almost three out of every 10 hours (29 percent) in various stages of troubleshooting: documenting, reproducing or testing. On the average, a problem takes six days or more to resolve, and one in four of the problems reported by a QA or test group are returned as irreproducible.”
“Of the time spent on defect resolution, 26 percent is spent reviewing information, 34 percent on reproducing the behavior, and the remaining 40 percent goes toward isolating the root cause of the problem.”
Someone more cynical than me might wonder why there is no time left over to actually code and resolve the problem! Seriously though, these numbers reinforce the need to keep investigating ways of building more robust code in the first place: detecting possible bugs earlier in the development life cycle and implementing a program of continual process improvement.
The article does not divulge any specific methodologies these projects use. It would be interesting to know if any were using agile techniques such as incremental development or TDD (or even doing any unit testing - in our experience, most teams don’t).
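For teams that have never written one, a unit test need not involve much machinery. A minimal sketch in plain Java (in practice a framework such as JUnit would collect and report the results; `capitalize` is a made-up method under test):

```java
// A unit test at its simplest: call the code, assert on the result.
// Sketched without a framework to show how little is really needed.
public class StringUtilsTest {

    // Hypothetical method under test.
    static String capitalize(String s) {
        if (s == null || s.isEmpty()) return s;
        return Character.toUpperCase(s.charAt(0)) + s.substring(1);
    }

    public static void main(String[] args) {
        check(capitalize("hello").equals("Hello"), "capitalizes first letter");
        check(capitalize("").equals(""), "leaves empty string alone");
        check(capitalize(null) == null, "handles null");
        System.out.println("All tests passed");
    }

    // Fails loudly so a nightly build script can detect the non-zero exit.
    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError("FAILED: " + name);
    }
}
```

Even this much, run on every build, catches the regressions that otherwise surface days later as "irreproducible" QA reports.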
Surprisingly, only 66% of managers would be interested in a solution to these problems, even one that "created significant efficiencies and improved quality" (two somewhat subjective dependencies). This reflects a serious attitudinal problem: for the remaining 34% it smells to me like "post-deployment, this is someone else's problem."
By the way, these issues are not confined to niche areas: the findings were universal across verticals and enterprises.
The build process is an area that, perhaps surprisingly, is still overlooked in many organizations. Many teams do just enough to compile and package an application, and not much more. A well-defined build process can add significantly more value.
I am an advocate of a full build process. What do I mean by full? I mean a build that does the following:

- Gets the latest source code from the repository
- Compiles and runs unit tests
- Runs analytics and QA gates (at the development level)
- Produces reports
- Informs the team (or at least the build manager) if any problems occur
- Publishes the application to a test server so the test team can get straight to work
And it does all of this automatically, eliminating mundane, repetitive manual processes (which can, and often do, go wrong). The ultimate goal, of course, is Continuous Integration (CI), but let's not get ahead of ourselves.
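As a rough sketch of what "automatically" means, these build steps can be chained into one fail-fast sequence. The step names and shell commands here are assumptions for illustration; a real build would delegate to Ant or Maven targets:

```java
// Minimal sketch of an automated build: run each step in order,
// stop at the first failure and tell the build manager.
public class NightlyBuild {

    // Hypothetical step names and commands; substitute your own targets.
    static final String[][] STEPS = {
        {"update",  "svn update"},
        {"compile", "ant compile"},
        {"test",    "ant test"},
        {"analyze", "ant quality-gates"},
        {"report",  "ant reports"},
        {"publish", "ant deploy-to-test"},
    };

    public static void main(String[] args) {
        boolean[] results = new boolean[STEPS.length];
        for (int i = 0; i < STEPS.length; i++) {
            System.out.println("Running step: " + STEPS[i][0]);
            results[i] = run(STEPS[i][1]);
            if (!results[i]) break;  // fail fast
        }
        int failed = firstFailure(results);
        if (failed >= 0) notifyBuildManager(STEPS[failed][0]);
        else System.out.println("Build succeeded; test server updated.");
    }

    // Pure helper: index of the first failed step, or -1 if all passed.
    static int firstFailure(boolean[] results) {
        for (int i = 0; i < results.length; i++) {
            if (!results[i]) return i;
        }
        return -1;
    }

    static boolean run(String command) {
        try {
            Process p = Runtime.getRuntime().exec(command);
            return p.waitFor() == 0;
        } catch (Exception e) {
            return false;
        }
    }

    static void notifyBuildManager(String failedStep) {
        // In practice: send an email or post to the CI dashboard.
        System.err.println("BUILD FAILED at step: " + failedStep);
    }
}
```

A CI server replaces this hand-rolled loop, but the shape (ordered steps, fail fast, notify) is the same.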
By scheduling this process nightly (or even more frequently), the team is guaranteed to discover compilation errors that may not show up in an individual workspace. (The developer's workspace and the build machine may not be in sync, and there may be other software that needs to be added to the build and test machines.)
Also, unit tests can be run against the integrated code, again exposing issues that may not arise on a single developer's machine. If a problem does occur, the system can email the build manager, who can then investigate and report back to the team.
Another huge benefit is that the test team can walk in and get straight to work, without the hassle of setting anything up or jumping over technical hurdles to get the application configured and working before they can do their job. I've seen examples where testers had to spend up to half a day resolving these issues.
By adding analytics and reporting (i.e. going beyond the minimum requirements), management can receive automated updates on the health of the project and be prepared for any meeting with the team. A variety of plug-ins can produce reports that provide great data for constructive feedback to the team and visibility into the project at different levels.
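The kind of roll-up such reporting produces might look like the following sketch (the module names and numbers are invented; a real report would be fed by the test and analysis tools in the build):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a nightly health roll-up: per-module pass
// rates formatted into a plain-text summary suitable for an email.
public class BuildReport {

    // Maps module name to {testsPassed, testsTotal}.
    static String summarize(Map<String, int[]> moduleResults) {
        StringBuilder sb = new StringBuilder("Nightly build health:\n");
        for (Map.Entry<String, int[]> e : moduleResults.entrySet()) {
            int passed = e.getValue()[0];
            int total = e.getValue()[1];
            sb.append(String.format("  %-10s %d/%d tests passed%n",
                    e.getKey(), passed, total));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, int[]> results = new LinkedHashMap<String, int[]>();
        results.put("core", new int[] {120, 120});
        results.put("web", new int[] {45, 47});
        System.out.print(summarize(results));
    }
}
```

The point is not the formatting but that the numbers arrive every morning without anyone collating them by hand.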
Ant or Maven can be used to script the tasks of compiling, executing tests, reporting and preparing the application to be copied to a test server, while CruiseControl, Hudson and Continuum are all free CI servers that can schedule and automate these tasks.
If you are new to this, or feel that your build process is at a 'bare minimum', all this may seem like a daunting task. 'Pragmatic Project Automation' by Mike Clark spells out how to automate the build process in less than 150 pages, and even shows how to use lava lamps to indicate whether the build succeeds or fails.
CI introduces the idea that the build process is triggered every time a change to the code or a configuration file is committed to the version control system. The two greatest benefits of CI, in my opinion, are that (a) risk is further reduced (any defect, by definition, must have been introduced by the last commit, and can be fixed straight away) and (b) deployable software can be produced at any time. 'Continuous Integration: Improving Software Quality and Reducing Risk' by Duvall, Matyas and Glover is a good book that explains this further.