In our recent webinar, Function Point Analysis Over the Years, presented by Don Beckett, we received some great questions from our audience. Here are the highlights from the Q&A:
Q: The advice in recent years is to break large projects down into smaller ones to make them more likely to "succeed" by whatever measure. Is the advice now to make projects bigger?
A: I don't know if it's advice, but the data seemed to indicate that there is a benefit to grouping work into larger projects rather than running it as separate projects of 50 or 100 function points. So I would say, where it's possible to group them together, it would be a good idea.
Q: Why do you use the PI (Productivity Index) as opposed to the industry standard hours per function point or function points per person month?
A: Well, hours per function point and function points per person month are simple ratios between effort and size, and what we have found is that the schedule has a huge impact on how productive a project can be. The PI incorporates three major things: the size of the project, the amount of effort leveraged against it, and the time required to do it. So, in a sense, it accounts for schedule, which function points per person month does not. That's why we use it.
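To make that distinction concrete, here is a minimal sketch in Python. It assumes the commonly published form of Putnam's software equation, Size = PP × Effort^(1/3) × Time^(4/3); the actual PI in SLIM is a calibrated index derived from that productivity parameter, so the numbers below are purely illustrative, not QSM's exact calculation.

```python
# Minimal sketch: why a schedule-aware productivity measure behaves differently
# from a simple size/effort ratio. Assumes the commonly published form of
# Putnam's software equation, Size = PP * Effort^(1/3) * Time^(4/3); the real
# SLIM PI is a calibrated index derived from the productivity parameter, so
# treat these numbers as illustrative only.

def fp_per_person_month(size_fp, effort_pm):
    """Simple ratio: ignores how long the project took."""
    return size_fp / effort_pm

def productivity_parameter(size_fp, effort_pm, duration_months):
    """Schedule-aware measure: solve the software equation for PP."""
    return size_fp / (effort_pm ** (1 / 3) * duration_months ** (4 / 3))

# Two hypothetical projects: identical size and effort, different schedules.
a = {"size_fp": 500, "effort_pm": 50, "duration_months": 12}
b = {"size_fp": 500, "effort_pm": 50, "duration_months": 6}

print(fp_per_person_month(a["size_fp"], a["effort_pm"]),
      fp_per_person_month(b["size_fp"], b["effort_pm"]))   # both 10.0
print(round(productivity_parameter(**a), 1),
      round(productivity_parameter(**b), 1))                # ~4.9 vs ~12.4
```

With identical size and effort, the simple ratio rates both projects the same, while the schedule-aware measure credits the project that delivered in half the time.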
Q: How do we convert a project from SLOC to function points to find the PI for a specific project?
A: I'm assuming the project is sized in SLOC, not in function points, and the question is how to convert that to function points to get the PI. What I would suggest is not to bother converting it to function points. The only way you could do that would be with some sort of backfiring, and backfiring is not usually considered a good practice by IFPUG. The PI can be calculated directly from SLOC if that is how your projects are sized. If, instead, you have the project size in function points, you would use the gearing factors for your development languages to express that size in SLOC and fit it into the equation to determine the PI.
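For the function-point-to-SLOC direction, the gearing factor step is just a multiplication. The sketch below uses made-up placeholder factors; for real work you would take the values for your languages from a published gearing factor table such as QSM's.

```python
# Illustrative gearing-factor conversion: express a function point count in
# SLOC before feeding it into a size-based equation. The factors below are
# made-up placeholders (SLOC per function point); use the values from a
# published gearing factor table, such as QSM's, for real work.

ILLUSTRATIVE_GEARING_FACTORS = {
    "Java": 55,     # placeholder SLOC per function point
    "C++": 50,      # placeholder
    "COBOL": 100,   # placeholder
}

def fp_to_sloc(function_points, language):
    """Estimate SLOC size from a function point count for one language."""
    return function_points * ILLUSTRATIVE_GEARING_FACTORS[language]

print(fp_to_sloc(400, "Java"))   # 22000 estimated SLOC
```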
Q: Were differences between agile and waterfall examined? If so, what were the findings?
A: Not in this study. I have done research into that also, which you can find on the website here. It shows that agile's pretty good. It tends to use slightly higher staff, but the PIs are higher, the time to market is shorter, the raw effort is about the same, and the quality is as good.
Q: Most of the slides use medians instead of averages. Why is that?
A: A lot of it was just to eliminate the effect of extreme outliers and find the real measure of central tendency. Within any of these categories there's a great deal of variation, so it was a way of finding the center point without giving undue weight to things that were extremely large or extremely small.
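A quick illustration of that difference, using hypothetical productivity values with one extreme outlier:

```python
# Quick illustration: a single extreme outlier drags the mean but not the median.
from statistics import mean, median

# Hypothetical productivity values in FP per person month, with one outlier.
productivity = [4.2, 5.1, 5.8, 6.0, 6.3, 7.1, 48.0]

print(round(mean(productivity), 1))   # 11.8 -- pulled up by the outlier
print(median(productivity))           # 6.0  -- the center of the bulk of the data
```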
Q: You showed how productivity decreases the more a schedule is compressed. What is the process you used to assemble that data?
A: The process was partially manual. What we did, using our SLIM-Metrics tool, was fit a regression line through the entire data sample, with size on the X axis and schedule on the Y axis. The tool was nice enough to calculate each project's deviation from that average trend. I sorted the projects by their deviation and placed them into categories, each of which was half a standard deviation wide. Then I calculated the median productivity in function points per person month for each category and graphed the results.
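The sketch below is a rough reconstruction of that process, using synthetic data purely to show the mechanics; SLIM-Metrics may fit the trend on a log scale and define the deviation differently than this ordinary least-squares version.

```python
# Rough reconstruction of the binning process described above, on synthetic
# data purely to show the mechanics. SLIM-Metrics may fit the trend on a log
# scale and define the deviation differently; this sketch uses an ordinary
# least-squares line.
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Synthetic project sample: size (FP), schedule (months), productivity (FP/PM).
size = rng.uniform(100, 2000, n)
schedule = 2 + 0.01 * size + rng.normal(0, 2, n)
productivity = rng.normal(6, 1.5, n)

# 1. Fit a regression line through the whole sample: schedule (Y) against size (X).
slope, intercept = np.polyfit(size, schedule, 1)
deviation = schedule - (slope * size + intercept)   # negative = compressed schedule

# 2. Sort projects into categories, each half a standard deviation wide.
half_sd = deviation.std() / 2
category = np.floor(deviation / half_sd).astype(int)

# 3. Median productivity (FP per person month) per category, ready to graph.
for c in sorted(set(category)):
    print(c, round(float(np.median(productivity[category == c])), 2))
```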
Q: How do your SLIM tools support function points?
A: I would say maybe a quarter of the projects in our database are function point projects. I was for many years an estimator with a large systems integrator and used function points extensively. One of the nice things about them, from an estimation standpoint, is that you can count them before coding starts, which you cannot do with SLOC. You can also leverage the function point gearing factor table on our website, which is updated every couple of years, to find the typical number of SLOC per function point for a given programming language. So yes, our SLIM tools work very well with function points.