SOS Open Source: Save Our Souls from Open Source assessments

The inclusion of open source solutions and developments in software qualification and selection methodologies is not a new research topic. Different approaches have existed for many years, including QSOS by Atos Origin, OSMM by Bernard Golden, OSMM by Cap Gemini, EOS by Optaros, OpenBRR, OSS Watch, IRCA by David Wheeler, Ohloh (recently acquired by Black Duck Software), and others. Some of them are well-structured methods; others provide tools for quality evaluation or a list of metrics to be considered in an open source assessment.

The real problem to solve is to find an approach that is:

  • understandable (well-presented and easy-to-understand results, along with the supporting information)
  • easy to use (well-identified metrics based on effective data that can be efficiently gathered and verified by evaluators)
  • objective (as far as possible), grounded in a rigorous and scientific method.

The QualiPSo project has put considerable effort into this last point, producing both MOSST, the Model of Open Source Software Trustworthiness, to evaluate the quality of open source products, and OMM, the Open Maturity Model, to evaluate the quality of the open source development process. These approaches are admittedly complex (complexity is a characteristic of all quality assessment models), but they will certainly improve over time in terms of understandability and ease of use. Above all, they have contributed to the openness of information (publication and sharing of supporting information), the specification of reference metrics aimed at achieving general consensus, and a strict, scientific approach to the assessment method (metrics accounting and evaluation).

Nowadays many organizations are focusing on OSS quality: open source communities such as the OW2 Consortium (which has just launched the SQuAT initiative), as well as enterprises and public administrations willing to adopt open source solutions. This is a consequence of open source becoming mainstream and of the maturity achieved by many OSS solutions.

Authoritative IT research organizations (such as Gartner and Forrester) are putting more effort into evaluating OSS solutions (or including OSS solutions in their market evaluations), mainly because their customers ask for it. However, they often adopt a method suited to the evaluation of proprietary solutions, without giving proper weight to the characteristics specific to OSS. Individual consultants are also entering this market.

It is now time to be very careful when reading and evaluating the information they provide. It is usually summarized in posts, tweets, and wikis, with no supporting information: the risk is that evaluations are passed along out of their original context, or that they lack comprehensiveness, lowering the value and correctness of the assessment.

A clear recent example is the SpagoBI evaluation by SOS Open Source, a new approach (a QSOS customization) whose author has only very recently disclosed some information about the metrics he adopts (crucial information needed to understand the evaluation method is still missing, such as how metrics are aggregated, the relative weight of the different metrics, etc.).
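To see why this missing information matters, consider how QSOS-style methods typically work: each criterion is scored on a 0-2 scale and the scores are combined through a weighted average, so the choice of weights alone can change a ranking. Below is a minimal sketch of such an aggregation; the criteria, scores, and weights are hypothetical illustrations, not the assessor’s actual ones.

    # A minimal sketch of QSOS-style weighted metric aggregation.
    # Illustrative only: the actual criteria, weights, and aggregation
    # rules used by SOS Open Source have not been published.
    from typing import Dict

    def aggregate(scores: Dict[str, int], weights: Dict[str, float]) -> float:
        """Weighted average of criterion scores on the QSOS 0-2 scale."""
        total_weight = sum(weights[name] for name in scores)
        return sum(scores[name] * weights[name] for name in scores) / total_weight

    # Hypothetical criteria, scores, and weights -- not the assessor's real ones.
    scores = {"maturity": 2, "community": 1, "documentation": 2, "support": 1}
    weights = {"maturity": 3.0, "community": 2.0, "documentation": 1.0, "support": 2.0}

    print(f"Aggregated score: {aggregate(scores, weights):.2f}")  # prints 1.50

With different weights (say, emphasizing community over maturity), the same raw scores produce a different verdict, which is exactly why undisclosed aggregation rules make an assessment hard to verify.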

Knowing both the topic and the solution, I can give two examples of wrong and misleading information found in the evaluation summary posted on the evaluator’s blog.

It says:

  • “The project […] is not enlisted among OW2 top 10 downloads.” What a mistake! SpagoBI is usually in the top 10 weekly downloads (the assessor was probably unlucky in finding the right figures!). By simply asking the OW2 community, he could have learned that SpagoBI is among OW2’s most downloaded solutions every year. Moreover, how does this information about a community top-ten list relate to the metrics of the evaluation method?
  • “SpagoBI full support is priced at 25.000 euro per year.” Yes, that is the full price for supporting the entire suite, including the suite core and 17 engines. But there is no mention of the fact that the SpagoBI support price is customizable (users can build and price their SpagoBI maintenance services according to their specific needs, starting from an entry level of about €10,000 and raising it only when they increase the number of engines for which they require maintenance services). Moreover, there is no mention of the fact that the price applies to a single project, with an unlimited number of CPUs and users. This can make a real difference when compared with other solutions.

I won’t go into further detail (except to point out that the evaluation says nothing about business intelligence functional coverage, a crucial consideration for adopters in any functional domain). My main remark is that this is a subjective evaluation, resting mostly on the assessor’s reputation, and its value reflects its nature: a blog post (a personal opinion, just like the one I am expressing here).

SOS from OSS assessments

Playing with acronyms: a tool that was born to offer users an SOS, a way out of this issue, could itself turn into a call for help.

Quality assessment is a tricky matter. In enterprise environments, assessments are carried out by highly qualified organizations using specific, well-documented methods and interacting with producers; they must demonstrate scientific rigor in applying the methodology, as well as the independence of both the method and its application.

With open source software, the complexity scales up. This is a consequence of the additional “open source” parameters to be taken into account (community, distributed development, reputation and adoption of the solutions, etc.), of the risk implied in an evaluation carried out on public data without the producers’ involvement (which, on the other hand, is one of the greatest opportunities offered by OSS), and of the lack of independent assessment teams and of globally accepted assessment methods.

Final recommendations

To assessors:

  • verify your information with OSS producers when possible; without compromising your independence, you can obtain explanations and add important details, improving your assessment and its credibility.

To users:

  • pay attention to the metrics and to the completeness and legitimacy of the evaluation’s supporting information. A closed approach leads to a subjective evaluation. Remember that it is you who will make the decision, not the assessor: he will only provide you with a set of supporting information.
  • open source software cannot be compared using exactly the same approach as proprietary software evaluations; in OSS assessments you must look at a solution’s weaknesses, but also at the strengths and specific characteristics that differentiate the solutions on the market.

3 Replies to “SOS Open Source: Save Our Souls from Open Source assessments”

  1. The SpagoBI summary report points to 22 public internet resources, including the OW2 forge, the Ohloh meta-forge, and the project’s and vendor’s web pages.

    All the information provided is neither wrong nor weak, but based solely on verifiable evidence (including, of course, the top weekly downloads and the pricing, both retrieved on the 14th of December 2010).

    QSOS proved to be a valid concept, and we selected 24 metrics that were both usable and practical, eventually democratizing the business of open source assessment.

    SOS Open Source actually provides only tools to retrieve dispersed information in a fast and objective way; how the information is automatically collected is the only secret sauce there is.

    SOS Open Source doesn’t rely on the assessor’s ability or judgment; in fact, it provides the assessor with material and suggested grades, so that you don’t need to be an expert to identify interesting open source candidates (though you still need to spend time and effort to try out functionality and performance, and to test security).

    So far, a few open source vendors have asked for an assessment, but full reports (available only to our customers) are not always disclosed. Sometimes (in fact, frequently) vendors realize that they need time to refine their open source strategies, e.g. better nurturing their communities, enabling stakeholders to contribute more to the planning process, or reacting faster to bugs, and in these cases the publication of excerpts is postponed.

  2. A faithful exposition of the data would be welcome in the future. Just a small example: “it is not enlisted among OW2 top 10 downloads on the 14th of December 2010” (even though this is misleading information that everyone can verify by looking at SpagoBI’s usual presence in the top 10 list, and even though it adds nothing about popularity in the BI domain).

  3. OW2 presents download numbers on a weekly basis; we can’t help it if some forges don’t make all statistics available to the public (something OW2 could probably address for better transparency).

    SpagoBI’s popularity in the BI domain has been analyzed using Google Trends, though. Readers can always make up their minds based on the resources referred to: 22 links (including the ones mentioned above).

    What you call subjective is clearly supported by a large evidence base.
