Posted on
 

Decentralization: Worth The Wait

Ethan Zuckerman has a piece in Wired that says building decentralization tools is a sucker’s bet. He and his coauthors, Chelsea Barabas and Neha Narula, mention the FreedomBox, which I helped lead, as an example of how difficult this stuff is. They point to a list of things that make decentralized efforts prone to failure and conclude:

Our research—a combination of technical and historical analysis, and dozens of interviews with open web advocates—indicates that there is no straightforward technical solution to the problem of platform monopolies. Moreover, it’s not clear we can solve the nuanced issues of centralization by pushing for “re-decentralization” of publishing online. The reality is that most people do not want to run their own web servers or social network nodes. They want to engage with the web through friendlier platforms, and these platforms will be constrained by the same forces that drive consolidation today.

They point to a “better strategy” of policies aimed at “data portability, interoperability, and alternatives to advertising-based funding models”.

I’m no longer with FreedomBox, and none of what they write is wrong (I was one of the open web advocates they interviewed), but I wanted to chime in because there’s more to decentralization than seeking a “straightforward technical solution” and building a better social networking app. It’s true that we haven’t realized the grand vision of Diaspora and FreedomBox. They’re right that we need enlightened policy. We need the centralized platform monopolies to behave better. Those steps, though, won’t ever give people control over the means of communication. Without that control, we’ll always be at the mercy of Facebook or whatever comes next.

Redecentralizing the web is hard. FreedomBox and other projects I’ve worked with found that delivering exciting, secure, privacy-respecting apps required starting from scratch. Every piece of centralized tech depends on a bunch of other pieces that also tend toward the efficiency of centralization. Most of it is invisible to end users but it’s centralized pieces all the way down. We wanted to build a beautiful fountain but were missing basic plumbing. If you want to fundamentally change the relationships between the visible parts, you can’t just skim the surface. You have to dive deep and you need to invent all the decentralized building blocks that make up a complete web service.

It’s a little like switching cars from gas to electricity. You can’t just aim sunbeams at your fuel tank. You have to change every moving part in the vehicle, rethink gas stations, reinvent batteries, redesign a chunk of the electric grid, adopt new regulations, grow a new supply chain, and maybe fundamentally change the average person’s relationship to their car.

The internal combustion car and all the pieces of our world that relate to it benefited from billions of dollars of investment and a century of development. We’re not going to shift everybody to electric cars overnight. With all the might of the car companies, it will still take decades of small changes, each one ambitious on its own. Likewise, we’re not going to dislodge the Internet monopolies overnight with weekend coders and money begged on Kickstarter.

What we’re going to do instead is start at the bottom and build some boring infrastructure that makes other decentralized things possible. Those pieces will enable further decentralized work, which will also be boring to everybody except privacy geeks. But we’ll keep building and refining. Eventually we’ll have a towering stack of pieces, each providing some crucial bit of decentralized functionality. They’ll add up to something that, finally, some end users want.

And when that thing is built, nobody will notice. Unless you care what’s under the hood, it will be boring, like a Prius. People who use it will like that it works well and respects their privacy, but they will understand decentralization about as well as they get battery chemistry or regenerative braking.

That is what we’re building towards. All of Zuckerman’s points about this being a monumental task are true. We should pursue the policies he identifies. But we should firmly reject any suggestion that we stop building just because it’s hard and will take a while.

Report on GeoNode’s path to open source success

[Image: A map of flood zones in Haiti, rendered with GeoNode. Vulnerable areas are highlighted in red over a street base map of Port-au-Prince; a tooltip explains that "This map layer modelizes areas of frequent flooding for Port-au-Prince region."]
Haiti Flood Zone map. Source: http://haitidata.org/maps/153/view.

Recently, OTS was asked to write a report about the GeoNode project by one of its primary sponsors, the Global Facility for Disaster Reduction and Recovery (GFDRR), a global partnership that is managed by the World Bank.  GeoNode is a facility for sharing and displaying geographical information.  It is “web-based, open source software that enables organizations to easily create catalogs of geospatial data, and that allows users to access, share, and visualize that data,” as we put it in our report.

GFDRR and its Open Data for Resilience Initiative (OpenDRI) work to increase community resilience to natural disasters, in particular by sharing data.  GeoNode helps them do this by providing a way to easily share geographical data – what parts of a city are the lowest and most prone to flooding?  Where exactly has an earthquake hit, and where is assistance needed most quickly?

Why write a report?

OpenDRI began the GeoNode project in 2009, and has been a major contributor to the code and community since then.  In 2016, they decided that they’d like to contribute in another way: GeoNode as a project has been tremendously successful, and they wanted to share lessons about how to incubate an open source project.  Large institutions sometimes sponsor open source projects, but do not always have success involving partners to the extent they would like.  OpenDRI brought in major partners and is now no longer the primary contributor to GeoNode’s development.  They would like to replicate this success in future projects, and wanted to offer an example to the open source community about releasing and cultivating a project like this.  To document that example, OpenDRI asked OTS to write a report about the GeoNode project.

Our process

To complete the report, we interviewed participants in the project, from
people who had been involved from the very beginning to more recent
additions.  In general, we strove to get perspectives from people from
different levels of the contributing organizations — not just
developers, but decision-makers and funders.  We also looked back at
tangible evidence from the earlier days of the project: commits, mailing
list archives, blog posts, and documents generated from early in-person
meetings.

Results

These interviews and research resulted in 9 major lessons for organizations starting future open source projects:

  • Run as an open source project from the very beginning
  • Engage other organizations commercially
  • Focus on communications and evangelism early
  • Find and encourage the right partners
  • Invest in collaboration infrastructure
  • Hold events and sponsor attendance
  • Use funding choices as a signal to peer institutions
  • Improve user experience to attract new users
  • Change the nature of your investment as needed

Flexibility and openness were both key for OpenDRI’s success.  While GeoNode benefited from having been built on existing open source libraries and platforms, its widespread use and adoption were due just as much if not more to the openness of its community.  Its early supporters at OpenDRI shared it widely with similar organizations and worked hard to invite in new contributors.

The full report is available from OpenDRI – see their summary and a download link here, and their announcement blog post for more.

Attending the Wontfix Cabal

[Image: Pile of stickers that read "wontfix_" in green monospace type on a black background.]
Courtesy of Jess Frazelle.

GitHub hosted the “Wontfix Cabal” last week in San Francisco, and I was lucky enough to attend, thanks to a pointer from a friend.  The organizers, led by Jess Frazelle, conceived the gathering as a chance for people maintaining open source projects to discuss their particular difficulties and some strategies for addressing them.  About 100 maintainers took them up on it.  At the beginning of the day, we compiled dozens of sticky notes’ worth of problems that come up in our various projects.  These ranged from how to handle the classic question “Is this project still maintained?” to “Ethics and Exploitation in Open Source.”  In a common unconference pattern, attendees voted on those topics.  The most popular were chosen to become discussion groups, and we split up for a morning session and an afternoon session.

In the morning, I joined the “Code Review” discussion group.  At OTS, we frequently advise clients who aren’t familiar with the ins and outs of accepting and reviewing contributions from outside parties.  This group confirmed our best practices around code review, and reaffirmed its importance.  Much of this particular discussion was about projects with an overabundance of pull requests (PRs), which is of course a good problem to have.

A few themes emerged:

  • Encourage contributors to start a conversation before putting in too much time and effort.   The discussion phrased this as “open an issue before submitting a PR,” but really the goal is to avoid a situation where a contributor puts a lot of work and time into a pull request that the maintainers don’t want to accept.  If the person opens an issue for their proposed change, that gives project maintainers an opportunity to respond with reasons why they wouldn’t accept a PR with that change.  For small changes, opening a PR might be enough to start the conversation, and making a separate issue could be too much overhead.  The important thing to communicate to contributors is that they should be checking in with the project’s maintainers on the direction of their work when they begin.
  • Use GitHub’s templates for PRs and issues.  These save time by giving contributors a clear idea of what kind of information the project expects, like a DCO (Developer Certificate of Origin), tests for any new functionality, and updates to the documentation.  Templates also allow projects to use bots for initial review and response to new contributions.
  • Automate initial responses.  Responding to PRs and issues with bots saves time for the maintainers while still ensuring that contributors get a response (to retain contributors, it’s crucial to engage with them — that attention is their reward for contributing).  Another discussion group pointed out that contributors might take more kindly to being corrected by a bot than by a person.  Robots aren’t judging, so the embarrassment of getting something wrong is much reduced when it’s a bot pointing out the mistake.
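As one concrete sketch of the second point, a minimal pull request template (the `.github/` filename is GitHub’s convention; the particular fields are a common pattern, not something this group prescribed) might look like:

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE.md -->
## Related issue

<!-- For non-trivial changes, please open an issue first and link it here. -->

## Summary of changes

## Checklist

- [ ] Tests cover any new functionality
- [ ] Documentation is updated
- [ ] Commits are signed off for the DCO (`git commit -s`)
```

A matching issue template can ask reporters for version numbers and reproduction steps, which gives the response bots mentioned above something structured to check.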

In the afternoon, I joined a group on “internal versus external contributions.”  That is, what happens if an open source project is primarily sponsored by just one company?  For one thing, sometimes that company’s priorities will not line up with those of the community.  See the full notes here.  Some of the main points were relatively straightforward: make clear to contributors what kind of governance your project has, or if it has none.  Determine early whether this is primarily a community-driven or a company-driven project, and indicate that to everyone involved.  Overall, make sure that the decision-making process is clear.

One part of this discussion that surprised me was how few of the long-lived, well-known projects represented in the group had any kind of formal governance procedure.  Kubernetes was mentioned as an example of a project that does have explicit governance.  Many others, though, do not.  One recommendation that came out of this session was to research and adopt governance practices in more of these single-company-led projects.  This calls back to the determination to “be clear about the decision-making process.”  It’s fine if internal contributors have outsize influence on project direction, so long as that is communicated at the outset.

I was glad to see that this is consistent with the advice OTS typically gives our clients: don’t worry about governance too soon!  Our clients are frequently concerned about how to set up their governance structures before they have a single outside contributor.  The fact that many successful and widely used projects don’t have formal governance until their lack of it becomes a problem is just more evidence in favor of deferring these structures, which can quash early project momentum.  There is little need to worry about governance until conflicting priorities between contributor groups become apparent.  Later in a project’s life, however, more explicit governance is crucial in order to avoid forks.

Most of the conference went on outside these specific sessions.  We spent twelve hours together, talking about current projects, aspirations for our communities, opportunities for mentoring, and more.  I’m thrilled that I was able to attend — meeting the other attendees was fascinating, as was hearing about different approaches to maintainership.  Many thanks to the organizers and sponsors for making this possible.  I don’t yet know if there’ll be a follow-up next year, but if so, count me in.

 

Sharing data across Red Cross projects: the Smoke Alarm Portal and allReady


OTS has been lucky enough to work with the Red Cross of Chicago and Northern Illinois (CNI) for the past year and a half, thanks in large part to the civic data community at Chi Hack Night.  With Jim McGowan, CNI’s Director of Planning and Situational Awareness, we developed the open source Smoke Alarm Request Portal, where you can request to have a volunteer come install a smoke alarm in your home for free.

Now we’re connecting the Smoke Alarm Request Portal to allReady, an open source platform for volunteer preparedness and coordination (part of the Humanitarian Toolbox suite of disaster preparedness and prevention tools).  With help from both open source communities, the two applications will share data to simplify the process of scheduling smoke alarm installations.


The Red Cross and open source

The American Red Cross is a volunteer-driven organization: their disaster relief and prevention campaigns depend on the work of many people who are not on staff.  CNI in particular believes that this focus on recruiting, training, and involving volunteers across their organization means that they should run their software initiatives as collaborative open source projects.  Open source offers their volunteers another way to be involved in their work, and allows people who may not be able to offer time on the ground a way to contribute to the Red Cross’ mission.


Smoke Alarm Portal

[Screenshot of getasmokealarm.org]

The Red Cross currently has a goal to reduce home fire deaths and injuries in the United States by 25% by 2020 (see their resources page on home fires).  One piece of their campaign to prevent fires is free smoke alarm installation.  Anyone can sign up to receive a free smoke alarm for their home, and Red Cross staff and volunteers will bring and install one or more as needed.

CNI was already accepting smoke alarm installation requests by phone, but they (and the rest of the Northern Division) realized they could manage requests much more efficiently via the web.  They just needed a simple application where users could request a free smoke alarm.  OTS built an open source application and deployed it at getasmokealarm.org.  Visit the project on GitHub to see the code, or to get involved (as many people have).


Humanitarian Toolbox and allReady

[Image: allReady logo]

While the Smoke Alarm Portal was being developed, Jim McGowan began working with the Humanitarian Toolbox (HTBox) project on an ambitious open source solution to increase community resilience and disaster prevention efficiency.  The allReady project, as described in their announcement blog post and project page, aspires to “put disaster out of business” by increasing the resiliency of communities across America.  Its goal is to be a central application for coordinating volunteers across disaster prevention and relief campaigns for the use of non-profit groups like the Red Cross as well as town and city governments. (See the allReady GitHub page here.)


Bringing them together

Since allReady aims to be the central warehouse of all disaster prevention data for its users, it’s only natural that it should be used as an interface into the Smoke Alarm Portal.  Even before allReady is in production, OTS is working with the HTBox team to add API endpoints to both projects that will allow Red Cross volunteers to update the status of smoke alarm requests from within allReady, without affecting the workflow for people requesting smoke alarm installation.

Once this work is complete, end users will only need to interact with getasmokealarm.org, while administrative users (Red Cross volunteers and staff, in this case) can sign in to manage smoke alarm requests and other kinds of prevention work from within allReady.  If a Red Cross staffer knows that a volunteer team is going to a neighborhood to install smoke alarms, they might also be able to stop and drop off supplies for a family in the same area that recently suffered a different kind of emergency.  Once the volunteers update the smoke alarm installation status in allReady, that application will also update the status in the Smoke Alarm Portal.
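To make that flow concrete, here is a hedged sketch in Python of the volunteer-side status update. The endpoint URL, payload fields, and status values are illustrative assumptions, not the real APIs, which are still being designed in the open on both projects:

```python
# Hypothetical sketch of the status sync described above.  The endpoint
# path, payload fields, and status names are assumptions for illustration,
# not the actual Smoke Alarm Portal or allReady APIs.
import json
from urllib import request

PORTAL_API = "https://getasmokealarm.org/api/requests"  # hypothetical path

def status_update_payload(request_id, new_status):
    """Build the body a coordinating app might PUT to the portal."""
    allowed = {"new", "in_progress", "installed", "canceled"}
    if new_status not in allowed:
        raise ValueError("unknown status: %s" % new_status)
    return {"id": request_id, "status": new_status}

def push_status(request_id, new_status, token):
    """Send one status update to the portal's (hypothetical) endpoint."""
    body = json.dumps(status_update_payload(request_id, new_status)).encode()
    req = request.Request(
        "%s/%s" % (PORTAL_API, request_id),
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token},
    )
    with request.urlopen(req) as resp:
        return resp.status
```

In the integration described above, allReady would make a call along these lines when volunteers mark an installation complete, and the portal would then show the request as resolved to its own users.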


Next steps

Linking allReady with the Smoke Alarm Portal is still in progress.  We’re happy to work with anyone who wants to help improve and test the endpoints on both projects, to review documentation, and to give feedback on design.  Most importantly, though, let your local disaster prevention groups know about allReady and the Smoke Alarm Portal, and check the batteries in your smoke alarms!

OS IV&V: Independent Verification and Validation for Open Source

When you hire a development shop to build an open source product, you want to make sure the result is truly open source. You want to guarantee that:

  • The end product is independently deployable by others.
  • There are clear instructions for how to get involved.
  • Commercial third parties are welcome (because that’s usually where new development energy comes from).
  • There are no unexpected proprietary dependencies.
  • The developers respond constructively to bug reports.
  • There are procedures in place (as there should be for any software) for receiving sensitive security vulnerability reports.
  • The project is poised to become a multi-participant and even multi-vendor community.

However, first forays into open source often do not meet these goals — not because of bad intentions, but because vendors who are new to open source need some help.

Open Source IV&V provides vendors with that help. An independent vendor specializing in open source works alongside the development vendor, playing the role of the open source community from the start of the project. The IV&V vendor works with the development vendor out in the open, just as third-party participants would. By the time the first production release is ready, the development vendor knows how to navigate an open source project, technically and culturally.

OS IV&V helps expand the range of vendors you can consider hiring to do open source development, and it ensures that by the time the project reaches beta, there are at least two vendors who have technical and deployment knowledge of the code base.

The long version:

Traditionally, “Independent Verification and Validation” (IV&V)1 is a method of independent oversight for technical projects, in which a disinterested third party is brought in to test that a project’s results fully meet the specifications as set out by the client. It is most commonly used with private-sector contractors delivering technical work to a government agency. For example, NASA has been using IV&V arrangements for many years to ensure that its contractors — and its own engineers — are meeting safety and cost-effectiveness requirements.

“Open Source IV&V” is designed specifically for open source software projects. Its purpose is not only to ensure that the end product is legitimately open source software, but also that the project is created and managed in a sustainably open source way, which sometimes can include changes in how the customer and contractor relate to each other.

We have gradually designed the OS IV&V process through the repeated experience of being brought into projects after development is already under way with a primary development vendor — often one that does not have extensive experience with open source practices — and then attempting to retroactively transform the work into a real open source project, with all the benefits open source can bring.

In open source software, the result of a project is not merely the software code itself. It’s also the ecosystem around the code: all the publicly-visible discussions between developers and users, all tickets in the bug tracker (each ticket recording not just a technical fact about the software, but also a conversational dynamic between different people at different organizations), all the feedback from both automated and manual testing, contributions from interested third parties (some of these contributions may be already incorporated, others may be still pending review), a project roadmap reflecting input from multiple sources, documentation not just about using the software but about how to contribute to the software, etc.

The health of this larger ecosystem is crucial to the long-term success of the project, and to reaping the benefits of open source. What OS IV&V verifies and validates is not just the technical components of a project — code, licensing, documentation, open source collaboration infrastructure, etc — but also the contractor/customer relationship: how communication happens, how they bring in third parties as the project grows, how they handle user feedback, etc. The goal of OS IV&V is to help the customer and the development contractor structure their working relationship in a way that is compatible with the project flourishing as an open source ecosystem.

Why is OS IV&V necessary?

When a software contractor who has little or no experience running open source projects agrees to create an open source product for a customer (say, for a government agency), the contractor usually fails to adopt standard open source processes from the beginning of the project. For example, the contractor often does not ensure that development happens in the open from the start, is not prepared to integrate commercial or volunteer third-party involvement, does not use typical open source tools and procedures to receive and handle user feedback publicly (nor have proper procedures in place for receiving security vulnerability reports), does not post clear instructions for how to join the development effort, does not encourage its developers to be responsive to genuine technical inquiries, etc.

This is because most primary development contractors are accustomed to doing software development entirely in-house, and simply continue to use in-house techniques even after taking on an open source contract. They use an internal bug-tracking system from which tickets cannot be made public, they use internal code repository servers because that’s what their developers are set up to use, they write documentation in proprietary formats aimed at a primarily in-house audience, and so on. Most importantly, they and their customers tend to communicate almost exclusively in private about the projects they work on together. A typical workflow is that the customer submits feedback by sending private email to some designated contact point at the contractor, which results in a private ticket being filed in the contractor’s bug-tracking system; the customer may know the bug ticket number but often not even have read-only — let alone read-write — access to the ticket itself.

To get the full benefits of open source, the contractor needs to adopt practices that may be unfamiliar, and the customer and contractor need to use an open source approach to their own communications around the project. A project that is run in a truly open source manner will be much better able to engage third-party interest, which in turn makes innovation by other companies much more likely, and naturally lowers the customer’s risk of vendor lock-in. It also, in our experience, results in higher-quality, more easily maintainable software.

The role of the OS IV&V vendor is as much to assist both sides in making these adjustments as it is to verify and validate that the adjustments have been made. A successful OS IV&V engagement results in a new way of working that is repeatable for future projects.

Elements of OS IV&V

Although OS IV&V is as much an art as a science, we have done enough of it now to extract some key ingredients. An OS IV&V vendor should ensure that:

  • The project is conducted in the open from day one:

    • Source code checked into a public repository (e.g., at GitHub)

      • Corollary: The public repository has to be the master repository. Make sure there is not some internal private repository where the real development is happening, with tranches of commits occasionally exported in a throw-it-over-the-wall fashion to the public repository.

    • Public bug-tracker

      • As with the repository, the public bug tracker must be the primary bug tracker.

    • Public documentation

      • There is often a wealth of necessary documentation that the contractor considers to be private deployment documentation, but that is actually necessary for anyone trying to deploy the project, even just for development purposes. The OS IV&V vendor needs to sniff that out and help that stuff see the light of day.

  • Communications forums are available and configured according to open source norms (e.g., publicly archived, publicly subscribable via email or web, etc)

  • The appropriate people (from the contractor and the customer) are responsive in those forums

    • Corollary: Ensure that conversations don’t get silently sidetracked into private forums where they are inaccessible to the rest of the project; make an exception for when privacy is actually necessary, of course.

  • The project does not acquire any proprietary dependencies except those explicitly required (if any) by its specifications, and the code is compatibly licensed for outbound distribution under an open source license of the customer’s choice.

  • The customer is the copyright owner of the code, unless otherwise specified, and is named as such in the copyright notices in the source tree.

  • Development procedures are properly documented, and follow open source norms

    • A third-party developer should be able to deploy the software, for testing or for development sandboxing, using only the instructions and sample data provided by the public project.

    • There is a documented path for receiving and reviewing incoming third-party contributions, whether commercially solicited, volunteer, or due to some other motivation. This includes establishing DCO (developer certificate of origin) and/or CLA (contributor license agreement) procedures as needed.

    • Automated testing and continuous integration are set up in such a way that they are available to all developers, not just to the primary development contractor.

  • Deployment procedures are thoroughly documented and follow open source norms.

    • Sample data is available, in the formats developers would expect.

  • Secure coding practices and procedures for handling vulnerability reports are in place and are followed.

  • The primary development vendor and the customer use standard OSS communications channels for as much of their communications as possible.

  • The project is compatible with any relevant open standards, or at least acknowledges the relevance of those standards and has a long-term plan for compatibility.

  • The project has the governance policies it needs at every stage. This includes both ensuring that governance structures are not baked in too early, which would risk stifling potential early contributors with too much bureaucracy, and making sure that reasonable governance structure is added as more participants join the project.

  • Normal open source practices for receiving, recording, and handling user feedback are documented and followed.

    • This includes feedback from the primary customer: it should be submitted and processed using the same channels as any other feedback. The development contractor may prioritize it differently from other feedback, of course, but that does not change where and how the customer’s feedback is submitted.

  • Potentially adverse public events are anticipated (or swiftly detected) and handled.

    • Assistance with planned events, such as launch announcements, to ensure that they keep the audience of potential contributors in mind.

    • Assistance with unplanned events, such as the discovery of a “zero day” security vulnerability, a data leak, etc.

  • Hackathons and other events that invite the public to become more involved in the project are conducted effectively. The exact assistance an OS IV&V vendor provides here can vary widely, depending on the experience and inclination of the primary development vendor, and can extend as far as the OS IV&V vendor being the chief organizer for such events, though of course the primary development vendor should still be included in whatever way they can be.

  • There is effective, high-quality mentoring provided if the project participates in the Google Summer of Code program2, the Outreachy program3, and other similar formal third-party contribution programs. Again, if the primary development vendor is inclined to take an active role in these programs, then the OS IV&V vendor can step back and merely provide assistance where needed; but the OS IV&V vendor can also directly arrange and mentor participation in the program if the customer and the primary development vendor agree.

  • The project publishes estimates of deployment costs and monthly marginal costs for hosting of typical datasets. These estimates help facilitate involvement by other commercial entities, and are difficult for the primary development vendor to provide, both because the development vendor often designed the software around skills or hosting configurations they already had in place (and is thus ill-suited to estimate what adoption would cost a newcomer), and because their potential conflict of interest is too strong in any case.

  • Deployed code matches public code. That is, when the primary development vendor is also the hosting provider, as is commonly the case, their hosting SLA must not serve as a de facto justification for a divergent internal fork. Any deployment-specific customizations must be propagated back to the public, “upstream” version in an appropriate fashion, except for traditionally non-distributed minor customizations such as configuration file parameters, passwords, etc.
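That last point can be checked mechanically. As a hedged sketch (the remote name, branch, and repository layout are assumptions, not any particular deployment’s), a script could fetch the public repository and report any divergence in the deployed checkout:

```python
# Hedged sketch of the "deployed code matches public code" check described
# above.  The remote name ("upstream") and branch are assumptions; a real
# deployment's layout will differ.
import subprocess

def divergence(deploy_dir, upstream_ref="upstream/main"):
    """Return the diffstat between a deployed checkout and the public branch.

    An empty string means the deployment carries no unpublished changes
    (aside from untracked files such as configuration and passwords).
    """
    # Refresh our view of the public repository first.
    subprocess.run(
        ["git", "-C", deploy_dir, "fetch", "--quiet", "upstream"],
        check=True)
    # Compare the deployed working tree against the public branch.
    result = subprocess.run(
        ["git", "-C", deploy_dir, "diff", "--stat", upstream_ref],
        check=True, capture_output=True, text=True)
    return result.stdout.strip()
```

An empty result means nothing in the deployment needs upstreaming; any other output is a candidate for propagation back to the public repository.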

  • Structuring an OS IV&V Relationship

    The OS IV&V vendor reports to customer, not to any of the development contractors, and only the customer decides whether the OS IV&V vendor is fulfilling their responsibilities. This is crucial: if the OS IV&V vendor were instead responsible to a development contractor, it could not provide the independent verification and unbiased advice that is the core of OS IV&V’s value.

    This does not affect contracting relationships. It is fine, for example, for an OS IV&V vendor to contract through the development vendor simply as a prime contracting vehicle, or for both to subcontract through some other vendor as the prime, as long as the reporting relationship is clear: OS IV&V reports to the customer and no one else.

    The OS IV&V vendor is also best positioned to assist the customer in higher-level planning related to the project, such as drafting extensions to the development contract, writing future RFIs and RFPs, and budgeting for post-deployment maintenance and enhancement, because the OS IV&V vendor has far fewer conflicts of interest in these matters than the primary development vendor.

    The OS IV&V vendor can also assist with the presentation of the project to potential partners, including to the local development community (particularly useful in large municipalities), or to relevant open source or civic technology organizations. This also includes assistance with staffing and contractor evaluation, since the OS IV&V vendor’s work remains largely the same no matter what the balance is between customer-side in-house technical staff and the staff on the development contractor’s side. In general, OS IV&V may be thought of as an independent body whose role is to evaluate other participants in the open source ecosystem.

    OS IV&V As Backup Support Vendor

    It is okay for an OS IV&V vendor to have the potential to become an independent provider of commercial support for the project themselves4. The best OS IV&V vendor is not just a management-consulting shop with enough in-house development knowledge to provide technically competent oversight, but rather a software development shop in its own right, with direct experience performing the kinds of contracts it is now being asked to oversee. This has several advantages. First, it’s easier to communicate credibly with the primary development vendor when one understands their line of work intimately. Second, it’s easier to anticipate the needs of potential third-party contributors when one has been in that position oneself. Third, from the customer’s point of view, part of the purpose of OS IV&V is that by the time of product launch, there will be two independent vendors who could deploy and support the software on a commercial basis. If the original development vendor is performing satisfactorily, then their contract can simply be extended. But if they are not, then there is an alternative readily at hand.

    Footnotes

    1 See https://en.wikipedia.org/wiki/Verification_and_validation for more discussion of IV&V in general and its history.

    2 https://summerofcode.withgoogle.com/

    3 https://www.outreachy.org/

    4 Though it is of course advisable to drop their OS IV&V role with a customer once they start providing direct project-related services to that customer.

    Posted on
     

    What OTS Does.

    When I tell people that I work at an open source consulting company, I often get puzzled glances (or those agreeable nods that mean “I have no idea what you’re talking about”). What does “open source consulting company” mean? What does OTS do, exactly?

    Who are we?

    At Open Tech Strategies we do open source development and process consulting for corporations, government agencies, and non-profits who want to produce or use open source software more effectively.

    Okay, but what does that mean?

    Why open source?

    Formally, “open source” just means software code released under a free license. But publishing code that way often turns out to have very powerful effects, so a more useful way to think of open source is as a way of aligning your organization’s goals and its production processes. When you are trying to extend the life or the applicability domain of a software project, improve transparency and accountability in your organization, collaborate more effectively with partners or volunteers, increase user engagement with a service, promote user privacy, or grow a vendor ecosystem, open source provides a ready-made set of practices and promises that enable your natural allies to find & join you in the effort.

    Open sourcing a project doesn’t necessarily mean giving up control, except insofar as you decide that doing so better serves your goals, but it makes it possible for a project to grow beyond the interest or budget of just one organization; we’ll look at some examples of that in later blog posts. And building on open source software means that organizations don’t need to pay to re-invent technological wheels like content management or databases and can focus on their own specific needs.

    (See also Ben Balter’s excellent post “Why open source” for a more in-depth look at why organizations choose open source.)

    So that’s open source. What exactly does OTS do, though?

    Usually, I describe our work as falling into one of a few different situations: building, opening, and buying.

    Example Client #1: Builder

    This first example client, Builder, needs a piece of software to complete their mission work. They might be a non-profit fighting homelessness or youth violence or a corporation that needs a new research tool. Builder decides to use an open source custom-built solution for reasons of cost-effectiveness, transparency, or one of the other points I mentioned above. Over the course of an agile development process, we meet with the end users of the product and involve them in testing as soon as there’s anything to test at all. All of our development takes place in the open, and we maintain install documentation so that anyone could set up a sandbox instance of the project to try out. When this process is done, Builder has a working piece of software in production that can be set up and maintained by any software development team using our open code and documentation. Their partners can set up the same system, extend it, and share any enhancements they make with Builder. We often continue to work with the customer beyond this point, of course, but the software’s “developer surface area” is now much greater than it would have been were it proprietary: both we and Builder have the ability to welcome new partners into the project, and being able to exercise that ability has proven quite useful.

    Example Client #2: Opener

    Another kind of client, Opener, comes to us with an existing software product. Possible Openers are a data visualization and outreach engine for astronomers or a transparency tool for governments. While the software product is not the entirety of Opener’s mission, it is a crucial part of the work and face of the organization. For Opener, we focus on code resiliency, partner collaboration, contributor engagement, and (where applicable) development of a commercial ecosystem around the project. When we complete our work with Opener, they have a usably open source project that has clearly documented code, findable discussion forums on the web, an issue tracker in use, and all the other affordances of a well-arranged open source project. The project is well-positioned to invite new collaborators and engage users.

    Example Client #3: Buyer

    A third type of client can be called Buyer. This client is an organization that wants to use and/or contribute to open source software, but needs advice on adapting their procurement processes and on how to invest strategically in software technology. Buyer might be a government agency that needs to set up definitions and standards around procuring and using open source software, or could be a foundation or corporation that wants to invest in open source but needs some advice on how to select and support the right projects. For Buyer, OTS researches the state of the field in their area of interest and provides recommendations for investment and/or procurement routes. When we complete our work with Buyer, we often deliver a highly specific report on the best ways they can use or invest in OSS. They are prepared to interact with open source in the way that best advances their strategic goals.

    That’s the story (mostly).

    Naturally, not everything we do falls neatly into these categories of building, opening, and buying open source software. For example, we also offer advice for companies that want their employees to be able to contribute to upstream open source projects, and help open source projects draft CLAs or think through their business strategies. If you’re a Builder, Opener, or Buyer, or just have questions, feel free to reach out. Thanks for reading!