My OSS CI/CD Pipeline

As far back as I’ve been doing open source, I’ve borrowed other projects’ build scripts. Because build scripts are almost always committed to source control, you get to see not only other projects’ code, but how they build, test, and package that code as well.

As with any long-lived project, I’ve changed the build process for my projects more times than I care to count. AutoMapper, as far as I can remember, started off on NAnt (yes, it’s that old).

These days, I try to make the pipeline as simple as possible, and AppVeyor has been a big help with that goal.

The CI Pipeline

For my OSS projects, all work, including my own, goes through a branch and pull request. Some source control hosts allow you to enforce this behavior, including GitHub. I tend to leave this off on OSS, since it’s usually only me that has commit rights to the main project.

All of my OSS projects are now on the soon-to-be-defunct project.json; they’re either strictly project.json or a mix, with the main project on project.json and the others on regular .csproj. Taking MediatR as the example, it’s entirely project.json, while AutoMapper has a mix for testing purposes.

Regardless, I still rely on a build script to execute a build that happens both on the local dev machine and on the server. For MediatR, I opted for just a plain PowerShell script that I borrowed from projects online. The build script really represents my build pipeline in its entirety, and it’s important to me that this build script actually live as part of my source code and not be tied up in a build server. Its steps are:

  • Clean
  • Initialize
  • Build
  • Test
  • Package

Not very exciting, and similar to many other pipelines I’ve seen (in fact, I borrow a lot of ideas from Maven, which has a predefined pipeline).

The script for me then looks pretty straightforward:

# Clean
if(Test-Path .\artifacts) { Remove-Item .\artifacts -Force -Recurse }

# Initialize
EnsurePsbuildInstalled

exec { & dotnet restore }

# Build
Invoke-MSBuild

# Test
exec { & dotnet test .\test\MediatR.Tests -c Release }

# Package
# Use the AppVeyor build number when available (otherwise default to 1), then left-pad to four digits
$revision = @{ $true = $env:APPVEYOR_BUILD_NUMBER; $false = 1 }[$env:APPVEYOR_BUILD_NUMBER -ne $NULL];
$revision = "{0:D4}" -f [convert]::ToInt32($revision, 10)

exec { & dotnet pack .\src\MediatR -c Release -o .\artifacts --version-suffix=$revision }

Cleaning is just removing an artifacts folder, where I put completed packages. Initialization is installing required PowerShell modules and running a “dotnet restore” on the root solution.

Building is just MSBuild on the solution, executed through a PowerShell module; MSBuild defers to the dotnet CLI as needed. Testing is the “dotnet test” command against xUnit. Finally, packaging is “dotnet pack”, passing in a special version number I get from AppVeyor.

As part of my builds, I include the incremental build number in my packages. Because of how SemVer works, I need to make sure the build number sorts alphabetically, so I pad the build number with leading zeroes, and “57” becomes “0057”.
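
A quick illustration (not part of the build script) of why the padding matters when versions sort as strings:

# As plain strings, "beta-100" sorts before "beta-57", which is the wrong order;
# zero-padded build numbers sort the way you'd expect.
'beta-100', 'beta-57' | Sort-Object      # beta-100, beta-57
'beta-0100', 'beta-0057' | Sort-Object   # beta-0057, beta-0100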

In my project.json file, I’ve set the version up so that the version from the build gets substituted at package time, while project.json itself determines the major/minor/revision:

{
  "version": "4.0.0-beta-*",
  "authors": [ "Jeremy D. Miller", "Joshua Flanagan", "Josh Arnold" ],
  "packOptions": {
    "owners": [ "Jeremy D. Miller", "Jimmy Bogard" ],
    "licenseUrl": "https://github.com/HtmlTags/htmltags/raw/master/license.txt",
    "projectUrl": "https://github.com/HtmlTags/htmltags",
    "iconUrl": "https://raw.githubusercontent.com/HtmlTags/htmltags/master/logo/FubuHtml_256.png",
    "tags": [ "html", "ASP.NET MVC" ]
  }
}
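
For example, with a wildcard version like “4.0.0-beta-*” and AppVeyor build number 57, the pack step should produce something like this (hypothetical file name):

# The asterisk in the project.json version is filled in by --version-suffix
dotnet pack .\src\MediatR -c Release -o .\artifacts --version-suffix=0057
# -> .\artifacts\MediatR.4.0.0-beta-0057.nupkg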

This build script is used not just locally, but on the server as well. That way I can ensure I’m running the exact same build process reproducibly in both places.

The CD Pipeline

The next part is interesting: I’m using AppVeyor as my CI/CD pipeline, with different behavior depending on how it detects changes. My goals for the CI/CD pipeline are:

  • Each pull request gets built, and I can see its status inside GitHub
  • Pull requests do not push a package (but can create a package)
  • Merges to master push packages to MyGet
  • Tags to master push packages to NuGet

I used to push *all* packages to NuGet, but what wound up happening is that I had to move slower with changes, because things would just “show up” for people before I had a chance to fully think through what I was exposing to the public.

I still have pre-release packages, but these are a bit more thought out than they have been in the past.

Finally, because I’m using AppVeyor, my entire build configuration lives in an “appveyor.yml” file kept in source control. Here’s MediatR’s:

version: '{build}'
pull_requests:
  do_not_increment_build_number: true
branches:
  only:
  - master
nuget:
  disable_publish_on_pr: true
build_script:
- ps: .\Build.ps1
test: off
artifacts:
- path: .\artifacts\**\*.nupkg
  name: NuGet
deploy:
- provider: NuGet
  server: https://www.myget.org/F/mediatr-ci/api/v2/package
  api_key:
    secure: zKeQZmIv9PaozHQRJmrlRHN+jMCI64Uvzmb/vwePdXFR5CUNEHalnZdOCg0vrh8t
  skip_symbols: true
  on:
    branch: master
- provider: NuGet
  name: production
  api_key:
    secure: t3blEIQiDIYjjWhOSTTtrcAnzJkmSi+0zYPxC1v4RDzm6oI/gIpD6ZtrOGsYu2jE
  on:
    branch: master
    appveyor_repo_tag: true

First, I set the build version to be just the build number. Because my project.json file drives the package/assembly version, I don’t need anything more complicated here. I also don’t want any other branches built, just master and pull requests. This makes sure that I can still create branches/PRs inside the same repository without being forced to use a second repository.

The build/test/artifacts sections should be self-explanatory: I want everything flowing through the build script, so I don’t want AppVeyor discovering things and trying to figure them out itself. Explicit is better.

Finally, the deployments. I want every package to go to MyGet, but only tagged commits to go to NuGet. The first deploy configuration is the MyGet one, which deploys only on master to my MyGet feed (with a user-specific encrypted API key). The second is the NuGet configuration, which deploys only if AppVeyor sees a tag.

For public releases, I:

  • Update the project.json as necessary, potentially removing the asterisk for the version
  • Commit, tag the commit, and push both the commit and the tag.
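
In git terms, those two steps look roughly like this (the version number and tag name are just examples):

git commit -am "Prep 4.0.0 release"
git tag v4.0.0
git push origin master
git push origin v4.0.0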

With this setup, the MyGet feed contains *all* packages, not just the CI ones. The NuGet feed is then just a “curated” feed of the more official packages.

The last part of a release, publicizing it, is a little bit more work. I still like GitHub releases, but I haven’t yet found a great way of automating a “tag the commit and create a release” process. Instead, I use the GitHubReleaseNotes tool to create the markdown for a release based on the tags I apply to my issues for that release. Finally, I’ll make sure that I update any documentation in the wiki for the release.

I like where I’ve ended up so far, and there’s always room for improvement, but it’s a far cry from when I used to have to manually package and push to NuGet.

About Jimmy Bogard

I'm a technical architect with Headspring in Austin, TX. I focus on DDD, distributed systems, and any other acronym-centric design/architecture/methodology. I created AutoMapper and am a co-author of the ASP.NET MVC in Action books.
  • It should be noted that `exec` in your powershell script is part of psake.
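
    (Roughly, that helper runs a script block and throws if the command exits with a non-zero code; a minimal stand-alone sketch, not psake’s exact implementation:)

    function exec {
        param(
            [scriptblock]$cmd,
            [string]$errorMessage = "Error executing command: $cmd"
        )
        & $cmd
        if ($lastexitcode -ne 0) { throw $errorMessage }
    }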

  • Hope those were not your real myget and nuget keys

    • jbogard

      Nope :)

  • I find FAKE scripts much more usable.

    • jbogard

      Yeah, our actual client projects are starting to use FAKE, so far so good.

      OSS stuff, I’m trying to make things as simple as possible, removing as many dependencies as possible. Anything to lower the bar for collaboration.

      • I see the concern, hope one day more people will prefer F# over PS… I gradually improved my “service” FAKE script to include SemVer, AssemblyInfo generation, nuget packages pack and push (using paket) and deploying via Octopus. I am now able to use one script for nearly all projects without any modification, really pays off on many independent components that are being built and deployed continuously.

        • jbogard

          Oh god, I don’t actually like PS. It’s an abomination imo.

          I did try something though, which was “how far could I get with raw powershell before it hurt”. For my simple OSS, the plain ol’ script works.
          But for AutoMapper, it uses psake, the build is a tad more complex. That one I’m planning on converting to FAKE.

          • I did not mean you, you wrote “lower bar for collaboration”, meaning that PS is easier for some people. That’s it :)

          • jbogard

            Ah yeah I gotcha now. It probably doesn’t matter too much – not many people mess around with build scripts. Taking on a dependency is a big deal for me in OSS, it’s just another barrier.

            When I introduced DB tests, it required localdb, and I still get issues that people don’t have the right version of localdb installed etc. So the more bare-metal the better, if I want to encourage collaboration.

  • Tim

    Do you use any desktop build notification tools like Catlight / CCTray, or the builds are so fast that you don’t need them?

    • jbogard

      Nah, we’ve got our builds hooked up to HipChat.

  • Harry McIntyre

    I’m trying to apply the ‘servers should be cattle not snowflakes’ concept to my applications these days.

    Rather than storing the build scripts in the repo alongside the application, I’m putting the build scripts in their own repository, then in the CI build, including that repo as a second VCS root under a subfolder.

    The build scripts then can be passed the ProjectName (and the branch/env name and custom values), and find the {projectName}.sln etc. and carry out the build/deployment.
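
    (A rough sketch of what such a conforming entry point might look like; the names and commands here are hypothetical:)

    # Hypothetical conforming entry point in the shared build-scripts repo;
    # assumes the working directory is the application checkout root.
    param(
        [Parameter(Mandatory = $true)][string]$ProjectName,
        [string]$BranchName = 'master'
    )

    $solution = ".\$ProjectName.sln"   # convention: solution named after the project
    if (-not (Test-Path $solution)) { throw "Expected $solution to exist" }

    & nuget restore $solution
    & msbuild $solution /p:Configuration=Release
    if ($LASTEXITCODE -ne 0) { throw "Build failed for $ProjectName ($BranchName)" }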

    The apps end up conforming to the build process, rather than the other way round, and it becomes a lot easier to understand the application architectures. (Rather than having to examine every single application’s source tree, PowerShell scripts/TeamCity/Jenkins/Octopus etc. configuration, to grok how they work.)

    Currently the applications I’ve been doing this with are very similar, but I envisage that I will end up with several standard buildpacks for differently structured apps.

    • jbogard

      Hmmm that’s interesting. So like a git submodule sort of thing?

      • Harry McIntyre

        TeamCity lets you add a second VCS root under a folder so you don’t need a submodule. You could also add a single build step to just do ‘git checkout-index’ (I think) to get a clean copy of the build scripts if your CI server didn’t support it.

        I wouldn’t want to use a submodule as that would tie the repo to a particular set of scripts/revision.

        • jbogard

          Ah right. What’s the advantage of having a second repo for build scripts, vs the ones in your app? Standardization?

          • Harry McIntyre

            Yup, don’t underrate it!

            I find it plain confusing when I start a new contract and the org has 50 different applications, each with their own build/deploy configs with logic and settings scattered across source code and TC/Octopus. I end up spending a lot of time digging through the particulars of each app; often there are many uniquely configured ways of achieving the same result.

            Also, as the scripts handle deployment, it makes it dead easy for people to create a new application/ *sigh* microservice. Clone a template, add the build config to CI and add the appropriate BuildScript folder and you’re away.

            The scripts themselves can be made pretty clever, doing things like
            – creating and hosting a web app or window service if appropriately named projects are present
            – if present, creating an app db and running migration scripts
            – doing CD with blue/green against whatever target you like (mine run against an IIS webfarm, but you could do docker or whatever)
            – checking the app for a /swagger.json and generating and publishing an SDK Nuget package

            Doing all that consistently per-app can get tough.

            I do appreciate what you were saying in your other post on how microservices aren’t about PAAS, but as the number of apps increase, the number of “facts” in your system can explode. There’s only so much space in my head!

          • Asad

            This is very interesting. I’ve felt this pain myself and flounder around every time I have to set up a new internal project with CI. Would you have any of these uniform scripts / conforming apps available as a public repo?

          • Harry McIntyre

            I have a repo of common scripts which I have forked and used to build standard processes at clients (generally using a single Teamcity template which is used by each project). The Teamcity projectname environment variable is used everywhere to discriminate environments.

            https://github.com/mcintyre321/BuildScripts

            The problem which arises with a totally generic build and deploy repo is that different places are using different target platforms (TC vs Jenkins, SQL Server vs db, octopus vs msdeploy vs ftp, IIS + ASP.NET vs docker + OWIN vs azure websites).

            I can’t quite decide which particular set to target for the repo! We can discuss further in a github issue if you like.

  • Thanks for sharing this. I haven’t used the appveyor.yml yet, but I now realize I don’t need separate AppVeyor builds just to separate PRs from merges to master. The problem is that my build script is doing the NuGet publishing, but if I move this responsibility to AppVeyor, I can drop one build. Nice.

    Regardless, did you consider using PSake?

    • jbogard

      I use psake when my build script gets longer than a single screen. That usually means I need the complexity of tasks etc.