Fellow Salesforce developers, you know the drill: you likely have a team of developers, a production instance of Salesforce, a full copy sandbox you’ve named QA (because that’s its rightful place), maybe a dev integration sandbox, and, if you’re lucky, each developer gets their own sandbox…copacetic, right? It allows each developer to pull down the metadata into their trusty IDE and write some code that compiles and works correctly…the first time!
How could you ever need anything more? Well, if you’re like me, you know that the most tedious part of this process is the actual deployment of all of that metadata (both code and configuration) to each environment. In this article I’m going to outline a few approaches that make the entire process less onerous and let you get back to focusing on code rather than on how to deploy it.
At a high level there are two options to be aware of: the changesets we all know and love, and the Metadata API. Changesets are the well-known, out-of-the-box option for admins and small teams. Aside from the horrendous changeset UI, they do offer a relatively easy way to handle small deployments without requiring any further technical knowledge. That said, changesets can quickly become the bane of one’s existence as deployments increase in scale. For what it’s worth, there is a Chrome extension, Boostr for Salesforce, that allows you to do wildcard searches and speed up the process. If you’re still reading this article, you know there must be a better way.
Enter our lowly workhorse, the Metadata API. Changesets actually use this API behind the scenes, and it allows us to retrieve, deploy, create, update, or delete metadata components. This API is the “automation saving grace” that will save you from the expletive-filled adventures that accompany enterprise-scale changesets. It can be used via your IDE or the Force.com Migration Tool, built on Apache Ant.
That said, in my experience the underlying difficulty, regardless of how these deployments are performed, is headache management: avoiding accidental overwrites of other developers’ changes. This is simply the reality of multiple hands in the cookie jar at once.
Managing Headaches!
Fret not, for the next few paragraphs tackle those headaches directly and give you a couple of examples of the best ways to mitigate risk and manage these challenges in the deployment lifecycle. I’ve had success in the past with two specific ways of automating this process. I’ll pause here to mention that a prerequisite beyond this point is a proper version control system. Many Salesforce teams that I’ve encountered over the years loathe the thought of version control. “It’s all in the cloud though, it’s already backed up in my sandbox!” Stop. It’s 2017. Go read ALM: Using Version Control and A successful Git branching model, understand why version control is necessary, and come on back when you’re ready.
Now that we’ve gotten that out of the way, I personally recommend either GitHub or Bitbucket and a proper branching strategy. One of the most common is nvie’s gitflow.
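To make the branching strategy concrete, here’s a minimal sketch of cutting a gitflow-style feature branch. The branch names (develop, feature/…) follow nvie’s convention; the helper name start_feature is purely illustrative:

```shell
# Hypothetical helper illustrating nvie's gitflow: feature branches are cut
# from develop and merged back via pull request.
start_feature() {
    name=$1
    git checkout develop &&
    git checkout -b "feature/${name}"
    # ...commit work here, then publish the branch and open a PR
    # against develop, e.g.: git push -u origin "feature/${name}"
}
```

Running `start_feature awesome-widget` leaves you on `feature/awesome-widget`, branched from develop and ready for a pull request when the work is done.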
Force.com Migration Tool:
Using the Force.com Migration Tool to deploy (or undeploy) changes is good for repetitive deployments using the same parameters. The only requirements are basic command line skills, the Java JDK, and Apache Ant.
A deployment will consist of the following items:
1. A build.properties file: contains credentials. Be aware that in an organization that doesn’t allow storing credentials in plain text, this won’t pass security checks.
2. A build.xml file: contains a list of applicable tasks, or targets: deploy, retrieve, run all tests, run targeted tests, etc.
3. The package.xml (manifest): contains a list of the components to deploy.
4. Finally, the actual metadata files to be added to the deployment package, including each component’s -meta.xml file.
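Putting those pieces together, a minimal build.xml might look like the sketch below. The target names (deployCode, retrieveCode) and the sf.* property names are illustrative, not prescribed; the sf:deploy and sf:retrieve tasks come from the ant-salesforce.jar that ships with the Migration Tool:

```xml
<!-- A minimal sketch. build.properties would hold, e.g.:
     sf.username = you@example.com.qa
     sf.password = passwordPlusSecurityToken
     sf.serverurl = https://test.salesforce.com -->
<project name="Salesforce deployments" default="deployCode"
         basedir="." xmlns:sf="antlib:com.salesforce">
    <property file="build.properties"/>

    <!-- Deploy everything under src/ (which contains package.xml) -->
    <target name="deployCode">
        <sf:deploy username="${sf.username}" password="${sf.password}"
                   serverurl="${sf.serverurl}" deployRoot="src"
                   runAllTests="true"/>
    </target>

    <!-- Pull the components listed in package.xml down into src/ -->
    <target name="retrieveCode">
        <sf:retrieve username="${sf.username}" password="${sf.password}"
                     serverurl="${sf.serverurl}" retrieveTarget="src"
                     unpackaged="src/package.xml"/>
    </target>
</project>
```

With this in place, `ant deployCode` runs the same deployment with the same parameters every time.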
A strength of this process is the repetitive nature of the tasks: it’s going to execute the same way each time, which gives deployments consistency and reliability.
For those engineers who aren’t comfortable with the command line or deployments, a build system like Jenkins can be leveraged in its place. Out of the box, Jenkins provides support for executing Ant commands, and with the simple addition of the ant-salesforce.jar file we can execute Salesforce Ant tasks within Jenkins.
There are still limitations to deploying this way without building additional scripts. We can customize the process by creating scripts that do the following:
Build a script that takes the package.xml file and creates a deployment, moving all necessary metadata files (as shown in step 4 above)
Build a script that diffs two branches and creates the package.xml
This requires some additional overhead but is definitely worth the effort when it’s complete: true end-to-end deployment automation. These steps work in concert with your branching strategy, but those are topics for a future post.
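As a sketch of the second script, here’s a minimal shell function that diffs two branches and emits a package.xml covering the Apex classes that changed. It only handles ApexClass members and skips deletions (those belong in a destructiveChanges.xml); the function name and API version are illustrative assumptions:

```shell
# Hypothetical sketch: emit a package.xml listing the Apex classes (*.cls)
# that differ between two branches. --diff-filter=d excludes deleted files,
# which can't be deployed and would need destructiveChanges.xml instead.
build_package_xml() {
    base=$1 head=$2
    printf '%s\n' '<?xml version="1.0" encoding="UTF-8"?>' \
        '<Package xmlns="http://soap.sforce.com/2006/04/metadata">' \
        '    <types>'
    git diff --name-only --diff-filter=d "$base" "$head" -- '*.cls' |
    while read -r path; do
        printf '        <members>%s</members>\n' "$(basename "$path" .cls)"
    done
    printf '%s\n' '        <name>ApexClass</name>' \
        '    </types>' \
        '    <version>38.0</version>' \
        '</Package>'
}
```

Something like `build_package_xml develop feature/my-feature > src/package.xml` then hands the manifest straight to the Migration Tool; a fuller version would map each metadata folder to its type the same way.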
Open Source options using the Metadata API:
Finding the time to create build scripts and set up configurations might seem like a daunting task, and you aren’t alone. Luckily, there are some open source tools out there that have already been created and can be leveraged.
The Salesforce Migration Assistant (SMA) is one that is feature rich and quite impressive. Full disclosure, I worked with Anthony Sanchez, the engineer who wrote it.
The gist: SMA is a Jenkins plugin, built in Java, that automatically deploys metadata changes to a Salesforce instance based on diffs between two commits in git. The power of this plugin is in using git flow and pull requests.
The basic process can be described as follows:
As a developer you get a new feature request.
Create branch off of develop and work the feature until completion.
Open a git pull request against the develop branch.
Once pull request is created, it triggers a Jenkins build job.
The SMA plugin does a diff of the develop branch against your PR, looking only at file changes.
It then looks for sibling test files and validates against the sandbox.
When deploying FooController.cls, SMA looks for the FooControllerTest.cls test class.
If the validation passes, Jenkins adds a comment on the PR with the results, including the exact code coverage for each class and the average code coverage for the deployment.
Once approved, merge the pull request into develop, and Jenkins gets triggered again to actually deploy the code.
Continuous delivery to production is also available: just set up another Jenkins job to be triggered once testing is complete. The only prerequisites are Jenkins, the SMA plugin, and git.
More options:
These are certainly not the only options. There are other build tools, like CircleCI and Heroku, that are cloud based and can be extended with scripts. Additionally, Salesforce.org uses CumulusCI in the development of the Nonprofit Success Pack. I haven’t used them for this purpose yet, so I won’t speak to them, but my recommendation is to always look at the options that are out there.
Daunting or not, taking the time to set up an automated deployment process will pay for itself over time: a higher rate of successful deployments, improved code coverage, and less time spent dealing with conflicts.
Keeping Developer Sandboxes in sync
The final step for both of these processes is syncing all developer sandboxes after a deployment, without refreshing them. I’ll be candid: I haven’t perfected this process yet. Constant communication and an efficient process always seem to lower the priority of this need.
Salesforce DX — The future
As Salesforce put it, “the inherent problem is that you have two sources of truth: the sandbox and your git repo”. This is in large part due to some development being authored in an IDE (Apex, Visualforce, triggers) and some of it configured in your sandbox (object changes, workflows, etc.).
Salesforce has identified this pain point and started to put emphasis on source-driven development! They announced during Dreamforce this year that they are working hard to build out Salesforce DX, which is designed to solve these problems. I highly recommend looking into it, because from the 50,000-foot view, it looks powerful.
At the end of the day, there’s no “one size fits all” way to do CI/CD with Salesforce. Each organization has to discover which solution meets its needs. In this article I wanted to give an overview of the two setups that I’ve had success with, because I’d much rather be worried about writing good code.