So you’ve wrapped up the last controller, run your tests, and pushed to master. What now?
A project’s deploy process can be pretty nerve-wracking, so it’s important to come up with a standard procedure you can run every time. Let’s get cooking.
I won’t cover configuring your webserver, as there’s plenty of material on that. I’m going to cover the application side: the scripts and commands you should run when you deploy or update your application.
Up first, I’ll assume we’re using Git. We’ve got some sort of version control system, yes? I sure hope so. If you’re using something other than Git, that’s cool, but we’ll run with Git for this article.
Got your test suite ready? You should! In a deployment context, the test suite confirms that new code works and catches any accidental breakage of old code. It means you can sleep tight after deploying! Ideally, your test suite is strong enough that running one or two commands can guarantee everything works.
Finally, have a staging server: a safe server in a sterile environment that mirrors production as closely as possible, used to make sure deploys work smoothly and as intended. A common trick here is to supply the staging server with a copy of the production database, so you know the code works with real data.
On to the actual script! You can write the deploy script simply as a .sh file containing a list of shell commands that get run sequentially. Call the script install.sh, then run sh install.sh while SSH’d into your server to invoke it. If you’re on a CI service, you can usually give each line as a separate deploy command, so failures are detected automatically.
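A minimal sketch of that shape, assuming a POSIX shell; the set -e line is my addition, and makes the script abort on the first failing command:

#!/bin/sh
# install.sh: the deploy steps, run in order.
# set -e aborts the script as soon as any command fails.
set -e

composer install --no-dev --optimize-autoloader
php artisan migrate
# ...plus the rest of the steps covered below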
Before we start, of course, we need to pull the latest version of the codebase.
git pull
I’m not calling this the “first” step, because this is often done by a CI service for you — so you don’t usually need it in the “deploy” script per se: the git pull is considered the deploy itself. Additionally, this assumes you’ve already created the repository, such as with a git clone.
Step 1: Install composer packages. This brings all of your vendor packages to the exact versions your developers are using. Make sure you use composer install, not composer update. Update pulls down the latest versions of packages, not the ones the rest of the team is using (when someone runs update, the resolved versions are written to composer.lock, and that’s what install reads, so make sure the lock file is in Git). If a new version was released between the last update and the deployment, it might break the build!
composer install --no-dev --optimize-autoloader
The --no-dev flag excludes packages listed under require-dev. Note that this means tools like phpunit should not be listed under the dev packages if you use this method, because you’ll need it here. Instead, place packages like the IDE Helper or Clockwork in the dev section; you won’t need these in production. Some people have a different deploy script for their production servers and staging servers, so dev packages are used on staging (where tests are run) but excluded on production.
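For reference, a sketch of how that split might look in composer.json; the package versions here are illustrative:

{
    "require": {
        "laravel/framework": "5.0.*",
        "phpunit/phpunit": "~4.0"
    },
    "require-dev": {
        "barryvdh/laravel-ide-helper": "~2.0",
        "itsgoingd/clockwork": "~1.0"
    }
}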
We also optimize composer a bit with --optimize-autoloader. When composer loads files via PSR-0 or PSR-4, the namespace-to-file-path mapping standards, it usually does some string manipulation to figure out which file to load. Optimizing the autoloader scans all the available classes up front and lists them in one big class map for faster lookups. A cheap way to cut down on execution time!
Step 2: Optimize Laravel. Pretty simple.
php artisan optimize
php artisan route:cache
Commonly used classes will be put into one file and loaded up at once, instead of having composer hunt through a couple dozen PHP files and load them individually. You can also specify your own classes to be included in this process in the config/compile.php configuration file.
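For instance, assuming a Laravel 5-style config/compile.php, adding your own classes might look like this; the file path is an example:

// config/compile.php
return [
    // Extra files to include in the compiled class file.
    'files' => [
        realpath(__DIR__.'/../app/Services/PricingService.php'),
    ],

    // Service providers whose compiled classes should also be included.
    'providers' => [],
];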
Similarly, route:cache will compile your application routes so they don’t need to be recalculated on every request. It’s not enabled by default because you’d have to rebuild the cache every time you add a new route, but it’s free to use on a production server, where routes only change on deploys.
Step 3: Clean up old caches. Let’s take out the trash and make sure all stale data is out of the way.
php artisan cache:clear
You definitely don’t want any bits of an old version of your application persisting in the cache, so it should be cleared on every deploy. If you have a command that warms up your cache — filling it up with values instead of waiting for users to fill it on demand — you can run that now, too.
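If you script the warm-up as its own artisan command, it might look like this; the acme:cache:warm name and the warmed keys are hypothetical:

use Illuminate\Console\Command;

class WarmCache extends Command
{
    protected $name = 'acme:cache:warm';
    protected $description = 'Pre-fill frequently used cache entries after a deploy.';

    public function fire()
    {
        // Example: cache the country list for a day instead of making
        // the first visitor after the deploy pay the query cost.
        Cache::put('countries', DB::table('countries')->get(), 60 * 24);

        $this->info('Cache warmed.');
    }
}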
Step 4: Migrate your migrations. You know the drill.
php artisan migrate
An important tip: do not be tempted to use models in migration scripts.
Say you’re moving things around, and you had an old schema with first_name and last_name columns, but want to make a unified name column instead. Don’t be tempted to reach into Eloquent and do this:
// do NOT do this in a migration
$users = User::all();
$users->each(function ($user) {
    $user->name = $user->first_name . ' ' . $user->last_name;
    $user->save();
});
It’ll work at the time of writing, but remember: migrations act upon snapshots of the database at the time they were written. That state is defined by the prior migrations, which never change, so the database will always look the same at any given point in the chain. Your code, though (your models and other files), is dynamic and changing. There is no guarantee that, a year later, the User model even has a name, first_name, or last_name attribute. If someone were to deploy your code from scratch, this migration could break it, and that’s an especially ugly mess to fix later on.
Instead, you can act on the database:
// do THIS instead
$users = DB::table('users')->get();

foreach ($users as $user) {
    DB::table('users')
        ->where('id', $user->id)
        ->update(['name' => $user->first_name . ' ' . $user->last_name]);
}
This script only relies on the database schema being in its current state, so it’s future-safe.
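For context, here’s a minimal sketch of how that could sit inside a complete migration; the class name and the nullable/down details are illustrative:

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;

class CombineUserNames extends Migration
{
    public function up()
    {
        // Add the new column first...
        Schema::table('users', function (Blueprint $table) {
            $table->string('name')->nullable();
        });

        // ...backfill it using the query builder, not a model...
        foreach (DB::table('users')->get() as $user) {
            DB::table('users')
                ->where('id', $user->id)
                ->update(['name' => $user->first_name . ' ' . $user->last_name]);
        }

        // ...then drop the old columns.
        Schema::table('users', function (Blueprint $table) {
            $table->dropColumn(['first_name', 'last_name']);
        });
    }

    public function down()
    {
        // Splitting the combined name back apart isn't lossless, so the
        // down migration only restores the columns, not their contents.
        Schema::table('users', function (Blueprint $table) {
            $table->string('first_name')->nullable();
            $table->string('last_name')->nullable();
            $table->dropColumn('name');
        });
    }
}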
Step 5: Run tests.
I usually use phpunit, so…
phpunit
Is all there is to it. You can use phpunit’s status code to determine if your application is healthy and good to go — many CI services will detect a failure and call off the build if a command fails.
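If you’re driving the deploy from a plain shell script instead, you can check the exit code yourself; a minimal sketch:

# Abort the deploy if the test suite reports a failure.
if ! phpunit; then
    echo "Tests failed; aborting deploy." >&2
    exit 1
fi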
Okay, to conclude, this is what my base script looks like:
composer install --no-dev --optimize-autoloader
php artisan migrate
php artisan optimize
php artisan route:cache
php artisan cache:clear
phpunit
Remember that you can stick more commands in here, like minifying assets or whatever else your application needs. I usually have several more artisan commands tacked on as well…
I strongly discourage the use of Laravel’s seeders for “real” application data. They’re intended for test data, like a fake user so you don’t have to tediously register a new account when testing your code. Don’t use them for actual data, such as a list of default user roles or a table of country names.
Instead, I usually have separate, dedicated seeders — one for each purpose. Each one is an artisan command. Most importantly, these commands are safe to execute on every deploy. Executing them repeatedly shouldn’t break anything; effectively, the commands just add base data when they’re missing, or nothing otherwise.
use Illuminate\Console\Command;

class SeedUserRoles extends Command
{
    protected $name = 'acme:seed:roles';
    protected $description = 'Ensure the default user roles exist.';

    public function fire()
    {
        // Only create the role if it doesn't exist yet, making the
        // command safe to run on every deploy.
        if (!UserRole::where('name', 'admin')->exists()) {
            UserRole::create(['name' => 'admin']);
        }

        // you could also delete any unrecognized user groups here
    }
}
Note that these are individual, standalone classes that can and should evolve as your application grows. So when you know you need a new user group, you just attach that into the command above — the deploy script doesn’t need to change. Essentially, these commands make sure that a given data source contains exactly the data it needs, regardless of the application state.
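As the list of roles grows, the same pattern scales. A sketch of what the body of fire() might evolve into; the role names are examples:

public function fire()
{
    $roles = ['admin', 'editor', 'member'];

    // Create any missing roles; existing ones are left untouched.
    foreach ($roles as $role) {
        UserRole::firstOrCreate(['name' => $role]);
    }

    // Optionally prune roles the application no longer recognizes.
    UserRole::whereNotIn('name', $roles)->delete();
}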
(Yes, you could write this same code in Laravel’s seed files. But I prefer these being a completely separate logical unit, and keep Laravel’s seeds specifically for test data, which shouldn’t be run on production servers.)
Let’s take that one step further!
Diagnostic commands go through your data and verify that things are intact. For instance, polymorphic relations in Eloquent aren’t enforced by database foreign keys — build a command to check for any orphaned items! These are especially useful in old, large codebases where data evolves over time, and you want to make sure your data is in a stable state.
They’re also useful for debugging issues in a project. You can even build a diagnostic script around a bug you can’t quite fix just yet, so that at least you can detect it. You can then go further and have scripts attempt to automatically fix issues if they’re spotted.
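As a sketch, a diagnostic command for the orphaned-polymorphic-relations example might look like this; the comments table and its commentable_* columns are assumptions:

use Illuminate\Console\Command;

class CheckOrphanedComments extends Command
{
    protected $name = 'acme:diagnose:orphaned-comments';

    public function fire()
    {
        // Count comments pointing at a Post that no longer exists.
        $orphans = DB::table('comments')
            ->where('commentable_type', 'Post')
            ->whereNotExists(function ($query) {
                $query->select(DB::raw(1))
                      ->from('posts')
                      ->whereRaw('posts.id = comments.commentable_id');
            })
            ->count();

        if ($orphans > 0) {
            $this->error("{$orphans} orphaned comments found!");
            return 1;
        }

        $this->info('No orphaned comments.');
    }
}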
I also enjoy having greenlight scripts alongside the diagnostic tools. These commands examine service availability: is the database reachable, does the cache respond, do the third-party APIs answer?
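A minimal greenlight sketch; which services you probe depends entirely on your stack:

use Illuminate\Console\Command;

class Greenlight extends Command
{
    protected $name = 'acme:greenlight';

    public function fire()
    {
        // Can we reach the database at all?
        DB::connection()->getPdo();

        // Can we write to and read from the cache?
        Cache::put('greenlight', 'ok', 1);

        if (Cache::get('greenlight') !== 'ok') {
            $this->error('Cache is not responding!');
            return 1;
        }

        $this->info('All systems green.');
    }
}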
Next time a client comes running because a server is down, you can brandish a set of automatic fault detectors as your first line of defense. They’ll save your life one day!
An obligatory hat-tip: Forge and Envoyer, both tools by Taylor Otwell, the creator of Laravel, can really help you along with deployments. Forge will configure a well-built webserver for you, and Envoyer will automatically deploy your codebase to any number of servers. Of course, these won’t magically write the deploy procedure for you, only execute it. Once you’ve gone through this article, take a peek at Matt Stauffer’s Envoyer article to get started.
Automatic deployments are also the realm of Continuous Integration, for which many tools exist: from the self-hosted Jenkins, to Travis CI (free for open source, paid otherwise), to dploy.io by the Beanstalk folks. CI is a fantastic process which I’ll cover separately.