21 Jun

Deploying Magento 2 Using Capistrano

This guest post by Erik Hansen covers how to use the capistrano-magento2 Capistrano Gem (created by yours truly, with support from Classy Llama, which allowed me the time to initially build it) to deploy Magento 2 to stage and/or production servers.

Capistrano is a tool used to run scripts on multiple servers, typically to deploy code changes to those servers. Capistrano works well with single-node deployments as well as multi-node deployments with separate application and database servers sitting behind a load balancer. When you use Capistrano, you get a lot of functionality “for free” and only need to implement scripts for your application-specific deployment steps.

The Magento community has been using Capistrano to deploy Magento 1 for many years. There are a couple of Capistrano Gems for Magento 1 (capistrano-magento by Peter Rhoades and Magentify by Alistair Stead). However, since Magento 1 was fairly simple to deploy (compared to Magento 2), many merchants and agencies didn't use a deployment tool like Capistrano. Magento 2's deployment requirements are greater, and some sort of scripted process is necessary to deploy updates. Rather than building your own deployment scripts from scratch, it makes sense to use an open source community tool that is time-tested and used by many developers.

A Capistrano deployment can be initiated from any machine with Capistrano installed and the proper configuration files in place. When Capistrano deploys changes, it connects via SSH to the configured server(s) and runs the necessary commands to deploy a new revision of the project. The typical workflow is to run the deploy command from your local development machine. Deployment configuration files are typically committed to the Magento repository. That way, any developer working on the project can deploy changes to a configured environment assuming they have the appropriate SSH access.

Prepare your Magento 2 project/repo

This article assumes the following:

  • You’ve chosen the “Composer metapackage” method of installing Magento.
  • You have a copy of Magento 2 that you’ve installed locally (or on a non-Capistrano enabled stage server) and you’d like to deploy it to stage and/or production environments using Capistrano.
  • You have a Git repo that contains the composer.lock and composer.json files that were created when you installed Magento using composer create-project. Your .gitignore should look something like this: magento2-community-edition/blob/master/.gitignore See the first few paragraphs of Alan Kent's blog post for more details regarding this project setup approach.
  • You have a Magento 2 database dump that you're able to import into the server(s) to which you're going to deploy using Capistrano. capistrano-magento2 does not support deploying a Magento 2 application from an uninstalled state, and assumes a working database and the app/etc/env.php and app/etc/config.php files exist on the target environment (more on this later).

Installation

To install Capistrano, run this command:

$ gem install capistrano

See this page if you run into issues: http://capistranorb.com/documentation/getting-started/installation/

To install capistrano-magento2, run this command:

$ gem install capistrano-magento2

Install Capistrano in your Magento 2 project

Follow steps 1-3 in this section of the capistrano-magento2 readme. Once you follow these steps, the tools directory in your Magento 2 project should look like this:

.
└── tools
    └── cap
        ├── Capfile
        ├── config
        │   ├── deploy
        │   │   ├── production.rb
        │   │   └── staging.rb
        │   └── deploy.rb
        └── lib
            └── capistrano
                └── tasks

I recommend committing these files to your Magento 2 Git repo.

Prepare your application server(s)

Before you deploy, you need to prepare your stage and/or production environments. For the purpose of this article, let’s assume that you are going to deploy Magento 2 to two environments:

  • www.example.com
  • stage.example.com

Classy Llama uses Capistrano to deploy to both production and stage environments. While we could get away with deploying to stage manually (especially if stage is running in developer mode, which I don't recommend), using the same deployment process in both environments ensures that your stage environment is an accurate reflection of production.

For the rest of this article, we’re going to talk about deploying to the stage.example.com environment, but the steps will be identical for the production environment. Here is an example of the commands that you would need to run to prepare the stage.example.com environment:

$ ssh www-data@stage.example.com
$ cd /var/www/html
$ mkdir -p shared/app/etc
$ touch shared/app/etc/env.php

The capistrano-magento2 gem deploys files using the ownership of the SSH user used to connect to the server. To avoid permissions issues, you should ensure PHP and your web server are running as that user. While it is considered best practice to run your stage and production sites on separate servers, Capistrano doesn't care about this and can easily be configured to deploy stage/production to the same server by changing the config values in the tools/cap/config/deploy/*.rb files.
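For illustration, here is a minimal sketch of what tools/cap/config/deploy/staging.rb could contain for the environment used in this article. The hostname, user, branch, and deploy path simply mirror the examples in this post; a real project will have additional settings:

```ruby
# tools/cap/config/deploy/staging.rb -- a minimal sketch; the values
# below mirror the example environment in this article, not a canonical
# configuration.
server 'stage.example.com',
  user: 'www-data',
  roles: %w[app db web]

set :branch,    'develop'        # branch checked out on deploy
set :deploy_to, '/var/www/html'  # releases/, shared/ and current live here
```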

Edit the shared/app/etc/env.php file and add the necessary settings (DB credentials, etc.), and ensure a proper app/etc/config.php file is committed to your repository (this file is created when bin/magento setup:install is run). Now import the Magento database and update the necessary values in the core_config_data table to reflect the new URL.

Configure your web server to use /var/www/html/current/pub as the document root. If your web server is running Apache, make sure you have mod_realdoc installed and configured to avoid issues with symlinked deployments and the Zend Opcache. If your web server is running Nginx, make sure your fastcgi proxy configuration uses the $realpath_root variable instead of $document_root for both the SCRIPT_FILENAME and DOCUMENT_ROOT params sent to PHP-FPM:

fastcgi_param  SCRIPT_FILENAME  $realpath_root$fastcgi_script_name;
fastcgi_param  DOCUMENT_ROOT    $realpath_root;
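The reason the resolved path matters: with symlinked releases, the configured document root name (current/pub) never changes, but the directory it resolves to changes on every deploy, so an opcode cache keyed on the unresolved path would keep serving the old release. Here is a small Ruby sketch of the effect (the directory names are made up for the demonstration):

```ruby
require 'fileutils'
require 'tmpdir'

# Why the *resolved* document root matters with symlinked releases: the
# path your web server is configured with never changes, but the
# directory it points at changes on every deploy. (Paths are made up.)
before = after = nil
Dir.mktmpdir do |root|
  FileUtils.mkdir_p(File.join(root, 'releases/20160526030129/pub'))
  FileUtils.mkdir_p(File.join(root, 'releases/20160527040230/pub'))

  current = File.join(root, 'current')
  File.symlink(File.join(root, 'releases/20160526030129'), current)
  before = File.realpath(File.join(current, 'pub')) # resolved docroot

  # a new deploy re-points the `current` symlink...
  File.unlink(current)
  File.symlink(File.join(root, 'releases/20160527040230'), current)
  after = File.realpath(File.join(current, 'pub'))
end

# ...the configured path is unchanged, yet its real location moved:
puts before == after # prints "false"
```

mod_realdoc and $realpath_root make the web server hand PHP the resolved path, so the opcode cache keys on the new release directory as soon as the symlink flips.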

First deployment to stage

Now you can run your first Capistrano deployment. On your development machine, run this command in the tools/cap directory:

$ cap staging deploy

When you run that command, Capistrano will verbosely report on its progress. Here is a summary of what Capistrano does in this process:

  • Creates a new /var/www/html/releases/ directory for the new release
  • Checks out the latest copy of the develop branch (the :branch value specified in staging.rb)
  • Symlinks the :linked_files and :linked_dirs into the new release folder
  • Runs composer install in the release directory. This will download and install the dozens of dependencies declared in composer.lock. Composer caches packages, so subsequent deployments should be faster than your initial deployment.
  • Changes directory/file permissions in the new release folder to 770/660, respectively (these may be updated in future releases of the capistrano-magento2 gem to be further secured by default, and also allow for greater configurability).
  • Runs bin/magento setup:static-content:deploy and bin/magento setup:di:compile-multi-tenant
  • Once all of those commands have run, the new release directory will be nearly ready to become the “current” release. Immediately beforehand, though, the current release (the “current” symlink) is put into maintenance mode using bin/magento maintenance:enable, because bin/magento setup:upgrade --keep-generated is run immediately afterward. This matters because any new or updated modules will have their version numbers updated in the setup_module table, and if the current release sees that its modules aren't in sync with that table, it will throw an error.
  • Capistrano deployments are atomic, so if any of the commands up to this point failed, the deployment will fail and the /var/www/html/current symlink will continue to point to the release directory it was pointed to before you began the deployment.
  • At this point, the /var/www/html/current symlink will be switched to point to the new releases/XXXXXXXX directory.
  • Flushes cache via bin/magento cache:flush
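The atomicity comes from the final step being a single symlink swap: all of the heavy lifting happens in a release directory that is not yet live, and going live is one rename. A Ruby sketch of the mechanism (illustrative, not the gem's actual code; directory names are made up):

```ruby
require 'fileutils'
require 'tmpdir'

# A sketch of Capistrano's atomic publish step (illustrative, not the
# gem's code): the heavy work happens in a not-yet-live release
# directory, and going live is a single rename(2), which is atomic.
deployed = nil
Dir.mktmpdir do |root|
  old_rel = File.join(root, 'releases/20160526030129')
  new_rel = File.join(root, 'releases/20160527040230')
  FileUtils.mkdir_p(old_rel)
  FileUtils.mkdir_p(new_rel)

  current = File.join(root, 'current')
  File.symlink(old_rel, current) # site is live on the old release

  # publish: build the new link under a temporary name, then rename it
  # over `current`, so requests never see a missing or half-made link
  tmp_link = File.join(root, 'current_new')
  File.symlink(new_rel, tmp_link)
  File.rename(tmp_link, current)

  deployed = File.readlink(current)
end
```

Because the swap is a single filesystem operation, a failed deploy simply never performs it, which is why the old release keeps serving traffic.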

The deploy process the capistrano-magento2 Gem implements for Magento 2 can be seen in lib/capistrano/tasks/deploy.rake.

If you completed all of the previous steps properly, you should now be able to browse to http://stage.example.com and see your newly deployed Magento site. When you want to deploy future code changes to stage, ensure the changes are in the develop branch and run the cap staging deploy command again to deploy the latest changes to stage.

Once you deploy to stage, the /var/www/html directory on your stage server should look something like this:

.
├── current -> /var/www/html/releases/20160526030129
├── releases - this directory will contain the latest X releases. X is defined by :keep_releases which defaults to 5
│   └── 20160526030129
├── repo - this is a bare clone of your Git repository
├── revisions.log - a line for each release with the commit hash, release date, and username of the machine that deployed
└── shared - this is a permanent directory that contains the files/directories referenced by :linked_files and :linked_dirs
    ├── app
    │   └── etc
    │       └── env.php
    ├── pub
    │   ├── media
    │   └── sitemap.xml
    └── var
        ├── backups
        ├── composer_home
        ├── importexport
        ├── import_history
        ├── log
        ├── session
        └── tmp
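Because release directories are timestamp-named, a lexical sort is also a chronological sort, which is how Capistrano decides what to prune once more than :keep_releases releases accumulate. A Ruby sketch of that pruning logic (not the gem's actual code):

```ruby
require 'fileutils'
require 'tmpdir'

# A sketch of the :keep_releases pruning behaviour (not the gem's actual
# code): timestamp-named directories sort chronologically, so everything
# older than the newest N can be removed.
KEEP_RELEASES = 5 # Capistrano's default for :keep_releases

kept = nil
Dir.mktmpdir do |root|
  releases = File.join(root, 'releases')
  # eight fake releases, one per day
  (1..8).each { |d| FileUtils.mkdir_p(File.join(releases, format('201605%02d030129', d))) }

  dirs  = Dir.children(releases).sort # oldest first
  stale = dirs[0...-KEEP_RELEASES]    # everything but the newest five
  stale.each { |d| FileUtils.rm_rf(File.join(releases, d)) }

  kept = Dir.children(releases).sort
end
```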

Now that your stage deployment is configured, follow the same steps to set up your production deployment.

Additional capistrano-magento2 Features

In addition to deploying changes, the capistrano-magento2 Gem also supports a number of Magento maintenance commands. Run cap -T in the tools/cap directory to see a list of commands. Many of the commands are simply aliases for bin/magento commands. The benefit of running them from Capistrano is two-fold: first, you don't need to SSH into your stage or production server to run a command, as Capistrano handles that for you; second, if you have multiple application servers, you can run a single command on all of them simultaneously. These are the commands available in version 0.2.4 of capistrano-magento2:

cap magento:cache:clean                   # Clean Magento cache by types
cap magento:cache:disable                 # Disable Magento cache
cap magento:cache:enable                  # Enable Magento cache
cap magento:cache:flush                   # Flush Magento cache storage
cap magento:cache:status                  # Check Magento cache enabled status
cap magento:cache:varnish:ban             # Add ban to Varnish for url(s)
cap magento:composer:install              # Run composer install
cap magento:indexer:info                  # Shows allowed indexers
cap magento:indexer:reindex               # Reindex data by all indexers
cap magento:indexer:set-mode[mode,index]  # Sets mode of all indexers
cap magento:indexer:show-mode[index]      # Shows mode of all indexers
cap magento:indexer:status                # Shows status of all indexers
cap magento:maintenance:allow-ips[ip]     # Sets maintenance mode exempt IPs
cap magento:maintenance:disable           # Disable maintenance mode
cap magento:maintenance:enable            # Enable maintenance mode
cap magento:maintenance:status            # Displays maintenance mode status
cap magento:setup:di:compile              # Runs dependency injection compilation routine
cap magento:setup:permissions             # Sets proper permissions on application
cap magento:setup:static-content:deploy   # Deploys static view files
cap magento:setup:upgrade                 # Run the Magento upgrade process
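You can also add your own tasks under tools/cap/lib/capistrano/tasks, and they will show up in cap -T alongside the built-in ones. As a hypothetical example (the task name and log path are my own, not part of the gem), a task to tail the Magento system log on every app server might look like:

```ruby
# tools/cap/lib/capistrano/tasks/log.rake -- a hypothetical custom task;
# not part of capistrano-magento2.
namespace :magento do
  namespace :log do
    desc 'Tail the Magento system log on all app servers'
    task :tail do
      on roles(:app) do
        # var/log is a linked dir, so the log lives under shared/
        execute :tail, '-n', '50', shared_path.join('var/log/system.log')
      end
    end
  end
end
```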

Reverting

Capistrano supports reverting to previous releases using cap deploy:rollback. This can be useful if you mistakenly deploy code to an environment and want to immediately revert it. However if you want to use this feature, there are some things you need to be aware of:

  • If your newest release contained new extensions or newer versions of extensions, you'll need to manually adjust the values in the setup_module table or else you will get errors.
  • If a deployment fails or you cancel it, the releases/ directory for that failed release will still exist. If you rollback, Capistrano will use the most recent release directory, even if it was not a successful deploy.
  • After reverting, you’ll need to run the cap magento:cache:flush command
  • I’ve only used this command once, so there may be other things you need to do that I’m not listing here.

OS X notifications

If you'd like to get OS X notifications of successful/failed deployments, follow the steps outlined under “Terminal Notifier on OS X” in the README.

Review pending changes before deploying

If you want to see what changes you’re about to deploy to an environment, you can add the capistrano-pending gem as a dependency. Add this line to tools/cap/Capfile:

require 'capistrano-pending'

Now, cd to tools/cap and run these commands to see pending commit messages and file changes (respectively):

$ cap staging deploy:pending
$ cap staging deploy:pending:diff

Flushing Varnish cache

If you’re using Varnish to cache your static assets, you’ll need to take some extra steps to get capistrano-magento2 to flush Varnish. I’m not going to cover those steps in this article, but if you’d like to learn more, feel free to post a comment below.

Summary

It can take a while to get familiar with and set up an automated deployment system, but given the amount of effort required to manually deploy changes to stage/production environments with Magento 2, the payoff should be worth the investment, especially because deploying with Capistrano keeps site downtime to a minimum.

If you have any questions or feedback, I’d love to hear them in the comments section below.

This guest post made possible and written by Erik Hansen, a fellow Llama over at Classy Llama. Please give him a follow on Twitter.

25 Nov

Running the Magento 2 Test Suite

Magento 2 ships with a complete testing suite out-of-the-box, including operational unit, integration, and static tests. The simplest of these to run is probably the unit tests, since all necessary components are set up and in place automatically after installing Magento 2 successfully. Here I'll show you how to run the full test suite, run each type of test independently, and also how to run a small portion of the unit or integration tests if need be. Before starting, you'll need to ensure you have a working copy of Magento 2 installed and running successfully. Instructions on how to do that can be found here in my post on Installing Magento 2.

Note: The Magento Testing Framework does support functional testing, but that goes beyond the scope of this post and may be covered in a future writeup.

To power the test suite, Magento uses phpunit, with the test suite configuration residing under ./dev/ in your installation of Magento 2. In the following tree you’ll notice a few directories. The three named unit, integration, and static are where you’ll be working from to run the tests.

$ tree -d -L 1 dev/tests/
dev/tests/
├── api-functional
├── functional
├── integration
├── js
├── static
└── unit
 
6 directories

Since Magento 2 uses phpunit to run its test suite, you will need to either install this tool in your environment globally or run the copy composer installs at vendor/phpunit/phpunit/phpunit. For the examples below, I'll be using the phpunit installed by composer in the project directory.

Running Unit Tests

These are the simplest tests to run, and they require no configuration on your part beyond ensuring you have a working copy of phpunit available. To run just the unit tests:

cd dev/tests/unit/
../../../vendor/phpunit/phpunit/phpunit

To run an individual set of tests, like say all the unit tests for the Magento_Catalog module:

cd dev/tests/unit/
../../../vendor/phpunit/phpunit/phpunit \
    ../../../vendor/magento/module-catalog/Test/Unit/

You can even pare this down further and run only the tests in a single file:

cd dev/tests/unit/
../../../vendor/phpunit/phpunit/phpunit \
    ../../../vendor/magento/module-catalog/Test/Unit/Model/Product/ImageTest.php

The following is the output of a successful run from this last example:

PHPUnit 4.1.0 by Sebastian Bergmann.
 
Configuration read from /sites/m2.demo/dev/tests/unit/phpunit.xml.dist
 
........................
 
Time: 621 ms, Memory: 12.00Mb
 
OK (24 tests, 86 assertions)

Running Integration Tests

Getting integration tests up and running takes a little extra work, because these tests have to stand up a copy of Magento 2 for test purposes, complete with a working database to query against. Here's how to do this:

Step 1: Place the following (updating it with proper credentials for your MySql server) in dev/tests/integration/etc/install-config-mysql.php:

<?php
 
return [
    'db-host' => 'dev-db',
    'db-user' => 'root',
    'db-password' => '',
    'db-name' => 'magento_integration_tests',
    'db-prefix' => '',
    'backend-frontname' => 'backend',
    'admin-user' => \Magento\TestFramework\Bootstrap::ADMIN_NAME,
    'admin-password' => \Magento\TestFramework\Bootstrap::ADMIN_PASSWORD,
    'admin-email' => \Magento\TestFramework\Bootstrap::ADMIN_EMAIL,
    'admin-firstname' => \Magento\TestFramework\Bootstrap::ADMIN_FIRSTNAME,
    'admin-lastname' => \Magento\TestFramework\Bootstrap::ADMIN_LASTNAME,
];

Step 2: Create the database referenced in the above config:

mysql -e 'create database magento_integration_tests'

Step 3: Use the following to run the integration tests:

cd dev/tests/integration/
../../../vendor/phpunit/phpunit/phpunit

You can, as with the unit tests, choose to run a subset of the integration tests. Because these are global to the application, all tests are found within the dev/tests/integration location. For example:

cd dev/tests/integration/
../../../vendor/phpunit/phpunit/phpunit \
    testsuite/Magento/Catalog/Model/Product/ImageTest.php

Here is the output from a successful run of this last example:

PHPUnit 4.1.0 by Sebastian Bergmann.
 
Configuration read from /sites/m2.demo/dev/tests/integration/phpunit.xml.dist
 
....
 
Time: 1.1 minutes, Memory: 409.75Mb
 
OK (4 tests, 5 assertions)
 
=== Memory Usage System Stats ===
Memory usage (OS):	430.38M (105.16% of 409.25M reported by PHP)
Estimated memory leak:	21.13M (4.91% of used memory)

Running Static Tests

These are also fairly simple to run, but they are broken into two chunks:

cd dev/tests/static/
../../../vendor/phpunit/phpunit/phpunit

…and this…

cd dev/tests/static/framework/tests/unit
../../../../../../vendor/phpunit/phpunit/phpunit

Running The Complete Suite

Assuming you have set up the configuration for integration tests as noted above, running the complete test suite is fairly trivial, although doing so can take a very long time. From the root of your installation:

bin/magento dev:tests:run

This handy command built into the CLI tool will run through the unit tests, the static-framework tests, integration tests, and the remaining static tests (in that order). When complete, you'll see summarized output of the results from the run.

Have fun testing!

20 Nov

Installing Magento 2 for Project Development

There are two common ways to install Magento 2. One is to simply clone the Magento 2 GitHub repository and install; this is great for contributing, but not so much for starting a new project. The other, covered here, is to install via the new meta-packages. Magento released the Magento 2.0 GA this Tuesday, November 17th at the MagentoLive Australia event. Very exciting stuff! With that they threw the switch on the new meta-packages server, which hosts all you need to install Magento 2 with only a set of authorization credentials and composer. Well…almost. You'll need to ensure your development environment supports this set of minimal technical requirements first:

  • PHP 5.5.x* / 5.6.1 (or later) / PHP 7 RC (that’s right…supports it right out of the gate!)
  • MySql 5.6.x
  • Apache 2.2 / 2.4 or Nginx 1.8 (or later)

Note: PHP 5.5.10 – 5.5.16 and 5.6.0 have a known bug which interferes with timezone support. PHP 5.4 is no longer supported. For a complete / detailed list of system requirements, head over to this page on devdocs.

So how do you get this software installed? Let’s get started…

Step 1: Make sure you have a set of authorization credentials for the Magento meta-packages set in your auth.json file. This is located in your COMPOSER_HOME directory, which by default on *nix systems is ~/.composer/auth.json. Instructions on how to get these credentials can be found on this page. My auth.json looks something like this after adding my pair of credentials:

{
    "http-basic": {
        "repo.magento.com": {
            "username": "",
            "password": ""
        }
    },
    "github-oauth": {
        "github.com": ""
    }
}

Step 2: Use composer to create a project and pull in all meta-packages, then cd in and adjust permissions. The first time you ever run this it will take a while. Subsequent runs will be blazing fast, as composer will cache the packages locally for later use! In my case, I’m installing the site at /sites/m2.demo:

composer create-project --repository-url=https://repo.magento.com/ \
        magento/project-community-edition /sites/m2.demo
cd /sites/m2.demo
chmod +x bin/magento

Step 3 (optional): Deploy the sample data so it will install along with everything else in the following step:

bin/magento sampledata:deploy
composer update

Note: You will see a composer error running sampledata:deploy on 2.0 GA due to a bug. The follow-up composer update call here is to work around that issue.

Step 4: Create a new database and execute the setup:install command:

mysql -e 'create database m2_demo'
bin/magento setup:install --base-url=http://m2.demo --backend-frontname=backend \
        --admin-user=admin --admin-firstname=Admin --admin-lastname=Admin \
        --admin-email=user@example.com --admin-password=A123456 \
        --db-host=dev-db --db-user=root --db-name=m2_demo

Step 5: Make sure you have DNS properly configured and/or an entry in /etc/hosts, and that your Apache/Nginx is configured to work with the chosen webroot.

Step 6: Load up http://m2.demo in your browser and bask in the glory of Magento 2! It should look something like this:

Magento 2.0 GA - Homepage

23 Jun

Configuring Magento 2 To Use Redis Cache Backend

Using the file-system for a cache backend on a production Magento site is just asking for your site to run slowly. But what about development? Should you use a file-system based cache there or not? Well, that is ultimately up to you, but you’ll be able to work faster if you develop on a stack with a fast cache backend. My personal favorite is currently Redis. Magento supports this out of the box in later versions of Magento 1 as well as in the developer betas of Magento 2. There are added benefits such as not constantly having a slew of cache files updating and triggering backups.

I’ll assume you already have redis installed on your system, but if not, go ahead and install that on Mac OS X with a simple brew install redis or use the appropriate package manager of your preferred OS. You may want to comment out all “save” directives in the redis.conf file. Being a dev environment, I don’t care about writing the cache data to disk, and if I reboot…let it start with a clean cache database.

So how do you configure Magento 2 to use Redis? When I started digging into Magento 2 development, I couldn't really find much on this, and it has even changed a few times since I first spent the time to figure it out. At this point, though, it shouldn't change much more by the time the GA hits later this year.

In your app/etc/env.php file you should find a large PHP array with settings similar to the well-known app/etc/local.xml from Magento 1. It doesn't matter where you put the settings in here, as long as they only appear once and result in a syntactically correct PHP array declaration. I like to add them directly above the cache_types index.

Here is the configuration I tend to use:

  'cache' => 
  array (
    'frontend' => 
    array (
      'default' => 
      array (
        'backend' => 'Cm_Cache_Backend_Redis',
        'backend_options' => 
        array (
          'server' => '127.0.0.1',
          'port' => '6379',
          'persistent' => '',
          'database' => '0',
          'force_standalone' => '0',
          'connect_retries' => '1',
          'read_timeout' => '10',
          'automatic_cleaning_factor' => '0',
          'compress_data' => '1',
          'compress_tags' => '1',
          'compress_threshold' => '20480',
          'compression_lib' => 'gzip',
        ),
      ),
      'page_cache' => 
      array (
        'backend' => 'Cm_Cache_Backend_Redis',
        'backend_options' => 
        array (
          'server' => '127.0.0.1',
          'port' => '6379',
          'persistent' => '',
          'database' => '1',
          'force_standalone' => '0',
          'connect_retries' => '1',
          'read_timeout' => '10',
          'automatic_cleaning_factor' => '0',
          'compress_data' => '0',
          'compress_tags' => '1',
          'compress_threshold' => '20480',
          'compression_lib' => 'gzip',
        ),
      ),
    ),
  ),

If you have this setup on all your dev sites, a simple call to redis-cli flushall will clear all your caches. Handy tools like n98-magerun or bin/magento cache:flush --all in Magento 2 will also do the trick.

6 May

Upgrading ICU to Install OroCRM on CentOS 5

Recently the crew over at Classy Llama decided to finally jump on board the OroCRM train and begin the process of implementing the software for our own internal use. Sometimes I'm known as the server dude internally, and that's exactly how I come into play in this endeavor. The question always must be answered: where will we host it? In my neck o' the woods we run our software on old-fashioned bare-metal servers, which does mean we have older software than I'd like at times. And the server that will have OroCRM added to its responsibility list is one of those machines which has been around a few years. It's still running CentOS 5! Who runs CentOS 5? Yeah. Old. And with OroCRM being a modern piece of software, it quickly became upgrade time.

What needed upgrading? A few things: PHP, MySql… and ICU. The first two were obvious and needed to be upgraded anyway, so I did, missing the latter. I found that one when I tried to run composer install on the code to deploy it. OroCRM uses a Symfony component which requires ICU v4.4 or higher, and the latest packaged version available in this case is v3.6. Even CentOS 6 only has packages for ICU v4.2! The solution? Turns out it's not all that complicated: install our own version of ICU and the PHP intl extension compiled against that version of the ICU lib.

In my case, I chose to install ICU 4.4 instead of the latest version since there are pre-compiled binaries of ICU 4.4 for CentOS 5 available on http://site.icu-project.org. Were I running CentOS 6 I would have most likely gone with the very latest version of ICU.

Here’s how… (running as root):

    # remove the php-intl package built against the native icu version
    yum remove php-intl

    # make sure we have the developer packages needed for procedure
    yum -y install php-devel php-pear

    # download and unpack the source tarball for the relevant ICU lib version
    mkdir src
    cd src/
    wget http://download.icu-project.org/files/icu4c/4.4.2/icu4c-4_4_2-src.tgz
    tar zxf icu4c-4_4_2-src.tgz

    # build and install the library into /opt/icu4c-44_2
    cd icu/source/
    ./configure --prefix /opt/icu4c-44_2 && make && make install

    # build and install the php-intl version
    # enter /opt/icu4c-44_2 at prompt for ICU library location
    pecl install intl
    ldconfig

    # add an ini file with contents: extension=intl.so
    vi /etc/php.d/intl.ini

    # you can now check to see if it's loaded
    php -i | grep intl

    # restart the apache web server and you're good to go
    service httpd restart

Now go have fun playing with your new OroCRM setup!

19 Jan

Bash Completion on OS X With Brew

I live and breathe OS X on a daily basis, with a large portion of my work revolving around the command line using mostly tools which I've installed with brew. Bash completion has likely saved me days worth of time over the past decade or so. Little did I know, up until recently, that there is an official tap with completion scripts (in addition to the ones which come with individual recipes such as git) which can be installed for tools like docker, vagrant and grunt. Using it is dog simple. To start, you'll want to go ahead and install bash-completion (if you haven't already) and then tap homebrew/completions to gain access to these additional formulae:

$ brew install bash-completion
$ brew tap homebrew/completions

After you run that first command, in typical brew fashion, it will request that you add the following tidbit to your ~/.bash_profile. Don’t forget this part. It’s critical!

if [ -f $(brew --prefix)/etc/bash_completion ]; then
. $(brew --prefix)/etc/bash_completion
fi

Once you’ve done this, you’ll be able to install the additional completion scripts. You can find a complete list of these over here on GitHub. Happy tabbing!

13 Jan

Using Tinyproxy for Mobile Development on OS X

A common problem in web development is viewing private or local development sites, which have no public DNS, on an iPhone or iPad for testing purposes. This becomes even more of a problem when working on a mobile theme, particularly a responsive design, where it's important to test simultaneously on multiple screen sizes while you're implementing the breakpoints required for the content to adapt to the different displays.

For the last few weeks I've been working on a large scale responsive project with a team of other awesome Classy Llamas. So this got me to thinking, and the result was: eureka, I'll run a proxy server!

There are many proxy servers out there which would likely do just as good a job at accomplishing the same goals. I chose Tinyproxy because it’s a very lightweight proxy daemon built specifically for POSIX operating systems, meaning it will run seamlessly on the OS X development machines we primarily use, but can also be readily used on Linux if needed. It’s fast and very lean on resource utilization. It’s also freely distributed under the GNU GPL v2 license, so the cost certainly fits the bill here as well.

If you need a proxy with more advanced features, you may find Charles Proxy will serve you better. If I ever give it a go, I may come back with another blog post, but for now I’ve not tried it. One thing it has is traffic limiting to simulate network conditions, but my iOS devices have that ability built-in.

I use the Homebrew package manager to install the majority of the CLI tools I use on a daily basis. Fortunately, there is a pre-existing package for installing Tinyproxy with Homebrew, so it’s a snap to set up. Here is how…

  1. Run the one-line install command. Your output may vary slightly.

    $ brew install tinyproxy
    
    ==> Downloading https://www.banu.com/pub/tinyproxy/1.8/tinyproxy-1.8.3.tar.bz2
    Already downloaded: /Library/Caches/Homebrew/tinyproxy-1.8.3.tar.bz2
    ==> Downloading patches
    ######################################################################## 100.0%
    ==> Patching
    patching file configure
    Hunk #1 succeeded at 6744 (offset -1 lines).
    ==> ./configure --prefix=/usr/local/Cellar/tinyproxy/1.8.3 --disable-regexcheck
    ==> make install
    /usr/local/Cellar/tinyproxy/1.8.3: 14 files, 244K, built in 44 seconds
            
  2. Next, for this to be useful, you’ll need to allow other devices on your network to access your proxy server. I’ve added localhost and the local class C subnet to my allow list, but it’s also possible to specify hostnames of connecting devices. This is hardly secure if you’re running on a public network, so keep that in mind if you frequently work from Starbucks.

    $ cd /usr/local/Cellar/tinyproxy/1.8.3/
    $ perl -i.bak -pe 's/^Allow 127.0.0.1$/Allow 127.0.0.1\nAllow localhost\nAllow 192.168.1.1\/24/' ./etc/tinyproxy.conf
            
  3. The one thing the brew package missed was creating two directories that tinyproxy needs to run. Launch it in non-daemon mode and it will tell you what they are, allowing you to create them.

    $ tinyproxy -d
    tinyproxy: Could not create file /usr/local/Cellar/tinyproxy/1.8.3/var/log/tinyproxy/tinyproxy.log: No such file or directory
    tinyproxy: Could not create file /usr/local/Cellar/tinyproxy/1.8.3/var/run/tinyproxy/tinyproxy.pid: No such file or directory
    tinyproxy: Could not create PID file.
    
    $ mkdir -p ./var/log/tinyproxy/ ./var/run/tinyproxy/
            
  4. And finally, launch your proxy in the background like so. It’ll spawn a few threads to handle incoming connections and start working immediately.

    $ tinyproxy
            

Ok, so the proxy is up and running, but how can we use it? Not to worry, you’re almost there! The last thing you need to do is configure your iOS device’s proxy settings so it routes all HTTP traffic through the proxy.

  1. Launch the Settings app and open the WiFi section. [screenshot: WiFi Settings]
  2. Tap the information button to edit the advanced settings. [screenshot: Advanced iOS WiFi Settings]
  3. Under HTTP Proxy -> Manual, all the way at the bottom of the screen, set the hostname or private IP address of the machine running the proxy, along with the port number. [screenshot: Manual HTTP Proxy Settings]
5
Nov

A New Breed of Cron in Magento EE 1.13

By now the overhaul of the indexer system in Magento Enterprise Edition 1.13 is fairly common knowledge, especially among those who have the privilege of working with it on some large builds. I’ve had the chance to work with it a fair bit on a rather large project currently still in the oven over at Classy Llama. This project has over 40 million product pages and 5 million parts in inventory! I don’t think it would have been possible before EE 1.13 was released…

Alongside all these leaps and bounds forward came some behind-the-scenes changes: the kind of under-the-hood changes that support the development of new things, but which can also be a pain to figure out under just the right circumstances. One of these was the introduction of a new breed of cron task with its own unique dispatch mechanism. If you don’t understand it (and have shell_exec disabled on your servers), the cron will fail 100% silently and without remorse.

If you’ve configured a cron job in the past, you’ll be somewhat familiar with this bit of XML, which incidentally uses the same type of expression as crontab on *nix systems to define the frequency of its run:

    <crontab>
        <jobs>
            <task_name>
                <schedule>
                    <cron_expr>0 1 * * *</cron_expr>
                </schedule>
                <run>
                    <model>mymodule/observer::dailyUpdateTask</model>
                </run>
            </task_name>
        </jobs>
    </crontab>

What you may not have noticed or seen before is the existence of crontab schedules with a cron_expr value of simply always. That would be because it’s indeed a new breed of cron job. As it stands right now, EE 1.13 is the only version of Magento using this type of cron… but the changes under the hood supporting it are present in Magento CE 1.8 as well.

Here is an example (actually the only real case at this point) of this type of cron task being used:

    <crontab>
        <jobs>
            <enterprise_refresh_index>
                <schedule>
                    <cron_expr>always</cron_expr>
                </schedule>
                <run>
                    <model>enterprise_index/observer::refreshIndex</model>
                </run>
            </enterprise_refresh_index>
        </jobs>
    </crontab>

The new indexers were probably a good candidate for its inaugural use, but a few other pesky EE cron tasks also come to mind as perfect candidates for this type of handling. Yes, I’m looking directly at the enterprise_staging_automates job, which schedules itself every single minute, resulting in a very bloated cron_schedule table. The same goes for the cron task for the new queue system in EE 1.13, which has the same every-minute schedule and potential to build up.

I mentioned that there is a new dispatch mechanism for this new breed (as I’ve decided to affectionately call it, since ‘always’ is obviously a break from traditional cron expression syntax), but how exactly does this new type of cron differ functionally? There are a few differences:

  1. They do not fill up the cron_schedule table with entries for future runs. You won’t see them in there until they have run.

  2. They are executed from the new Mage_Cron_Model_Observer::dispatchAlways method called via Mage::dispatchEvent('always'); in the cron.php entry point.

  3. The frequency of such tasks is defined by the frequency at which the cron.sh file is executed by crontab on the server. If you call cron.sh every 15 minutes, they’ll run every 15, etc.

It appears one of the goals of this new cron type was to allow potentially long-running cron jobs (such as a re-index triggered by admin actions) to run without affecting the frequency of other scheduled cron tasks. This is accomplished by the cron.sh script now supporting a mode flag that determines whether it runs the default or always cron tasks. Since one of the functions of cron.sh is to prevent multiple cron “threads” from running on the server simultaneously, breaking it up like this allows a maximum of two, with one dedicated to potentially long-running work such as indexing.

The most substantial (or relevant) changes to the cron entry point, and to how the cron should be set up, are in the cron.php file. In a nutshell, it checks for the presence of an option specifying which “mode” to work in and, if none is found, it attempts to spawn two cron.sh processes in the background… unless your server happens to have shell_exec disabled. In that case, the only real workaround is to put two entries in the crontab to accomplish the same thing: one job for the default cron tasks and one for the new ‘always’ tasks.

Another thing to note is that although CE 1.8 doesn’t yet make use of the new type of cron task, the functional changes to the entry point are the same as in EE 1.13, so the ramifications still apply.

Pretty much from here on out, my crontabs will look something like this:

* * * * * /home/mypretty/myprettysite.com/html/cron.sh cron.php -m=default
* * * * * /home/mypretty/myprettysite.com/html/cron.sh cron.php -m=always

As long as the servers you work with do not have shell_exec disabled as part of their security hardening, you should be able to keep right on truckin’ with one old fashioned crontab entry to keep both modes fed.
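
For reference, that traditional single-entry setup (reusing the example path from above) looks like this; with shell_exec available, cron.php forks the two mode-specific processes itself:

```
* * * * * /home/mypretty/myprettysite.com/html/cron.sh
```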

23
Jul

API Exporter For The Done Done Issue Tracker

It all started when a number of us over at Classy Llama decided to stop using the Done Done issue tracking service and move to a self-hosted Active Collab based project management system. We weren’t merely moving to Active Collab for issue tracking; we needed a way to track time, along with a horde of other things. Tracking time was something we’d been doing in Unfuddle, but it wasn’t working very well because it was separate from our issue tracking software, and we couldn’t let clients onto Unfuddle due to its limited access control abilities. Now, moving project management systems is no small hassle: clients we’d already engaged with needed to start using a new system that was entirely foreign to them, and to top it off, there generally isn’t any easy way to migrate data from one system to another. Open issues on in-progress projects were manually moved into Active Collab, permissions were set, and we jumped in head first with the new system.

In short, we LOVE it. Is there room for improvement? Definitely! None of the themes available for it present the information in any sort of polished manner, but among the few themes out there, the Modern theme by CreativeWorld does the best job. So with a couple of design changes we started using it, and we plan to let it evolve further as time progresses.

After running on our new system for about two months, it came time to terminate the accounts with both Unfuddle and DoneDone. Being the man in charge of IT at CLS, that job fell on my shoulders, and thus began the quest of creating backups of all data, past and present, in both of those systems. I had already begun self-hosting our Subversion repositories about six months prior, including moving all of them from Unfuddle to our own dedicated server, so that made the account closure much more doable. Unfuddle provides a relatively easy way to create backups of each project, but Done Done on the other hand? Nothing… there is NO way provided by the authors of DoneDone to readily export or back up the data stored in your account! We needed a backup though; I couldn’t just close an account and lose all of the highly important written communication revolving around the dozens of projects we’d used DoneDone with.

Thankfully, Done Done does provide a rather basic SOAP API. This API is by no means all-inclusive, and is rather poorly designed if I dare say so myself; it does not even provide a way to read or modify user details. What it does do is allow me to programmatically pull a list of all projects and a list of issues in each of those projects, along with all of each issue’s historical data (i.e. comments and status changes). Being as familiar with PHP as I am, I sat down to write a script to load the information into four MySQL tables. About two and a half hours later, I had a roughly put together, approximately 200-line script that did just that.

Being fairly certain that we are not the only ones who have ever used DoneDone, I’m going to make the logical assumption that someone else might find this script useful, both as a backup tool and a migration assistant, and am releasing it here under the OSL 3.0 license.

There are two things you should keep in mind when using this script.

  • It’s best run using the account owner’s username and password; otherwise it won’t have access to all projects on Done Done.
  • You must set each project in Done Done to allow API access.

You can download a zipped copy of the code or take a peek at it here:

<?php
/**
 * mdd-backup.php
 *
 * @author     http://davidalger.com/
 * @copyright  Copyright (c) 2011 David Alger & Classy Llama Studios, LLC
 * @license    http://opensource.org/licenses/osl-3.0.php  Open Software License (OSL 3.0)
 */
 
/*
 * Information needed to connect to the database the data from MyDoneDone will be stored in.
 */
define('DB_CONN', 'unix_socket=/usr/local/zend/mysql/tmp/mysql.sock'); // I'm using a socket, but you should be able to use any type of connection here.
define('DB_NAME', 'portalcl_portal_dev'); // update this with your database name
define('DB_USER', 'root'); // update this with your database user
define('DB_PASS', 'root'); // update this with the password for the given database user
 
/*
 * Connection information for the MyDoneDone account owner.
 */
define('MDD_WSDL', 'https://classyllama.mydonedone.com/api/DoneDone.asmx?WSDL'); // change this to the URL for your account WSDL
define('MDD_USER', ''); // set this to the username of the account owner
define('MDD_PASS', ''); // set this to the password on your user account
 
/**
 * Quick and dirty method to easily print an error message and die if an error occurs while reading/writing to the database.
 *
 * @param string $message 
 * @return void
 */
function db_fail($message) {
    global $db;
    echo "\n\nDB FAILURE: $message\n";
    var_dump($db->errorInfo());
    die;
}
 
/**
 * Prints a message to the shell.
 *
 * @param string $message 
 * @return void
 */
function status($message) {
    echo $message."\n";
}
 
/*
 * Initialize the database tables that will store the data.
 */
try {
    // connect to database
    $db = new PDO('mysql:'.DB_CONN.';dbname='.DB_NAME, DB_USER, DB_PASS);
 
    // drop tables if we need to
    $sql = <<<SQL
        DROP TABLE IF EXISTS `mdd_issue_history`;
        DROP TABLE IF EXISTS `mdd_issue`;
        DROP TABLE IF EXISTS `mdd_user`;
        DROP TABLE IF EXISTS `mdd_project`;
SQL;
    $db->exec($sql);
 
    // create project table
    $sql = <<<SQL
        CREATE TABLE IF NOT EXISTS `mdd_project` (
            `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
            `name` VARCHAR(255),
            PRIMARY KEY (`id`)
        )ENGINE=INNODB;
SQL;
    $db->exec($sql);
 
    // create user table
    $sql = <<<SQL
        CREATE TABLE IF NOT EXISTS `mdd_user` (
            `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
            `name` varchar(255),
            `company` varchar(255),
            PRIMARY KEY (`id`)
        )ENGINE=INNODB;
SQL;
    $db->exec($sql);
 
    // create issue table
    $sql = <<<SQL
        CREATE TABLE IF NOT EXISTS `mdd_issue` (
            `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
            `project_id` INT(10) UNSIGNED NOT NULL,
            `created_at` DATETIME,
            `updated_at` DATETIME,
            `title` VARCHAR(255),
            `description` TEXT,
            `creator_user_id` INT(10) UNSIGNED NOT NULL,
            `resolver_user_id` INT(10) UNSIGNED NOT NULL,
            `STATUS` VARCHAR(255),
            `priority` VARCHAR(255),
            PRIMARY KEY (`id`),
            FOREIGN KEY (`project_id`) REFERENCES `mdd_project` (`id`),
            FOREIGN KEY (`creator_user_id`) REFERENCES `mdd_user` (`id`),
            FOREIGN KEY (`resolver_user_id`) REFERENCES `mdd_user` (`id`)
        )ENGINE=INNODB;
SQL;
    $db->exec($sql);
 
    // create issue history table
    $sql = <<<SQL
        CREATE TABLE IF NOT EXISTS `mdd_issue_history` (
            `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
            `issue_id` INT (10) UNSIGNED NOT NULL,
            `title` VARCHAR(255),
            `description` TEXT,
            `created_at` DATETIME,
            `creator_name` VARCHAR(255),
            PRIMARY KEY (`id`),
            FOREIGN KEY (`issue_id`) REFERENCES `mdd_issue` (`id`)
        )ENGINE=INNODB;
SQL;
    $db->exec($sql);
 
} catch (Exception $e) {
    die($e->getMessage());
}
 
/*
 * MyDoneDone API Connection
 */
 
// init wsdl
$client = new SoapClient(MDD_WSDL);
 
// authenticate with API
$res = $client->login(
    array(
        'username_or_email' => MDD_USER,
        'password' => MDD_PASS,
    )
);
 
// fail if login wasn't successful
if ($res->LoginResult !== true) {
    die('Login failed!');
}
 
// user cache array
$users = array();
 
// load the project list
$projects = $client->getProjects();
$projects = $projects->GetProjectsResult->ProjectInfo;
 
// iterate all projects collecting data
foreach ($projects as $project) {
    // insert project record into db
    if ($db->exec("INSERT INTO `mdd_project` (`name`) VALUES (".$db->quote($project->Name).")") === false) {
        db_fail('Insert to mdd_project failed.');
    }
    $pid = $db->lastInsertId();
    status("Created project $pid with name of {$project->Name}");
 
    // load project issues
    $issues = $client->getIssuesInProject(array(
        'project_id' => $project->ID,
        'should_load_issue_details' => true,
    ));
    $issues = $issues->GetIssuesInProjectResult->IssueInfo;
 
    // skip processing if there are no issues.
    if (!is_array($issues) || count($issues) == 0) {
        continue;
    }
 
    // iterate issues
    foreach ($issues as $issue) {
        // create creator user if not exists
        if (!isset($users[$issue->Creator->ID])) {
            $sql = "INSERT INTO `mdd_user` (`name`, `company`) VALUES (".$db->quote($issue->Creator->Name).", ".$db->quote($issue->Creator->CompanyName).")";
            if ($db->exec($sql) === false) {
                db_fail('Insert into mdd_user failed.');
            }
            $users[$issue->Creator->ID] = $db->lastInsertId();
            status("Created user {$users[$issue->Creator->ID]} with name of {$issue->Creator->Name} for {$issue->Creator->CompanyName}");
        }
 
        // create resolver user if not exists
        if (!isset($users[$issue->Resolver->ID])) {
            $sql = "INSERT INTO `mdd_user` (`name`, `company`) VALUES (".$db->quote($issue->Resolver->Name).", ".$db->quote($issue->Resolver->CompanyName).")";
            if ($db->exec($sql) === false) {
                db_fail('Insert into mdd_user failed.');
            }
            $users[$issue->Resolver->ID] = $db->lastInsertId();
            status("Created user {$users[$issue->Resolver->ID]} with name of {$issue->Resolver->Name} for {$issue->Resolver->CompanyName}");
        }
 
        // insert issue id
        $sql = "INSERT INTO `mdd_issue` (`project_id`, `created_at`, `updated_at`, `title`, `description`, `creator_user_id`, `resolver_user_id`, `status`, `priority`)
            VALUES ('$pid', '{$issue->CreateDate}', '{$issue->UpdateDate}', ".$db->quote($issue->Title).", ".$db->quote($issue->Description).", '{$users[$issue->Creator->ID]}', '{$users[$issue->Resolver->ID]}',  '{$issue->IssueStatus}', '{$issue->PriorityLevel}')";
        if ($db->exec($sql) === false) {
            db_fail('Insert into mdd_issue failed.');
        }
        $issueId = $db->lastInsertId();
        status("Created issue $issueId for project $pid with title of {$issue->Title}");
 
        // grab history into convenience var
        $history = $issue->History->History;
 
        // skip if there are no records
        if (!is_array($history) || count($history) == 0) {
            continue;
        }
 
        // iterate issue history records
        foreach ($history as $record) {
            // insert history record
            $sql = "INSERT INTO `mdd_issue_history` (`issue_id`, `title`, `description`, `created_at`, `creator_name`)
                VALUES ('$issueId', ".$db->quote($record->Title).", ".$db->quote($record->Description).", ".$db->quote($record->CreateDate).", ".$db->quote($record->CreatorName).")";
            if ($db->exec($sql) === false) {
                db_fail('Insert into mdd_issue_history failed.');
            }
            status("Created history record for issue $issueId with title of {$record->Title}");
        }
    }
}
 
// deauthenticate
$client->logout();
19
Feb

Disabling a Magento Observer in config.xml

In some rare cases, functionality that clients need me to develop requires disabling built-in observers because they conflict with the desired custom behavior. There are also observers not everyone needs, and disabling them can yield a performance boost. I had previously disabled a few observers by rewriting the observer model and returning NULL inside the observer method. But as brought to light by Colin M. on the Magento Developer Group, it turns out there is a better method of disabling observers that is both less intrusive and requires only a small bit of configuration XML in your custom module’s config.xml file.

I decided to test it myself to find out if it worked, and then to determine whether it was merely a side effect of something not being handled in the code, or whether the event dispatcher was intentionally coded to skip observers with the type set to ‘disabled’. The event I chose for my test case was customer_login, which is observed by the Mage_Catalog module; I added a call to Mage::log in my sandbox to easily see when the observer was being called.

    <customer_login>
        <observers>
            <catalog><type>disabled</type></catalog>
        </observers>
    </customer_login>
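
For context, the snippet above is not a complete config.xml on its own; it belongs under the events node of the configuration area where the observer was originally declared (global, frontend, or adminhtml — check where the module you’re targeting declares it). A sketch assuming a global-area observer:

```xml
<config>
    <global>
        <events>
            <customer_login>
                <observers>
                    <catalog><type>disabled</type></catalog>
                </observers>
            </customer_login>
        </events>
    </global>
</config>
```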

In a typical observer declaration you would have a type of ‘model’ or ‘object’, but if you specify a type of ‘disabled’, Magento will specifically skip calling the observer, per the code found in Mage_Core_Model_App::dispatchEvent. This means that not only is this approach less intrusive, requiring just a bit of XML to disable any observer, it also does not rely on the side effects of an invalid observer type.

switch ($obs['type']) {
 
    case 'disabled':
        break;
    case 'object': case 'model':
        ...
        break;
    default:
        ...
        break;
}