Documenting the journey more in 2019 than I did in 2018, but 2019 has more to document I guess.

I’ve always wanted to blog more, but I never really seemed to get around to it in 2018 as originally planned. Already feeling really motivated for 2019, I plan to write a blog post at least once a week on something technology related. I have also started a new podcast of my own, which I am in the process of working on, and I will be doing another podcast with a fellow developer; both will be around the topics of technology, business and life.

This year has definitely been a fast one, and an unexpected one too. For instance, I got one of my biggest IT contracts to date in the financial sector, working in Scotland for 9 months with one of the world’s largest investment banks. That was exciting, and I met some really cool and interesting people in tech who know their stuff. I also started a new software company with a friend and hired another friend as our first employee; the process for that was…interesting, and worthy of documenting in more detail. We plan on delivering some kick-ass software as a service and native iOS and Android apps, as well as some other interesting ideas we have and hope to launch in 2019.

In January 2019 I will be finishing my contract in the financial sector and moving on to my next contract in the advertising & marketing sector. I can’t wait for that: I’ll be working with a very well-known company that deals with some major brands across the world. So far, 2019 is looking to be another good and productive year.

How To: Set up an API endpoint that is distributed over multiple servers using NGINX upstreams.

This post is especially useful if you are writing a service that utilises third-party services on the internet that are rate limited by IP address; an example of this is the whois information service.

Today I am going to show you how to set up a simple API endpoint on your application using the open source NGINX proxy server. We are going to make this endpoint span across multiple servers for what I need; you can just set up the one IP address and leave it at that if you only have a normal API setup to do.

First we need to set up a few servers for my example. I’m going to set up a production frontend server that will host my ReactJS application, and I am going to set up two or three more production servers to run my NodeJS / Express APIs.

I have set up 4 CentOS 7.4 Linux servers for this example.

Now that I have my frontend production server running, I first need to update the software on the box. To do this I will run the command below:

yum update

This will update the software to the latest versions for security and bug fixes etc.

Now we need to install NGINX proxy server, use the command below to do this:

yum install nginx -y

If the above command fails because it cannot find the package requested, run the following command to install the epel release repo to the system:

yum install epel-release -y

If the install of the epel release was successful, go ahead and run the NGINX install command again:

yum install nginx -y

Once NGINX is installed we need to start the service:

service nginx start

Then we need to have the system start the service automatically on reboot:

systemctl enable nginx

By now, if we visit the IP address of the server, we should see the default NGINX web page.

So far, so good. Now let’s navigate to the NGINX config folder and start playing with some configuration files. On CentOS this can be found in the following location:

/etc/nginx/
For other Linux distributions, please visit the NGINX documentation for more information on where to locate this folder.

For this example I am not going to be setting up virtual host containers properly (in separate files inside the correct folders); I am going to use the default one that ships with NGINX, just to keep this example simple.

We need to create a new upstream config block called api_group. This block of code will contain the IP addresses of all our API servers. This is a very basic use of the upstream functionality; there is a lot more you can do with it, but for now this is all we need. See the example below (the IP addresses are placeholders for your own servers’ internal private IPs):

upstream api_group {
    server 10.0.0.2; # internal private IP address
    server 10.0.0.3; # internal private IP address
}
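As a taste of that extra upstream functionality, the same block can also carry per-server options such as weight, max_fails and fail_timeout (all standard NGINX upstream parameters; the IPs below are placeholders):

```nginx
upstream api_group {
    # send twice as many requests to this server
    server 10.0.0.2 weight=2;
    # stop sending traffic to this server for 30s after 3 failed attempts
    server 10.0.0.3 max_fails=3 fail_timeout=30s;
}
```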

Once we have that set up, we just need to go into the server block, add a location for the /api endpoint and map it to our upstream. See the code below for an example:

server {
    # …rest of server config
    location /api {
        proxy_pass http://api_group/;
    }
}
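If your API servers need to see details of the original client rather than the proxy, the location block can also forward them using the standard proxy_set_header directive (an optional addition; not required for the round robin itself):

```nginx
location /api {
    proxy_pass http://api_group/;
    # pass the original client IP and requested host through to the API servers
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
}
```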

With this in place, all we need to do is restart the NGINX service using the command below:

service nginx restart

Now I have set up 2 more servers for my API application. I haven’t shown you the setup here, but it’s pretty much the same as the frontend production server setup above. All I have is an index.html page on each API server, one with A and one with B written inside.

When I hit the endpoint, it gives me A or B on each page refresh. This shows me that NGINX is doing a round robin over the servers I listed in my upstream block earlier (a basic type of load balancing). So, if we were to replace these files with our API, we would have just doubled our whois lookup limit (in theory anyway), due to the lookups being rate limited by IP.
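The round-robin behaviour seen above can be sketched in plain Python (an illustration of the scheduling idea only, not NGINX’s actual implementation; the IPs are the placeholder backends from the upstream block):

```python
from itertools import cycle

# Placeholder backend IPs, standing in for the servers in the upstream block.
backends = ["10.0.0.2", "10.0.0.3"]
next_backend = cycle(backends)

# Ten incoming /api requests get spread evenly across both backends,
# so each backend only sees half of the per-IP rate-limited lookups.
assignments = [next(next_backend) for _ in range(10)]
print(assignments.count("10.0.0.2"), assignments.count("10.0.0.3"))  # 5 5
```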

I will be keeping an eye on this going forward and seeing how well it works!

If there is any way I can improve this post, or if I have done anything wrong, leave a comment and let me know.


Using the Symfony VarDumper Component For Debugging

What is the Symfony VarDumper Component?

The VarDumper component provides mechanisms for walking through any arbitrary PHP variable. On top of this, it provides a better dump() function that you can use instead of var_dump().

Please visit the official documentation page for more detailed information about this Symfony component.

How can I install the Symfony VarDumper Component?

The quickest and easiest way is to find the package on the Packagist website and then use Composer to install the package into your project.

To install it into your project, run the following command on either Windows or Linux in the command line:

composer require symfony/var-dumper

This will add the current stable package to your project and will generate an updated autoload.php file in your vendor folder.

How do I use the Symfony VarDumper Component?

To use the VarDumper component, you must make some small changes to your AppKernel.php file; you will find this in your project’s app folder.

Find the following section of code:

if (in_array($this->getEnvironment(), array('dev', 'test'))) {
    $bundles[] = new Symfony\Bundle\WebProfilerBundle\WebProfilerBundle();
    $bundles[] = new Sensio\Bundle\DistributionBundle\SensioDistributionBundle();
}

Now add to the code as shown in the example below:

if (in_array($this->getEnvironment(), array('dev', 'test'))) {
    $bundles[] = new Symfony\Bundle\WebProfilerBundle\WebProfilerBundle();
    $bundles[] = new Sensio\Bundle\DistributionBundle\SensioDistributionBundle();
    // add the line below
    $bundles[] = new Symfony\Bundle\DebugBundle\DebugBundle();
}


This will now allow you to use the VarDumper Component in your project and output information to the profiler bar.

This component can be used in your project files, as the method dump() is globally available from the component. You can also use the component in the command line interface, where the output will be written to STDOUT.

How to Use the Dump Method

If, for example, I had an entity in my project called Order and I wanted to view the entity loaded from the database, the easiest way of achieving this would be like so:

$order = $this->getDoctrine()->getRepository('DemoCoreBundle:Order')->find(12345);
dump($order);

Resulting in the following output to the profiler bar:

Symfony Profiler Dump

This is quite a useful Symfony component and can be used in any project, whether it is Symfony based or not, as long as it is a PHP project. Go away and play with it, enjoy!

Using the Symfony Console Component To Improve Development & Deployment

What is the Symfony Console Component?

The Console component allows you to create command-line commands. Your console commands can be used for any recurring task, such as cronjobs, imports, or other batch jobs.

Please visit the official documentation page for more detailed information about this Symfony component.

How do I use the Symfony Console Component?

You can use the Symfony Console Component in either the Windows Command Prompt (CMD) or a Linux Command Line Interface (CLI); the commands will work the same on both operating systems.

You will need to make sure PHP is in your PATH variable on Windows first.

Example Console Command

php app/console command:action --argumentName=argumentValue ...

Clearing the Production Cache

If for example you wanted to clear and warmup the cache for the production environment, you would execute the following commands:

php app/console cache:clear --env=prod
php app/console cache:warmup --env=prod

If you received no errors, your cache should now be cleared and rebuilt, ready to show any updates.

Make sure you do your database migrations before you clear your cache, as schema changes will otherwise cause errors on the frontend of your site.

Hopefully you are using database migrations for your project… you are using them right? I thought so…

Dumping Your Assetic Files

If you use the Assetic bundle in your projects, here is an example of how you would regenerate the assetic cache after making changes during development:

php app/console assetic:dump

That’s all you need to do to update your cached files.

I will add some more examples of the Symfony Console usage to future posts.