Flux is MVC


When learning React and Flux I was confronted with the narrative that MVC does not scale and that Flux is a replacement for it.



The Stanford course Developing iOS 9 Apps with Swift (Lecture 2: Applying MVC) has a good explanation of what MVC is, and my take on it is similar.

To put it briefly: MVC is about decoupling logic from presentation.

Flux is still MVC

MVC is about decoupling logic from presentation; Flux is about decoupling logic from presentation in one specific way. That makes Flux a form of MVC.

When Facebook developed their application, they were using an MVC structure that was wrong for the problem they were trying to solve. When they realized this, they switched to a different MVC structure and decided to call it Flux.

I can understand that a lot of developers associate MVC with specific implementations of it, and that Facebook wanted to differentiate themselves from those; it's a shame that they made a number of factually wrong statements along the way.

See also: http://voidcanvas.com/flux-vs-mvc/


Microservices

Around 2012 I was writing NodeJS microservices for a web crawler that was part of a larger PHP application. Microservices weren't cool back then; I was doing it because PHP's sequential execution did not suit our web crawler. These microservices and the application lived in the same repository.

In 2014 I was working on a project (for a different company) where the technical director was convinced that we had to start with a microservice approach. I went along with the suggestion and started building everything out as separate services in their own repositories. Three months into the project I realized that I was not getting any of the benefits of the microservice architecture, yet I was paying the microservice tax of complexity. Within one month I rewrote nearly all of the application in plain, boring Symfony2/PHP, and it was not only simpler, it was also faster and more reliable.


Today the industry is all over microservices: if you are not doing microservices with Docker, you are doing it wrong. There is an endless supply of articles explaining how microservices are used at AWS, Google and the like.

This article sums things up nicely: https://circleci.com/blog/its-the-future/

As with all things hyped on the internet, one needs to be highly skeptical. A couple of years back AngularJS was the hot new kid on the block; today I hear people talk about "legacy Angular applications".

Microservice tax

At the start of this article I hinted that microservices aren't free; there is something I like to call the microservice tax.

The following Hacker News comment sums the microservice tax up nicely:

You need to be this tall to use [micro] services:

  • Basic Monitoring, instrumentation, health checks
  • Distributed logging, tracing
  • Ready to isolate not just code, but whole build+test+package+promote for every service
  • Can define upstream/downstream/compile-time/runtime dependencies clearly for each service
  • Know how to build, expose and maintain good APIs and contracts
  • Ready to honor b/w and f/w compatibility, even if you're the same person consuming this service on the other side
  • Good unit testing skills and readiness to do more (as you add more microservices it gets harder to bring everything up, hence more unit/contract/api test driven and lesser e2e driven)
  • Aware of [micro] service vs modules vs libraries, distributed monolith, coordinated releases, database-driven integration, etc
  • Know infrastructure automation (you'll need more of it)
  • Have working CI/CD infrastructure
  • Have or ready to invest in development tooling, shared libraries, internal artifact registries, etc
  • Have engineering methodologies and process-tools to split down features and develop/track/release them across multiple services (xp, pivotal, scrum, etc)
  • A lot more that doesn't come to mind immediately

Thing is - these are all generally good engineering practices. But with monoliths, you can get away without having to do them. There is the "login to server, clone, run some commands, start a stupid nohup daemon and run ps/top/tail to monitor" way. But with microservices, your average engineering standards have to be really high. Its not enough if you have good developers. You need great engineers.

Link: https://news.ycombinator.com/item?id=12508655


Most projects don't have Google's or Amazon's problems; looking at what the tech giants do can be misleading.

Building your application in a microservice format can be a form of premature optimization: http://basho.com/posts/technical/microservices-please-dont/

If you are unsure what to do: just build a monolith, and when you notice that some parts of it want to live on their own, extract them into microservices.

Jonathan Blow on Software Quality at the CSUA GM2

Let's Encrypt


sudo pip install letsencrypt


sudo letsencrypt certonly --webroot -w /your/web/root/directory -d yourdomain.com

The keys will be stored in /etc/letsencrypt/live/

Sample nginx config:

    listen 443 ssl;

    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
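Let's Encrypt certificates expire after 90 days, so renewal should be automated. A sketch of a crontab entry, assuming your client version provides the renew subcommand and nginx is managed by systemd (adjust the reload command to your init system):

```shell
# Try twice a day; certificates are only actually renewed when close to expiry.
# Reload nginx afterwards so it picks up the new certificate files.
0 0,12 * * * letsencrypt renew --quiet && systemctl reload nginx
```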

Setting up PHP 7.0 with MacPorts

sudo port install php70 php70-curl php70-fpm php70-gd php70-gettext php70-iconv php70-intl php70-mbstring php70-mcrypt php70-mysql php70-opcache php70-openssl php70-sqlite
sudo cp /opt/local/etc/php70/php.ini-development /opt/local/etc/php70/php.ini

Create the following PHP70-FPM config file: /opt/local/etc/php70/php-fpm.conf


[global]
error_log = log/php70/php-fpm.log
syslog.ident = php70-fpm
daemonize = no


[www]
user = nobody
group = nobody

listen = /var/run/php7-fpm.sock
listen.owner = nobody
listen.group = nobody
listen.mode = 0660

pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
slowlog = log/$pool.log.slow
catch_workers_output = yes

php_flag[display_errors] = on
php_admin_value[error_log] = /var/log/fpm-php7.www.log
php_admin_flag[log_errors] = on
php_admin_value[memory_limit] = 64M

sudo port load php70-fpm

PHP FPM will be accessible via: /var/run/php7-fpm.sock
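To wire nginx to this socket, a minimal location block looks like the following (a sketch along the lines of the sample nginx config further down, with the PHP 7 socket path substituted):

```nginx
location ~ \.php$ {
    # Hand PHP requests to the FPM socket configured above.
    fastcgi_pass unix:/var/run/php7-fpm.sock;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```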

Setting up PHP 5.6 with MacPorts

sudo port install php56 php56-curl php56-fpm php56-gd php56-geoip php56-gettext php56-iconv php56-imagick php56-mbstring php56-mcrypt php56-mysql php56-openssl php56-opcache php56-redis php56-xdebug
sudo port select --set php php56
sudo cp /opt/local/etc/php56/php.ini-development /opt/local/etc/php56/php.ini

Create the following PHP56-FPM config file: /opt/local/etc/php56/php-fpm.conf


[global]
error_log = log/php56/php-fpm.log
syslog.ident = php56-fpm
daemonize = no


[www]
user = nobody
group = nobody

listen = /var/run/php5-fpm.sock
listen.owner = nobody
listen.group = nobody
listen.mode = 0660

pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
slowlog = log/$pool.log.slow
catch_workers_output = yes

php_flag[display_errors] = on
php_admin_value[error_log] = /var/log/fpm-php.www.log
php_admin_flag[log_errors] = on
php_admin_value[memory_limit] = 64M

sudo port load php56-fpm

PHP FPM will be accessible via: /var/run/php5-fpm.sock

Sample Nginx config:

server {
    listen       80;
    index index.php index.html;
    root /www;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /app.php$is_args$args;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }
}

Installing composer on Mac OS X

curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /opt/local/bin/composer

Making Architecture Matter - Martin Fowler Keynote

Creating an encrypted Time Machine disk on ExFAT

hdiutil create -stdinpass -encryption "AES-256" -size 500g -type SPARSEBUNDLE -fs "HFS+J" YourImage.sparsebundle

Where YourImage is the name you want to give your backup image and 500g is the maximum size of your disk image.

open YourImage.sparsebundle
diskutil list

Find your mounted image in the list and note its device path; in my case it was /dev/disk3s2.

sudo diskutil enableOwnership /dev/disk3s2
sudo tmutil setdestination /Volumes/YourImage
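To check that the image is now the active backup destination (tmutil destinationinfo is a standard macOS command; the exact output format varies between OS versions):

```shell
# List the configured Time Machine destinations; the mounted
# sparsebundle volume should appear with its mount point.
tmutil destinationinfo
```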

References:
http://hints.macworld.com/article.php?story=20140415132734925
http://garretthoneycutt.com/index.php/MacOSX#Creating_an_encrypted_sparsebundle
https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/hdiutil.1.html

Running NodeJS in production

NodeJS is pretty straightforward to run on a developer laptop; however, at some point we will likely want to run it in a production setting.

You can start the Node app on the server by running node myapp.js. The narrative says that a single NodeJS process can serve all requests; this, however, does not work in practice: one process is a single point of failure and can only use a single CPU core.

Node does not have an official process manager, so we need to choose from community-provided ones such as pm2 or forever.

Even with a process manager, one faulty request that makes Node crash will take down all the other requests in progress in the same process; there is no way to fix this within a single process.

Running multiple processes means we need an application-level load balancer. Some of the NodeJS process managers have load-balancing capabilities, but since they themselves run on Node, they suffer from the same limitations we are trying to overcome in the first place. Nginx is a good solution for this problem.
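A minimal sketch of such an nginx setup (the port numbers and the process count are assumptions; run one Node process per upstream entry):

```nginx
upstream node_app {
    # One entry per Node process; ports are an example.
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    listen 80;

    location / {
        proxy_pass http://node_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```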

See also: http://geekforbrains.com/post/after-a-year-of-nodejs-in-production

My production NodeJS setup: https://github.com/istvan-antal/solid-node