Around 2012 I was writing NodeJS microservices for a web crawler as part of a larger PHP application. Microservices weren't cool back then; I was doing it because PHP's sequential execution did not favor our web crawler. These microservices and the application were hanging out in the same repository.
In 2014 I was working on a project (for a separate company) where the technical director was convinced that we had to start with a microservice approach. I went along with the suggestion and started building everything out as separate services in their own repositories. Three months into the project I realized that I was not getting any benefits from the microservice architecture; I was, however, paying the microservice tax of complexity. Within one month I rewrote nearly all of the application in plain, boring Symfony2/PHP, and it was not only simpler, it was also faster and more reliable.
Today the industry is all over microservices: if you are not doing microservices with Docker, you are doing it wrong. There is an endless supply of articles explaining how microservices are being used at AWS, Google, etc.
As with all things hyped on the internet, one needs to be highly skeptical. A couple of years back AngularJS was the hot new kid on the block, today I hear people talk about "legacy Angular applications".
At the start of this article I hinted that microservices aren't free; we have something I like to call the microservice tax.
The following Hacker News comment sums the microservice tax up nicely:
You need to be this tall to use [micro] services:
Basic Monitoring, instrumentation, health checks
Distributed logging, tracing
Ready to isolate not just code, but whole build+test+package+promote for every service
Can define upstream/downstream/compile-time/runtime dependencies clearly for each service
Know how to build, expose and maintain good APIs and contracts
Ready to honor b/w and f/w compatibility, even if you're the same person consuming this service on the other side
Good unit testing skills and readiness to do more (as you add more microservices it gets harder to bring everything up, hence more unit/contract/api test driven and lesser e2e driven)
Aware of [micro] service vs modules vs libraries, distributed monolith, coordinated releases, database-driven integration, etc
Know infrastructure automation (you'll need more of it)
Have working CI/CD infrastructure
Have or ready to invest in development tooling, shared libraries, internal artifact registries, etc
Have engineering methodologies and process-tools to split down features and develop/track/release them across multiple services (xp, pivotal, scrum, etc)
A lot more that doesn't come to mind immediately
Thing is - these are all generally good engineering practices.
But with monoliths, you can get away without having to do them. There is the "login to server, clone, run some commands, start a stupid nohup daemon and run ps/top/tail to monitor" way. But with microservices, your average engineering standards have to be really high. It's not enough if you have good developers. You need great engineers.
Currently, V8 has a default memory limit of 512 MB on 32-bit systems and 1 GB on 64-bit systems. The limit can be raised by setting --max_old_space_size to a maximum of ~1024 (~1 GiB) on 32-bit and ~1741 (~1.7 GiB) on 64-bit, but if you are hitting memory limits, it is recommended that you split your single process into several workers.
Even with a process manager, one faulty request that makes Node crash will take down all other requests that are in progress on the same process; there is no way to fix this.
Running multiple processes means we need an application load balancer. Some of the NodeJS process managers have load balancing capabilities; however, since they themselves run on Node, they share the same limitations we are trying to overcome in the first place. Nginx is a good solution to this problem.
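As a sketch of what that looks like, here is a minimal Nginx config fragment that round-robins requests across three Node processes. The ports 3000-3002 are an assumption; use whatever your processes actually listen on:

```nginx
# Assumption: three Node workers listening on localhost ports 3000-3002.
upstream node_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;

    location / {
        proxy_pass http://node_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

If one worker crashes, Nginx marks it as failed and keeps routing traffic to the remaining ones while your process manager restarts it.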