A Node.js project architecture that should not be missed

Express.js is a great framework for building Node.js REST APIs, but it gives you no clue about how to organize your Node.js project.

It sounds silly, but it's a problem.

A proper Node.js project structure avoids code duplication and improves the stability and scalability of your services.

This article is based on my years of experience dealing with poorly structured Node.js projects and bad design patterns, plus countless hours of refactoring.

If you need help adjusting your Node.js project architecture, just drop me a line at sam@softwareontheroad.com.

Table of Contents
Directory structure
Three-tier architecture
Service layer
Pub/Sub layer
Dependency injection
Unit testing
Cron jobs and recurring tasks
Configurations and secrets
Loaders
Directory structure
This is the Node.js project structure I want to talk about.

Every Node.js REST API service I've built uses the following structure; let's see what each component does.

src
 │ app.js # App Entry
 └───api # Express route controllers for all the endpoints of the app
 └───config # Environment variables and configuration related stuff
 └───jobs # Task Scheduling Definition for agenda.js
 └───loaders # Split the startup process into modules
 └───models # Database models
 └───services # All business logic should be here
 └───subscribers # Event handler for asynchronous tasks
 └───types # Type declaration file for Typescript (d.ts)

This is more than just a way to organize JavaScript files...

Three-tier architecture
The idea is to use the separation of concerns principle to remove business logic from Node.js API routing.

Because one day you will want to use your business logic from a CLI tool, or from a recurring task, and it is obviously not a good idea to make an API call from the Node.js server to itself.

Don't put your business logic in the controllers!!
You may be tempted to use the Express.js controller layer to store your application's business logic, but soon your code will become hard to maintain, and you will need to write complex mocks of the Express.js req or res objects whenever you need to write unit tests.

It is also complicated to tell apart when a response should be sent and when processing should continue "in the background", say, after the response has been sent to the client.

route.post('/', async (req, res, next) => {

  // This should be a middleware or handled by a library like Joi
  // Joi is a data validation library: github.com/hapijs/joi
  const userDTO = req.body;
  const isUserValid = validators.user(userDTO);
  if (!isUserValid) {
    return res.status(400).end();
  }

  // Lots of business logic here...
  const userRecord = await UserModel.create(userDTO);
  delete userRecord.password;
  delete userRecord.salt;
  const companyRecord = await CompanyModel.create(userRecord);
  const companyDashboard = await CompanyDashboard.create(userRecord, companyRecord);

  ...whatever...

  // This is the "optimization" that ruins everything.
  // The response is sent to the client...
  res.json({ user: userRecord, company: companyRecord });

  // ...but this code block is still executing :(
  const salaryRecord = await SalaryModel.create(userRecord, companyRecord);
  eventTracker.track('user_signup', userRecord, companyRecord, salaryRecord);
  intercom.createUser(userRecord);
  gaAnalytics.event('user_signup', userRecord);
  await EmailService.startSignupSequence(userRecord);
});

Use a service layer for your business logic
This layer is where your business logic is placed.

Following the SOLID principles for Node.js, it is just a collection of classes with clear purposes.

This layer should not contain any form of SQL query; for that, use the data access layer.

Move your code away from the Express.js router
Don't pass the req or res objects to the service layer
Don't return anything related to the HTTP transport layer, such as status codes or headers, from the service layer
Example

route.post('/',
  validators.userSignup, // this middleware takes care of data validation
  async (req, res, next) => {
    // The actual responsibility of the route layer
    const userDTO = req.body;

    // Call to the service layer
    // Abstraction on how to access the data layer and business logic
    const { user, company } = await UserService.Signup(userDTO);

    // Return a response to the client
    return res.json({ user, company });
  });

Here is how your service works behind the scenes.

import UserModel from '../models/user';
import CompanyModel from '../models/company';
import SalaryModel from '../models/salary';

export default class UserService {

  async Signup(user) {
    const userRecord = await UserModel.create(user);
    const companyRecord = await CompanyModel.create(userRecord); // needs userRecord to have the database id
    const salaryRecord = await SalaryModel.create(userRecord, companyRecord); // depends on user and company being created

    ...whatever

    await EmailService.startSignupSequence(userRecord)

    ...do more stuff

    return { user: userRecord, company: companyRecord };
  }
}

Pub/Sub layer
The pub/sub pattern goes beyond the classic three-tier architecture proposed here, but it is extremely useful.

Your simple Node.js API endpoint that creates a user may also want to call a third-party service, maybe an analytics service, or start an e-mail sequence.

Sooner rather than later, that simple "create" operation will be doing several things, and you will end up with 1,000 lines of code in a single function.

That violates the single-responsibility principle.

Therefore, it's best to divide responsibilities from the beginning to keep your code maintainable.

import UserModel from '../models/user';
import CompanyModel from '../models/company';
import SalaryModel from '../models/salary';

export default class UserService {

  async Signup(user) {
    const userRecord = await UserModel.create(user);
    const companyRecord = await CompanyModel.create(user);
    const salaryRecord = await SalaryModel.create(user, companyRecord);

    eventTracker.track(
      'user_signup',
      userRecord,
      companyRecord,
      salaryRecord
    );

    intercom.createUser(
      userRecord
    );

    gaAnalytics.event(
      'user_signup',
      userRecord
    );

    await EmailService.startSignupSequence(userRecord)

    ...more stuff

    return { user: userRecord, company: companyRecord };
  }

}

Imperatively calling every dependent service like this is not good practice.

One of the best approaches is to emit an event, 'user_signup', like below, and leave the rest to the event listeners.

export default class UserService {

  async Signup(user) {
    const userRecord = await this.userModel.create(user);
    const companyRecord = await this.companyModel.create(user);
    this.eventEmitter.emit('user_signup', { user: userRecord, company: companyRecord });
    return userRecord;
  }

}

You can now split event handlers/listeners into multiple files.

eventEmitter.on('user_signup', ({ user, company }) => {

  eventTracker.track(
    'user_signup',
    user,
    company,
  );

  intercom.createUser(
    user
  );

  gaAnalytics.event(
    'user_signup',
    user
  );
})
eventEmitter.on('user_signup', async ({ user, company }) => {
  const salaryRecord = await SalaryModel.create(user, company);
})
eventEmitter.on('user_signup', async ({ user, company }) => {
  await EmailService.startSignupSequence(user)
})

You can either wrap the await statements in a try-catch block, or just let them fail and handle the 'unhandledPromise' via process.on('unhandledRejection', cb).

Dependency Injection
DI, or inversion of control (IoC), is a common pattern that helps with code organization by "injecting" or passing a class's or function's dependencies through its constructor.

This way, you gain the flexibility to inject "compatible dependencies" when, for example, you write unit tests for the service, or when the service is used in another context.

Code without DI

import UserModel from '../models/user';
import CompanyModel from '../models/company';
import SalaryModel from '../models/salary';

class UserService {
  constructor(){}
  Signup(){
    // Calling UserModel, CompanyModel, etc
    ...
  }
}

Code with manual dependency injection

export default class UserService {
  constructor(userModel, companyModel, salaryModel){
    this.userModel = userModel;
    this.companyModel = companyModel;
    this.salaryModel = salaryModel;
  }
  getMyUser(userId){
    // models available through 'this'
    const user = this.userModel.findById(userId);
    return user;
  }
}

Now you can inject custom dependencies.

import UserService from '../services/user';
import UserModel from '../models/user';
import CompanyModel from '../models/company';
const salaryModelMock = {
 calculateNetSalary(){
 return 42;
 }
}
const userServiceInstance = new UserService(UserModel, CompanyModel, salaryModelMock);
const user = await userServiceInstance.getMyUser('12346');

The number of dependencies a service can have is unbounded, and refactoring every instantiation when you add a new one is a tedious and error-prone task. That's why dependency injection frameworks were created.

The idea is to declare your class dependencies, and when you need an instance of that class, you just call the "Service Locator".

Let's see an example using TypeDI, an npm library that brings DI to Node.js.

More information about TypeDI can be found on the official website.

TypeScript example

import { Service } from 'typedi';
@Service()
export default class UserService {
 constructor(
 private userModel,
 private companyModel, 
 private salaryModel
 ){}

 getMyUser(userId){
 const user = this.userModel.findById(userId);
 return user;
 }
}

services/user.ts

TypeDI will now be responsible for resolving any dependencies required by UserService.

import { Container } from 'typedi';
import UserService from '../services/user';
const userServiceInstance = Container.get(UserService);
const user = await userServiceInstance.getMyUser('12346');

Abusing service locator calls is an anti-pattern

Dependency injection combined with Express.js in practice
Using DI in Express.js is the final piece of this Node.js project architecture.

Routing Layer

route.post('/',
  async (req, res, next) => {
    const userDTO = req.body;

    const userServiceInstance = Container.get(UserService) // Service locator

    const { user, company } = await userServiceInstance.Signup(userDTO);

    return res.json({ user, company });
  });

Awesome, the project is looking great! It's so organized that it makes me want to code something right now.

Unit Test Example
Unit testing becomes very simple by using dependency injection and these organizational patterns.

You don't have to mock req/res objects or require(...) calls.

Example: Unit test of user registration method

tests/unit/services/user.js

import UserService from '../../../src/services/user';

describe('User service unit tests', () => {
  describe('Signup', () => {
    test('Should create user record and emit user_signup event', async () => {
      const eventEmitterService = {
        emit: jest.fn(),
      };

      const userModel = {
        create: (user) => {
          return {
            ...user,
            _id: 'mock-user-id'
          }
        },
      };

      const companyModel = {
        create: (user) => {
          return {
            owner: user._id,
            companyTaxId: '12345',
          }
        },
      };

      const userInput = {
        fullname: 'User Unit Test',
        email: 'test@example.com',
      };

      const userService = new UserService(userModel, companyModel, eventEmitterService);
      const userRecord = await userService.Signup(userInput);

      expect(userRecord).toBeDefined();
      expect(userRecord._id).toBeDefined();
      expect(eventEmitterService.emit).toBeCalled();
    });
  })
})

Cron jobs and recurring tasks
Since business logic is encapsulated in the service layer, it is easy to use it from a cron job.

You should never rely on Node.js primitives like setTimeout or other ad-hoc ways of deferring code execution; instead, use a framework that persists your jobs and their executions to a database.

That way you will have control over failed jobs and feedback from those that succeed; I wrote about this in my article on the best Node.js task managers, softwareontheroad.com/nodejs-scal...
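For example, with the logic in the service layer, a job definition stays thin. The service method below is hypothetical; in practice you would register the handler with a persistent scheduler such as the agenda.js mentioned in the directory structure above:

```javascript
// Hypothetical service method; in a real project this would live in
// src/services and query the database through the models.
const UserService = {
  async sendInactivityReminders() {
    const inactiveUsers = [{ email: 'a@example.com' }, { email: 'b@example.com' }]; // stub data
    // ...send one reminder e-mail per user...
    return { remindersSent: inactiveUsers.length };
  },
};

// The job handler only delegates to the service layer, so the exact same
// logic can be reused from an HTTP route, a CLI tool, or a scheduler.
async function inactivityReminderJob() {
  const result = await UserService.sendInactivityReminders();
  console.log(`Sent ${result.remindersSent} reminders`);
  return result;
}
```

Because the handler owns no business logic, a failed job can be retried by the scheduler without worrying about partially duplicated logic in the route layer.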

Configurations and secrets
Following the battle-tested concepts of the Twelve-Factor App (12factor.net) for Node.js, the best practice for storing API keys and database connection strings is to use dotenv.

Put a .env file (which must never be committed, though it has to exist with default values in the repository) in the project root; the dotenv npm package then loads the .env file and writes its variables into Node.js's process.env object.

That's good enough, but I like to add an extra step: have a config/index.ts file where the dotenv npm package loads the .env file, and then use an object to store the variables, so we get structure and code autocompletion.

config/index.js

const dotenv = require('dotenv');
// config() reads your .env file, parses the contents, and assigns it to process.env
dotenv.config();

export default {
  port: process.env.PORT,
  databaseURL: process.env.DATABASE_URI,
  paypal: {
    publicKey: process.env.PAYPAL_PUBLIC_KEY,
    secretKey: process.env.PAYPAL_SECRET_KEY,
  },
  mailchimp: {
    apiKey: process.env.MAILCHIMP_API_KEY,
    sender: process.env.MAILCHIMP_SENDER,
  }
}

This way you avoid flooding your code with process.env.MY_RANDOM_VAR statements, and thanks to autocompletion you don't have to remember how the environment variables are named.
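One small extra I like (my own addition, not part of the original setup): fail fast at startup when a required variable is missing, instead of discovering an undefined value at request time:

```javascript
// Sketch: read a required environment variable or throw at startup.
// The variable names mirror the config object above.
function requireEnv(name) {
  const value = process.env[name];
  if (value === undefined) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage inside config/index.js, for the variables that have no sane default:
// databaseURL: requireEnv('DATABASE_URI'),
```

A missing key then crashes the process during boot with a clear message, which is far easier to diagnose than a connection error deep inside a request handler.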

Loaders
I took this pattern from W3Tech's microframework, without depending on their package.

The idea is to split the Node.js startup process into testable modules.

Let's take a look at the classic Express.js app initialization:

const mongoose = require('mongoose');
const express = require('express');
const bodyParser = require('body-parser');
const session = require('express-session');
const cors = require('cors');
const errorhandler = require('errorhandler');
const app = express();

app.get('/status', (req, res) => { res.status(200).end(); });
app.head('/status', (req, res) => { res.status(200).end(); });
app.use(cors());
app.use(require('morgan')('dev'));
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json(setupForStripeWebhooks));
app.use(require('method-override')());
app.use(express.static(__dirname + '/public'));
app.use(session({ secret: process.env.SECRET, cookie: { maxAge: 60000 }, resave: false, saveUninitialized: false }));
mongoose.connect(process.env.DATABASE_URL, { useNewUrlParser: true });

require('./config/passport');
require('./models/user');
require('./models/company');
app.use(require('./routes'));
app.use((req, res, next) => {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});
app.use((err, req, res, next) => {
  res.status(err.status || 500);
  res.json({'errors': {
    message: err.message,
    error: {}
  }});
});

... more stuff

... maybe start up Redis

... maybe add more middlewares

async function startServer() {
  app.listen(process.env.PORT, err => {
    if (err) {
      console.log(err);
      return;
    }
    console.log(`Your server is ready!`);
  });
}

// Run the async function to start our server
startServer();

As you can see, this part of the app can be a mess.

Here is a better way to handle it.

const loaders = require('./loaders');
const express = require('express');

async function startServer() {

 const app = express();

 await loaders.init({ expressApp: app });

 app.listen(process.env.PORT, err => {
 if (err) {
 console.log(err);
 return;
 }
 console.log(`Your server is ready !`);
 });
}

startServer();

Now the loaders are just tiny files with a concise purpose.

loaders/index.js

import expressLoader from './express';
import mongooseLoader from './mongoose';

export default async ({ expressApp }) => {
  const mongoConnection = await mongooseLoader();
  console.log('MongoDB Initialized');
  await expressLoader({ app: expressApp });
  console.log('Express Initialized');

  // ... more loaders can be here

  // ... Initialize agenda
  // ... or Redis, or whatever you want
}
The express loader

loaders/express.js

import * as express from 'express';
import * as bodyParser from 'body-parser';
import * as cors from 'cors';

export default async ({ app }: { app: express.Application }) => {

 app.get('/status', (req, res) => { res.status(200).end(); });
 app.head('/status', (req, res) => { res.status(200).end(); });
 app.enable('trust proxy');

 app.use(cors());
 app.use(require('morgan')('dev'));
 app.use(bodyParser.urlencoded({ extended: false }));

 // ...More middlewares

 // Return the express app
 return app;
}

The mongo loader

loaders/mongoose.js

import * as mongoose from 'mongoose'
export default async (): Promise<any> => {
 const connection = await mongoose.connect(process.env.DATABASE_URL, { useNewUrlParser: true });
 return connection.connection.db;
}

Conclusion
We went deep into a Node.js project structure that has been battle-tested in production. Here are the tips, summarized:

Use a three-tier architecture.
Don't put your business logic into the Express.js controllers.
Use the pub/sub pattern and emit events for background tasks.
Use dependency injection for peace of mind.
Never leak your passwords, secrets, and API keys; use a configuration manager.
Split your Node.js server configuration into small modules that can be loaded independently.


Posted on Fri, 13 Dec 2019 19:12:03 -0500 by englishman69