Hi, I'm Tuan, a Full-stack Web Developer from Tokyo 😊. Follow my blog to not miss out on useful and interesting articles in the future.
Don't overuse JSON.parse and JSON.stringify
I have used these functions for a long time and they are indispensable, but they are synchronous, and the time they take grows with the size of the input. Parsing or stringifying a large JSON object can block the process long enough to slow everything down. If your application accepts and processes large JSON objects from users, be careful about how much data you take in.
These functions take a long time on big inputs because they process the entire value in a single synchronous pass. To avoid blocking, you can use a stream to handle the object or string in smaller chunks, and several npm modules offer asynchronous, streaming JSON APIs. For small to medium amounts of data, JSON.parse and JSON.stringify work fine, but for large data sets you should look into these alternatives:
- JSONStream, which offers stream APIs.
- stream-json, which parses JSON incrementally as a stream.
- Big-Friendly JSON (bfj), which offers stream APIs as well as asynchronous versions of the standard JSON functions.
Add a logger
Logger libraries help you keep track of messages, errors, and the requests your application receives. Morgan is a library that logs HTTP requests: added as middleware, it writes the logs to the console, and you can plug in other storage mechanisms as well. The logs Morgan collects make it easier to analyze traffic and improve your application.
var express = require('express')
var morgan = require('morgan')

var app = express()
app.use(morgan('combined'))

app.get('/', function (req, res) {
  res.send('hello, world!')
})
I suggest using Winston for keeping track of errors and messages. It is simple to use and can write logs to different places, such as files, databases, and the console. You can also choose the level of detail you want, such as error, warn, info, and debug.
What I normally do is create a separate logger file and export it.
const { createLogger, format, transports } = require('winston');
const config = require('./config');

const { combine, timestamp, printf } = format;

const winstonFormat = printf(({ level, message, timestamp, stack }) =>
  `${timestamp} ${level}: ${stack || message}`
);

const logger = createLogger({
  level: config.env === 'development' ? 'debug' : 'info',
  format: combine(
    timestamp(),
    winstonFormat,
    config.env === 'development' ? format.colorize() : format.uncolorize()
  ),
  transports: [new transports.Console()],
});

module.exports = logger;
Choose one API to use for one specific task
It can be hard to organize APIs, but giving each API one specific responsibility makes them more independent, easier to maintain, and easier to divide among a team. It also makes your application respond faster than if you put every operation into one API.
For example, for a form that accepts both text and videos, it is best to have two separate APIs: one to store the text and one to upload the video. The text request can then return immediately instead of waiting for the upload to finish, which keeps the response time fast.
Separate code into npm packages
If you are working on multiple projects and find yourself using the same code over and over again, it is a good idea to put that code into an npm package. This will save you time in the long run, and make it easier to collaborate with others.
Make heavy calculations asynchronous
Node.js is good at input and output operations, but it is not suitable for long-running, CPU-heavy calculations. If you, say, sum a huge range of integers on the main thread, your application may become stuck and unable to serve requests, because Node.js handles all requests on a single thread rather than creating a new thread per request.
If the event loop is busy with one long task, all current and new requests have to wait. We need a solution!
You can use the setImmediate function to defer a heavy function instead of calling it right away. Node.js will finish the rest of the current synchronous code, and handle pending I/O callbacks, before it runs the setImmediate callback.
// other code
setImmediate(() => {
  processData(data);
});
// other code
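A common variant of this idea is to split the heavy work itself into slices and call setImmediate between them, so the event loop can run pending callbacks in between. A sketch, with illustrative names (`processInChunks`, `handleItem` are not standard APIs):

```javascript
// Process a large array one slice at a time, yielding to the event loop
// between slices with setImmediate so pending I/O is not starved.
function processInChunks(items, chunkSize, handleItem, done) {
  let index = 0;
  function nextChunk() {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      handleItem(items[index]); // synchronous work, bounded per tick
    }
    if (index < items.length) {
      setImmediate(nextChunk); // yield before processing the next slice
    } else {
      done();
    }
  }
  nextChunk();
}

// Usage: sum 10,000 numbers in slices of 1,000.
let total = 0;
const numbers = Array.from({ length: 10000 }, (_, i) => i + 1);
processInChunks(numbers, 1000, (n) => { total += n; }, () => {
  console.log(total); // 50005000
});
```

Each slice blocks the loop only briefly, instead of one long uninterrupted computation.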
The event loop does not run the processData function immediately; it registers the setImmediate callback and lets other code run first. For truly heavy computation, though, this approach is not the best choice: everything still runs on the single event-loop thread, so it cannot take advantage of multiple cores. The better solution is the Node.js worker_threads module, which runs those tasks on a separate thread.
Do not store a lot of information in one variable.
Variables live in RAM (Random Access Memory), which makes them quick to write and read, but RAM is limited. If you need to keep a lot of data, put it in a database instead: holding very large data sets in memory can slow down the server and your application.
Avoid
These last items may seem obvious, but take all three of them seriously.
- Do not use synchronous functions that block until they finish, such as readFileSync. Use the callback or promise versions instead, which let other work continue while the operation runs.
- Don't put huge payloads in a request or response, because that slows down the response time. If you need to send a lot of data, stream it instead.
- Don't load large data files, such as big JSON files, with Node.js require, because it blocks the main thread. Put the data in a database and fetch only what you need; if it has to stay in a file, stream it in chunks asynchronously.
And Finally
As always, I hope you enjoyed this article and learned something new. Thank you and see you in the next articles!
If you liked this article, please give me a like and subscribe to support me. Thank you. 😊