How the Buffer works in Node.js
Khaleel Inchikkalayil
In Node.js, buffering is the process of temporarily storing data in memory before it is processed or transferred to its final destination. This is useful when data is read or written in chunks, or when it comes from a slow source such as a network socket or a file.
Here's an example of how buffering works in Node.js:
const fs = require('fs');
const path = require('path');
// Define the path of the file to read
const filePath = path.join(__dirname, 'large-file.txt');
// Create a buffer to store the data
let buffer = Buffer.alloc(0);
// Create a read stream to read the file in chunks
const readStream = fs.createReadStream(filePath);
// Listen for the 'data' event, which is emitted whenever a chunk of data is read
readStream.on('data', (chunk) => {
  // Append the chunk to the buffer
  buffer = Buffer.concat([buffer, chunk]);
});
// Listen for the 'end' event, which is emitted when there is no more data to read
readStream.on('end', () => {
  console.log(`Read ${buffer.length} bytes from ${filePath}`);
  // Process the data in the buffer
  // ...
});
In this example, we first define the path of the file to read. We then create an empty buffer using the Buffer.alloc() method.
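For context, here are a few common ways to create a Buffer (a minimal standalone sketch; the example values are purely illustrative):
// Buffer.alloc(size) creates a zero-filled buffer of the given size
const zeroed = Buffer.alloc(4); // <Buffer 00 00 00 00>
// Buffer.from() creates a buffer from a string or an array of bytes
const fromString = Buffer.from('hello', 'utf8'); // <Buffer 68 65 6c 6c 6f>
const fromBytes = Buffer.from([0x68, 0x69]);     // <Buffer 68 69>
console.log(zeroed.length, fromString.length, fromBytes.length); // 4 5 2
Buffer.alloc(0), as used above, simply allocates a zero-length buffer that we can grow by concatenation.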
Next, we create a read stream using the fs.createReadStream() method, which reads the file in chunks. We listen for the data event, which is emitted whenever a chunk of data is read. When the event is triggered, we append the chunk to the buffer using the Buffer.concat() method.
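To see Buffer.concat() in isolation, here is a quick standalone illustration:
const a = Buffer.from('Hello, ');
const b = Buffer.from('world!');
// Buffer.concat copies the contents of each buffer, in order, into a new buffer
const combined = Buffer.concat([a, b]);
console.log(combined.toString('utf8')); // "Hello, world!"
console.log(combined.length);           // 13
Note that Buffer.concat() allocates a fresh buffer and copies every byte on each call, so appending chunk by chunk gets more expensive as the accumulated data grows.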
Finally, we listen for the end event, which is emitted when there is no more data to read. When this event is triggered, we log the size of the buffer and process the data in it.
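For instance, if large-file.txt is UTF-8 text (an assumption made here for illustration), the processing step inside the 'end' handler could simply decode the accumulated bytes:
// Assumes the file is UTF-8 text; decode the accumulated bytes
const text = buffer.toString('utf8');
console.log(`First line: ${text.split('\n')[0]}`);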
Note that buffering large amounts of data can consume a lot of memory, so it's important to use buffering judiciously and only when necessary.
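When the data can be handled piece by piece, a common alternative is to process each chunk as it arrives instead of accumulating the whole file in memory. Here is a minimal sketch (counting newlines is just an illustrative stand-in for real per-chunk work):
const fs = require('fs');
const path = require('path');
const filePath = path.join(__dirname, 'large-file.txt');
const readStream = fs.createReadStream(filePath);
let newlineCount = 0;
// Process each chunk as it arrives; memory use stays bounded by the chunk size
readStream.on('data', (chunk) => {
  for (const byte of chunk) {
    if (byte === 0x0a) newlineCount++; // 0x0a is '\n'
  }
});
readStream.on('end', () => {
  console.log(`Counted ${newlineCount} newlines without buffering the whole file`);
});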