Table of Contents
- Introduction
- Basic Questions
- 1. What is Node.js?
- 2. Who created Node.js?
- 3. What is the latest version of Node.js?
- 4. What language is Node.js written in?
- 5. What is event-driven programming in Node.js?
- 6. What is the Event Loop in Node.js?
- 7. How is Node.js different from traditional server-side scripting languages like PHP or Python?
- 8. What are the benefits of using Node.js?
- 9. What is NPM in Node.js?
- 10. What is the use of a package.json file in Node.js?
- 11. What are the types of API functions in Node.js?
- 12. What is Express.js and why is it used with Node.js?
- 13. What is a callback function in Node.js?
- 14. How can you handle exceptions in Node.js?
- 15. What is a Promise in Node.js?
- 16. What is a callback hell in Node.js?
- 17. What is the purpose of module.exports in Node.js?
- 18. What is a buffer in Node.js?
- 19. How can you create a server in Node.js?
- 20. What are streams in Node.js?
- 21. What are error-first callbacks in Node.js?
- 22. What is REPL in Node.js? Give a relevant code example
- 23. What is the purpose of the global object in Node.js?
- 24. What is middleware in Node.js?
- 25. What is the use of the ‘process’ object in Node.js?
- 26. What is the purpose of the __dirname variable in Node.js?
- 27. What is the event emitter in Node.js?
- 28. What is the use of the ‘request’ object in Node.js?
- 29. What is the use of the ‘response’ object in Node.js?
- 30. What is the purpose of routing in Node.js?
- Intermediate Questions
- 1. How does Node.js handle child threads?
- 2. What are the tasks that should be done asynchronously in Node.js?
- 3. Explain the concept of control flow function in Node.js.
- 4. What are some of the events fired by streams in Node.js?
- 5. What is the role of the package-lock.json file in a Node.js project?
- 6. How can you debug a Node.js application?
- 7. How is exception handling done in Node.js? Can exceptions be propagated up through the callback chain?
- 8. Explain how the cluster module in Node.js works.
- 9. Explain the role of the ‘util’ module in Node.js.
- 10. What is the significance of an error-first callback in Node.js?
- 11. How can you secure a Node.js web application?
- 12. What is the role of a Node.js http module?
- 13. What is a test pyramid in Node.js?
- 14. What is a stub? Explain using an example in Node.js.
- 15. What are some common use cases of Node.js EventEmitter?
- 16. What is piping in Node.js?
- 17. Explain the concept of ‘domain’ in error handling in Node.js.
- 18. How can we use the buffer class in Node.js for handling binary data?
- 19. Explain how routing is done in Node.js.
- 20. What is the use of the DNS module in Node.js?
- 21. Explain the concept of promise chaining in Node.js.
- 22. Explain the process object in Node.js.
- 23. What is the purpose of next() function in Node.js?
- 24. What is the role of Express.js Router?
- 25. What are some popular Node.js middleware libraries?
- 26. What is the purpose of the underscore prefix (like _read) in Node.js?
- 27. Explain session handling in a Node.js web application.
- 28. How can we perform form validation on the server side in Node.js?
- 29. Explain the role of Node.js ‘path’ module.
- 30. Explain the role of the ‘query string’ module in Node.js.
- Advanced Questions
- 1. How does Node.js handle uncaught exceptions?
- 2. Explain the working of the libuv library in Node.js.
- 3. How does the Event Loop work in Node.js?
- 4. What is the use of setImmediate() function?
- 5. What is the difference between process.nextTick() and setImmediate()?
- 6. How does Node.js handle long polling or WebSockets?
- 7. How would you go about handling server-side caching in Node.js?
- 8. Explain how to use async/await in Node.js?
- 9. How does error propagation work in Node.js callbacks and promises?
- 10. What are the differences between ‘fork’, ‘spawn’, and ‘exec’ methods in Node.js?
- 11. How does Node.js internally handle HTTP request methods like GET, POST?
- 12. How would you handle exceptions in async/await style code in Node.js?
- 13. Explain how streams work in Node.js and when to use them?
- 15. What are your strategies for writing an error handling middleware for a large-scale Node.js application?
- 16. How do you prevent your Node.js application from crashing due to unhandled exceptions? Give relevant code examples
- 17. How can you avoid callback hell in Node.js?
- 18. What strategies can you use to handle race conditions in Node.js?
- 19. How would you manage sessions in scalable Node.js applications?
- 20. What strategies can you use to secure REST APIs in Node.js?
- 21. Explain how garbage collection works in Node.js.
- 22. How would you scale a Node.js application? Discuss different strategies.
- 23. How can you handle memory leaks in long-running Node.js processes?
- 24. How can you optimize the performance of a Node.js application? Discuss different techniques.
- 25. How can you handle file uploads in a Node.js application?
- 26. How do you handle logging in a Node.js application?
- 27. How do you deal with Asynchronous APIs in Node.js?
- 28. Explain how to do authentication in Node.js?
- 29. How do you perform unit testing in Node.js?
- 30. Explain how to perform error handling when using Promises in Node.js. Give relevant code examples
- MCQ Questions
- 1. What is Node.js?
- 2. Which programming language is commonly used with Node.js?
- 3. What is the purpose of the Node Package Manager (NPM) in Node.js?
- 4. Which module is used to create a web server in Node.js?
- 5. What is the event-driven programming paradigm in Node.js?
- 6. What is the purpose of the “require” function in Node.js?
- 7. What is the file system module in Node.js used for?
- 8. What is the purpose of the “exports” object in Node.js?
- 9. Which module is used for handling streams in Node.js?
- 10. How can you handle errors in Node.js?
- 11. What is the purpose of the “cluster” module in Node.js?
- 12. What is the purpose of the “os” module in Node.js?
- 13. What is the purpose of the “crypto” module in Node.js?
- 14. What is the purpose of the “child_process” module in Node.js?
- 15. What is the purpose of the “url” module in Node.js?
- 16. Which of the following is NOT a built-in module in Node.js?
- 17. What is the purpose of the “net” module in Node.js?
- 18. What is the purpose of the “util” module in Node.js?
- 19. What is the purpose of the “dns” module in Node.js?
- 20. Which module is used for unit testing in Node.js?
- 21. What is the purpose of the `require()` function in Node.js?
- 22. Which of the following is NOT a built-in module in Node.js?
- 23. What is the purpose of the `process` object in Node.js?
- 24. Which of the following is the correct way to handle asynchronous operations in Node.js?
- 25. Which HTTP method is typically used to retrieve data from a server in Node.js?
- 26. What is the purpose of the `next()` function in Express.js middleware?
- 27. Which of the following is NOT a valid way to handle errors in Node.js?
- 28. Which of the following is a popular database framework for Node.js?
- 29. What is the purpose of the `npm` command in Node.js?
- 30. Which of the following is NOT a core module in Node.js?
Introduction
Node.js is a powerful and versatile JavaScript runtime environment that allows you to build scalable and efficient applications. It’s perfect for both server-side and networking applications, enabling you to handle concurrent connections with ease. With its event-driven architecture and non-blocking I/O operations, Node.js ensures high performance and responsiveness. You can create web servers, APIs, real-time chat applications, and more using JavaScript, a language familiar to many developers. Node.js’s extensive package ecosystem, known as npm, provides a wealth of pre-built modules to accelerate your development process. Get ready to dive into the world of Node.js and unleash your creativity!
Basic Questions
1. What is Node.js?
Node.js is a JavaScript runtime environment built on Chrome’s V8 JavaScript engine. It allows developers to run JavaScript code on the server-side, outside of a browser environment. Node.js provides an event-driven, non-blocking I/O model that makes it lightweight and efficient for building scalable network applications.
Here’s a simple example of a Node.js server that listens on port 3000 and responds with “Hello, World!” when accessed:
const http = require('http');
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello, World!');
});
server.listen(3000, 'localhost', () => {
console.log('Server running at http://localhost:3000/');
});
2. Who created Node.js?
Node.js was created by Ryan Dahl, a software engineer, in 2009.
3. What is the latest version of Node.js?
Node.js releases new versions frequently, so the “latest version” changes often. At the time of writing, the latest release was Node.js 16.5.0; always refer to the official Node.js website or documentation for the most up-to-date version, including the current LTS (Long-Term Support) release.
4. What language is Node.js written in?
Node.js itself is primarily written in C, C++, and JavaScript. The V8 engine it embeds is written in C++, the libuv library that powers its event loop and asynchronous I/O is written in C, and much of the core library is written in JavaScript.
5. What is event-driven programming in Node.js?
Event-driven programming in Node.js is a programming paradigm where the flow of the program is determined by events that occur asynchronously. It revolves around the concept of event emitters and listeners. Event emitters emit events, and listeners respond to those events.
Here’s an example that demonstrates event-driven programming in Node.js using the built-in `events` module:
const EventEmitter = require('events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
// Event listener
myEmitter.on('customEvent', (arg) => {
console.log(`Event occurred with argument: ${arg}`);
});
// Emitting the event
myEmitter.emit('customEvent', 'example argument');
In this example, we create a custom event emitter by extending the `EventEmitter` class. We define an event listener for the custom event ‘customEvent’ and emit that event with an argument. When the event is emitted, the listener function is invoked and prints the argument to the console.
6. What is the Event Loop in Node.js?
The Event Loop is a crucial part of Node.js that enables its non-blocking I/O operations. It handles the execution of asynchronous callbacks and ensures that the program remains responsive. The Event Loop continuously checks for new I/O events and executes the associated callbacks.
Here’s a simplified representation of the Event Loop in Node.js:
while (eventLoop.isNotEmpty()) {
const event = eventLoop.getNextEvent();
const callbacks = eventLoop.getCallbacksForEvent(event);
for (const callback of callbacks) {
execute(callback);
}
}
In this simplified example, the Event Loop checks for events in the event loop and retrieves the associated callbacks. It then executes each callback sequentially. This allows Node.js to handle multiple concurrent operations efficiently.
7. How is Node.js different from traditional server-side scripting languages like PHP or Python?
Node.js differs from traditional server-side scripting languages like PHP or Python in a few ways:
- Event-driven and non-blocking: Node.js follows an event-driven, non-blocking I/O model, whereas PHP and Python typically use a blocking I/O model. Node.js can handle a large number of concurrent connections efficiently without blocking the execution of other operations.
- JavaScript-based: Node.js uses JavaScript as its primary language, which enables developers to use the same language on both the client-side (browser) and server-side (Node.js), resulting in easier code sharing and learning.
Here’s a comparison of a simple server implementation in Node.js and PHP:
Node.js:
const http = require('http');
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello, World!');
});
server.listen(3000, 'localhost', () => {
console.log('Server running at http://localhost:3000/');
});
PHP:
<?php
$response = 'Hello, World!';
header('Content-Type: text/plain');
echo $response;
In the Node.js example, the server uses an event-driven model and does not block the execution of other operations. In the PHP example, the server follows a more traditional blocking model, where each request is processed sequentially.
8. What are the benefits of using Node.js?
Some of the benefits of using Node.js are:
- Scalability: Node.js is highly scalable due to its event-driven, non-blocking I/O model. It can handle a large number of concurrent connections efficiently, making it suitable for building scalable network applications.
- Fast and efficient: Node.js is built on Chrome’s V8 JavaScript engine, which compiles JavaScript code to machine code. This results in fast and efficient execution of JavaScript applications.
- Code sharing: Using JavaScript on both the client-side and server-side allows for code sharing, reducing duplication and increasing development speed.
- Large ecosystem: Node.js has a vast ecosystem of open-source packages available through the Node Package Manager (NPM). These packages provide ready-to-use solutions for various tasks, enabling developers to build applications quickly.
- Community support: Node.js has a large and active community of developers who contribute to its growth and provide support through forums, blogs, and online communities.
9. What is NPM in Node.js?
NPM (Node Package Manager) is the default package manager for Node.js. It is a command-line tool that allows developers to discover, install, and manage third-party packages or libraries.
Here’s an example of using NPM to install a package:
npm install package-name
This command installs the specified package and its dependencies in the current project. NPM automatically fetches the package from the NPM registry and installs it in the `node_modules` directory.
10. What is the use of a package.json file in Node.js?
The `package.json` file is a metadata file used to manage a Node.js project. It contains various details about the project, such as its name, version, dependencies, scripts, and more. It serves as the entry point for NPM to understand the project and its requirements.
Here’s an example of a simple `package.json` file:
{
"name": "my-app",
"version": "1.0.0",
"description": "My Node.js application",
"dependencies": {
"express": "^4.17.1"
},
"scripts": {
"start": "node index.js"
}
}
In this example, the `package.json` file specifies the project’s name, version, and description. It also lists a dependency on the Express package and defines a script named “start” to run the `index.js` file using the `node` command.
11. What are the types of API functions in Node.js?
In Node.js, there are different types of API functions available:
- Asynchronous functions: These functions utilize callbacks, Promises, or async/await to handle asynchronous operations. They allow non-blocking I/O and enable efficient resource utilization. Example:
const fs = require('fs');
fs.readFile('file.txt', 'utf8', (err, data) => {
if (err) throw err;
console.log(data);
});
- Synchronous functions: These functions block the execution until the operation completes. They can simplify code but may cause performance issues if used for blocking I/O. Example:
const fs = require('fs');
const data = fs.readFileSync('file.txt', 'utf8');
console.log(data);
- Stream functions: These functions provide a way to handle large amounts of data in a chunked manner. They are useful for scenarios like reading a large file or streaming data over the network. Example:
const fs = require('fs');
const stream = fs.createReadStream('file.txt', 'utf8');
stream.on('data', (chunk) => {
console.log(chunk);
});
These are just a few examples of the types of API functions available in Node.js.
12. What is Express.js and why is it used with Node.js?
Express.js is a popular web application framework for Node.js. It provides a minimal and flexible set of features for building web applications and APIs. Express.js simplifies common tasks, such as routing, request handling, and middleware management.
Here’s an example of a simple Express.js server that responds with “Hello, World!”:
const express = require('express');
const app = express();
app.get('/', (req, res) => {
res.send('Hello, World!');
});
app.listen(3000, () => {
console.log('Server running at http://localhost:3000/');
});
In this example, we create an Express.js application, define a route for the root URL (‘/’), and specify a callback function to handle the request. When the server receives a request to the root URL, it responds with “Hello, World!”.
13. What is a callback function in Node.js?
A callback function in Node.js is a function that is passed as an argument to another function and is executed asynchronously once a certain operation is completed or an event occurs.
Here’s an example of a callback function used with the `setTimeout` function in Node.js:
function callback() {
console.log('Callback function executed!');
}
setTimeout(callback, 1000);
In this example, the `setTimeout` function schedules the execution of the `callback` function after a delay of 1000 milliseconds (1 second). When the delay expires, the callback function is called, and the message “Callback function executed!” is printed to the console.
14. How can you handle exceptions in Node.js?
Exceptions in Node.js can be handled using try-catch blocks. When an exception occurs within a try block, control is transferred to the catch block, allowing you to handle the exception gracefully.
Here’s an example that demonstrates exception handling in Node.js:
try {
// Code that may throw an exception
throw new Error('Something went wrong!');
} catch (error) {
// Handling the exception
console.error('An error occurred:', error.message);
}
In this example, the code within the try block throws an error using the throw
statement. The catch block then catches the error, and the error message is logged to the console.
15. What is a Promise in Node.js?
A Promise in Node.js is an object representing the eventual completion or failure of an asynchronous operation. It provides a cleaner and more structured way to handle asynchronous code compared to using callbacks directly.
Here’s an example of using Promises in Node.js:
function fetchData() {
return new Promise((resolve, reject) => {
// Simulating an asynchronous operation
setTimeout(() => {
const data = 'Data fetched successfully!';
resolve(data);
}, 2000);
});
}
fetchData()
.then((data) => {
console.log(data);
})
.catch((error) => {
console.error('An error occurred:', error);
});
In this example, the `fetchData` function returns a Promise. Inside the Promise constructor, we simulate an asynchronous operation using `setTimeout`. After the operation completes, we call `resolve` to fulfill the Promise with the fetched data.
We can then use the `then` method to handle the resolved Promise and the `catch` method to handle any errors that occur during the Promise execution.
16. What is a callback hell in Node.js?
Callback hell, also known as the “Pyramid of Doom,” is a situation that arises when there are multiple levels of nested callbacks, leading to code that becomes difficult to read and maintain. It typically occurs when several asynchronous operations depend on one another, so each operation’s callback has to be nested inside the previous one.
Here’s an example that demonstrates callback hell in Node.js:
asyncOperation1((err, result1) => {
if (err) {
console.error('Error in operation 1:', err);
} else {
asyncOperation2(result1, (err, result2) => {
if (err) {
console.error('Error in operation 2:', err);
} else {
asyncOperation3(result2, (err, result3) => {
if (err) {
console.error('Error in operation 3:', err);
} else {
// More nested callbacks...
}
});
}
});
}
});
In this example, we have multiple levels of nested callbacks, making the code difficult to read and understand. As more asynchronous operations are added, the nesting can become deeper, leading to code that is hard to maintain.
To mitigate callback hell, there are various techniques available, such as using Promises, async/await syntax, or utilizing control flow libraries like Async.js or Bluebird. These approaches help to flatten the callback structure and improve code readability.
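As a sketch of one of these techniques, the nested flow above can be flattened with async/await, assuming `asyncOperation1`, `asyncOperation2`, and `asyncOperation3` follow the error-first callback convention and are wrapped with `util.promisify`:
const { promisify } = require('util');

// Wrap the callback-based operations so they return Promises
const op1 = promisify(asyncOperation1);
const op2 = promisify(asyncOperation2);
const op3 = promisify(asyncOperation3);

async function run() {
  try {
    const result1 = await op1();
    const result2 = await op2(result1);
    const result3 = await op3(result2);
    console.log('Final result:', result3);
  } catch (err) {
    // A single catch block replaces the repeated per-callback error checks
    console.error('One of the operations failed:', err);
  }
}

run();
Each await pauses only the run function, not the event loop, so the operations still execute asynchronously while the code reads top to bottom.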
17. What is the purpose of module.exports in Node.js?
In Node.js, the `module.exports` object is used to define the public API of a module. It allows us to selectively expose functionality or variables from a module to be used by other modules or applications.
Here’s an example that demonstrates the use of `module.exports` in a Node.js module:
// math.js
const add = (a, b) => a + b;
const subtract = (a, b) => a - b;
module.exports = {
add,
subtract
};
In this example, the `math.js` module exports the `add` and `subtract` functions using the `module.exports` assignment. These functions can be accessed and used by other modules or applications that import this module.
// app.js
const math = require('./math');
console.log(math.add(2, 3)); // Output: 5
console.log(math.subtract(5, 2)); // Output: 3
In the `app.js` file, we import the `math` module and access its exported functions (`add` and `subtract`). The exported functions are now available for use within the application.
18. What is a buffer in Node.js?
A buffer in Node.js is a temporary storage area for raw binary data. It represents a fixed-size chunk of memory that can store and manipulate binary data efficiently. Buffers are commonly used to handle streams of data, such as reading or writing files, network communications, or manipulating binary data directly.
Buffers can be created in various ways, such as using the `Buffer.alloc` method, which allocates a new buffer of a specific size, or by converting strings or other data types into buffers.
Here’s an example of creating and manipulating a buffer in Node.js:
const buffer = Buffer.alloc(8); // Create a buffer of size 8 bytes
buffer.writeUInt32LE(42, 0); // Write a 32-bit unsigned integer at offset 0
buffer.writeUInt32LE(123, 4); // Write another 32-bit unsigned integer at offset 4
console.log(buffer); // Output: <Buffer 2a 00 00 00 7b 00 00 00>
In this example, we create a buffer of size 8 bytes using `Buffer.alloc`. We then write two 32-bit unsigned integers (42 and 123) to the buffer using the `writeUInt32LE` method. The resulting buffer is printed to the console, showing the hexadecimal representation of the binary data.
19. How can you create a server in Node.js?
To create a server in Node.js, you can use the built-in `http` module. The `http` module provides functions and classes for creating HTTP servers and handling HTTP requests and responses.
Here’s an example of creating a basic HTTP server in Node.js:
const http = require('http');
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello, World!');
});
server.listen(3000, 'localhost', () => {
console.log('Server running at http://localhost:3000/');
});
In this example, we use the `createServer` method of the `http` module to create an HTTP server. We provide a callback function that handles each incoming request (`req`) and constructs the response (`res`). The server listens on port 3000 of the localhost.
When a request is received, the callback function sets the status code, content type, and sends the response with the message “Hello, World!”.
20. What are streams in Node.js?
Streams in Node.js are a way to handle reading from or writing to a continuous flow of data in a chunked manner. They provide an efficient way to process large amounts of data without consuming excessive memory.
There are four types of streams in Node.js:
- Readable streams: These streams allow you to read data from a source, such as a file or network request. Example:
const fs = require('fs');
const readableStream = fs.createReadStream('file.txt', 'utf8');
readableStream.on('data', (chunk) => {
console.log(chunk);
});
- Writable streams: These streams allow you to write data to a destination, such as a file or network response. Example:
const fs = require('fs');
const writableStream = fs.createWriteStream('file.txt', 'utf8');
writableStream.write('Hello, World!');
writableStream.end();
- Duplex streams: These streams are both readable and writable. They can be used for scenarios like network sockets. Example:
const net = require('net');
const duplexStream = new net.Socket();
- Transform streams: These streams are a type of duplex stream that allow for data manipulation while reading or writing. Example:
const { Transform } = require('stream');
const transformStream = new Transform({
transform(chunk, encoding, callback) {
const transformedData = chunk.toString().toUpperCase();
this.push(transformedData);
callback();
}
});
transformStream.write('Hello, World!');
transformStream.on('data', (chunk) => {
console.log(chunk);
});
Streams provide a memory-efficient way to process data in chunks, making them particularly useful when working with large files or network data.
21. What are error-first callbacks in Node.js?
Error-first callbacks, also known as Node.js-style callbacks, are a convention used in Node.js for handling asynchronous operations. The callback function is passed as the last argument to an asynchronous function, and the convention is to have the first parameter of the callback reserved for an error object (if any).
Here’s an example of an error-first callback in Node.js:
function asyncOperation(callback) {
setTimeout(() => {
const error = null; // Set to an error object to simulate an error
const result = 'Operation completed successfully';
callback(error, result);
}, 2000);
}
asyncOperation((err, result) => {
if (err) {
console.error('An error occurred:', err);
} else {
console.log('Result:', result);
}
});
In this example, the `asyncOperation` function takes a callback as an argument. Inside the function, we simulate an asynchronous operation using `setTimeout`. After the operation completes, we call the callback with an error (set to `null` for success) and a result.
22. What is REPL in Node.js? Give a relevant code example
REPL stands for Read-Eval-Print Loop. It is an interactive programming environment available in Node.js that allows you to enter and execute JavaScript code directly. It provides a way to experiment with Node.js features, test small code snippets, or perform quick calculations.
To access the Node.js REPL, you can open a terminal or command prompt and type `node` without any arguments. This will start the REPL session, and you can begin entering JavaScript code interactively.
Here’s an example of using the Node.js REPL:
$ node
> const x = 5;
undefined
> const y = 3;
undefined
> x + y
8
> const greeting = 'Hello, World!';
undefined
> greeting.toUpperCase()
'HELLO, WORLD!'
> .exit
$
In this example, we start the Node.js REPL by typing `node` in the terminal. We then enter JavaScript code interactively. We define variables (`x` and `y`), perform calculations, and use string methods. Finally, we exit the REPL by typing `.exit`, pressing Ctrl + C twice, or pressing Ctrl + D.
23. What is the purpose of the global object in Node.js?
In Node.js, the global object is an object that acts as the global scope for all JavaScript modules. It provides a set of global properties and functions that are available throughout the application.
The global object in Node.js can be accessed using the `global` keyword. However, most of the global properties and functions can be accessed directly without using the `global` keyword.
Some commonly used properties and functions of the global object include (a short usage sketch follows this list):
- `console`: Provides methods for logging messages to the console.
- `setTimeout` and `setInterval`: Functions for scheduling asynchronous code execution.
- `require`: Used to import modules and access their exported functionality.
- `process`: Provides information and control over the current Node.js process.
- `Buffer`: Allows working with binary data using buffers.
- `__dirname` and `__filename`: Store the directory and filename of the current module.
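Here is a minimal sketch that uses a few of these globals directly (attaching a custom property to `global`, as shown at the end, is possible but generally discouraged):
// These globals are available in every module without require()
console.log('Current file:', __filename);
console.log('Current directory:', __dirname);

setTimeout(() => {
  console.log('Process ID:', process.pid);
}, 100);

// Explicitly attaching a property to the global object
global.appName = 'my-app';
console.log(global.appName); // Output: my-app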
24. What is middleware in Node.js?
In Node.js, middleware refers to functions or modules that are used to extend the functionality of the Express.js framework. Middleware functions are executed in the order they are defined and can perform various tasks such as modifying request or response objects, handling errors, or enabling additional functionality.
Middleware functions have access to the request (`req`), the response (`res`), and the `next` function, which allows them to modify the request and response objects or pass control to the next middleware in the stack.
Here’s an example of a simple middleware function in Express.js:
const express = require('express');
const app = express();
// Middleware function
const logger = (req, res, next) => {
console.log(`[${new Date().toISOString()}] ${req.method} ${req.url}`);
next();
};
// Using the middleware
app.use(logger);
// Route handler
app.get('/', (req, res) => {
res.send('Hello, World!');
});
app.listen(3000, () => {
console.log('Server running at http://localhost:3000/');
});
In this example, we define a middleware function named `logger`. It logs the current timestamp, HTTP method, and URL of each incoming request. The `next` function is called to pass control to the next middleware in the stack.
We use the `app.use` method to register the `logger` middleware. This ensures that the middleware is executed for every incoming request. Finally, we define a route handler for the root URL (‘/’) to send a response.
25. What is the use of the ‘process’ object in Node.js?
The `process` object in Node.js provides information and control over the current Node.js process. It is a global object that can be accessed without requiring it explicitly.
Some commonly used properties and methods of the `process` object include (see the short sketch after this list):
- `process.argv`: An array containing the command-line arguments passed to the Node.js process.
- `process.env`: An object containing the user environment variables.
- `process.exit()`: A method used to exit the current Node.js process with an optional exit code.
- `process.cwd()`: A method that returns the current working directory of the Node.js process.
- `process.pid`: A property that returns the process ID of the Node.js process.
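A minimal sketch that reads a few of these properties and exits with a non-zero code when a required environment variable is missing (the `NODE_ENV` check is only an illustration):
// Inspect information about the current Node.js process
console.log('Arguments:', process.argv);
console.log('Working directory:', process.cwd());
console.log('Process ID:', process.pid);

// Exit with a non-zero code to signal failure
if (!process.env.NODE_ENV) {
  console.error('NODE_ENV is not set');
  process.exit(1);
}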
26. What is the purpose of the __dirname variable in Node.js?
The `__dirname` variable in Node.js represents the directory name of the current module. It provides the absolute path of the directory containing the current JavaScript file.
Here’s an example that demonstrates the use of `__dirname` in Node.js:
console.log(__dirname);
When this code is executed, it prints the absolute path of the directory containing the JavaScript file to the console.
The `__dirname` variable is particularly useful when working with file paths or when referencing files within the same module or project structure. It ensures that file paths are resolved correctly regardless of the current working directory.
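For instance, it is commonly combined with the `path` module to build paths relative to the current file rather than the working directory (the `config/settings.json` file here is just an illustration):
const path = require('path');

// Resolve a file relative to this module, independent of process.cwd()
const configPath = path.join(__dirname, 'config', 'settings.json');
console.log(configPath);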
27. What is the event emitter in Node.js?
The event emitter in Node.js is a class provided by the `events` module. It allows objects to emit named events and handle those events asynchronously using listeners. The event emitter follows the publish-subscribe pattern, where objects that emit events are called publishers, and objects that listen to events are called subscribers.
Here’s an example that demonstrates the use of the event emitter in Node.js:
const EventEmitter = require('events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
// Event listener
myEmitter.on('customEvent', (arg) => {
console.log(`Event occurred with argument: ${arg}`);
});
// Emitting the event
myEmitter.emit('customEvent', 'example argument');
In this example, we create a custom event emitter by extending the `EventEmitter` class. We define an event listener for the custom event ‘customEvent’ using the `on` method. When the event is emitted using the `emit` method, the listener function is invoked, and the provided argument is printed to the console.
28. What is the use of the ‘request’ object in Node.js?
In Node.js, the `request` object represents the HTTP request made by a client to a server. It contains information about the request, such as the request method, URL, headers, and query parameters. The `request` object is typically provided as an argument to request handlers in frameworks like Express.js.
Here’s an example of accessing properties of the `request` object in an Express.js route handler:
app.get('/api/users', (req, res) => {
const userId = req.query.id;
console.log(`User ID: ${userId}`);
// ... handle the request
});
In this example, we define a route handler for the ‘/api/users’ URL. The `req` object represents the incoming request. We access the query parameters using the `query` property, in this case getting the value of the ‘id’ parameter. We then log the user ID to the console before proceeding to handle the request.
29. What is the use of the ‘response’ object in Node.js?
In Node.js, the `response` object represents the HTTP response sent by a server to a client. It is responsible for sending the response headers, data, and status codes back to the client. The `response` object is typically available as an argument in request handlers of frameworks like Express.js.
Here’s an example of using the `response` object in an Express.js route handler:
app.get('/', (req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.send('Hello, World!');
});
In this example, we define a route handler for the root URL (‘/’). The `res` object represents the response. We set the status code to 200 using the `statusCode` property and set the content type to plain text using the `setHeader` method. Finally, we send the response data (‘Hello, World!’) using the `send` method.
30. What is the purpose of routing in Node.js?
Routing in Node.js refers to the process of determining how an application responds to a client request based on the requested URL and HTTP method. It allows you to define different routes for handling different requests and mapping them to specific request handlers.
Here’s an example of routing using Express.js:
const express = require('express');
const app = express();
// Route definition
app.get('/', (req, res) => {
res.send('Home Page');
});
app.get('/about', (req, res) => {
res.send('About Page');
});
app.post('/api/users', (req, res) => {
res.send('Create User');
});
app.put('/api/users/:id', (req, res) => {
const userId = req.params.id;
res.send(`Update User: ${userId}`);
});
// ... more routes
app.listen(3000, () => {
console.log('Server running at http://localhost:3000/');
});
In this example, we define different routes using the `app.get`, `app.post`, and `app.put` methods. Each method corresponds to an HTTP method (GET, POST, PUT) and takes a URL pattern and a request handler function.
For example, when a client requests the root URL (‘/’), the corresponding request handler sends the response with the message ‘Home Page’. When a client sends a POST request to ‘/api/users’, the request handler responds with ‘Create User’. The URL pattern can also include parameters, as shown in the ‘/api/users/:id’ route, where `req.params.id` retrieves the user ID from the URL.
Intermediate Questions
1. How does Node.js handle child threads?
Node.js follows a single-threaded, event-driven model, but it does provide the `child_process` module to create and manage separate child processes (not threads). This module allows executing external commands or scripts as separate processes, which can run in parallel with the main Node.js process. For running JavaScript on additional threads within the same process, Node.js also provides the `worker_threads` module, sketched further below.
Here’s an example of spawning a child process using the `child_process` module:
const { spawn } = require('child_process');
// Spawn a child process
const child = spawn('ls', ['-l', '-a']);
// Listen for data events from the child process
child.stdout.on('data', (data) => {
console.log(`Child process output:\n${data}`);
});
// Listen for the child process to exit
child.on('close', (code) => {
console.log(`Child process exited with code ${code}`);
});
// Send input to the child process
child.stdin.write('Some input');
child.stdin.end();
In the example above, we spawn a child process using the `spawn()` function, passing the command `'ls'` and its arguments `['-l', '-a']`. We then listen for the child process’s `data` event to capture its output. Finally, we handle the `close` event to determine when the child process has exited.
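For comparison, here is a minimal sketch of the `worker_threads` module mentioned above, which runs JavaScript on a separate thread within the same process (this single-file pattern is just one common way to structure it):
const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  // Main thread: spawn a worker that runs this same file
  const worker = new Worker(__filename);
  worker.on('message', (msg) => console.log('From worker:', msg));
} else {
  // Worker thread: send a message back to the main thread
  parentPort.postMessage('Hello from a worker thread');
}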
2. What are the tasks that should be done asynchronously in Node.js?
In Node.js, long-running or blocking operations should be performed asynchronously to avoid blocking the event loop and keeping the application responsive. Examples of such tasks include file I/O, network requests, database queries, and CPU-intensive computations.
Here’s an example of performing file I/O asynchronously in Node.js:
const fs = require('fs');
// Read a file asynchronously
fs.readFile('example.txt', 'utf8', (err, data) => {
if (err) {
console.error(err);
return;
}
console.log(data);
});
// Perform other tasks while file is being read
console.log('Async file reading in progress...');
In this example, the `readFile()` function from the `fs` module is used to read the contents of a file asynchronously. The function takes a callback as the last argument, which is executed when the file reading operation is complete. Meanwhile, other tasks can be performed, as shown by the console log statement.
3. Explain the concept of control flow function in Node.js.
Control flow functions in Node.js are used to manage the execution flow of asynchronous operations, allowing developers to handle dependencies and ensure proper sequencing of tasks.
One popular control flow library for Node.js is `async`. Here’s an example of using the `async.series()` function to execute functions in series:
const async = require('async');
// Functions to be executed in series
const tasks = [
(callback) => {
setTimeout(() => {
console.log('Task 1');
callback(null, 'Result 1');
}, 2000);
},
(callback) => {
setTimeout(() => {
console.log('Task 2');
callback(null, 'Result 2');
}, 1000);
},
(callback) => {
setTimeout(() => {
console.log('Task 3');
callback(null, 'Result 3');
}, 1500);
}
];
// Execute tasks in series
async.series(tasks, (err, results) => {
if (err) {
console.error(err);
return;
}
console.log('All tasks completed');
console.log(results);
});
In the example above, an array of functions (`tasks`) is defined, each representing an asynchronous task. The `async.series()` function is used to execute these tasks in series, ensuring that each task completes before the next one starts. The final callback is called when all tasks have completed, providing any errors and the results of each task.
4. What are some of the events fired by streams in Node.js?
Streams in Node.js are instances of the `EventEmitter` class and emit several events during their lifecycle. Some of the commonly fired events are listed below, followed by a short sketch of listening to them:
- `data`: Fired when data is available to be read from a readable stream.
- `end`: Fired when there is no more data to be read from a readable stream.
- `error`: Fired when an error occurs while reading from or writing to a stream.
- `finish`: Fired when all data has been flushed to the underlying system in a writable stream.
- `close`: Fired when a stream is closed, indicating that it is no longer available for reading or writing.
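A minimal sketch of attaching listeners for these events to a readable file stream (the `file.txt` name is just an illustration):
const fs = require('fs');

const readable = fs.createReadStream('file.txt', 'utf8');

readable.on('data', (chunk) => console.log('Received chunk of length', chunk.length));
readable.on('end', () => console.log('No more data'));
readable.on('error', (err) => console.error('Stream error:', err));
readable.on('close', () => console.log('Stream closed'));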
5. What is the role of the package-lock.json file in a Node.js project?
The `package-lock.json` file in a Node.js project serves as a lockfile, providing a detailed description of the exact versions of dependencies installed. It ensures that subsequent installations using the same `package.json` and `package-lock.json` result in the same dependency tree, preventing potential inconsistencies between different environments.
Here’s an example of a `package-lock.json` file:
{
"name": "my-app",
"version": "1.0.0",
"lockfileVersion": 2,
"requires": true,
"packages": {
// Dependency information
},
"dependencies": {
// Dependency tree with resolved versions
}
}
The `package-lock.json` file is automatically generated and updated by npm or Yarn whenever there is a change in the project’s dependencies. It ensures deterministic dependency resolution, making the installation process more reliable and reproducible.
6. How can you debug a Node.js application?
Node.js provides several built-in debugging techniques to help developers identify and resolve issues in their applications. One common method is using the `--inspect` flag along with the Chrome DevTools or an IDE that supports Node.js debugging.
Here’s an example of debugging a Node.js application using the Chrome DevTools:
- Start the Node.js application in debug mode:
node --inspect index.js
- Open Google Chrome and navigate to `chrome://inspect`.
- Under the “Remote Target” section, click the “inspect” link for the Node.js application.
- The DevTools will open, allowing you to set breakpoints, inspect variables, and step through the code.
You can also use the `--inspect-brk` flag to break execution on the first line, making it easier to set breakpoints before the application starts running.
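In addition, a `debugger` statement can be placed in the code to act as a programmatic breakpoint whenever a debugger is attached; a minimal sketch (the `calculateTotal` function is hypothetical):
// Run with: node --inspect-brk app.js, then attach Chrome DevTools
function calculateTotal(items) {
  debugger; // execution pauses here while a debugger is attached
  return items.reduce((sum, item) => sum + item.price, 0);
}

console.log(calculateTotal([{ price: 10 }, { price: 20 }]));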
7. How is exception handling done in Node.js? Can exceptions be propagated up through the callback chain?
Exception handling in Node.js follows the standard JavaScript approach using `try-catch` blocks. Exceptions that occur in synchronous code can be caught and handled within the same function. However, exceptions thrown in asynchronous callbacks cannot be caught by a `try-catch` in the calling function, because the callback runs later, after that try block has already finished.
Here’s an example to illustrate exception handling in Node.js:
function synchronousFunction() {
try {
// Synchronous code that may throw an exception
throw new Error('Something went wrong!');
} catch (err) {
// Catch the exception and handle it
console.error('Caught exception:', err.message);
}
}
function asynchronousFunction(callback) {
setTimeout(() => {
callback();
// This exception is thrown later, inside the timer callback, after the
// try-catch around the call below has already returned, so it cannot be
// caught there and becomes an uncaught exception.
throw new Error('Something went wrong!');
}, 1000);
}
synchronousFunction();
try {
asynchronousFunction(() => {
console.log('Callback executed');
});
} catch (err) {
// Never reached for the asynchronous error
console.error('Caught in caller:', err.message);
}
In the example above, `synchronousFunction()` throws an exception that is caught and handled within the same function. `asynchronousFunction()`, on the other hand, throws an exception within the callback passed to `setTimeout()`; the `try-catch` wrapped around the call to `asynchronousFunction()` never sees it, because the callback executes later, on a different turn of the event loop. The exception is therefore unhandled and may crash the application.
8. Explain how the cluster module in Node.js works.
The cluster module in Node.js allows for the creation of multiple worker processes, each running on a separate CPU core. It enables applications to take advantage of multi-core systems and scale the processing capabilities.
Here’s an example of how the cluster module works in Node.js:
const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;
if (cluster.isMaster) {
console.log(`Master process ID: ${process.pid}`);
// Fork worker processes
for (let i = 0; i < numCPUs; i++) {
cluster.fork();
}
// Handle worker process exits
cluster.on('exit', (worker, code, signal) => {
console.log(`Worker process ${worker.process.pid} exited`);
console.log(`Starting a new worker process...`);
cluster.fork();
});
} else {
// Worker process
console.log(`Worker process ID: ${process.pid}`);
// Create an HTTP server
http.createServer((req, res) => {
res.writeHead(200);
res.end('Hello, World!');
}).listen(3000);
}
In this example, the cluster module is used to create multiple worker processes. The master process is identified using `cluster.isMaster`, and it forks worker processes using `cluster.fork()`. Each worker process runs the HTTP server and listens for incoming requests.
9. Explain the role of the ‘util’ module in Node.js.
The `util` module in Node.js provides utility functions that are commonly used when working with JavaScript or Node.js. It offers various helper functions for debugging, error handling, and other miscellaneous operations.
Here’s an example that demonstrates the usage of the `util` module:
const util = require('util');
// Inherit from a base object
function Base() {
this.name = 'Base';
}
Base.prototype.greet = function() {
console.log(`Hello from ${this.name}!`);
};
function Derived() {
this.name = 'Derived';
}
// Inherit from Base
util.inherits(Derived, Base);
const derived = new Derived();
derived.greet(); // Output: Hello from Derived!
In this example, the `util.inherits()` function is used to achieve prototypal inheritance between two constructor functions (`Derived` and `Base`). It allows the `Derived` object to inherit the properties and methods defined on the `Base` prototype.
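Another widely used helper from the same module is `util.promisify()`, which converts an error-first callback API into one that returns a Promise; a short sketch:
const util = require('util');
const fs = require('fs');

// Wrap the callback-based fs.readFile into a Promise-returning function
const readFile = util.promisify(fs.readFile);

readFile('file.txt', 'utf8')
  .then((data) => console.log(data))
  .catch((err) => console.error('Failed to read file:', err));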
10. What is the significance of an error-first callback in Node.js?
In Node.js, an error-first callback is a common pattern used for asynchronous functions. It ensures consistent error handling by following a convention where the first argument of the callback is reserved for an error object. If an error occurs during the asynchronous operation, it is passed as the first argument to the callback.
Here’s an example that illustrates the significance of an error-first callback:
function divide(num1, num2, callback) {
if (num2 === 0) {
const error = new Error('Division by zero');
callback(error);
return;
}
const result = num1 / num2;
callback(null, result);
}
divide(10, 2, (err, result) => {
if (err) {
console.error('Error:', err.message);
return;
}
console.log('Result:', result);
});
divide(10, 0, (err, result) => {
if (err) {
console.error('Error:', err.message);
return;
}
console.log('Result:', result);
});
In this example, the `divide()` function performs division and invokes the callback with the result or an error object, depending on the inputs. The calling code can check the first argument of the callback to determine if an error occurred. This approach helps maintain consistent error handling across asynchronous operations and allows for proper error propagation and handling in Node.js applications.
11. How can you secure a Node.js web application?
Securing a Node.js web application involves implementing various measures to protect it from common security vulnerabilities. Here are some approaches to enhance the security of a Node.js web application:
- Input validation: Validate and sanitize user input to prevent injection attacks and cross-site scripting (XSS) vulnerabilities. Libraries like `express-validator` can be used for input validation.
- Authentication and authorization: Implement secure authentication mechanisms such as bcrypt for password hashing and JWT (JSON Web Tokens) for token-based authentication. Use authorization middleware to control access to different routes and resources.
- Secure session management: Store session data securely and configure secure session cookies with proper settings, including the secure and HttpOnly flags. Use libraries like `express-session` with session stores like Redis or MongoDB.
- Secure communication: Enforce HTTPS for secure communication between clients and the server. Use libraries like `helmet` to set security-related HTTP headers and prevent common vulnerabilities.
- Protect against cross-site request forgery (CSRF): Implement CSRF protection by using tokens, checking the referrer header, or using libraries like `csurf`.
- Database security: Apply proper access control and authentication mechanisms to the database. Avoid SQL injection vulnerabilities by using parameterized queries or an ORM library with built-in protection.
- Rate limiting: Implement rate limiting to protect against brute-force attacks and DoS (Denial of Service) attacks. Libraries like `express-rate-limit` can help in implementing rate limiting (see the sketch after this list).
- Secure dependencies: Regularly update and monitor dependencies for security vulnerabilities. Use tools like `npm audit` or package management solutions like Snyk to identify and fix vulnerabilities.
- Error handling: Implement proper error handling mechanisms to avoid exposing sensitive information to users. Use a centralized error handling middleware to catch and handle errors gracefully.
- Security testing: Perform regular security testing, including penetration testing and vulnerability scanning, to identify and fix potential security issues in the application.
12. What is the role of a Node.js http module?
The `http` module in Node.js provides functionality to create an HTTP server or make HTTP requests. It allows developers to handle incoming HTTP requests and send HTTP responses.
Here’s an example that demonstrates the usage of the `http` module to create an HTTP server:
const http = require('http');
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello, World!');
});
server.listen(3000, 'localhost', () => {
console.log('Server is running on http://localhost:3000');
});
In this example, the `http.createServer()` method is used to create an HTTP server. It takes a callback function that is executed for each incoming request. Inside the callback, the response headers and body are set using the `res` object, and the response is sent using `res.end()`.
13. What is a test pyramid in Node.js?
The test pyramid is a concept in software testing that represents the ideal distribution of different types of tests in a testing strategy. It emphasizes having a larger number of low-level unit tests, followed by a moderate number of integration tests, and a smaller number of high-level end-to-end tests. The pyramid shape represents the proportion of tests at each level.
In the context of Node.js, the test pyramid can be applied as follows:
- Unit tests: These are low-level tests that focus on testing individual functions or units of code in isolation. Unit tests help ensure that each component of the system works correctly. Tools like `Mocha`, `Jest`, or `Jasmine` can be used for writing unit tests in Node.js (a small sketch follows this list).
- Integration tests: These tests verify the interaction and integration between different components or modules in the system. Integration tests ensure that the components work together as expected. Tools like `Supertest` or `Chai HTTP` can be used for writing integration tests in Node.js.
- End-to-end (E2E) tests: These are high-level tests that simulate user interactions and test the entire system from end to end. E2E tests validate the system’s behavior as a whole and cover multiple components or services. Tools like `Puppeteer`, `Cypress`, or `WebDriverIO` can be used for writing E2E tests in Node.js.
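As a minimal sketch of the base of the pyramid, here is a unit test written with Node’s built-in `assert` module in a Mocha-style `describe`/`it` structure (assuming Mocha is installed; the `add` function is hypothetical):
const assert = require('assert');

// Unit under test (hypothetical)
function add(a, b) {
  return a + b;
}

describe('add()', () => {
  it('adds two positive numbers', () => {
    assert.strictEqual(add(2, 3), 5);
  });

  it('handles negative numbers', () => {
    assert.strictEqual(add(-2, 3), 1);
  });
});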
14. What is a stub? Explain using an example in Node.js.
In software testing, a stub is a test double that replaces a real component or function with a simplified version. Stubs are used to isolate the code being tested from its dependencies, allowing for more controlled and predictable test scenarios.
Here’s an example of using a stub in Node.js:
// Original function to be stubbed
function fetchDataFromExternalAPI(url, callback) {
// Perform API request and get data asynchronously
// ...
const data = { message: 'Hello, World!' };
callback(data);
}
// Function that depends on the fetchDataFromExternalAPI function
function processData() {
fetchDataFromExternalAPI('https://example.com/api', (data) => {
console.log(data.message);
});
}
// Test scenario using a stub
function testProcessDataWithStub() {
const fetchDataStub = (url, callback) => {
const data = { message: 'Stubbed data' };
callback(data);
};
// Replace the original function with the stub during the test
const originalFetchData = fetchDataFromExternalAPI;
fetchDataFromExternalAPI = fetchDataStub;
// Run the code that depends on the fetchDataFromExternalAPI function
processData();
// Restore the original function after the test
fetchDataFromExternalAPI = originalFetchData;
}
testProcessDataWithStub();
In the example above, we have a function `fetchDataFromExternalAPI()` that fetches data from an external API asynchronously and passes it to a callback function. The `processData()` function depends on this external API function to retrieve and process data.
During testing, we can use a stub to replace the original `fetchDataFromExternalAPI()` function with a simplified version (`fetchDataStub`). The stub provides predetermined data to the callback, allowing us to control the test scenario and avoid making actual API requests. After the test, the original function is restored to its normal behavior.
15. What are some common use cases of Node.js EventEmitter?
The `EventEmitter` class, provided by the core `events` module, is an implementation of the observer pattern. It is widely used for event-driven programming in Node.js and has various use cases, including:
- Custom events: The `EventEmitter` allows developers to define and emit custom events within their applications. This can be used to create a communication channel between different components or modules, allowing them to react to and handle events.
- Asynchronous communication: Node.js applications often use asynchronous patterns, and the `EventEmitter` facilitates asynchronous communication between different parts of the application. Events can be emitted and listened to asynchronously, enabling components to react to events as they occur.
- Notification systems: The `EventEmitter` can be used to implement notification systems where subscribers can listen to specific events and receive notifications when those events are emitted. This is useful for building real-time applications or systems that require event-driven updates.
- Inter-process communication: In distributed systems or multi-process applications, the `EventEmitter` can be used for inter-process communication. Different processes or instances can emit and listen to events, enabling communication and coordination between them.
- Plugins and extensions: The `EventEmitter` pattern is often used in frameworks or libraries to allow developers to extend or customize their functionality. By emitting events at specific points in the code, frameworks and libraries provide extension points where developers can plug in their own code and add additional behavior.
16. What is piping in Node.js?
Piping in Node.js refers to the process of connecting the output of one stream to the input of another stream, creating a pipeline for data to flow through. It simplifies handling data streams by automatically managing the data transfer between streams.
Here’s an example that demonstrates piping in Node.js:
const fs = require('fs');
const readableStream = fs.createReadStream('input.txt');
const writableStream = fs.createWriteStream('output.txt');
readableStream.pipe(writableStream);
In this example, we create a readable stream (`readableStream`) from a file `input.txt` and a writable stream (`writableStream`) to a file `output.txt`. The `pipe()` method is then used to pipe the output of the readable stream to the input of the writable stream. This automatically transfers the data from the readable stream to the writable stream.
17. Explain the concept of ‘domain’ in error handling in Node.js.
The `domain` module in Node.js provides a way to handle uncaught exceptions and errors that occur in asynchronous code. It allows encapsulating a set of related asynchronous operations and handling errors within the context of that domain. Note that the `domain` module is deprecated; new code usually relies on Promises, async/await, and `'error'` events instead, but the concept still appears in interviews and legacy codebases.
Here’s an example that demonstrates the concept of a domain in Node.js error handling:
const domain = require('domain');
const server = require('http').createServer();
// Create a domain for handling requests
const requestDomain = domain.create();
// Attach the domain to the server
server.on('request', (req, res) => {
requestDomain.run(() => {
// Request handling code within the domain
handleRequest(req, res);
});
});
// Error handling within the domain
requestDomain.on('error', (err) => {
console.error('Error caught in the request domain:', err);
});
// Function that performs asynchronous operations within the domain
function handleRequest(req, res) {
// Asynchronous code that may throw an error
setTimeout(() => {
// An error thrown inside a callback bound to the domain is routed
// to the domain's 'error' handler instead of crashing the process
throw new Error('Simulated error');
}, 1000);
// Other asynchronous operations within the domain
}
In this example, we create a domain (`requestDomain`) to encapsulate the request handling code. When a request is received by the server, the domain is activated using `requestDomain.run()`. Any errors thrown within the domain are caught by the `error` event listener attached to the domain (`requestDomain.on('error', ...)`) and can be handled appropriately.
18. How can we use the buffer class in Node.js for handling binary data?
In Node.js, the `Buffer` class is used to handle binary data, allowing for manipulation and transformation of raw binary data. It provides a way to work with data at the byte level.
Here’s an example that demonstrates the usage of the `Buffer` class for handling binary data:
// Creating a buffer from a string
const buffer1 = Buffer.from('Hello, World!', 'utf8');
console.log(buffer1); // Output: <Buffer 48 65 6c 6c 6f 2c 20 57 6f 72 6c 64 21>
// Creating an empty buffer
const buffer2 = Buffer.alloc(8);
console.log(buffer2); // Output: <Buffer 00 00 00 00 00 00 00 00>
// Modifying a buffer
buffer2.writeUInt32LE(123456789, 0);
console.log(buffer2); // Output: <Buffer 15 cd 5b 07 00 00 00 00>
In this example, we create a Buffer object using Buffer.from() to convert a string into a buffer with the specified encoding (utf8 in this case). We can then access and manipulate the individual bytes of the buffer.
We also use Buffer.alloc() to create an empty buffer with a specified length and the instance method writeUInt32LE() to write an unsigned 32-bit integer into the buffer at a specific offset.
19. Explain how routing is done in Node.js.
Routing in Node.js refers to the process of mapping incoming HTTP requests to specific handlers or controllers that are responsible for handling those requests. It allows developers to define different routes for different URLs and methods, enabling the application to respond differently based on the requested resource.
Here’s an example of routing in Node.js using the express framework:
const express = require('express');
const app = express();
// Define a route for the root URL
app.get('/', (req, res) => {
res.send('Hello, World!');
});
// Define a route for '/users' URL
app.get('/users', (req, res) => {
res.send('List of users');
});
// Define a route for '/users/:id' URL
app.get('/users/:id', (req, res) => {
const userId = req.params.id;
res.send(`User ID: ${userId}`);
});
// Start the server
app.listen(3000, () => {
console.log('Server is running on http://localhost:3000');
});
In this example, the express framework is used to define routes. The app.get() method is used to define a route for a specific URL and HTTP method (GET in this case). When a request matches a defined route, the corresponding callback function is executed, and the response can be sent back to the client.
The second route matches requests to the /users URL and sends a response with a message. The third route, /users/:id, uses a route parameter :id in the URL to capture a value and send it back in the response.
20. What is the use of the DNS module in Node.js?
The dns module in Node.js provides functionality for working with DNS (Domain Name System) to perform domain-related operations, such as hostname resolution, DNS lookups, and reverse IP address lookups.
Here’s an example that demonstrates the usage of the dns module in Node.js:
const dns = require('dns');
const domain = 'example.com';
// Resolve the IP address of a domain
dns.resolve4(domain, (err, addresses) => {
if (err) {
console.error(err);
return;
}
console.log(`IP addresses for ${domain}:`, addresses);
});
// Reverse lookup of an IP address
const ip = '93.184.216.34';
dns.reverse(ip, (err, hostnames) => {
if (err) {
console.error(err);
return;
}
console.log(`Hostnames for ${ip}:`, hostnames);
});
In this example, the dns.resolve4() function is used to resolve the IPv4 addresses associated with a domain (example.com in this case). The resolved addresses are passed as an array to the callback function.
The dns.reverse() function is used to perform a reverse lookup of an IP address (93.184.216.34 in this case), which returns an array of hostnames associated with that IP address.
21. Explain the concept of promise chaining in Node.js.
Promise chaining in Node.js allows for sequential execution of asynchronous operations by chaining multiple promises together. Each promise represents a potentially asynchronous task, and the chaining syntax ensures that subsequent tasks wait for the previous ones to complete before executing.
Here’s an example that demonstrates promise chaining in Node.js:
function asyncTask1() {
return new Promise((resolve, reject) => {
setTimeout(() => {
console.log('Async Task 1');
resolve('Task 1 Result');
}, 2000);
});
}
function asyncTask2(data) {
return new Promise((resolve, reject) => {
setTimeout(() => {
console.log('Async Task 2');
resolve('Task 2 Result');
}, 1000);
});
}
function asyncTask3(data) {
return new Promise((resolve, reject) => {
setTimeout(() => {
console.log('Async Task 3');
resolve('Task 3 Result');
}, 1500);
});
}
asyncTask1()
.then((result) => asyncTask2(result))
.then((result) => asyncTask3(result))
.then((result) => {
console.log('Final Result:', result);
})
.catch((error) => {
console.error('Error:', error);
});
In this example, three asynchronous tasks (asyncTask1, asyncTask2, and asyncTask3) are defined, each returning a promise that resolves with a result. By chaining the promises using .then(), the tasks are executed sequentially, with each task waiting for the previous one to complete.
The final .then() block receives the result from the last task in the chain and performs the desired action. If any of the promises in the chain reject, control flows to the .catch() block to handle the error.
22. Explain the process object in Node.js.
The process object in Node.js is a global object that provides information about, and control over, the current Node.js process. It allows interaction with the operating system, gives access to command-line arguments and environment variables, and emits process-related events.
Here’s an example that demonstrates the usage of the process object in Node.js:
// Accessing command-line arguments
console.log('Command-line arguments:', process.argv);
// Accessing environment variables
console.log('Environment variables:', process.env);
// Handling uncaught exceptions
process.on('uncaughtException', (err) => {
console.error('Uncaught Exception:', err);
process.exit(1);
});
// Emitting a custom event
process.emit('customEvent', 'Custom event data');
// Handling termination signals
process.on('SIGINT', () => {
console.log('Received SIGINT signal');
process.exit(0);
});
In this example, we access command-line arguments using process.argv, which is an array containing the command-line arguments passed to the Node.js process. The process.env object provides access to environment variables.
The process.on() function is used to listen for process-related events. Here we handle the uncaughtException event to catch unhandled exceptions and prevent the process from crashing, and we listen for the SIGINT signal to handle termination requests gracefully.
The process.emit() function allows emitting custom events on the process object, which can be useful for custom event-driven architectures within a single process.
23. What is the purpose of next() function in Node.js?
In Node.js middleware functions, the next() function is a callback parameter used to pass control to the next middleware function in the chain. It allows the application to move to the next middleware function or route handler, ensuring that each middleware function in the stack is executed in sequence.
Here’s an example that demonstrates the usage of the next() function in Node.js middleware:
const express = require('express');
const app = express();
// Middleware function 1
app.use((req, res, next) => {
console.log('Middleware 1');
next();
});
// Middleware function 2
app.use((req, res, next) => {
console.log('Middleware 2');
next();
});
// Route handler
app.get('/', (req, res) => {
res.send('Hello, World!');
});
// Start the server
app.listen(3000, () => {
console.log('Server is running on http://localhost:3000');
});
In this example, we define two middleware functions using app.use(). Each middleware function takes three parameters: req (request), res (response), and next (callback function). The next() function is called within each middleware function to pass control to the next middleware in the chain.
The next() function is crucial for the proper execution of middleware functions. It allows the application to move to the next middleware or the route handler (app.get() in this case).
When a request is made to the server, the middleware functions are executed sequentially, and the output will be:
Middleware 1
Middleware 2
After the middleware functions, the route handler is executed, and the response is sent back to the client.
24. What is the role of Express.js Router?
The Express.js Router is a middleware function that provides a modular way of defining routes and grouping related route handlers together. It allows developers to create separate router instances for different parts of the application and organize routes in a more structured manner.
Here’s an example that demonstrates the usage of the Express.js Router:
const express = require('express');
const app = express();
// Create an instance of the router
const router = express.Router();
// Define routes using the router
router.get('/', (req, res) => {
res.send('Home Page');
});
router.get('/about', (req, res) => {
res.send('About Page');
});
router.get('/contact', (req, res) => {
res.send('Contact Page');
});
// Mount the router
app.use('/pages', router);
// Start the server
app.listen(3000, () => {
console.log('Server is running on http://localhost:3000');
});
In this example, we create an instance of the Router using express.Router(). We then define routes using the router, similar to how routes are defined on the main app instance. The router’s routes are relative to the path specified when mounting the router with app.use().
In this case, the router is mounted at the /pages path, so the defined routes become /pages, /pages/about, and /pages/contact.
25. What are some popular Node.js middleware libraries?
Node.js middleware libraries are widely used to extend the functionality of web applications by adding additional features, handling common tasks, and enhancing the request-response flow. Some popular middleware libraries for Node.js are:
- Express.js: A minimalist web framework for Node.js that provides a robust set of middleware functions, including routing, error handling, logging, and more. It is widely used for building web applications and APIs.
- Body-parser: A middleware for parsing request bodies in different formats, such as JSON, URL-encoded, or multipart form data. It simplifies handling of incoming data and populates req.body with the parsed data.
- Morgan: A logging middleware that provides customizable request logging. It logs HTTP requests, including details like request method, URL, response status, and response time. It is often used for debugging and monitoring purposes.
- Helmet: A middleware that helps secure Express.js applications by setting various HTTP headers, including security-related headers like Content Security Policy (CSP), Strict-Transport-Security (HSTS), and more.
- Compression: A middleware that enables gzip compression of HTTP responses, reducing the size of the response body and improving network performance.
- Passport: A popular authentication middleware for Node.js that provides a flexible and modular approach to handle authentication strategies, including local authentication, OAuth, OpenID, and more.
- Multer: A middleware for handling multipart/form-data, primarily used for file uploads. It simplifies handling file uploads, including handling of file storage, limits, and renaming.
- Cors: A middleware that handles Cross-Origin Resource Sharing (CORS) headers, enabling controlled access to resources from different domains. It allows configuring CORS policies for better security and compatibility.
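To show how several of these libraries typically fit together, here is a hedged sketch of an Express app wiring up helmet, morgan, compression, and cors from the list above; the configuration values (log format, allowed origin, port) are illustrative only.
const express = require('express');
const helmet = require('helmet');
const morgan = require('morgan');
const compression = require('compression');
const cors = require('cors');
const app = express();
app.use(helmet()); // set security-related HTTP headers
app.use(morgan('combined')); // log each request in Apache combined format
app.use(compression()); // gzip-compress response bodies
app.use(cors({ origin: 'https://example.com' })); // allow cross-origin requests from one domain
app.use(express.json()); // parse JSON request bodies
app.get('/', (req, res) => {
  res.send('Middleware stack is active');
});
app.listen(3000, () => {
  console.log('Server is running on http://localhost:3000');
});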
26. What is the purpose of the underscore prefix (like _read) in Node.js?
In Node.js, the underscore prefix (e.g., _read, _write) is commonly used as a convention to indicate that a method or property is intended for internal use and should not be called directly by user code. It denotes private or internal methods that are not part of the public API.
Here’s an example that demonstrates the usage of the underscore prefix in Node.js:
class MyClass {
_privateMethod() {
// Private method logic
}
publicMethod() {
// Public method logic
this._privateMethod(); // Internal method called within the class
}
}
const instance = new MyClass();
instance.publicMethod();
instance._privateMethod(); // Directly calling internal method (not recommended)
In this example, _privateMethod() is intended for internal use within the MyClass class and should not be called directly from outside the class. The publicMethod() can call _privateMethod() internally, but user code should only interact with the public methods of the class.
27. Explain session handling in a Node.js web application.
Session handling in a Node.js web application involves managing user sessions and maintaining user-specific data across multiple requests. It enables the server to identify and authenticate users and store session-related data securely.
Here’s an example that demonstrates session handling in a Node.js web application using the express-session middleware:
const express = require('express');
const session = require('express-session');
const app = express();
// Configure session middleware
app.use(
session({
secret: 'mysecret', // Secret used to sign the session ID cookie
resave: false, // Whether to save the session for each request
saveUninitialized: true, // Whether to save uninitialized sessions
cookie: {
secure: true, // Whether the cookie should be sent over HTTPS only
httpOnly: true, // Whether the cookie should be accessible only through the HTTP(S) protocol
maxAge: 3600000, // Session duration in milliseconds
},
})
);
// Routes
app.get('/', (req, res) => {
// Store a value in the session
req.session.username = 'John';
res.send('Session set');
});
app.get('/user', (req, res) => {
// Access the value stored in the session
const username = req.session.username;
res.send(`Hello, ${username}`);
});
// Start the server
app.listen(3000, () => {
console.log('Server is running on http://localhost:3000');
});
In this example, we use the express-session middleware to handle sessions. The middleware is configured with options such as secret (used to sign the session ID cookie), resave (whether to save the session on every request), saveUninitialized (whether to save uninitialized sessions), and cookie options for session duration and security.
When a user visits the root route (/), a value is stored in the session using req.session. In the /user route, the stored value is accessed and displayed in the response.
The express-session middleware handles creating and managing the session for each user. It automatically sets a session ID cookie in the client’s browser, which is used to identify subsequent requests from the same user.
28. How can we perform form validation on the server side in Node.js?
Performing form validation on the server side in Node.js involves validating the submitted form data sent from the client to ensure it meets the required criteria and is safe to process. This validation process helps prevent potential security vulnerabilities and ensures the integrity and correctness of the data.
Here’s a general approach to perform form validation on the server side in Node.js:
- Create validation rules: Define the validation rules that the form inputs must satisfy. This can include checking for required fields, data types, length limits, pattern matching, and more.
- Validate the form data: In the server-side code that handles the form submission, validate the submitted form data using the defined validation rules. This can be done manually or using a validation library/package such as express-validator or joi.
- Handle validation errors: If any validation errors occur, respond to the client with appropriate error messages or status codes. This can be done by returning an error response or redirecting the user back to the form page with error messages.
Here’s an example using the express-validator middleware to perform form validation in Node.js:
const express = require('express');
const { body, validationResult } = require('express-validator');
const app = express();
app.use(express.urlencoded({ extended: true }));
app.post(
'/signup',
[
body('username').notEmpty().withMessage('Username is required'),
body('email').isEmail().withMessage('Invalid email address'),
body('password').isLength({ min: 6 }).withMessage('Password must be at least 6 characters long'),
],
(req, res) => {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
// Process the valid form data
// ...
}
);
app.listen(3000, () => {
console.log('Server is running on http://localhost:3000');
});
In this example, the express-validator middleware is used to define validation rules for the form inputs in the /signup route. The body() function defines validation rules for individual form fields. If any validation errors occur, they are captured using validationResult(req), and an appropriate response is sent back to the client.
29. Explain the role of Node.js ‘path’ module.
The path module in Node.js provides utilities for working with file paths and directory paths. It offers methods for manipulating and transforming paths, resolving absolute and relative paths, extracting file extensions, and more.
Here’s an example that demonstrates the usage of the path module in Node.js:
const path = require('path');
const filePath = '/path/to/file.txt';
// Get the base name of the file
const basename = path.basename(filePath);
console.log('File basename:', basename); // Output: file.txt
// Get the directory name of the file
const dirname = path.dirname(filePath);
console.log('File directory:', dirname); // Output: /path/to
// Get the file extension
const extension = path.extname(filePath);
console.log('File extension:', extension); // Output: .txt
// Resolve an absolute path
const absolutePath = path.resolve('dir', 'file.txt');
console.log('Absolute path:', absolutePath); // Output: /path/to/current/dir/file.txt
In this example, we require the path module and use its various methods:
- path.basename(): Retrieves the base name (last component) of a file path.
- path.dirname(): Retrieves the directory name of a file path.
- path.extname(): Retrieves the file extension from a file path.
- path.resolve(): Resolves a sequence of paths or path segments into an absolute path.
30. Explain the role of the ‘query string’ module in Node.js.
The querystring module in Node.js provides utilities for working with URL query strings. It allows parsing and formatting query strings, encoding and decoding special characters, and manipulating query parameters.
Here’s an example that demonstrates the usage of the querystring module in Node.js:
const querystring = require('querystring');
const params = {
name: 'John Doe',
age: 30,
};
// Convert an object to a query string
const queryString = querystring.stringify(params);
console.log('Query string:', queryString); // Output: name=John%20Doe&age=30
// Parse a query string into an object
const parsedParams = querystring.parse(queryString);
console.log('Parsed parameters:', parsedParams); // Output: { name: 'John Doe', age: '30' }
// Encode and decode special characters
const encodedString = querystring.escape('Hello, World!');
console.log('Encoded string:', encodedString); // Output: Hello%2C%20World%21
const decodedString = querystring.unescape('Hello%2C%20World%21');
console.log('Decoded string:', decodedString); // Output: Hello, World!
In this example, we require the querystring module and use its methods:
- querystring.stringify(): Converts an object into a query string representation.
- querystring.parse(): Parses a query string into an object.
- querystring.escape(): Encodes special characters in a string.
- querystring.unescape(): Decodes encoded characters in a string.
Advanced Questions
1. How does Node.js handle uncaught exceptions?
Node.js provides a mechanism to handle uncaught exceptions using the process object’s uncaughtException event. By registering a listener for this event, you can perform custom error handling and prevent your application from crashing.
Here’s an example:
process.on('uncaughtException', (err) => {
console.error('Uncaught Exception:', err);
// Perform any necessary cleanup or logging
process.exit(1); // Exit the process with a non-zero code
});
// Example code with intentional uncaught exception
setTimeout(() => {
throw new Error('Uncaught Exception');
}, 1000);
In the above example, the uncaughtException event listener is registered to log the error and exit the process with a non-zero code. This prevents the application from continuing in an undefined state after an unhandled exception.
2. Explain the working of the libuv library in Node.js.
The libuv library is a key component of Node.js that provides an abstraction layer for handling I/O operations and managing the event loop. It is responsible for enabling Node.js to be asynchronous and event-driven.
libuv abstracts various system-specific operations such as file system access, networking, and threading, and provides a unified interface for Node.js to interact with the underlying operating system.
Here’s an example that demonstrates the usage of libuv for asynchronous file I/O:
const fs = require('fs');
fs.readFile('file.txt', 'utf8', (err, data) => {
if (err) {
console.error('Error reading file:', err);
return;
}
console.log('File content:', data);
});
In the above example, the fs.readFile function is provided by Node.js’s fs module, which internally uses libuv to perform asynchronous file I/O. When the file reading operation is complete, the provided callback function is invoked, either with an error or with the file content.
3. How does the Event Loop work in Node.js?
The Event Loop is a central component of Node.js that allows it to handle asynchronous operations efficiently. It is responsible for processing and executing callbacks in a non-blocking manner.
Here’s a simplified explanation of how the Event Loop works:
- The Event Loop continuously checks for pending callbacks or events.
- If there are no pending events, the Event Loop will wait for new events to occur.
- When an event occurs (such as I/O completion, timer expiration, or an incoming request), a callback associated with that event is added to a queue.
- The Event Loop then processes the callbacks in the queue, one at a time, in the order they were queued, working through its phases (timers, I/O callbacks, check, and so on).
- Each callback is executed until it completes or yields to allow other pending callbacks to run.
- Once the callback completes, its associated resources are released, and the Event Loop moves on to the next callback in the queue.
- This process repeats until all callbacks in the queue have been processed.
Here’s a simple example that demonstrates the Event Loop behavior:
console.log('Start');
setTimeout(() => {
console.log('Timeout 1');
}, 1000);
setTimeout(() => {
console.log('Timeout 2');
}, 500);
setImmediate(() => {
console.log('Immediate');
});
console.log('End');
In the above example, the output will be as follows:
Start
End
Immediate
Timeout 2
Timeout 1
The event loop allows callbacks scheduled by setTimeout and setImmediate to be executed asynchronously. The setTimeout callbacks are executed in the order determined by their timeout values, while the setImmediate callback runs right after the current phase of the event loop completes, which is why it appears before the timers in this example.
4. What is the use of setImmediate() function?
The setImmediate function in Node.js is used to schedule a callback function to be executed after the current phase of the event loop. It allows you to defer the execution of a callback until pending I/O events have been processed.
Here’s an example that demonstrates the usage of setImmediate:
console.log('Start');
setImmediate(() => {
console.log('Immediate callback');
});
console.log('End');
The output of the above example will be:
Start
End
Immediate callback
In this example, the setImmediate callback is scheduled to run in the next iteration of the event loop. It is executed after the synchronous code (console.log('End')) has finished executing.
5. What is the difference between process.nextTick() and setImmediate()?
Both process.nextTick() and setImmediate() are used to defer the execution of a callback function, but there are some differences in their behavior and usage.
process.nextTick():
- The process.nextTick() function defers the execution of a callback until immediately after the current operation completes, before the event loop continues.
- It has a higher priority than setTimeout() and setImmediate(). Callbacks scheduled using process.nextTick() are executed before any I/O events or timers.
- It allows you to “jump the queue” and execute a callback immediately after the current code block. Here’s an example that demonstrates the usage of process.nextTick():
console.log('Start');
process.nextTick(() => {
console.log('nextTick callback');
});
console.log('End');
The output of the above example will be:
Start
End
nextTick callback
In this example, the process.nextTick() callback is executed immediately after the current code block, before the event loop moves on to its next phase.
setImmediate():
- The setImmediate() function defers the execution of a callback until the check phase of the event loop, after pending I/O events are processed.
- It has a lower priority than process.nextTick(). Callbacks scheduled using setImmediate() are executed after process.nextTick() callbacks and after pending I/O callbacks.
- It allows you to schedule a callback to be executed asynchronously. Here’s an example that demonstrates the usage of setImmediate():
console.log('Start');
setImmediate(() => {
console.log('Immediate callback');
});
console.log('End');
The output of the above example will be:
Start
End
Immediate callback
In this example, the setImmediate() callback is scheduled to run in the next iteration of the event loop, after the current code block and any pending I/O events.
6. How does Node.js handle long polling or WebSockets?
Node.js can handle long polling and WebSockets by utilizing its event-driven, non-blocking architecture. Here’s an overview of how it works for each approach:
- Long Polling:
- With long polling, the client sends a request to the server and keeps the connection open until the server has new data to send back.
- Node.js can handle this by keeping the request open and periodically checking for new data or updates.
- When new data is available, the server responds to the client, and the process repeats.
- This allows real-time communication between the server and the client without continuously sending requests.
- Libraries like socket.io provide abstractions for handling long polling and managing WebSocket fallbacks in Node.js.
WebSockets:
- WebSockets provide full-duplex communication channels between the server and the client.
- Node.js can handle WebSockets by using WebSocket libraries such as ws or socket.io.
- These libraries establish a WebSocket connection between the server and the client.
- Once the connection is established, both the server and the client can send data to each other in real-time.
- Node.js’s event-driven nature allows it to handle WebSocket messages asynchronously, enabling efficient bidirectional communication.
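As a concrete illustration, here is a hedged sketch of a minimal WebSocket echo server using the ws package mentioned above; the port and messages are illustrative.
const WebSocket = require('ws');
// Create a WebSocket server listening on port 8080
const wss = new WebSocket.Server({ port: 8080 });
wss.on('connection', (socket) => {
  console.log('Client connected');
  // Receive messages from the client and echo them back
  socket.on('message', (message) => {
    console.log('Received:', message.toString());
    socket.send(`Echo: ${message}`);
  });
  // Push a message to the client without waiting for a request
  socket.send('Welcome to the WebSocket server');
});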
7. How would you go about handling server-side caching in Node.js?
Handling server-side caching in Node.js involves caching frequently accessed data or resources to improve performance and reduce the load on the server. Here are a few approaches you can use:
In-memory caching:
- Use an in-memory cache like node-cache or memory-cache to store key-value pairs in memory.
- Cache the results of expensive database queries or computed data to avoid recomputation.
- Example using memory-cache:
const cache = require('memory-cache');
function getDataFromCache(key) {
const data = cache.get(key);
if (data) {
// Data found in cache, return it
return data;
}
// Data not found in cache, fetch it and store in cache
const fetchedData = fetchDataFromDatabase();
cache.put(key, fetchedData, 10000); // Cache for 10 seconds
return fetchedData;
}
Caching with Redis:
- Use Redis, an in-memory data store, for distributed caching across multiple Node.js instances or servers.
- Store frequently accessed data in Redis and retrieve it from there instead of making expensive queries.
- Example using the redis package:
const redis = require('redis');
const client = redis.createClient();
function getDataFromCache(key) {
return new Promise((resolve, reject) => {
client.get(key, (err, data) => {
if (err) {
reject(err);
return;
}
if (data) {
// Data found in cache, return it
resolve(JSON.parse(data));
return;
}
// Data not found in cache, fetch it and store in cache
fetchDataFromDatabase()
.then((fetchedData) => {
client.setex(key, 10, JSON.stringify(fetchedData)); // Cache for 10 seconds
resolve(fetchedData);
})
.catch(reject);
});
});
}
Response caching with HTTP headers:
- Set appropriate cache-related HTTP headers, such as Cache-Control and Expires, to enable client-side caching.
- Example:
function handleRequest(req, res) {
// Check if the response is already in the client's cache
const cachedResponse = getCachedResponse(req.url);
if (cachedResponse) {
res.setHeader('X-Cached', 'true');
res.send(cachedResponse);
} else {
// Process the request and generate the response
const response = generateResponse(req);
// Cache the response on the server and set appropriate headers
cacheResponse(req.url, response);
res.setHeader('Cache-Control', 'public, max-age=3600'); // Cache for 1 hour
res.setHeader('Expires', new Date(Date.now() + 3600000).toUTCString());
res.send(response);
}
}
8. Explain how to use async/await in Node.js?
Async/await is a modern JavaScript feature that simplifies asynchronous code execution and error handling in Node.js. It allows you to write asynchronous code in a more synchronous style, making it easier to understand and maintain. Here’s how to use async/await in Node.js:
- Declare an async function using the async keyword:
async function fetchData() {
// Asynchronous code goes here
}
- Inside the async function, use the await keyword to pause the execution and wait for a Promise to resolve:
async function fetchData() {
const result = await someAsyncOperation();
// Process the result
}
- Use try/catch blocks to handle errors:
async function fetchData() {
try {
const result = await someAsyncOperation();
// Process the result
} catch (error) {
// Handle the error
}
}
9. How does error propagation work in Node.js callbacks and promises?
In Node.js, error propagation works differently between Callbacks and Promises:
Callbacks:
- In traditional Node.js callbacks, errors are typically propagated as the first argument to the callback function.
- The convention is to check for errors and handle them within the callback function.
- Example with callback error propagation:
const fs = require('fs');
function readFileWithCallback(filename, callback) {
fs.readFile(filename, 'utf8', (err, data) => {
if (err) {
callback(err); // Pass the error to the callback
return;
}
callback(null, data); // Pass the result to the callback
});
}
// Usage
readFileWithCallback('file.txt', (err, data) => {
if (err) {
console.error('Error reading file:', err);
return;
}
console.log('File content:', data);
});
Promises:
- With Promises, error propagation is achieved by chaining .catch() handlers to catch and handle any errors that occur in the Promise chain.
- If any Promise in the chain rejects, control flows directly to the nearest .catch() handler.
- Example with Promise error propagation:
const fs = require('fs');
function readFileWithPromise(filename) {
return new Promise((resolve, reject) => {
fs.readFile(filename, 'utf8', (err, data) => {
if (err) {
reject(err); // Reject the Promise with the error
return;
}
resolve(data); // Resolve the Promise with the result
});
});
}
// Usage
readFileWithPromise('file.txt')
.then((data) => {
console.log('File content:', data);
})
.catch((err) => {
console.error('Error reading file:', err);
});
10. What are the differences between ‘fork’, ‘spawn’, and ‘exec’ methods in Node.js?
Here’s a comparison between the fork
, spawn
, and exec
methods in Node.js:
Method | Purpose | Communication | Shell Execution | Child Process Type | Event Emitter |
---|---|---|---|---|---|
fork | Used to create a new Node.js process | Inter-process (IPC) | No | Node.js | Yes |
spawn | Used to execute external commands | Streams | No | Any | No |
exec | Used to execute external commands with a shell | Streams (output buffered to a callback) | Yes | Any | No |
fork:
- Creates a new Node.js process using the child_process.fork() method.
- Communication between the parent and child process is achieved through inter-process communication (IPC).
- Enables event-based communication with the child process using the message event and the send() method.
- The child process runs Node.js code, making it useful for creating child worker processes.
spawn:
- Executes external commands or processes using the child_process.spawn() method.
- Provides communication between the parent and child process through streams (stdin, stdout, stderr).
- It does not use a shell to execute the command, making it more efficient for executing commands directly.
- The child process can be any executable program or script, not limited to Node.js.
exec:
- Executes external commands or processes using the child_process.exec() method.
- Supports shell execution, which allows for the use of shell features and pipes.
- Buffers the child’s stdout and stderr and passes them to a callback when the command completes; the returned ChildProcess also exposes the standard streams.
- The child process can be any executable program or script, not limited to Node.js.
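To make the differences concrete, here is a hedged sketch showing all three methods side by side; the commands and the worker.js file are illustrative assumptions only.
const { fork, spawn, exec } = require('child_process');
// fork: run another Node.js script and exchange messages over IPC
const worker = fork('worker.js'); // assumes a worker.js script exists
worker.on('message', (msg) => console.log('From worker:', msg));
worker.send({ task: 'start' });
// spawn: run an external command and consume its output as a stream
const ls = spawn('ls', ['-l']);
ls.stdout.on('data', (chunk) => console.log('spawn output:', chunk.toString()));
ls.on('close', (code) => console.log('spawn exited with code', code));
// exec: run a command through a shell and get the buffered output in a callback
exec('ls -l | wc -l', (err, stdout, stderr) => {
  if (err) {
    console.error('exec error:', err);
    return;
  }
  console.log('exec output:', stdout.trim());
});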
11. How does Node.js internally handle HTTP request methods like GET, POST?
Node.js handles HTTP request methods like GET, POST, and others using its built-in http module. Here’s an overview of how it works:
Creating an HTTP server:
- You create an HTTP server using the http.createServer() method and provide a callback function that gets invoked for each incoming request.
- The callback function receives the request and response objects as arguments and allows you to handle the request.
Handling GET requests:
- For GET requests, the server’s callback function is executed when a client sends a GET request to the server.
- You can access the URL, query parameters, headers, and other request information from the request object.
- Example:
const http = require('http');
const server = http.createServer((req, res) => {
// Handle GET request
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello, GET request!');
});
server.listen(3000, () => {
console.log('Server is listening on port 3000');
});
Handling POST requests:
- For POST requests, you need to handle the request data and any payload sent by the client.
- Node.js provides event-based handling through the request’s data and end events.
- You collect the request data in chunks in the data event and process it when the end event is emitted.
- Example:
const http = require('http');
const server = http.createServer((req, res) => {
// Handle POST request
let body = '';
req.on('data', (chunk) => {
body += chunk;
});
req.on('end', () => {
// Process the request body
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end(`Received POST data: ${body}`);
});
});
server.listen(3000, () => {
console.log('Server is listening on port 3000');
});
12. How would you handle exceptions in async/await style code in Node.js?
When using async/await in Node.js, you can handle exceptions using a combination of try/catch blocks and Promise rejections. Here’s how you handle exceptions in async/await style code:
- Wrap the code within an async function.
- Use try/catch blocks to catch synchronous exceptions.
- Use try/catch blocks or .catch() to catch and handle asynchronous exceptions.
Here’s an example that demonstrates exception handling with async/await:
async function fetchData() {
try {
const result = await someAsyncOperation();
// Process the result
} catch (error) {
// Handle the error
console.error('Error:', error);
}
}
// Usage
fetchData();
In the above example, the fetchData() function is an async function that awaits an asynchronous operation (someAsyncOperation()) and processes the result. If an exception occurs during the awaited operation, it will be caught in the catch block, allowing you to handle the error appropriately.
Alternatively, you can use Promise rejections to handle exceptions:
async function fetchData() {
const result = await someAsyncOperation().catch((error) => {
// Handle the error
console.error('Error:', error);
throw error; // Re-throw the error to propagate it further
});
// Process the result
}
In this example, the catch block is used to handle the rejected Promise. The error can be logged and, if needed, re-thrown to propagate it further up the call stack.
13. Explain how streams work in Node.js and when to use them?
Streams are a fundamental concept in Node.js that provide an efficient way to handle data flows, especially for large or continuous data processing. Streams allow you to read or write data in chunks, rather than loading it all into memory at once. Here’s an overview of how streams work:
- Streams represent a continuous flow of data divided into chunks, which can be read from or written to asynchronously.
- Streams implement the EventEmitter interface, allowing you to listen for events such as data, end, and error.
- Node.js provides four types of streams: Readable, Writable, Duplex, and Transform.
- Readable streams allow you to read data from a source.
- Writable streams allow you to write data to a destination.
- Duplex streams can be read from and written to simultaneously.
- Transform streams modify or transform data as it passes through them.
Here’s an example that demonstrates reading data from a file using a Readable stream and piping it to a Writable stream:
const fs = require('fs');
const readableStream = fs.createReadStream('input.txt');
const writableStream = fs.createWriteStream('output.txt');
readableStream.pipe(writableStream);
In this example, the createReadStream() function creates a Readable stream to read data from the ‘input.txt’ file. The createWriteStream() function creates a Writable stream to write data to the ‘output.txt’ file. The pipe() method connects the two streams, allowing data to flow from the Readable stream to the Writable stream.
Streams are beneficial in the following scenarios:
- Processing large files or continuous data where loading everything into memory is not feasible.
- Network communication, such as handling HTTP requests and responses.
- Real-time data processing, such as logging, data transformation, or data aggregation.
15. What are your strategies for writing an error handling middleware for a large-scale Node.js application?
When writing an error handling middleware for a large-scale Node.js application, it’s crucial to have a robust and centralized error handling mechanism. Here are some strategies to consider:
Define an error handling middleware:
- Create an error handling middleware function that has four parameters: err, req, res, and next.
- Use the err parameter to capture the error.
- Example:
function errorHandler(err, req, res, next) {
console.error('Error:', err);
const statusCode = err.statusCode || 500;
res.status(statusCode).json({ error: err.message });
}
Centralize error handling:
- Register the error handling middleware as the last middleware in the middleware chain, after all other routes and middleware.
- This ensures that any unhandled errors in the previous middleware or routes are caught by the centralized error handler.
- Example:
app.use(errorHandler);
Use custom error classes:
- Create custom error classes that inherit from the Error class to represent specific types of errors in your application.
- Add custom properties and methods to these classes to provide additional context and behavior.
- Example:
class NotFoundError extends Error {
constructor(message) {
super(message);
this.name = 'NotFoundError';
this.statusCode = 404;
}
}
Use next(err) to pass errors to the error handling middleware:
- In your route handlers or middleware, pass any encountered errors to the error handling middleware using next(err).
- This allows errors to propagate to the centralized error handler.
- Example:
app.get('/api/user/:id', (req, res, next) => {
const userId = req.params.id;
if (!userId) {
const err = new NotFoundError('User not found');
return next(err);
}
// Process the request
});
16. How do you prevent your Node.js application from crashing due to unhandled exceptions? give relevant code examples
To prevent a Node.js application from crashing due to unhandled exceptions, you can utilize the process object’s uncaughtException event and the unhandledRejection event. Here’s how you can handle them:
Handling uncaught exceptions:
- Register a listener for the uncaughtException event using process.on('uncaughtException', handler).
- In the event listener, log the error and perform any necessary cleanup or graceful shutdown actions.
- Example:
process.on('uncaughtException', (err) => {
console.error('Uncaught Exception:', err);
// Perform cleanup or graceful shutdown
process.exit(1);
});
Handling unhandled promise rejections:
- Register a listener for the unhandledRejection event using process.on('unhandledRejection', handler).
- In the event listener, log the error and handle it appropriately. You can ignore it, log it, or perform any necessary error recovery.
- Example:
process.on('unhandledRejection', (err) => {
console.error('Unhandled Rejection:', err);
// Perform error handling or recovery
});
17. How can you avoid callback hell in Node.js?
Callback hell refers to the situation where deeply nested callbacks make the code difficult to read and maintain. You can avoid callback hell in Node.js by using techniques such as modularization, Promises, and async/await. Here are some examples:
Modularization and separation of concerns:
- Split your code into smaller functions with a single responsibility.
- Use named functions or separate modules to handle specific tasks.
- Example:
function fetchData(callback) {
// ...
}
function processData(data, callback) {
// ...
}
function displayResult(result) {
// ...
}
fetchData((err, data) => {
if (err) {
console.error('Error:', err);
return;
}
processData(data, (err, result) => {
if (err) {
console.error('Error:', err);
return;
}
displayResult(result);
});
});
Promises:
- Convert your callback-based functions into Promises using util.promisify or libraries like bluebird.
- Chain Promises using .then() and .catch() to achieve a more readable and sequential flow.
- Example using Promises:
const util = require('util');
const fs = require('fs');
const readFile = util.promisify(fs.readFile);
readFile('file.txt', 'utf8')
.then((data) => processData(data))
.then((result) => displayResult(result))
.catch((err) => console.error('Error:', err));
Async/await:
- Use async/await to write asynchronous code in a more synchronous style.
- Wrap your code in an async function and use await to wait for Promise resolutions.
- Example using async/await:
async function main() {
try {
const data = await readFile('file.txt', 'utf8');
const result = await processData(data);
displayResult(result);
} catch (err) {
console.error('Error:', err);
}
}
main();
18. What strategies can you use to handle race conditions in Node.js?
Race conditions can occur when multiple asynchronous operations compete for shared resources, leading to unpredictable and undesirable results. Here are some strategies to handle race conditions in Node.js:
Synchronization with locks:
- Use locks or mutual exclusion mechanisms to ensure that only one process or thread can access a shared resource at a time.
- Node.js applications can use synchronization primitives like Mutex and Semaphore through libraries such as async-mutex or async-lock.
- Example with async-mutex:
const { Mutex } = require('async-mutex');
const mutex = new Mutex();
async function accessSharedResource() {
const release = await mutex.acquire();
try {
// Access the shared resource
}
finally {
release();
}
}
Atomic operations:
- Use atomic operations provided by databases or other external services to ensure that read and write operations on shared resources are performed atomically.
- Atomic operations guarantee that the operation is completed as a single, indivisible unit.
- Examples include atomic update operations in databases or optimistic concurrency control mechanisms.
Use appropriate data structures:
- Choose data structures that are designed to handle concurrent access safely.
- For example, using a concurrent hash map or a concurrent queue can help mitigate race conditions.
- Libraries like node-cache provide built-in support for concurrent operations on in-memory caches.
Proper error handling and fallback mechanisms:
- Implement proper error handling and fallback mechanisms to handle exceptions or failures that may occur during concurrent operations.
- This ensures that errors are caught and handled gracefully, preventing race conditions from causing unexpected behavior.
Event-driven programming and message passing:
- Leverage event-driven programming and message passing mechanisms to coordinate concurrent operations.
- Use events or message queues to ensure that operations on shared resources are executed in a controlled and synchronized manner.
19. How would you manage sessions in scalable Node.js applications?
Managing sessions in scalable Node.js applications typically involves using session middleware and a session store. Here’s how you can manage sessions:
- Install and configure session middleware:
- Use a session middleware like express-session to handle session management in your Node.js application.
- Configure the session middleware with options such as a secret key, cookie settings, and a session store. Example using express-session:
const session = require('express-session');
const RedisStore = require('connect-redis')(session);
app.use(
session({
secret: 'your-secret-key',
resave: false,
saveUninitialized: false,
store: new RedisStore({ url: 'redis://localhost:6379' }),
cookie: { secure: true },
})
);
- Configure a session store:
- Choose a session store that suits your scalability requirements.
- Popular session stores include Redis, MongoDB, or in-memory stores like memorystore.
- A session store allows session data to be stored and retrieved across multiple instances or servers in a distributed environment. Example using Redis as the session store:
const RedisStore = require('connect-redis')(session);
app.use(
session({
// ...
store: new RedisStore({ url: 'redis://localhost:6379' }),
// ...
})
);
- Use session data:
- Once the session middleware is configured, you can access session data in your routes or middleware using req.session.
- Example:
app.get('/dashboard', (req, res) => {
const user = req.session.user;
// ...
});
20. What strategies can you use to secure REST APIs in Node.js?
Securing REST APIs in Node.js involves implementing various security measures to protect against common vulnerabilities. Here are some strategies to secure REST APIs:
- Authentication and authorization:
- Implement user authentication mechanisms such as JWT (JSON Web Tokens), OAuth, or session-based authentication.
- Verify user credentials or tokens before allowing access to protected endpoints.
- Example using JWT authentication:
const jwt = require('jsonwebtoken');
app.post('/login', (req, res) => {
// Validate user credentials
const user = { id: 1, username: 'john' };
const token = jwt.sign(user, 'your-secret-key');
res.json({ token });
});
app.get('/protected', authenticateToken, (req, res) => {
// Handle the request for protected endpoint
res.send('Protected content');
});
function authenticateToken(req, res, next) {
const authHeader = req.headers.authorization;
const token = authHeader && authHeader.split(' ')[1];
if (!token) {
return res.sendStatus(401);
}
jwt.verify(token, 'your-secret-key', (err, user) => {
if (err) {
return res.sendStatus(403);
}
req.user = user;
next();
});
}
- Input validation and sanitization:
- Validate and sanitize all user input to prevent injection attacks, cross-site scripting (XSS), or other vulnerabilities.
- Use libraries like express-validator to validate and sanitize request parameters, query parameters, and request bodies. Example using express-validator:
const { body, validationResult } = require('express-validator');
app.post(
'/users',
[
// Validate and sanitize input fields
body('name').isLength({ min: 3 }).trim().escape(),
body('email').isEmail().normalizeEmail(),
// ...
],
(req, res) => {
// Handle the request
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
// ...
}
);
- Rate limiting:
- Implement rate limiting to prevent abuse or excessive requests to your API.
- Use middleware like express-rate-limit to set limits on the number of requests per IP address or user. Example using express-rate-limit:
const rateLimit = require('express-rate-limit');
const apiLimiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // Max requests per window
});
app.use('/api/', apiLimiter);
- HTTPS and SSL/TLS:
- Always use HTTPS for secure communication and transmit sensitive data over encrypted channels.
- Obtain and configure SSL/TLS certificates to enable secure HTTPS connections.
- Use the built-in https module or reverse proxies like Nginx to enable HTTPS. Example using the https module:
const https = require('https');
const fs = require('fs');
const options = {
key: fs.readFileSync('privatekey.pem'),
cert: fs.readFileSync('certificate.pem'),
};
https.createServer(options, app).listen(443);
21. Explain how garbage collection works in Node.js.
Garbage collection in Node.js is the automatic process of reclaiming memory that is no longer in use by the application. It helps manage memory efficiently and avoid memory leaks. Here’s how garbage collection works in Node.js:
Memory Allocation:
- When Node.js executes code, it allocates memory for objects and variables as needed.
- Objects are stored in the heap, while function calls and local variables are stored in the stack.
Mark and Sweep:
- Node.js uses a garbage collector called the “Mark and Sweep” algorithm to identify and collect unreachable objects.
- The garbage collector starts by marking all objects in memory as “reachable” from the root objects, such as global variables and active function calls.
- It then traverses the object graph, marking all objects reachable from the root objects.
- Any objects not marked as reachable are considered garbage.
Memory Reclamation:
- Once the garbage collector has identified the garbage objects, it reclaims the memory occupied by those objects.
- The garbage collector frees the memory by updating the memory management data structures and making the memory available for future allocations.
Event-based and Incremental Collection:
- Node.js performs garbage collection asynchronously and in an incremental manner to minimize interruptions to the application’s execution.
- The garbage collector runs concurrently with the application and periodically interrupts the JavaScript execution to perform garbage collection.
- It attempts to avoid long pauses by performing garbage collection incrementally and distributing the work over multiple event loops.
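Garbage collection itself is automatic, but its effect can be observed. Here is a hedged sketch that samples heap usage with process.memoryUsage(); the allocation pattern is illustrative, and global.gc() is only available when Node.js is started with the --expose-gc flag.
// Run with: node --expose-gc gc-demo.js  (the file name is illustrative)
function heapUsedMB() {
  return (process.memoryUsage().heapUsed / 1024 / 1024).toFixed(2);
}
console.log('Heap used before allocation:', heapUsedMB(), 'MB');
// Allocate a large temporary structure so it becomes garbage afterwards
let data = new Array(1e6).fill('some temporary string');
console.log('Heap used after allocation:', heapUsedMB(), 'MB');
// Drop the only reference, making the array unreachable
data = null;
// Force a collection (only works with --expose-gc) and observe the heap shrink
if (global.gc) {
  global.gc();
  console.log('Heap used after garbage collection:', heapUsedMB(), 'MB');
} else {
  console.log('Run with --expose-gc to trigger collection manually');
}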
22. How would you scale a Node.js application? Discuss different strategies.
Scaling a Node.js application involves managing increased load and ensuring high availability as the application grows. Here are different strategies to scale a Node.js application:
Horizontal Scaling:
- Add more machines or servers to distribute the workload across multiple instances.
- Use a load balancer to distribute incoming requests evenly among the instances.
- Horizontal scaling helps increase the application’s capacity and handle more concurrent requests.
Vertical Scaling:
- Upgrade the hardware resources of the server hosting the Node.js application.
- Increase CPU, memory, or storage capacity to handle higher loads.
- Vertical scaling allows the application to handle more requests on a single machine.
Caching:
- Implement caching mechanisms to store frequently accessed data in memory or a distributed cache.
- Use a caching layer like Redis or Memcached to cache database query results or expensive computations.
- Caching reduces the load on backend systems and improves response times.
Database Scaling:
- Scale the database layer to handle increased read and write operations.
- Implement database replication, sharding, or clustering to distribute the database workload across multiple servers.
- Use techniques like read replicas or database partitioning to improve database performance and handle more concurrent requests.
Asynchronous and Non-blocking I/O:
- Leverage Node.js’s event-driven, non-blocking I/O model to handle concurrent requests efficiently.
- Utilize asynchronous APIs and libraries to avoid blocking the event loop and make optimal use of system resources.
- Design the application to handle I/O-intensive operations asynchronously, enabling better scalability.
Microservices Architecture:
- Decompose the application into smaller, independent services that can be scaled individually.
- Use message queues, event-driven architectures, or service orchestration to coordinate communication between microservices.
- Microservices allow independent scaling of different components and help isolate failures.
Containerization and Orchestration:
- Containerize the application using technologies like Docker.
- Use container orchestration platforms like Kubernetes to manage and scale the application across multiple containers or pods.
- Containerization provides flexibility, scalability, and easier deployment of the application.
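As a minimal sketch of the horizontal-scaling idea on a single machine, Node.js’s built-in cluster module can fork one worker process per CPU core behind a shared port; the port number here is illustrative.
const cluster = require('cluster');
const http = require('http');
const os = require('os');
// cluster.isPrimary is called cluster.isMaster on older Node.js versions
if (cluster.isPrimary) {
  // Fork one worker per CPU core and replace any worker that exits
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker) => {
    console.log(`Worker ${worker.process.pid} exited, starting a replacement`);
    cluster.fork();
  });
} else {
  // Each worker runs its own HTTP server; incoming connections are distributed among them
  http.createServer((req, res) => {
    res.end(`Handled by worker ${process.pid}`);
  }).listen(3000);
}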
23. How can you handle memory leaks in long-running Node.js processes?
Memory leaks in long-running Node.js processes can lead to excessive memory consumption and degrade the application’s performance over time. Here are some strategies to handle memory leaks:
Identify and analyze memory leaks:
- Use memory profiling tools like Node.js’s built-in --inspect flag, Chrome DevTools, or third-party profilers (e.g., heapdump, clinic) to identify memory leaks.
- Analyze memory snapshots and identify objects that are not being garbage collected as expected.
Review code and third-party dependencies:
- Review your application’s code and third-party dependencies for potential memory leaks.
- Look for places where resources are not released, event listeners are not properly removed, or circular references are created.
Use proper cleanup and disposal mechanisms:
- Make sure to release resources, remove event listeners, and cancel timers when they are no longer needed.
- Use mechanisms like clearInterval(), clearTimeout(), removeListener(), or other cleanup functions to prevent resource leaks.
Manage streams and buffers:
- Ensure that streams and buffers are properly closed and released after use.
- Leaking streams or buffers can accumulate memory over time.
Monitor and restart long-running processes:
- Monitor the memory usage of long-running Node.js processes.
- Implement monitoring solutions that can send alerts or automatically restart the process when memory usage exceeds certain thresholds.
Employ memory leak detection libraries:
- Use libraries like leakage or memwatch-next to detect and track memory leaks in your Node.js application.
- These libraries can help identify leaking objects or unexpected memory growth patterns.
Implement periodic restarts:
- For long-running processes, consider implementing periodic restarts to free up accumulated memory.
- Set up automated processes or scripts to restart the Node.js application at specific intervals or based on memory usage patterns.
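To illustrate the cleanup advice above, here is a hedged sketch of a common leak pattern, a listener registered per request and never removed, and its fix; the event name and handler functions are illustrative assumptions.
const EventEmitter = require('events');
const bus = new EventEmitter();
// Leaky version: a new listener is added on every call and never removed,
// so listeners (and anything they close over) accumulate for the life of the process.
function handleRequestLeaky(req) {
  bus.on('config-updated', () => {
    console.log('Reconfiguring handler for', req.url);
  });
}
// Fixed version: remove the listener when the response is finished,
// or use once() when a single invocation is enough.
function handleRequestFixed(req, res) {
  const onUpdate = () => console.log('Reconfiguring handler for', req.url);
  bus.on('config-updated', onUpdate);
  res.on('finish', () => {
    bus.removeListener('config-updated', onUpdate);
  });
}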
24. How can you optimize the performance of a Node.js application? Discuss different techniques.
Optimizing the performance of a Node.js application involves identifying bottlenecks and applying various techniques to improve efficiency and speed. Here are different techniques to optimize Node.js application performance:
Code optimization:
- Review your code for inefficiencies, unnecessary computations, or synchronous blocking operations.
- Optimize critical sections of code using algorithms or data structures with better time complexity.
- Use techniques like memoization or caching to avoid redundant calculations.
Asynchronous programming:
- Utilize Node.js’s non-blocking, event-driven architecture to handle I/O operations asynchronously.
- Use async/await or Promises to write asynchronous code in a more synchronous style.
- Leverage async libraries like async or bluebird for advanced control flow management.
Stream processing:
- Utilize streams for efficient and scalable data processing.
- Process data in chunks using readable and writable streams, rather than loading the entire data into memory.
- Piping streams can minimize memory usage and improve performance.
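A sketch of chunked processing with the built-in stream.pipeline helper (the file names and the gzip step are assumptions):
const fs = require('fs');
const zlib = require('zlib');
const { pipeline } = require('stream');

// Compress a large file chunk by chunk instead of reading it all into memory.
pipeline(
  fs.createReadStream('large-input.log'),      // hypothetical input file
  zlib.createGzip(),
  fs.createWriteStream('large-input.log.gz'),
  (err) => {
    if (err) {
      console.error('Pipeline failed:', err);
    } else {
      console.log('Pipeline succeeded');
    }
  }
);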
Connection pooling:
- Establish a connection pool for database connections or external API calls.
- Reusing connections from a pool reduces connection overhead and improves response times.
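For instance, a sketch of pooling PostgreSQL connections with the pg library (the connection settings, table, and query are illustrative assumptions):
const { Pool } = require('pg');

// One shared pool per process; connections are reused across requests
// instead of being opened and closed for every query.
const pool = new Pool({
  host: 'localhost',          // hypothetical connection settings
  database: 'mydb',
  max: 10,                    // cap concurrent connections
  idleTimeoutMillis: 30000,
});

async function getUser(id) {
  const { rows } = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
  return rows[0];
}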
Caching:
- Implement caching mechanisms to store frequently accessed data in memory or a distributed cache.
- Use in-memory caches like Redis or Memcached to cache database query results or expensive computations.
- Caching can significantly reduce response times and lower the load on backend systems.
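As a simple illustration, an in-memory cache with a time-to-live that sits in front of an expensive lookup (the 60-second TTL and the fetchFromSource helper are assumptions):
const cache = new Map();
const TTL_MS = 60 * 1000; // hypothetical 60-second time-to-live

async function getCached(key, fetchFromSource) {
  const hit = cache.get(key);
  if (hit && Date.now() - hit.storedAt < TTL_MS) {
    return hit.value;                       // cache hit: skip the backend entirely
  }
  const value = await fetchFromSource(key); // cache miss: fall back to the slow source
  cache.set(key, { value, storedAt: Date.now() });
  return value;
}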
Load balancing:
- Distribute incoming requests across multiple instances or servers using a load balancer.
- Load balancing helps distribute the workload and improves response times under heavy traffic.
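At the level of a single machine, the built-in cluster module provides a basic form of load distribution across CPU cores; a minimal sketch (the HTTP handler is illustrative):
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) { // cluster.isMaster on older Node.js versions
  // Fork one worker per CPU core; the primary distributes incoming connections.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  http.createServer((req, res) => {
    res.end(`Handled by worker ${process.pid}`);
  }).listen(3000);
}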
Proper error handling:
- Implement proper error handling and error recovery mechanisms to prevent crashes or stalled processes.
- Handle errors gracefully and provide meaningful error messages to users.
- Implement error logging and monitoring to identify and fix recurring errors.
Performance monitoring and profiling:
- Use monitoring tools and profilers to identify performance bottlenecks.
- Analyze CPU usage, memory usage, event loop delays, and other metrics to pinpoint areas for improvement.
- Tools like Node.js’s `--inspect` flag, Chrome DevTools, or third-party profilers (e.g., clinic.js, New Relic) can assist in performance analysis.
Scaling and load testing:
- Scale your application horizontally or vertically to handle increased loads.
- Conduct load tests to simulate real-world scenarios and identify performance limitations.
- Monitor the application’s performance during load tests to identify areas for improvement.
25. How can you handle file uploads in a Node.js application?
Handling file uploads in a Node.js application involves receiving files from clients, storing them on the server, and processing them as needed. Here’s how you can handle file uploads:
- Form-based file upload:
- Create an HTML form with an input element of type “file” to allow users to select files.
- On the server side, use a middleware like `multer` to handle file uploads and store the files in a specified directory.
- Example using `multer` middleware:
const express = require('express');
const multer = require('multer');
const app = express();
const upload = multer({ dest: 'uploads/' });
app.post('/upload', upload.single('file'), (req, res) => {
// File is uploaded and accessible via req.file
res.send('File uploaded successfully');
});
app.listen(3000, () => {
console.log('Server started on port 3000');
});
- Streaming file upload:
- Handle file uploads using the streaming approach, which is suitable for large files.
- Use libraries like `busboy` or `formidable` to parse and handle file uploads as streams.
- Example using `formidable`:
const http = require('http');
const formidable = require('formidable');
const server = http.createServer((req, res) => {
if (req.url === '/upload' && req.method === 'POST') {
const form = formidable({ multiples: true });
form.parse(req, (err, fields, files) => {
if (err) {
console.error('Error:', err);
res.writeHead(500);
res.end('File upload failed'); // respond so the client request does not hang
return;
}
// Files are accessible via files object
res.end('File uploaded successfully');
});
return;
}
// Render HTML form for file upload
res.writeHead(200, { 'Content-Type': 'text/html' });
res.write('<form action="/upload" method="post" enctype="multipart/form-data">');
res.write('<input type="file" name="file"><br>');
res.write('<input type="submit">');
res.write('</form>');
res.end();
});
server.listen(3000, () => {
console.log('Server started on port 3000');
});
- Direct file upload to cloud storage:
- Instead of storing files on the server, you can upload them directly to cloud storage services like AWS S3 or Google Cloud Storage.
- Clients can upload files directly to the cloud storage, and your Node.js application receives a callback or notification after the upload is complete.
- Example using AWS S3 with `multer-s3`:
const AWS = require('aws-sdk');
const express = require('express');
const multer = require('multer');
const multerS3 = require('multer-s3');
const app = express();
const s3 = new AWS.S3({ /* AWS configuration */ });
const upload = multer({
storage: multerS3({
s3: s3,
bucket: 'your-bucket-name',
key: (req, file, cb) => {
cb(null, file.originalname);
},
}),
});
app.post('/upload', upload.single('file'), (req, res) => {
// File uploaded to AWS S3
res.send('File uploaded successfully');
});
app.listen(3000, () => {
console.log('Server started on port 3000');
});
These examples demonstrate different approaches to handle file uploads in a Node.js application. You can choose the approach that best suits your requirements, whether it’s storing files on the server or directly uploading them to cloud storage.
26. How do you handle logging in a Node.js application?
Logging is crucial for debugging, monitoring, and understanding the behavior of a Node.js application. Here’s how you can handle logging:
Using the built-in `console` module:
- The simplest way to log messages in Node.js is by using the `console` module.
- Use `console.log()` for general logging, `console.error()` for error messages, and `console.warn()` for warning messages.
- Example:
console.log('This is a log message');
console.error('This is an error message');
console.warn('This is a warning message');
Using third-party logging libraries:
- Third-party logging libraries like `winston`, `pino`, or `bunyan` offer more advanced logging capabilities and flexibility.
- These libraries provide features such as log levels, custom formatting, log rotation, and integration with external services.
- Example using Winston:
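A basic Winston logger might look like the following sketch, writing to the console and to an assumed app.log file:
const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.simple(),
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: 'app.log' }), // hypothetical log file
  ],
});

logger.info('Application started');
logger.error('Something went wrong');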
- Logging with log levels:
- Log messages with different levels of severity to provide more context and prioritize log entries.
- Common log levels include `info`, `debug`, `warn`, `error`, and `fatal`.
- Example using Winston:
logger.debug('Debugging information');
logger.warn('Warning: Potential issue');
- Structured logging:
- Use structured logging formats like JSON or key-value pairs to provide additional data with each log entry.
- This helps in aggregating and analyzing logs efficiently using log analysis tools.
- Example using Winston with JSON format:
const winston = require('winston');
const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  transports: [new winston.transports.Console()],
});
logger.info('User logged in', { username: 'john', role: 'admin' });
- Log rotation and retention:
- Implement log rotation mechanisms to manage log file sizes and prevent them from growing indefinitely.
- Rotate logs based on file size, time intervals, or a combination of both.
- Example using Winston with log rotation:
const winston = require('winston');
const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: 'logs.log', maxsize: 1000000, maxFiles: 5 }),
  ],
});
27. How do you deal with Asynchronous APIs in Node.js?
Dealing with asynchronous APIs is a core aspect of programming in Node.js. Here’s how you can handle asynchronous APIs:
Callbacks:
- The traditional approach in Node.js is to use callbacks to handle asynchronous operations.
- Pass a callback function as an argument to an asynchronous function, which will be called when the operation is complete.
- Example:
function fetchData(callback) {
// Simulating an asynchronous operation
setTimeout(() => {
const data = 'Some data';
callback(null, data);
}, 1000);
}
fetchData((err, data) => {
if (err) {
console.error('Error:', err);
return;
}
console.log('Data:', data);
});
Promises:
- Promises provide a more structured and intuitive way to handle asynchronous operations.
- Wrap asynchronous functions with a Promise, which resolves with the result or rejects with an error.
- Use `.then()` to handle successful resolution and `.catch()` to handle errors.
- Example:
function fetchData() {
return new Promise((resolve, reject) => {
setTimeout(() => {
const data = 'Some data';
resolve(data);
}, 1000);
});
}
fetchData()
.then((data) => {
console.log('Data:', data);
})
.catch((err) => {
console.error('Error:', err);
});
Async/await:
- Async/await, introduced in ECMAScript 2017, provides a more synchronous-looking way to work with asynchronous APIs.
- Mark the containing function as `async` and use the `await` keyword before an asynchronous function call to wait for its resolution.
- Example:
async function fetchData() {
return new Promise((resolve, reject) => {
setTimeout(() => {
const data = 'Some data';
resolve(data);
}, 1000);
});
}
async function process() {
try {
const data = await fetchData();
console.log('Data:', data);
} catch (err) {
console.error('Error:', err);
}
}
process();
28. Explain how to do authentication in Node.js?
Authentication is the process of verifying the identity of a user or system. Here’s how you can implement authentication in Node.js:
- User-based authentication:
- Store user credentials securely, such as hashed passwords, in a database or user store.
- Implement a login route that validates user credentials and issues authentication tokens.
- Example using Express and JWT (JSON Web Tokens):
const express = require('express');
const jwt = require('jsonwebtoken');
const app = express();
// Login route
app.post('/login', (req, res) => {
// Validate user credentials
const user = { id: 1, username: 'john' };
const token = jwt.sign(user, 'your-secret-key');
res.json({ token });
});
// Protected route
app.get('/protected', authenticateToken, (req, res) => {
// Handle the request for protected endpoint
res.send('Protected content');
});
// Middleware to authenticate token
function authenticateToken(req, res, next) {
const authHeader = req.headers.authorization;
const token = authHeader && authHeader.split(' ')[1];
if (!token) {
return res.sendStatus(401);
}
jwt.verify(token, 'your-secret-key', (err, user) => {
if (err) {
return res.sendStatus(403);
}
req.user = user;
next();
});
}
app.listen(3000, () => {
console.log('Server started on port 3000');
});
- Session-based authentication:
- Use session-based authentication with cookies or session tokens.
- Store session data on the server or in a distributed session store like Redis or MongoDB.
- Example using Express and `express-session` middleware:
const express = require('express');
const session = require('express-session');
const app = express();
app.use(
session({
secret: 'your-secret-key',
resave: false,
saveUninitialized: false,
})
);
// Login route
app.post('/login', (req, res) => {
// Validate user credentials
req.session.user = { id: 1, username: 'john' };
res.send('Login successful');
});
// Protected route
app.get('/protected', (req, res) => {
if (req.session.user) {
// Handle the request for protected endpoint
res.send('Protected content');
} else {
res.sendStatus(401);
}
});
app.listen(3000, () => {
console.log('Server started on port 3000');
});
29. How do you perform unit testing in Node.js?
Unit testing is a crucial part of ensuring the reliability and correctness of Node.js applications. Here’s how you can perform unit testing in Node.js:
- Choose a testing framework:
- Select a testing framework like `Mocha`, `Jest`, or `AVA` to write and execute tests.
- These frameworks provide features such as test runners, assertion libraries, and test reporting.
- Write test cases:
- Create test cases that cover different scenarios and edge cases for your application’s functions or modules.
- Use assertions to validate expected outputs and behavior.
- Example using Mocha and Chai:
const { expect } = require('chai');
const { add } = require('./math');
describe('Math', () => {
it('should add two numbers', () => {
const result = add(2, 3);
expect(result).to.equal(5);
});
it('should handle negative numbers', () => {
const result = add(-2, 3);
expect(result).to.equal(1);
});
});
- Set up test runners and test scripts:
- Configure the testing framework’s test runner to execute the test cases.
- Set up test scripts in your project’s `package.json` file.
- Example using Mocha:
{
"scripts": {
"test": "mocha"
}
}
- Run tests:
- Run the test command using the test runner or testing framework of your choice.
- Example using Mocha:
npm test
- Mocking and stubbing dependencies:
- Use mocking or stubbing techniques to isolate units of code and test them in isolation.
- Mock external dependencies or functions using libraries like `sinon` or `jest` to control their behavior during testing.
- Example using Sinon for function stubbing:
const { expect } = require('chai');
const sinon = require('sinon');
const { calculateTotal } = require('./cart');
const db = require('./db');
describe('Cart', () => {
it('should calculate the total price', () => {
const products = [
{ id: 1, name: 'Product 1', price: 10 },
{ id: 2, name: 'Product 2', price: 20 },
];
// Stub the `fetchProducts` function
sinon.stub(db, 'fetchProducts').returns(products);
const result = calculateTotal();
expect(result).to.equal(30);
// Restore the original function
db.fetchProducts.restore();
});
});
30. Explain how to perform error handling when using Promises in Node.js. give relevant code examples
When using Promises in Node.js, error handling is crucial to handle any rejected Promises and ensure proper control flow. Here’s how you can perform error handling with Promises:
Using `.then()` and `.catch()`:
- Chain `.then()` to handle the resolved Promise and provide a callback function that handles successful outcomes.
- Chain `.catch()` to handle any rejected Promises and provide a callback function that handles errors.
- Example:
fetchData()
.then((data) => {
console.log('Data:', data);
})
.catch((err) => {
console.error('Error:', err);
});
Using `try/catch` with `async/await`:
- Wrap the Promise-based function call inside an `async` function.
- Use `try/catch` to handle errors that may occur during the `await` operation.
- Example:
async function process() {
try {
const data = await fetchData();
console.log('Data:', data);
} catch (err) {
console.error('Error:', err);
}
}
process();
Throwing and propagating errors:
- Within a Promise or `async` function, you can use the `throw` statement to reject the Promise or propagate the error.
- The error can then be caught using `.catch()` or a surrounding `try/catch` block.
- Example:
function fetchData() {
return new Promise((resolve, reject) => {
// Simulating an error condition
const somethingWentWrong = true;
if (somethingWentWrong) {
reject(new Error('Something went wrong'));
return;
}
resolve('Some data');
});
}
fetchData()
.then((data) => {
console.log('Data:', data);
})
.catch((err) => {
console.error('Error:', err);
});
Handling errors in Promise chains:
- When working with multiple Promises in a chain, you can handle errors using multiple `.catch()` blocks or by returning a rejected Promise.
- Example:
class MyCustomError extends Error {
constructor(message) {
super(message);
this.name = 'MyCustomError';
}
}
fetchData()
.then((data) => {
if (!data) {
throw new MyCustomError('Data not found');
}
console.log('Data:', data);
})
.catch((err) => {
if (err instanceof MyCustomError) {
console.error('Custom Error:', err.message);
} else {
console.error('Error:', err);
}
});
MCQ Questions
1. What is Node.js?
a) A web browser
b) A server-side runtime environment
c) A programming language
d) A database management system
Answer: b) A server-side runtime environment
2. Which programming language is commonly used with Node.js?
a) Java
b) Python
c) JavaScript
d) C++
Answer: c) JavaScript
3. What is the purpose of the Node Package Manager (NPM) in Node.js?
a) To manage and install Node.js packages and dependencies
b) To compile JavaScript code
c) To execute Node.js applications
d) To debug Node.js applications
Answer: a) To manage and install Node.js packages and dependencies
4. Which module is used to create a web server in Node.js?
a) fs
b) path
c) http
d) url
Answer: c) http
5. What is the event-driven programming paradigm in Node.js?
a) A programming paradigm based on objects and classes
b) A programming paradigm based on functions and callbacks
c) A programming paradigm based on static typing
d) A programming paradigm based on threads and processes
Answer: b) A programming paradigm based on functions and callbacks
6. What is the purpose of the “require” function in Node.js?
a) To import modules and dependencies
b) To create a new instance of a class
c) To define a function
d) To execute a callback function
Answer: a) To import modules and dependencies
7. What is the file system module in Node.js used for?
a) Managing network connections
b) Handling HTTP requests and responses
c) Manipulating files and directories
d) Executing shell commands
Answer: c) Manipulating files and directories
8. What is the purpose of the “exports” object in Node.js?
a) To define global variables
b) To define and expose functions or objects to be used in other modules
c) To handle errors and exceptions
d) To create HTTP servers
Answer: b) To define and expose functions or objects to be used in other modules
9. Which module is used for handling streams in Node.js?
a) net
b) http
c) fs
d) stream
Answer: d) stream
10. How can you handle errors in Node.js?
a) Using try-catch statements
b) Using if-else statements
c) Using switch statements
d) Using while loops
Answer: a) Using try-catch statements
11. What is the purpose of the “cluster” module in Node.js?
a) To manage child processes and create a cluster of servers
b) To encrypt and decrypt data
c) To handle HTTP requests and responses
d) To parse and manipulate JSON data
Answer: a) To manage child processes and create a cluster of servers
12. What is the purpose of the “os” module in Node.js?
a) To manage files and directories
b) To handle network connections
c) To manipulate URLs
d) To interact with the operating system
Answer: d) To interact with the operating system
13. What is the purpose of the “crypto” module in Node.js?
a) To create web servers
b) To handle HTTP requests and responses
c) To encrypt and decrypt data
d) To manipulate files and directories
Answer: c) To encrypt and decrypt data
14. What is the purpose of the “child_process” module in Node.js?
a) To manage child processes
b) To handle network connections
c) To create web servers
d) To manipulate files and directories
Answer: a) To manage child processes
15. What is the purpose of the “url” module in Node.js?
a) To handle HTTP requests and responses
b) To create web servers
c) To manipulate URLs
d) To interact with the operating system
Answer: c) To manipulate URLs
16. Which of the following is NOT a built-in module in Node.js?
a) http
b) fs
c) request
d) path
Answer: c) request
17. What is the purpose of the “net” module in Node.js?
a) To handle network connections
b) To create web servers
c) To manipulate files and directories
d) To interact with the operating system
Answer: a) To handle network connections
18. What is the purpose of the “util” module in Node.js?
a) To handle HTTP requests and responses
b) To manipulate files and directories
c) To interact with the operating system
d) To provide utility functions for debugging and formatting
Answer: d) To provide utility functions for debugging and formatting
19. What is the purpose of the “dns” module in Node.js?
a) To handle network connections
b) To create web servers
c) To manipulate URLs
d) To resolve domain names
Answer: d) To resolve domain names
20. Which module is used for unit testing in Node.js?
a) assert
b) http
c) fs
d) net
Answer: a) assert
21. What is the purpose of the `require()` function in Node.js?
- [ ] A. It is used to import Node.js modules.
- [ ] B. It is used to export functions and variables from a Node.js module.
- [ ] C. It is used to define a new module in Node.js.
- [ ] D. It is used to execute JavaScript code in Node.js.
Answer: A. It is used to import Node.js modules.
22. Which of the following is NOT a built-in module in Node.js?
- [ ] A. `fs`
- [ ] B. `http`
- [ ] C. `path`
- [ ] D. `mongoDB`
Answer: D. mongoDB
23. What is the purpose of the `process` object in Node.js?
- [ ] A. It is used to manage child processes in Node.js.
- [ ] B. It is used to handle HTTP requests and responses in Node.js.
- [ ] C. It provides information about the current Node.js process and allows interaction with it.
- [ ] D. It is used to manipulate and query databases in Node.js.
Answer: C. It provides information about the current Node.js process and allows interaction with it.
24. Which of the following is the correct way to handle asynchronous operations in Node.js?
- [ ] A. Using callbacks
- [ ] B. Using promises
- [ ] C. Using async/await
- [ ] D. All of the above
Answer: D. All of the above
25. Which HTTP method is typically used to retrieve data from a server in Node.js?
- [ ] A. GET
- [ ] B. POST
- [ ] C. PUT
- [ ] D. DELETE
Answer: A. GET
26. What is the purpose of the `next()` function in Express.js middleware?
- [ ] A. It is used to pass control to the next middleware function.
- [ ] B. It is used to terminate the request and send a response.
- [ ] C. It is used to handle errors in the middleware stack.
- [ ] D. It is used to retrieve request parameters.
Answer: A. It is used to pass control to the next middleware function.
27. Which of the following is NOT a valid way to handle errors in Node.js?
- [ ] A. Using try/catch blocks
- [ ] B. Using error-first callbacks
- [ ] C. Using error events
- [ ] D. Using the `finally` statement
Answer: D. Using the `finally` statement
28. Which of the following is a popular database framework for Node.js?
- [ ] A. MongoDB
- [ ] B. Express.js
- [ ] C. Angular.js
- [ ] D. React.js
Answer: A. MongoDB
29. What is the purpose of the `npm` command in Node.js?
- [ ] A. It is used to install Node.js packages.
- [ ] B. It is used to manage Node.js versions.
- [ ] C. It is used to execute JavaScript code.
- [ ] D. It is used to start a Node.js server.
Answer: A. It is used to install Node.js packages.
30. Which of the following is NOT a core module in Node.js?
- [ ] A. `fs`
- [ ] B. `http`
- [ ] C. `os`
- [ ] D. `express`
Answer: D. express