Apache Storm's operation modes describe how a topology is deployed and run. Apache Storm supports two operation modes.

1. Local Operation Mode

Local operation mode is used for developing, testing, and debugging a topology. In this mode, the entire topology runs inside a single JVM, and we can adjust configuration parameters to see how the topology would behave in different Storm environments. A minimal sketch is shown below.
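
As a minimal sketch (assuming Storm 1.x or later, with MySpout and MyBolt standing in for your own spout and bolt implementations), running a topology in local mode looks roughly like this:

```java
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.topology.TopologyBuilder;

public class LocalModeExample {
    public static void main(String[] args) throws Exception {
        // MySpout and MyBolt are placeholders for your own spout and bolt classes.
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("my-spout", new MySpout(), 1);
        builder.setBolt("my-bolt", new MyBolt(), 2).shuffleGrouping("my-spout");

        Config conf = new Config();
        conf.setDebug(true);   // verbose logging is handy while debugging locally

        // The whole topology runs inside this single JVM.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("local-test-topology", conf, builder.createTopology());

        Thread.sleep(10_000);  // let the topology run for a short while
        cluster.killTopology("local-test-topology");
        cluster.shutdown();
    }
}
```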


2. Production Operation Mode

In production operation mode, the user submits the topology, packaged with all the code needed to run it, to a cluster of Storm nodes. Once the topology is submitted, the Nimbus node distributes the code to the required nodes for execution, as sketched below.
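
A production submission differs mainly in using StormSubmitter instead of LocalCluster; again, MySpout and MyBolt are placeholders for your own components:

```java
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class ProductionModeExample {
    public static void main(String[] args) throws Exception {
        // MySpout and MyBolt are placeholders for your own spout and bolt classes.
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("my-spout", new MySpout(), 2);
        builder.setBolt("my-bolt", new MyBolt(), 4).shuffleGrouping("my-spout");

        Config conf = new Config();
        conf.setNumWorkers(3);  // worker JVMs spread across the Supervisor nodes

        // Sends the topology to Nimbus, which distributes it across the cluster.
        StormSubmitter.submitTopology("prod-topology", conf, builder.createTopology());
    }
}
```

The compiled jar is then typically launched with the storm jar command, for example `storm jar my-topology.jar com.example.ProductionModeExample`, where the jar and class names here are illustrative.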


Apache Storm Workflow

Apache Storm is a distributed, real-time processing system whose cluster consists of one Nimbus node and one or more Supervisor nodes. In addition, an Apache Zookeeper ensemble is used to coordinate communication between the Nimbus node and the Supervisor nodes.
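
As a rough illustration of how these pieces are wired together (the hostnames are hypothetical, and the keys mirror the usual storm.yaml entries), a client-side configuration might look like this:

```java
import java.util.Arrays;
import org.apache.storm.Config;

public class ClusterClientConfig {
    public static Config clusterConf() {
        Config conf = new Config();
        // Zookeeper ensemble that Nimbus and the Supervisor nodes coordinate through
        conf.put("storm.zookeeper.servers", Arrays.asList("zk1.example.com", "zk2.example.com"));
        // Nimbus host(s) a client contacts when submitting a topology
        conf.put("nimbus.seeds", Arrays.asList("nimbus.example.com"));
        return conf;
    }
}
```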

Let us see the steps of the Apache Storm workflow.

Step 1:

Initially, the Storm Nimbus node waits, checking whether a Storm topology has been submitted or not.

Step 2:

When a Storm topology is submitted, the Nimbus node gathers the details of all the tasks that need to be executed.

Step 3:

The Nimbus node then distributes the gathered tasks among the Supervisor nodes.

Step 4:

Every Supervisor node keeps sending heartbeats to the Nimbus node to indicate that it is still alive.

Step 5:

If a Supervisor node goes down and can no longer send heartbeats to the Nimbus node, the Nimbus node reassigns its tasks to another Supervisor node.

Step 6:

If the Nimbus node goes down, the tasks already assigned to the Supervisor nodes are not affected; the Supervisors keep working on them.

Step 7:

Once the Supervisor nodes complete all their tasks, they wait for new tasks, while the failed Nimbus node is restarted automatically by a service monitoring (SM) tool.

Step 8:

Once the Nimbus node has been restarted, it resumes from the point where it failed and, if there are further tasks to assign, continues sending them to the Supervisor nodes.