Saturday, December 6, 2025

Understanding the Java Collections Framework: A Complete Guide for Developers

 When working with Java, one of the most essential toolkits you’ll encounter is the Java Collections Framework (JCF). Whether you're building enterprise applications, handling large datasets, or preparing for interviews, having a solid grasp of JCF gives you a huge advantage.

In this post, we’ll break down the fundamentals of Collections, the differences between its main interfaces, and dive deep into how HashMap works internally—one of the most common interview topics in Java roles.


1. What is the Java Collections Framework?

The Java Collections Framework (JCF) is a unified architecture for storing, retrieving, and manipulating groups of objects efficiently. Instead of building data structures from scratch, JCF provides a complete set of reusable interfaces and classes.

What it includes:

➤ Interfaces

  • List

  • Set

  • Map

  • Queue

These define how data should behave.

➤ Implementations

  • ArrayList, LinkedList

  • HashSet, TreeSet

  • HashMap, TreeMap

These are ready-to-use data structures.

➤ Algorithms

  • Searching

  • Sorting

  • Shuffling

These are provided through the Collections utility class.

➤ Utility Classes

  • Collections – for algorithms

  • Arrays – for array operations
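As a quick illustration, sorting and searching through these utility classes looks like this (a minimal, self-contained sketch; the class name is just for the demo):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class UtilityDemo {
    public static void main(String[] args) {
        // Arrays.asList bridges an array into a List; wrap it to make it mutable
        List<Integer> numbers = new ArrayList<>(Arrays.asList(5, 1, 4, 2, 3));

        Collections.sort(numbers);                       // sorting
        int idx = Collections.binarySearch(numbers, 4);  // searching (list must be sorted first)
        Collections.shuffle(numbers);                    // shuffling

        System.out.println(idx); // 3 (position of 4 in the sorted list)
    }
}
```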

Why does JCF exist?

To save developers from reinventing the wheel. It offers:

  • Optimized performance

  • Standardized architecture

  • Well-tested implementations

  • Cleaner and more maintainable code


2. Difference Between List, Set, and Map

Different collection types serve different purposes. Understanding how they differ is crucial when choosing which one to use.

Comparison Table

Feature                | List                           | Set                                         | Map
Stores                 | Ordered collection of elements | Unordered (or rule-ordered) unique elements | Key-value pairs
Allows duplicates      | ✔ Yes                          | ✖ No                                        | Keys: No, Values: Yes
Access order           | Indexed (can get by index)     | No index                                    | No index
Common implementations | ArrayList, LinkedList          | HashSet, LinkedHashSet, TreeSet             | HashMap, TreeMap, LinkedHashMap

Summary

  • List: Ordered, indexed, duplicates allowed

  • Set: Unique elements only, no duplicates

  • Map: Stores key → value pairs

Choosing the right collection ensures better performance and cleaner data modeling.
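The three behaviors can be seen side by side in a short example (the class name is illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class CollectionDifferences {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        list.add("a");
        list.add("a");                    // duplicates allowed, insertion order kept
        System.out.println(list.get(1));  // indexed access: "a"

        Set<String> set = new HashSet<>();
        set.add("a");
        set.add("a");                     // second add is silently ignored
        System.out.println(set.size());   // 1

        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("a", 2);                  // duplicate key overwrites the value
        System.out.println(map.get("a")); // 2
    }
}
```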


3. HashMap vs ConcurrentHashMap

When working with Java applications—especially multi-threaded ones—understanding these two Map implementations is critical.

HashMap

  • Not thread-safe

  • Allows one null key and multiple null values

  • Race conditions may occur in multi-threaded environments

  • Uses a single underlying array with no synchronization

ConcurrentHashMap

  • Fully thread-safe

  • Does not allow null keys or values

  • Uses fine-grained locking (bucket-level or segment-level)

  • Designed for high performance under concurrency

  • Much faster than a synchronized HashMap



Side-by-Side Comparison
Aspect                        | HashMap    | ConcurrentHashMap
Thread-safe?                  | ❌ No      | ✔ Yes
Null keys/values              | ✔ Allowed  | ❌ Not allowed
Locking                       | No locks   | Bucket-level locking
Performance under concurrency | Unsafe     | High performance
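A small sketch of why this matters in practice: ConcurrentHashMap's merge() performs atomic per-key updates, and null keys are rejected (the class name and counts are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> hits = new ConcurrentHashMap<>();

        Runnable task = () -> {
            for (int i = 0; i < 1_000; i++) {
                hits.merge("page", 1, Integer::sum); // atomic per-key update
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println(hits.get("page")); // always 2000; a plain HashMap could lose updates

        try {
            hits.put(null, 1);                // nulls are rejected
        } catch (NullPointerException e) {
            System.out.println("null key rejected");
        }
    }
}
```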

4. How HashMap Works Internally

Understanding HashMap’s internal mechanics is one of the most frequently asked Java interview topics. Here’s how it operates under the hood.

Step-by-Step Workflow

  1. The key’s hash code is computed.

  2. That hash is processed into a bucket index:

     index = hash(key.hashCode()) & (n - 1)

     where n is the array size.

  3. The key-value pair is stored in the resulting bucket (array slot).
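The index calculation can be reproduced directly. The hash() helper below mirrors the bit-spreading that java.util.HashMap applies in Java 8+ (XORing in the high 16 bits); the demo class itself is illustrative:

```java
public class BucketIndexDemo {
    // Simplified copy of HashMap's internal hash spreading (Java 8+):
    // mixes the high bits into the low bits so the mask below sees them.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16; // default table size; always a power of two
        String key = "hello";
        int index = hash(key) & (n - 1); // equivalent to (hash % n) when n is a power of two
        System.out.println("bucket index: " + index); // 11 for "hello" with n = 16
    }
}
```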


🧩 How HashMap Handles Collisions

A collision happens when two different keys map to the same bucket index.

Before Java 8

  • Collisions were handled with a linked list.

  • New entries were inserted at the head of the list.

  • Worst-case time complexity: O(n)

Java 8 and Later

  • If a bucket’s chain grows past a threshold (default = 8) and the table has at least 64 buckets,
    the linked list is converted into a Red-Black Tree (below that table size, HashMap resizes instead).

  • Tree operations reduce lookup time to O(log n).

Collision Flow

  1. Compute bucket index

  2. If bucket is empty → store entry

  3. If bucket has entries:

    • Compare hash codes

    • If equal, compare keys using equals()

    • If key exists → update value

    • Otherwise → add to list/tree

  4. If entries exceed threshold → convert to tree
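This flow is easy to observe by forcing collisions with a deliberately bad hashCode() (the BadKey class is a contrived example for demonstration, not something you would ship):

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Hypothetical key whose hashCode forces every instance into the same bucket
    static final class BadKey {
        final String name;
        BadKey(String name) { this.name = name; }

        @Override public int hashCode() { return 42; }  // every key collides
        @Override public boolean equals(Object o) {     // equals() still distinguishes keys
            return o instanceof BadKey && ((BadKey) o).name.equals(name);
        }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        map.put(new BadKey("a"), 1);
        map.put(new BadKey("b"), 2);  // same bucket, different key -> chained, not overwritten
        map.put(new BadKey("a"), 3);  // same bucket, equal key    -> value updated

        System.out.println(map.size());               // 2
        System.out.println(map.get(new BadKey("a"))); // 3
    }
}
```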


🧠 Rehashing: When HashMap Grows

HashMap uses a load factor (default = 0.75).
When exceeded:

  • The array size doubles

  • All entries are redistributed (rehash)

  • This is computationally expensive

Rehashing keeps the HashMap efficient but can affect performance during resizing.
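One practical consequence: if you know the expected entry count in advance, you can pre-size the map so that no rehashing occurs during bulk insertion (a minimal sketch; the sizing formula follows from the load factor above):

```java
import java.util.HashMap;
import java.util.Map;

public class PresizeDemo {
    public static void main(String[] args) {
        int expected = 10_000;

        // Size the table up front so that (capacity * 0.75) >= expected,
        // avoiding repeated resize/rehash passes while inserting.
        Map<Integer, String> map = new HashMap<>((int) (expected / 0.75f) + 1);

        for (int i = 0; i < expected; i++) {
            map.put(i, "v" + i);
        }
        System.out.println(map.size()); // 10000
    }
}
```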


Final Thoughts

The Java Collections Framework is one of the most powerful and widely used components of the Java ecosystem. Understanding how Lists, Sets, and Maps differ—and how HashMap works internally—can dramatically improve your development skills and help you perform better in technical interviews.

If you want a follow-up post on:

✔ HashSet vs TreeSet
✔ ArrayList vs LinkedList
✔ Concurrent collections
✔ Big-O analysis of common operations

Just let me know!

Wednesday, November 12, 2025

🧠 Git Merging Branches Tutorial — Step-by-Step Guide

 Working with branches in Git is one of the most powerful ways to manage your code. In this tutorial, we’ll walk through how to create a repository, add branches, make commits, and finally merge multiple branches into your main branch.


🪴 Step 1: Create a New Git Repository

Let’s start by initializing a new Git repository in your local project folder:

git init

This creates a hidden .git folder — the home of all your version history.

Next, connect your local repository to a remote one (for example, on GitHub):

git remote add origin https://github.com/unicornautomata/test_app.git

Now fetch existing data (if any) and pull the latest changes from the remote repository:

git fetch origin

git pull origin main

This ensures your local main branch is up to date.


🌿 Step 2: Create and Work on Branch AA

Let’s create a new branch named AA and switch to it:

git checkout -b AA

Now, add a new file to this branch:

git add git_tut.txt

git commit -m "Add git_tut.txt file"

git push -u origin AA

This pushes branch AA to your GitHub repository.


🌳 Step 3: Create and Work on Branch BB

Go back to the main branch, then create another branch named BB:

git checkout main

git checkout -b BB

Now add two files — New.txt and git_tut.txt — then commit and push your changes:

git add New.txt git_tut.txt

git commit -m "Add New.txt and update git_tut.txt"

git push -u origin BB

🌼 Step 4: Create a Merge Branch (CC)

Now, we’ll merge both AA and BB into a new branch called CC.

Switch back to main and create CC:

git checkout main

git checkout -b CC

Then merge the other two branches:

git merge AA

git merge BB

If everything merges cleanly, push CC to the remote repository:

git push -u origin CC

If a conflict occurs — meaning there’s a change in branch BB that overlaps with the current version of the file in CC — simply open the file in your code editor (in my case, Atom). You’ll see sections labeled something like “your changes” and “their changes”, often with buttons to choose between them.

If you want to keep only one version, click the corresponding button. If you prefer to keep both changes, remove the conflict markers (the <<<<<<<, =======, and >>>>>>> lines), adjust the content as needed, and save the file.

After resolving, stage the files and complete the merge commit, then push branch CC to the remote repository:

git add .

git commit -m "Resolve merge conflicts"

git push -u origin CC


🌻 Step 5: Merge Everything Back to Main

Finally, let’s bring all your updates back into the main branch.

git checkout main

git merge CC

git push -u origin main

 Your main branch now contains all the changes from AA, BB, and CC.


🎉 Step 6: Verify Everything

You can check your branches and commit history to verify your merges:

git log --oneline --graph --all

This shows a visual representation of your branches and merge history.

🧩 Summary

Here’s what we accomplished:

  • Initialized a Git repository and connected it to GitHub

  • Created branches (AA, BB, CC)

  • Added and committed changes on each branch

  • Merged all branches cleanly back into main

You’ve just completed a practical hands-on Git merging tutorial! 🚀

Wednesday, October 15, 2025

🛡️ Building a Web Application Firewall (WAF) Microservice in Java Spring Boot

 In today’s cloud-native microservice architectures, security is no longer just about firewalls or SSL certificates. Attackers exploit vulnerabilities like SQL Injection, Cross-Site Scripting (XSS), and Command Injection directly through application endpoints.

This is where a Web Application Firewall (WAF) comes in — acting as the first line of defense for your microservices.

In this post, we’ll build a lightweight, extensible WAF microservice using Java Spring Boot that inspects incoming HTTP traffic and blocks malicious requests before they reach your core application logic.


🚀 What We'll Build

We’ll create a WAF microservice that:

  • Intercepts all incoming requests.

  • Scans parameters, headers, and body content for malicious patterns (SQLi, XSS, etc).

  • Logs and blocks suspicious requests with a 403 Forbidden.

  • Can be deployed in front of any backend microservice (as an API Gateway or Spring Cloud Gateway filter).


๐Ÿ—️ Architecture Overview

[Client Request]
       ↓
[WAF Service]
  - Input Inspection
  - Threat Detection
  - Logging & Decision
       ↓
[Target Microservice]
(e.g., Blog API, Auth, Product Service)

Key Components:

  • WafFilter — a Spring Boot filter that analyzes incoming requests.

  • ThreatPatternRegistry — holds regex-based threat detection rules.

  • RequestAnalyzer — performs scanning and risk scoring.

  • WafController — optional endpoint to check health, metrics, or add dynamic rules.


🧩 Project Setup

Create a new Spring Boot project.

๐Ÿ” Step 1: Define Threat Patterns

Create a class to register known malicious signatures
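One possible shape for such a registry — the class name matches the component list above, but the regex rules are a small illustrative sample, not a complete OWASP signature set:

```java
import java.util.List;
import java.util.regex.Pattern;

public class ThreatPatternRegistry {
    // Illustrative signatures only; real rule sets are far more extensive.
    private static final List<Pattern> PATTERNS = List.of(
        Pattern.compile("(?i)('|%27)\\s*(or|and)\\s+\\d+=\\d+"),   // basic SQLi tautology
        Pattern.compile("(?i)union\\s+select"),                     // SQLi UNION probe
        Pattern.compile("(?i)<script[^>]*>"),                       // reflected XSS
        Pattern.compile("(?i)(;|\\|\\||&&)\\s*(cat|ls|whoami)\\b")  // command injection
    );

    public static boolean isMalicious(String input) {
        if (input == null) return false;
        return PATTERNS.stream().anyMatch(p -> p.matcher(input).find());
    }
}
```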

🧠 Step 2: Request Analyzer

Create a service that scans the request for these patterns

🚦 Step 3: WAF Filter

Add a Spring filter that blocks malicious requests before they reach controllers.
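A hedged sketch of what this filter could look like in Spring Boot 3 (it assumes a ThreatPatternRegistry.isMalicious() helper as described in Step 1; a fuller version would also wrap the request to inspect the body):

```java
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import java.io.IOException;

@Component
public class WafFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        // Collect the query string and parameter values into one scan buffer
        StringBuilder sb = new StringBuilder();
        if (request.getQueryString() != null) sb.append(request.getQueryString());
        request.getParameterMap().values()
               .forEach(values -> { for (String v : values) sb.append(' ').append(v); });

        if (ThreatPatternRegistry.isMalicious(sb.toString())) {
            // Block before the request ever reaches a controller
            response.sendError(HttpServletResponse.SC_FORBIDDEN, "Request blocked by WAF");
            return;
        }
        chain.doFilter(request, response);
    }
}
```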

⚙️ Step 4: Application Entry Point

🧾 Step 5: Configuration

Add this to application.yml

🧪 Step 6: Test Your WAF

Run your WAF service.

☁️ Step 7: Deploying with Other Services

In a microservice ecosystem:

  • Deploy waf-microservice as a reverse proxy or API gateway filter in front of others (e.g., Auth, Blog, Payment).

  • Configure Kubernetes ingress or Docker Compose to route requests through WAF first.

Example:

[Client] → [WAF:9090] → [Auth:8081] / [Blog:8083] / [User:8085]

🧩 Optional Enhancements

  • Add JWT validation to allow/block by role.

  • Integrate with Redis to track repeated attackers (IP blocking).

  • Add Prometheus metrics for blocked requests.

  • Add an admin UI for live rule management.

  • Integrate with Kafka to publish threat logs to SIEM systems.


✅ Conclusion

With just a few classes, we built a microservice WAF capable of catching basic OWASP Top 10 threats and protecting your backend APIs.
While not a replacement for enterprise-grade firewalls, this pattern is perfect for internal microservice environments, staging pipelines, or API-level protection.

Security should always be layered — combine this WAF with:

  • Proper input validation and sanitization.

  • HTTPS/TLS encryption.

  • SAST and DAST tools (SonarQube, OWASP ZAP, Fortify).

  • Cloud WAF (e.g., AWS WAF, Cloudflare) for production-grade traffic.


๐Ÿ”— Related Reading

Monday, October 13, 2025

🚀 Building a Zero Trust, Cloud-Native, Polyglot Microservice Ecosystem

 

๐ŸŒ Introduction

In the modern enterprise, agility and security must coexist. As organizations shift from monoliths to microservices, the challenge is not only scaling services but doing so under a Zero Trust architecture, ensuring that every component—from APIs to databases—authenticates, verifies, and monitors every interaction.

In this article, we’ll explore how to architect a Zero Trust, Cloud-Native, Polyglot Microservice Ecosystem that’s secure, scalable, observable, and language-agnostic.


Here is the System Architecture Diagram:

                +----------------------------+
                |   React / React Native     |
                |    (Web & Mobile UIs)      |
                +-------------+--------------+
                              |
                              
                   +----------------------+
                   |  Spring Cloud API    |
                   |      Gateway         |
                   +----------+-----------+
                              |
         +------------------------------------------------+
         |                Load Balancer                   |
         +------------------------------------------------+
             |          |           |          |         |
                                                     
     +----------+ +-----------+ +----------+ +----------+ +-----------+
     | Auth Svc | |  User Svc | | OrderSvc | | StockSvc | | Python AI |
     | (OAuth2) | |  (Spring) | | (Spring) | | (Spring) | |  (Flask)  |
     +----------+ +-----------+ +----------+ +----------+ +-----------+
             |          |           |          |         |
                                                     
          Redis      PostgreSQL   MySQL     Kafka      Object Store
          (Cache)    (UserData)   (Orders)  (Events)   (Images/ML)

🧱 Core Architectural Principles

  1. Zero Trust by Design – Never trust, always verify.
    Every service call, even internal ones, must be authenticated and authorized.

  2. Cloud-Native Infrastructure – Leverage containers, orchestration, and immutable deployments.

  3. Polyglot Flexibility – Different teams can build services in different languages (Java, Python, Go, etc.).

  4. Observable Everything – Metrics, logs, traces, and dashboards at every level.

  5. Self-Healing & Auto-Scaling – Kubernetes ensures services recover and scale automatically.

  6. Stateless Microservices – Maintain scalability and resilience via decoupled data layers.


๐Ÿ—️ The Ecosystem Overview

🟢 1. API Gateway (Spring Cloud Gateway)

  • Acts as the entry point to all services.

  • Enforces authentication (JWT, OAuth2), CSRF, and rate-limiting.

  • Provides service routing via lb:// URIs and integrates with Spring Cloud LoadBalancer.

  • Adds response caching using Redis for efficiency.

⚖️ 2. Service Discovery (Eureka or Kubernetes DNS)

  • Each microservice registers itself dynamically.

  • Enables client-side load balancing and service failover.

  • Kubernetes can natively handle service discovery via internal DNS if Eureka is not desired.

๐Ÿ” 3. Authentication & Authorization Service

  • Stateless Spring Boot or Keycloak-based service.

  • Issues signed JWT tokens with short lifetimes.

  • Enforces role-based access control (RBAC) and supports federated identity (Google, Azure AD, etc.).


💡 Example Microservices

Service              | Language              | Purpose
Auth-Service         | Java (Spring Boot)    | Issues and validates JWT tokens
User-Service         | Java (Spring Boot)    | Manages user profiles and credentials
Order-Service        | Java                  | Handles product orders, transactions
Analytics-Service    | Python (Flask/Django) | Consumes Kafka events, performs ML predictions
Notification-Service | Node.js               | Sends real-time notifications (WebSocket + Redis pub/sub)

🧩 Supporting Infrastructure

🔄 Redis

  • Used as a session store, cache, and rate limiter.

  • Also supports pub/sub patterns for real-time event communication.

📬 Kafka

  • The backbone for event-driven architecture.

  • Services communicate asynchronously through topics.

  • Enables decoupled microservice interactions and real-time stream processing.

🧰 Databases

  • Each service has its own database (PostgreSQL, MySQL, MongoDB, etc.).

  • Avoids tight coupling between services and prevents schema conflicts.

  • Can use Debezium + Kafka for change data capture (CDC).


🧠 Observability Stack

📊 Prometheus

  • Collects metrics from Spring Boot Actuator endpoints (/actuator/prometheus).

  • Scrapes metrics across all containers and pods in Kubernetes.

📈 Grafana

  • Visualizes service health, latency, and throughput in real time.

  • Provides dashboards for DevOps and business analytics.

🧾 Elasticsearch + Fluentd + Kibana (EFK)

  • Fluentd collects logs from all containers.

  • Elasticsearch stores and indexes them.

  • Kibana visualizes logs and error trends.

๐Ÿ” Tracing

  • Jaeger or Zipkin used for distributed tracing.

  • Allows tracing requests across multiple microservices.


🧱 Security & Zero Trust Enforcement

🔒 Perimeterless Security

  • No implicit trust within the network.

  • All microservices require signed tokens even for inter-service calls.

🧾 API Gateway Rules

  • Verifies JWTs before forwarding requests.

  • Applies request throttling and rate limits per IP or user ID.

  • Supports CSRF protection for browser-based requests.

🔑 mTLS (Mutual TLS)

  • Encrypted communication between microservices.

  • Certificates issued via internal PKI or service mesh (e.g., Istio).


๐Ÿณ Deployment & Orchestration

๐Ÿณ Docker

Each microservice has its own Dockerfile:

FROM openjdk:17-jdk-slim
COPY target/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

☸️ Kubernetes

  • Uses Deployments, Services, and Ingress.

  • Helm charts can automate deployment and configuration.

  • Supports horizontal pod autoscaling (HPA) based on CPU/memory metrics.


🧰 Developer Tools Integration

Tool                    | Purpose
Swagger / OpenAPI       | Auto-generates REST documentation for each service
Postman / Insomnia      | API testing
Grafana Loki            | Log aggregation
Prometheus Alertmanager | Incident alerting
K9s / Lens              | Kubernetes monitoring tools

💻 Frontends

Platform | Framework                | Description
Web      | React.js                 | SPA communicating via API Gateway
Mobile   | React Native             | Cross-platform mobile apps
Desktop  | Java Swing / Python PyQt | Native enterprise desktop control panels

๐ŸŒ End-to-End Flow

  1. User logs in through React frontend → hits API Gateway.

  2. Gateway authenticates via Auth Service and issues JWT.

  3. User’s request routes to Order Service → queries Redis cache or database.

  4. Events emitted to Kafka, consumed by Analytics Service (Python).

  5. Metrics scraped by Prometheus, visualized in Grafana.

  6. Logs stored in Elasticsearch, searchable in Kibana.


🧭 Conclusion

By integrating Spring Cloud Gateway, Kafka, Redis, Prometheus, Grafana, and Kubernetes, you create a self-healing, zero-trust, polyglot ecosystem ready for enterprise workloads.
This architecture promotes independent service evolution, scalable deployments, and continuous observability — the very essence of cloud-native excellence.


⚙️ Tech Stack Summary

Layer             | Technology
API Gateway       | Spring Cloud Gateway
Service Discovery | Eureka / Kubernetes DNS
Communication     | Kafka, REST, mTLS
Load Balancing    | Spring Cloud LoadBalancer / K8s
Caching           | Redis
Databases         | PostgreSQL, MySQL
Monitoring        | Prometheus + Grafana
Logging           | Elasticsearch + Fluentd + Kibana
Security          | JWT, OAuth2, CSRF, mTLS
Frontends         | React, React Native, Java Swing, Python PyQt

Tuesday, October 7, 2025

🧠 How to Upgrade Your Spring Boot Login for Full OWASP Protection (XSS, CSRF, HttpOnly JWT)

 Modern web apps often use localStorage for JWTs — but that’s risky.

localStorage is accessible to JavaScript, so an XSS attack can easily steal your token.
The proper way: use HttpOnly cookies + CSRF tokens.



Here’s how to transform your existing /login endpoint securely without breaking Kafka, Redis caching, or rate limiting.


🪜 Step-by-Step Migration Plan

Step 1: Switch from LocalStorage to HttpOnly Secure Cookie

  • Instead of returning the JWT in the response body, set it as an HttpOnly, Secure, SameSite=Strict (or Lax) cookie.

  • These cookies can’t be accessed by JavaScript — protecting against XSS.

  • No change is needed in your Kafka or Redis logic — they’ll continue working because you’re just changing how the token is delivered, not the backend authentication logic.

💡 Kafka login notifications and Redis login limiters will remain unaffected, since they trigger before token issuance.
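As an illustration, the cookie can be set with Spring's ResponseCookie builder (the cookie name and max age are example choices; in a real controller this would wrap your existing login response):

```java
import org.springframework.http.HttpHeaders;
import org.springframework.http.ResponseCookie;
import org.springframework.http.ResponseEntity;

public class LoginResponseExample {
    // Called after authentication succeeds; 'jwt' is the freshly issued token.
    static ResponseEntity<Void> withJwtCookie(String jwt) {
        ResponseCookie cookie = ResponseCookie.from("access_token", jwt)
                .httpOnly(true)      // invisible to JavaScript -> XSS cannot read it
                .secure(true)        // sent over HTTPS only
                .sameSite("Strict")  // or "Lax" if you need cross-site top-level navigation
                .path("/")
                .maxAge(900)         // match the token's short lifetime (15 minutes here)
                .build();

        // The token travels only in the Set-Cookie header, never in the body
        return ResponseEntity.ok()
                .header(HttpHeaders.SET_COOKIE, cookie.toString())
                .build();
    }
}
```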


 


Step 2: Introduce a CSRF Token

  • When a user logs in, generate a CSRF token (a random UUID).

  • Store this token:

    • Option 1: in Redis (recommended if you already use Redis)

    • Option 2: in an SQL table (csrf_tokens)

  • Send this token as a non-HttpOnly cookie or via a header (so frontend can read it).

  • Frontend includes this token in every state-changing request header:

X-CSRF-TOKEN: <token>

🛡️ The backend will reject any POST/PUT/DELETE without a valid CSRF token that matches the user’s session or Redis entry.
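A minimal sketch of the token store, using an in-memory map as a stand-in for Redis (in production, Option 1 would swap this for a RedisTemplate with a TTL; the class and key format are illustrative):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class CsrfTokenStore {
    // Stand-in for Redis: keys follow the csrf:{sessionId} convention
    private final Map<String, String> store = new ConcurrentHashMap<>();

    // Generate a token at login and remember it for this session
    public String issue(String sessionId) {
        String token = UUID.randomUUID().toString();
        store.put("csrf:" + sessionId, token);
        return token; // sent to the frontend in a readable (non-HttpOnly) cookie or header
    }

    // Check the X-CSRF-TOKEN header on every state-changing request
    public boolean isValid(String sessionId, String headerToken) {
        String expected = store.get("csrf:" + sessionId);
        return expected != null && expected.equals(headerToken);
    }
}
```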


Step 3: Secure Cookie Configuration

Update your application.properties:

server.servlet.session.cookie.http-only=true

server.servlet.session.cookie.secure=true

server.servlet.session.cookie.same-site=Strict

If your frontend and backend are on different domains:

server.servlet.session.cookie.domain=.yourdomain.com

Step 4: CORS Configuration (Critical)

When using cookies for auth:

  • You must enable credentials and disable wildcards (*).
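With Spring MVC this can be expressed as follows (the origin shown is a placeholder for your real frontend domain; the header list is an example):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class CorsConfig implements WebMvcConfigurer {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/**")
                // Must be an explicit origin, not "*", when credentials are enabled
                .allowedOrigins("https://app.yourdomain.com")
                .allowedMethods("GET", "POST", "PUT", "DELETE")
                .allowedHeaders("Content-Type", "X-CSRF-TOKEN")
                .allowCredentials(true); // lets the browser send the HttpOnly cookie
    }
}
```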

Step 5: Frontend Adjustments

  • Remove localStorage usage for JWTs — the HttpOnly cookie is sent automatically by the browser.

  • Use fetch or axios with credentials enabled (credentials: "include" / withCredentials: true).

  • Store only the CSRF token in memory or sessionStorage.

  • For POST/PUT/DELETE requests, attach the X-CSRF-TOKEN header.

  • Handle 403 (CSRF error) responses gracefully — show a message like “Session expired, please refresh or re-login.”

Step 6: Optional — Add Session Mapping (for Admin Panels or Token Revocation)

If you want to track or revoke tokens:

  • Add a session_id column in DB or Redis mapping:

session_id -> jwt_id
  • On logout or admin disable, revoke by session ID.


Step 7: Test OWASP Protections

Verify:

  • ✅ No JWT in localStorage or sessionStorage

  • ✅ Cookies have HttpOnly, Secure, and SameSite flags

  • ✅ CSRF token mismatch returns 403

  • ✅ XSS payloads can’t read cookies

  • ✅ Rate limiter still blocks excessive login attempts

  • ✅ Kafka still receives login notifications


⚙️ Additional Considerations

🗄️ Database Changes

  • Optional: Add csrf_tokens table or store in Redis (csrf:{sessionId} → token).

🔧 Config Updates

  • Add cookie + CORS settings to application.properties.

  • Ensure backend sends cookies via ResponseCookie.from() in Spring.

💻 Frontend

  • Remove token storage logic.

  • Add withCredentials: true in requests.

  • Always attach X-CSRF-TOKEN header on write requests.


✅ Summary

Protection                               | Mechanism                 | Mitigates
HttpOnly cookie                          | JWT in secure cookie      | XSS
CSRF token                               | Separate token validation | CSRF
Input sanitization (already using Jsoup) | Clean username/password   | Injection
Rate limiting (already in place)         | IP-based limiter          | Brute force
Kafka login events                       | Audit trail               | Security monitoring

Monday, October 6, 2025

PenTesting: ⚠️ The Hidden Danger of Storing Roles in Local Storage in React

 When building a React app, it’s common to store simple user data — like names or themes — in localStorage. It’s convenient, persistent, and easy to use:

localStorage.setItem("role", "ADMIN");

But here’s the problem: anything stored in localStorage is 100% visible and editable by anyone who opens your website.



🧠 What Is localStorage?

localStorage is a small key-value database built into every browser. It’s meant for client-side storage, not secure data.

You can see and change it anytime using the browser’s DevTools → Application → Local Storage tab.


🔓 The Common Mistake

Many developers (especially beginners) use localStorage to store login-related info, such as:

localStorage.setItem("role", "ADMIN");

Then they rely on it in React to control access:

const role = localStorage.getItem("role");

if (role === "ADMIN") {

  // show admin dashboard

}

At first glance, this looks fine. But anyone can open the browser console and simply run:

localStorage.setItem("role", "ADMIN");

location.reload();

What if the hacker doesn’t know which items are in localStorage? There is a way to view all of them. Just type:

console.table(Object.entries(localStorage));


 

 

Boom 💥 — the app now thinks they’re an admin.
All admin menus, buttons, or “restricted” pages might appear instantly.


🧱 But Wait — Does That Mean They’re Actually Admin?

It depends on your backend.

  • ❌ If your backend does not verify roles (e.g., it trusts the frontend), your data is wide open.

  • ✅ If your backend does verify permissions using tokens or sessions, the fake “admin” role on the frontend only affects the UI — the hacker still can’t perform real admin actions.


🛡️ How to Do It Safely

  1. Never trust localStorage for authorization.
    It’s fine for cosmetic data (like theme = "dark"), but not for anything related to user identity or privileges.

  2. Use JWTs or sessions for secure authentication.
    Store a signed token (like a JWT) that your backend verifies with every request.
    The token itself should contain the user’s role — and the backend should confirm it before granting access.

  3. Let the backend decide what data to send.
    For example, instead of giving all users /admin/data, make the server check the token and respond with “403 Forbidden” if the user isn’t authorized.

  4. Hide UI elements based on real backend responses.
    Don’t just rely on what’s in localStorage — render admin features only after verifying the user’s role from the backend.


🕵️‍♂️ The Bottom Line

If your app’s security depends on a value stored in localStorage, then your app is not secure.

localStorage should be treated like a sticky note on a public computer — convenient, but visible (and editable) to anyone.


✅ TL;DR

What                   | Safe? | Notes
Theme preference       | ✅    | Cosmetic only
Username               | ⚠️    | Not sensitive but editable
Access token (JWT)     | ⚠️    | OK if short-lived and validated by backend
User role / admin flag | ❌    | Never store as plain text
Passwords / secrets    | 🚫    | Never, ever store here

Saturday, October 4, 2025

Implementing Real-Time Admin Notifications with Kafka in Spring Boot Microservices

 

Introduction

In modern microservices architectures, real-time event-driven communication is essential for building responsive applications. This post walks through implementing Kafka-based admin notifications across two independent Spring Boot microservices: a Todo App (handling user management) and a Blog service (handling content and comments).

Architecture Overview

Our setup consists of:

  • Todo App Microservice (Port: 8081) - Manages users and authentication
  • Blog Microservice (Port: 8083) - Manages blog posts and comments
  • Kafka - Message broker for inter-service communication
  • WebSocket - Real-time push notifications to frontend
  • React Frontend - Displays toast and bell notifications

Both services run independently but communicate through Kafka topics to deliver real-time notifications to administrators.

Use Cases

  1. Todo App: Admin receives notification when any user logs in
  2. Blog Service: Admin receives notification when any user posts a comment (since comments require moderation)

Implementation Steps

Step 1: Create BlogProducer.java

The producer is responsible for publishing events to Kafka when significant actions occur in your application.
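A sketch of what BlogProducer might look like — the topic name blog-notifications comes from this post's setup, while the message format is illustrative:

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class BlogProducer {

    private static final String TOPIC = "blog-notifications";

    private final KafkaTemplate<String, String> kafkaTemplate;

    public BlogProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Called from the comment flow whenever a new comment is saved
    public void sendCommentNotification(String author, String postTitle) {
        String message = String.format("New comment by %s on \"%s\" awaiting moderation",
                                       author, postTitle);
        kafkaTemplate.send(TOPIC, message);
    }
}
```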

Step 2: Create BlogConsumer.java

The consumer listens to Kafka messages and forwards them to WebSocket clients in real-time.
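Correspondingly, a sketch of BlogConsumer bridging Kafka to the WebSocket destination /topic/admin-notifications used by the frontend (the consumer group id is an example choice):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Service;

@Service
public class BlogConsumer {

    private final SimpMessagingTemplate messagingTemplate;

    public BlogConsumer(SimpMessagingTemplate messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    // Receives events from Kafka and pushes them to connected admin dashboards
    @KafkaListener(topics = "blog-notifications", groupId = "admin-notification-group")
    public void onNotification(String message) {
        messagingTemplate.convertAndSend("/topic/admin-notifications", message);
    }
}
```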

Step 3: Create WebSocketConfig.java

Configure WebSocket endpoints to enable real-time communication with the frontend.

Step 4: Update CommentController/Service

Integrate the producer into your existing comment creation logic.

Step 5: Update Frontend - AdminNotifications.js

Connect to the WebSocket endpoint and display notifications.

Step 6: Required Dependencies

Add the required Kafka and WebSocket dependencies to your pom.xml.

Then configure application.properties with your Kafka broker and consumer settings.

The Complete Flow

  1. User posts a comment on a blog post
  2. CommentController saves the comment and calls BlogProducer
  3. BlogProducer publishes message to Kafka topic blog-notifications
  4. BlogConsumer (listening to Kafka) receives the message
  5. BlogConsumer forwards message to WebSocket endpoint /topic/admin-notifications
  6. Frontend (connected via WebSocket) receives the notification
  7. Toast notification appears on screen
  8. Bell icon updates with unread count

Testing Your Implementation

  1. Start Kafka and Zookeeper
  2. Start both microservices (Todo App and Blog)
  3. Login as an admin user
  4. Open browser console to verify WebSocket connection
  5. Post a comment (either as admin or regular user)
  6. Verify toast notification appears
  7. Verify bell icon shows unread count

Benefits of This Architecture

  • Decoupled Services: Todo and Blog services remain independent
  • Scalability: Kafka handles high message throughput
  • Real-time Updates: WebSocket provides instant notifications
  • Event-Driven: Easy to add more notification types
  • Fault Tolerance: Kafka persists messages if consumers are temporarily down

Extending the System

You can easily extend this pattern for additional notifications:

  • User registration events
  • Post publication notifications
  • Comment approval/rejection alerts
  • System-wide announcements

Simply create new Kafka topics and add corresponding producers/consumers for each event type.

Conclusion

By combining Kafka for inter-service communication and WebSocket for real-time frontend updates, we've built a robust notification system across independent microservices. This architecture scales well and provides the foundation for building more complex event-driven features in your application.
