My interview experience for a Software Architect (Java) position.

Recently I interviewed for a Software Architect (Java) role at one of the leading service-based companies. I am sharing here the list of questions that were asked to me.

1. What is the difference between ClassNotFoundException and NoClassDefFoundError?

ClassNotFoundException occurs when you try to load a class at runtime using Class.forName() or loadClass() and the requested class is not found on the classpath. This typically happens when the corresponding JAR is missing from the classpath. It is a checked exception.

NoClassDefFoundError occurs when the class was present at compile time and the program compiled successfully, but the class is not present at runtime. It is an Error, not an exception. For example, compile your project and then delete one of the generated .class files; running the program will throw NoClassDefFoundError.
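To make the distinction concrete, here is a minimal sketch; the class name com.example.MissingDriver is purely hypothetical:

import java.sql.Driver;

public class ClassLoadingDemo {
    public static void main(String[] args) {
        try {
            // Reflective loading: if the class is not on the classpath,
            // Class.forName throws the checked ClassNotFoundException.
            Class<?> clazz = Class.forName("com.example.MissingDriver"); // hypothetical class name
            System.out.println("Loaded: " + clazz.getName());
        } catch (ClassNotFoundException e) {
            System.out.println("Class not found on classpath: " + e.getMessage());
        }
        // NoClassDefFoundError, by contrast, is thrown by the JVM itself when code that
        // compiled against a class runs after the .class file has been removed; being
        // an Error, it is normally not caught at all.
    }
}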

2. What are the different ways to create a Thread in Java?

There are multiple ways to create a Thread: implementing the Runnable or Callable interface, extending the Thread class, or using the Executor framework / a thread pool. The interviewer then asked whether there is any other way to create a Thread. One answer is reactive programming: with the Spring WebFlux library you can write reactive code and let the framework manage the threads for you. Please share in a comment if you know any other way to create a Thread. A small sketch of the common options follows.
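A minimal sketch of the three common options mentioned above (the printed messages are just illustrative):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadCreationDemo {
    public static void main(String[] args) throws Exception {
        // 1. Implement Runnable and hand it to a Thread
        Thread fromRunnable = new Thread(() -> System.out.println("from Runnable"));
        fromRunnable.start();

        // 2. Extend Thread (here as an anonymous subclass) and override run()
        Thread fromSubclass = new Thread() {
            @Override
            public void run() {
                System.out.println("from Thread subclass");
            }
        };
        fromSubclass.start();

        // 3. Submit a Callable to an ExecutorService (thread pool) and get a result back
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Callable<String> task = () -> "from Callable via thread pool";
        Future<String> result = pool.submit(task);
        System.out.println(result.get());
        pool.shutdown();
    }
}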

3. Explain the different types of encryption.

There are basically two types of encryption. Symmetric encryption: a single key is used for both encryption and decryption. It is used when you can securely share the key between sender and recipient, and it is the usual choice when you want to encrypt large amounts of data or data at rest. Common symmetric algorithms: AES, DES.

Asymmetric encryption: two different keys are used, a public key and a private key. Data encrypted with the public key can only be decrypted with the corresponding private key. A website's SSL/TLS certificate, which is publicly available, contains the public key. Common asymmetric algorithm: RSA.
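As a rough illustration using the standard Java Cryptography Architecture, here is one symmetric (AES) and one asymmetric (RSA) round trip; the key sizes and transformations are illustrative defaults, not a production recommendation:

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class EncryptionDemo {
    public static void main(String[] args) throws Exception {
        byte[] plainText = "sensitive data".getBytes(StandardCharsets.UTF_8);

        // Symmetric: the same AES key encrypts and decrypts
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey aesKey = keyGen.generateKey();
        Cipher aes = Cipher.getInstance("AES"); // in production prefer an explicit mode such as AES/GCM/NoPadding
        aes.init(Cipher.ENCRYPT_MODE, aesKey);
        byte[] aesCipherText = aes.doFinal(plainText);
        aes.init(Cipher.DECRYPT_MODE, aesKey);
        byte[] aesDecrypted = aes.doFinal(aesCipherText);

        // Asymmetric: encrypt with the public key, decrypt with the private key
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair pair = kpg.generateKeyPair();
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] rsaCipherText = rsa.doFinal(plainText);
        rsa.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        byte[] rsaDecrypted = rsa.doFinal(rsaCipherText);

        System.out.println(new String(aesDecrypted, StandardCharsets.UTF_8));
        System.out.println(new String(rsaDecrypted, StandardCharsets.UTF_8));
    }
}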

4. How is ConcurrentHashMap different from HashMap?

ConcurrentHashMap divides the map into segments and locks only the particular segment being written to, so multiple threads can write to different segments simultaneously, which gives better concurrency. (Strictly speaking, this segment design is from Java 7; since Java 8 the locking is even finer-grained, at the level of individual bins, with CAS used for uncontended updates.)

Reads in ConcurrentHashMap are non-blocking and do not acquire locks, so multiple threads can safely read from the map simultaneously without waiting, which significantly improves performance in read-heavy scenarios. Internally, volatile reads and writes ensure memory visibility: once a value is written by one thread, it becomes visible to other threads, so reads see up-to-date values.
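A small sketch of the typical usage: many threads update one map concurrently, and merge() gives an atomic read-modify-write, something a plain HashMap does not guarantee under concurrent access:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentHashMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Many threads update the same key; ConcurrentHashMap handles this safely,
        // whereas a plain HashMap could lose updates or corrupt its internal structure.
        for (int i = 0; i < 1000; i++) {
            pool.submit(() -> counts.merge("hits", 1, Integer::sum)); // atomic read-modify-write
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        System.out.println(counts.get("hits")); // 1000
    }
}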

5. When you open a URL/website in a browser, what steps does the request go through? Please explain step by step.

This is a broad question; I would recommend going through the detailed write-ups available on the internet.

Putting here a very brief outline (a small Java HTTP client sketch follows the list):

  • URL parsing
  • DNS resolution: find the server's IP address from the domain name
  • TLS handshake (if HTTPS), which sets up encrypted traffic
  • Send the HTTP request
  • Server processes the request (this may involve an application load balancer, an API gateway, etc.)
  • Receive the HTTP response
  • Parse and render HTML, CSS, and JavaScript
  • Fetch additional resources (CSS, JS, images)
  • Display the final webpage to the user.
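To tie a few of these steps together, here is a minimal sketch using the JDK's built-in HttpClient (Java 11+); the single send() call covers DNS resolution, the TCP connection, the TLS handshake (since the URL is https), and the HTTP request/response exchange, while parsing and rendering remain the browser's job:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SimpleHttpGet {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Building the request corresponds to the URL-parsing step.
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://www.google.com"))
                .GET()
                .build();

        // send() triggers DNS resolution, TCP connect, TLS handshake, and the HTTP exchange.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("Status: " + response.statusCode());
        String body = response.body();
        System.out.println("First 100 chars of body: " + body.substring(0, Math.min(100, body.length())));
    }
}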

6. What is the difference between the 2PC and SAGA design patterns?

2PC (two-phase commit) is for immediate, atomic transactions: the entire transaction is committed or rolled back as one unit. If all parties agree, the transaction is committed; if any single party disagrees, it is rolled back. It can be used when a limited number of services is involved, and it is less fault tolerant because the coordinator is a single point of failure.

In contrast, the SAGA pattern can be used for long-running transactions spanning many different services. It splits the transaction into multiple local steps, and for each action there is a compensating action (for example, the compensating action for Create Order is Revert Order). It is more scalable and more fault tolerant than 2PC because it does not require a global lock. For example, with three services, Order > Inventory > Payment: if the Payment step fails, the saga reverts the inventory reservation and then reverts the order. A minimal orchestration sketch follows.
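Here is a deliberately simplified orchestration-style sketch of the compensation idea; the step names are hypothetical, and a real saga would usually coordinate through events or messages rather than direct method calls:

import java.util.ArrayDeque;
import java.util.Deque;

public class OrderSaga {

    interface SagaStep {
        void execute();
        void compensate();
    }

    // Run the steps in order; on failure, run compensating actions in reverse order.
    public static void run(SagaStep... steps) {
        Deque<SagaStep> completed = new ArrayDeque<>();
        try {
            for (SagaStep step : steps) {
                step.execute();
                completed.push(step); // remember what succeeded
            }
        } catch (RuntimeException failure) {
            while (!completed.isEmpty()) {
                completed.pop().compensate();
            }
            throw failure;
        }
    }

    public static void main(String[] args) {
        SagaStep createOrder = new SagaStep() {
            public void execute() { System.out.println("create order"); }
            public void compensate() { System.out.println("revert order"); }
        };
        SagaStep reserveInventory = new SagaStep() {
            public void execute() { System.out.println("reserve inventory"); }
            public void compensate() { System.out.println("revert inventory"); }
        };
        SagaStep takePayment = new SagaStep() {
            public void execute() { throw new RuntimeException("payment failed"); }
            public void compensate() { System.out.println("refund payment"); }
        };
        try {
            run(createOrder, reserveInventory, takePayment);
        } catch (RuntimeException e) {
            System.out.println("Saga rolled back: " + e.getMessage());
        }
    }
}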

You may go through https://medium.com/javarevisited/difference-between-saga-pattern-and-2-phase-commit-in-microservices-e1d814e12a5a for more detail.

7. What is the CQRS design pattern?

CQRS (Command Query Responsibility Segregation) is a design pattern used to separate the read and write operations in a system. It splits the responsibilities of handling commands (operations that change the state of an application) from queries (operations that retrieve data without modifying it).

How to implement it is subjective and depends on the requirement. For example, on the command-service side there will be events such as Create Product, Update Product Quantity, Update Product Price, and Delete Product, which the command service publishes to Kafka. The query service consumes these events, keeps its own store (perhaps Elasticsearch), and stores the data in whatever form the business requirement needs for querying. Since it is event based, it is possible to time travel and see what changes were made over time, and it also makes it easy to revert changes. Please refer to the article referenced below for more detail.

Reference: CQRS Design Pattern in Microservices Architectures (medium.com)
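A tiny in-memory sketch of the separation described above; the class and event names are illustrative, and in the real setup the event bus would be Kafka and the read store would be Elasticsearch rather than a HashMap:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CqrsSketch {

    // Event produced by the command side (illustrative).
    record ProductPriceUpdated(String productId, double newPrice) {}

    // Command side: validates the command and publishes an event (here, just to a list).
    static class ProductCommandService {
        private final List<Object> eventBus; // stand-in for a Kafka topic
        ProductCommandService(List<Object> eventBus) { this.eventBus = eventBus; }
        void updatePrice(String productId, double newPrice) {
            if (newPrice <= 0) throw new IllegalArgumentException("price must be positive");
            eventBus.add(new ProductPriceUpdated(productId, newPrice));
        }
    }

    // Query side: consumes events and maintains its own read-optimized store.
    static class ProductQueryService {
        private final Map<String, Double> readStore = new HashMap<>(); // stand-in for Elasticsearch
        void on(ProductPriceUpdated event) { readStore.put(event.productId(), event.newPrice()); }
        Double currentPrice(String productId) { return readStore.get(productId); }
    }

    public static void main(String[] args) {
        List<Object> eventBus = new ArrayList<>();
        ProductCommandService commands = new ProductCommandService(eventBus);
        ProductQueryService queries = new ProductQueryService();

        commands.updatePrice("p-1", 49.99);

        // "Consume" the published events on the query side.
        for (Object event : eventBus) {
            if (event instanceof ProductPriceUpdated e) queries.on(e);
        }
        System.out.println(queries.currentPrice("p-1")); // 49.99
    }
}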

8. What are the different design patterns used in distributed systems?

Service discovery and registration, load balancing, client-side load balancing, circuit breaker, API gateway, synchronous communication (REST), asynchronous communication (messaging, for example Kafka or ActiveMQ), and rate limiting. You should also be familiar with each of these patterns so that you can answer follow-up questions on them. A small circuit-breaker sketch is shown below.
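As one concrete example from this list, below is a hand-rolled, deliberately simplified circuit-breaker sketch (in practice you would typically use a library such as Resilience4j): after a configured number of consecutive failures it opens and fails fast, then allows a trial call after a cooldown.

import java.util.function.Supplier;

// Simplified state machine: CLOSED -> OPEN after too many failures,
// then HALF_OPEN (one trial call) after a cooldown. Thresholds are illustrative.
public class SimpleCircuitBreaker {
    private enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final long openTimeoutMillis;
    private State state = State.CLOSED;
    private int consecutiveFailures = 0;
    private long openedAt = 0;

    public SimpleCircuitBreaker(int failureThreshold, long openTimeoutMillis) {
        this.failureThreshold = failureThreshold;
        this.openTimeoutMillis = openTimeoutMillis;
    }

    public synchronized <T> T call(Supplier<T> remoteCall) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt >= openTimeoutMillis) {
                state = State.HALF_OPEN; // allow one trial call after the cooldown
            } else {
                throw new IllegalStateException("circuit open: failing fast");
            }
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0;
            state = State.CLOSED;
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            if (state == State.HALF_OPEN || consecutiveFailures >= failureThreshold) {
                state = State.OPEN;
                openedAt = System.currentTimeMillis();
            }
            throw e;
        }
    }
}

A caller would wrap each remote invocation, for example breaker.call(() -> client.fetchSomething()) (the client call here is purely illustrative), and treat the fail-fast exception as the signal to use a fallback.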

9. I have an Employee class with 200+ different fields. I want to serialize just 10 of those 200 fields when serializing the object. What is the best way to achieve this?

Use the Externalizable interface. It gives more control, because we write the serialization logic ourselves; here we serialize only the 10 fields that are needed.

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

public class Employee implements Externalizable {
    // In the question there are 200+ fields; for simplicity only 6 are shown here
    private String name;
    private int age;
    private String department;
    private String address;
    private double salary;
    private String nonSerializedField; // Field not to be serialized

    // Constructors
    public Employee() {
        // No-arg constructor required by Externalizable
    }

    public Employee(String name, int age, String department, String address, double salary, String nonSerializedField) {
        this.name = name;
        this.age = age;
        this.department = department;
        this.address = address;
        this.salary = salary;
        this.nonSerializedField = nonSerializedField;
    }

    // Getter methods
    public String getName() {
        return name;
    }

    public int getAge() {
        return age;
    }

    public String getDepartment() {
        return department;
    }

    public String getAddress() {
        return address;
    }

    public double getSalary() {
        return salary;
    }

    public String getNonSerializedField() {
        return nonSerializedField;
    }

    // Externalizable methods
    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        // Manually write only the selected fields (5 of the 6 shown here);
        // nonSerializedField is deliberately skipped
        out.writeObject(name);
        out.writeInt(age);
        out.writeObject(department);
        out.writeObject(address);
        out.writeDouble(salary);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        // Manually read the selected fields in the same order they were written
        name = (String) in.readObject();
        age = in.readInt();
        department = (String) in.readObject();
        address = (String) in.readObject();
        salary = in.readDouble();
    }

    @Override
    public String toString() {
        return "Employee{" +
                "name='" + name + '\'' +
                ", age=" + age +
                ", department='" + department + '\'' +
                ", address='" + address + '\'' +
                ", salary=" + salary +
                ", nonSerializedField='" + nonSerializedField + '\'' +
                '}';
    }
}
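A short usage sketch (the file name and field values are illustrative): after a write/read round trip with standard object streams, the selected fields come back populated while nonSerializedField is null, because writeExternal never wrote it.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class EmployeeSerializationDemo {
    public static void main(String[] args) throws Exception {
        Employee original = new Employee("Asha", 35, "Platform", "Pune", 95000.0, "internal-notes");

        // Serialize: the JVM calls writeExternal(), so only the selected fields are written.
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("employee.ser"))) {
            out.writeObject(original);
        }

        // Deserialize: the no-arg constructor runs, then readExternal() restores the selected fields.
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("employee.ser"))) {
            Employee copy = (Employee) in.readObject();
            System.out.println(copy); // nonSerializedField is null in the copy
        }
    }
}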

10. What points should be considered while implementing Disaster Recovery?

This is a very subjective question; please go through the various articles available on the internet. Putting here a brief summary.

Decide on key metrics:

  • Recovery Point Objective (RPO): Defines how much data you can afford to lose. It refers to the maximum time gap between the last data backup and the point of failure (e.g., if RPO is 4 hours, you’re willing to lose 4 hours of data).
  • Recovery Time Objective (RTO): Defines how quickly you need to restore operations after a failure. For example, if the RTO is 2 hours, the system must be up and running within 2 hours.

Take regular backups/snapshots of the database and maintain multiple replicas. Cloud providers offer such managed DB services.

Spread your systems and data across different geographic regions to ensure that even if a natural disaster affects one region, services remain available in another region.

Identify the major failures that could plausibly occur and test your system against those failure scenarios.

Failover Mechanism:

Failover ensures that if one system or data center fails, another one takes over with minimal downtime.

  • Cold Failover: Backup systems remain offline and only come online when a failure occurs. The recovery process takes time as services and systems need to be started manually.
  • Warm Failover: Backup systems are running with minimal services enabled, and they are prepared to take over quickly with some manual intervention.
  • Hot Failover: Backup systems run simultaneously with the primary systems. Failover happens automatically and almost instantaneously.
  • Use cases: Active-active or active-passive database replication, load balancers.

Automation and Orchestration:

Disaster recovery must be automated to minimize human intervention and reduce recovery time. Tools like Infrastructure as Code (IaC) and orchestration platforms ensure that your disaster recovery processes are automatically triggered.

  • Infrastructure as Code (IaC): Use IaC tools like Terraform, CloudFormation, or Ansible to define your infrastructure, so it can be recreated quickly in case of disaster.
  • Automated Failover Testing: Regularly test your disaster recovery processes by simulating failures using tools like AWS Fault Injection Simulator to ensure that failover works as expected.
  • Automated Recovery Playbooks: Use automation platforms like Runbooks (AWS Systems Manager), or custom scripts to automate the failover, restore, and rollback operations.

11. How will you build a modern CI/CD pipeline for your project?

Build image: We can use GitHub Actions to run a build when new code is merged into the target branch. The workflow automatically starts a job, which we use to build the Docker image.

Push image: Push the image to your private Docker registry (for example, AWS ECR) from the GitHub Actions runner job.

Environment wise Configuration: Helm templates are used to dynamically generate Kubernetes manifest files based on parameterized input values. This makes it easy to customize deployments for different environments (like development, staging, and production) without needing to manually edit configuration files each time. Helm allows you to version your charts, making it easier to manage changes and rollbacks.

Argo CD: Argo CD is a powerful tool for managing Kubernetes applications using the GitOps approach. It streamlines the deployment process, enhances visibility, and improves the reliability of application updates. It also provides a nice UI that gives visibility into the deployment status of the various components, and you can even view logs.

12. What is the difference between TCP and UDP?

TCP (Transmission Control Protocol) is a connection-oriented protocol that provides reliable data transmission by ensuring packets are delivered in order, with error-checking and retransmission if needed. This makes TCP slower but suitable for applications where accuracy is critical, like file transfers (FTP), email (SMTP) and web browsing (HTTP).

UDP (User Datagram Protocol), on the other hand, is a connectionless protocol that sends packets without establishing a connection, making it faster but less reliable since it doesn’t guarantee delivery or order. UDP is ideal for applications where speed is essential and minor data loss is acceptable, such as video streaming, VoIP, and online gaming.
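To make the difference tangible in Java, here is a minimal sketch (ports are illustrative): the TCP side requires a listening server and an accepted connection before bytes flow over the stream, while the UDP side simply fires a datagram at an address with no connection and no delivery guarantee.

import java.io.OutputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TcpVsUdpDemo {
    public static void main(String[] args) throws Exception {
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);

        // TCP: connection-oriented. The client connects, the server accepts,
        // and bytes are delivered reliably and in order over the stream.
        try (ServerSocket server = new ServerSocket(9090);
             Socket client = new Socket("localhost", 9090);
             Socket accepted = server.accept()) {
            OutputStream out = client.getOutputStream();
            out.write(payload);
            out.flush();
        }

        // UDP: connectionless. A datagram is sent to an address and port with
        // no handshake, no ordering, and no delivery guarantee.
        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length, InetAddress.getByName("localhost"), 9091);
            socket.send(packet);
        }
    }
}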

These are the major technical questions that were asked to me in the interview. I hope this helps you prepare better for your next interview.
