
Object Oriented Programming – Java OOPs Concepts With Examples


Object-oriented programming is a programming style associated with concepts like classes, objects, inheritance, encapsulation, abstraction and polymorphism. Most popular programming languages like Java, C++, C#, Ruby, etc. follow an object-oriented programming paradigm. As Java is the most sought-after skill, we will talk about object-oriented programming concepts in Java. An object-based application in Java is based on declaring classes, creating objects from them and making these objects interact. I have discussed Java classes and objects, which are also part of object-oriented programming concepts, in my previous blog.


In this blog, we will understand the following core concepts of object-oriented programming, in this sequence:

  1. Inheritance
  2. Encapsulation
  3. Abstraction
  4. Polymorphism

Let’s get started with the first Object Oriented Programming concept i.e. Inheritance.

Object Oriented Programming : Inheritance

In OOP, computer programs are designed in such a way that everything is an object that interacts with other objects. Inheritance is one such concept, where the properties of one class can be inherited by another. It helps to reuse code and establish a relationship between different classes.

Inheritance - object oriented programming - Edureka

 

As we can see in the image, a child inherits the properties from his father. Similarly, in Java, there are two classes:

1. Parent class ( Super or Base class)

2. Child class (Subclass or Derived class )

A class which inherits properties is known as the child class, whereas a class whose properties are inherited is known as the parent class.

 

Inheritance is further classified into 4 types:

Inheritance - object oriented programming - Edureka

 

So let’s begin with the first type of inheritance i.e. Single Inheritance:

 

  1. Single Inheritance:

SingleInheritance - object oriented programming - Edureka

In single inheritance, one class inherits the properties of another. It enables a derived class to inherit the properties and behavior from a single parent class. This will in turn enable code reusability as well as add new features to the existing code.

Here, Class A is your parent class and Class B is your child class which inherits the properties and behavior of the parent class.

 

Let’s see the syntax for single inheritance:


class A {
    // members of the parent class
}
class B extends A {
    // B inherits the accessible members of A
}

2. Multilevel Inheritance:

MultilevelInheritance - object oriented programming - Edureka

When a class is derived from a class which is itself derived from another class, i.e. a class having more than one parent class but at different levels, such a type of inheritance is called multilevel inheritance.

If we talk about the flowchart, class B inherits the properties and behavior of class A, and class C inherits the properties of class B. Here A is the parent class of B, and class B is the parent class of C. So in this case class C implicitly inherits the properties and methods of class A along with class B. That's what multilevel inheritance is.

 

Let’s see the syntax for multilevel inheritance in Java:

class A {
    // top-level parent
}
class B extends A {
    // inherits from A
}
class C extends B {
    // inherits from B, and implicitly from A
}

3. Hierarchical Inheritance:

HierarchicalInheritance - object oriented programming - Edureka

When a class has more than one child class (subclasses), or in other words, when more than one child class has the same parent class, such a kind of inheritance is known as hierarchical inheritance.

If we talk about the flowchart, Class B and C are the child classes which are inheriting from the parent class i.e Class A.

Let’s see the syntax for hierarchical inheritance in Java:

class A {
    // common parent
}
class B extends A {
    // first child of A
}
class C extends A {
    // second child of A
}
  4. Hybrid Inheritance:

HybridInheritance - object oriented programming - Edureka

Hybrid inheritance is a combination of multiple inheritance and multilevel inheritance. Since multiple inheritance is not supported in Java, as it leads to ambiguity, this type of inheritance can only be achieved through the use of interfaces.

If we talk about the flowchart, class A is a parent class for classes B and C, whereas classes B and C are both parents of class D, their only child class. A sketch of how this shape is expressed in Java follows.
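Since a Java class cannot extend two classes, one of the two parents has to be modeled as an interface. A minimal sketch, with illustrative class, interface and method names:

class A {
    void show() { System.out.println("A"); }
}
class B extends A {
    // multilevel branch: D -> B -> A
}
interface C {
    void featureC();                 // the second parent is modeled as an interface
}
class D extends B implements C {     // hybrid: one superclass plus one interface
    public void featureC() {
        System.out.println("featureC implemented in D");
    }
}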

 

Now we have learned about inheritance and its different types. Let's switch to another object-oriented programming concept, i.e. encapsulation.

Object Oriented Programming : Encapsulation

Encapsulation is a mechanism where you bind your data and code together as a single unit. It also means to hide your data in order to make it safe from any modification. What does this mean? The best way to understand encapsulation is to look at the example of a medical capsule, where the drug is always safe inside the capsule. Similarly, through encapsulation the methods and variables of a class are well hidden and safe.

Encapsulation - Java Tutorial - Edureka

We can achieve encapsulation in Java by:

  • Declaring the variables of a class as private.
  • Providing public setter and getter methods to modify and view the variables' values.

Let us look at the code below to get a better understanding of encapsulation:

public class Employee {
 private String name;
 public String getName() {
 return name;
 }
 public void setName(String name) {
 this.name = name;
 }
 public static void main(String[] args) {
 }
}

Let us try to understand the above code. I have created a class Employee which has a private variable name. I have then created getter and setter methods through which we can get and set the name of an employee. Any class which wishes to access the name variable has to do so using these getter and setter methods, as sketched below.
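A minimal sketch of such access from another class in the same package (the demo class name and sample value are illustrative):

public class EmployeeDemo {
    public static void main(String[] args) {
        Employee emp = new Employee();
        emp.setName("Harry");              // the private field can only be modified via the setter
        System.out.println(emp.getName()); // and viewed via the getter: prints Harry
    }
}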

Let’s move forward to our third Object-oriented programming concept i.e. Abstraction.

 

Object Oriented Programming : Abstraction

Mobile - Object oriented programming - Edureka

Abstraction refers to the quality of dealing with ideas rather than events. It basically deals with hiding the details and showing the essential things to the user. If you look at the image here, whenever we get a call, we get an option to either pick it up or just reject it. But in reality, there is a lot of code that runs in the background. So you don’t know the internal processing of how a call is generated, that’s the beauty of abstraction. Therefore, abstraction helps to reduce complexity. You can achieve abstraction in two ways:

a) Abstract Class

b) Interface

 

Let’s understand these concepts in more detail.

Abstract class: Abstract class in Java contains the ‘abstract’ keyword. Now what does the abstract keyword mean? If a class is declared abstract, it cannot be instantiated, which means you cannot create an object of an abstract class. Also, an abstract class can contain abstract as well as concrete methods.
Note: You can achieve 0-100% abstraction using abstract class.

To use an abstract class, you have to inherit it in another class and provide implementations for the abstract methods there; otherwise, that class will also become an abstract class.

Let’s look at the syntax of an abstract class:

abstract class Mobile {   // abstract class Mobile
    abstract void run();  // abstract method: declared without a body
}
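A subclass then supplies the implementation; a minimal sketch, assuming an illustrative Samsung subclass:

class Samsung extends Mobile {
    void run() {                                  // concrete implementation of the abstract method
        System.out.println("Samsung is running");
    }
}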

Interface: An interface in Java is a blueprint of a class, or you can say it is a collection of abstract methods and static constants. In an interface, each method is public and abstract, and an interface does not contain any constructor. Along with abstraction, interfaces also help to achieve multiple inheritance in Java.
Note: You can achieve 100% abstraction using interfaces.

So an interface basically is a group of related methods with empty bodies. Let us understand interfaces better by taking an example of a ‘ParentCar’ interface with its related methods.


public interface ParentCar {
public void changeGear( int newValue);
public void speedUp(int increment);
public void applyBrakes(int decrement);
}

These methods need to be present in every car, right? But their working is going to be different.

Let's say you are working with a manual car; there you have to increment the gear one by one. But if you are working with an automatic car, the system decides how to change gears with respect to speed. Therefore, not all my subclasses have the same logic written for changeGear. The same is the case for speedUp: let's say when you press the accelerator, the car speeds up at a rate of 10 km/h or 15 km/h. But suppose someone else is driving a supercar, which speeds up by 30 km/h or 50 km/h. Again, the logic varies. Similarly for applyBrakes, where one person may have powerful brakes and another may not.

Since these functions are common to all my subclasses, I have created an interface 'ParentCar' where all of them are declared. After that, I will create a child class which implements this interface, where the definition of each of these methods varies.

Next, let's look at how you can implement this interface.
So to implement this interface, the name of your class would change to a particular brand of car; let's say I'll take an "Audi". To implement the interface, I will use the 'implements' keyword as seen below:

public class Audi implements ParentCar {
int speed=0;
int gear=1;
public void changeGear( int value){
gear=value;
}
public void speedUp( int increment)
{
speed=speed+increment;
}
public void applyBrakes(int decrement)
{
speed=speed-decrement;
}
void printStates(){
System.out.println("speed:"+speed+"gear:"+gear);
}
public static void main(String[] args) {
// TODO Auto-generated method stub
Audi A6= new Audi();
A6.speedUp(50);
A6.printStates();
A6.changeGear(4);
A6.speedUp(100);
A6.printStates();
}
}

Here as you can see, I have provided functionality for the different methods declared in my interface. Implementing an interface allows a class to become more formal about the behavior it promises to provide. You can create another class as well, say a BMW class, which can implement the same 'ParentCar' interface with different functionality.

So I hope you guys are clear with the interface and how you can achieve abstraction using it.

Finally, the last Object oriented programming concept is Polymorphism.

Object Oriented Programming : Polymorphism

Polymorphism means taking many forms, where 'poly' means many and 'morph' means forms. It is the ability of a variable, function or object to take on multiple forms. In other words, polymorphism allows you to define one interface or method and have multiple implementations.

Let’s understand this by taking a real-life example and how this concept fits into Object oriented programming.

Polymorphism - object oriented programming - Edureka

Let's consider a real-world scenario in cricket. We know that there are different types of bowlers, i.e. fast bowlers, medium-pace bowlers and spinners. As you can see in the above figure, there is a parent class, BowlerClass, and it has three child classes: FastPacer, MediumPacer and Spinner. BowlerClass has a bowlingMethod(), which all the child classes inherit. As we all know, a fast bowler is going to bowl differently from a medium pacer or a spinner in terms of bowling speed, run-up, bowling action, etc. Similarly, a medium pacer's implementation of bowlingMethod() is also going to be different from the other bowlers', and the same goes for the Spinner class.
The point of the above discussion is simply that the same name takes multiple forms. All three classes above inherited bowlingMethod(), but their implementations are totally different from one another.

Polymorphism in Java is of two types: 

  1. Run time polymorphism
  2. Compile time polymorphism

Run time polymorphism: In Java, runtime polymorphism refers to the process in which a call to an overridden method is resolved at runtime rather than at compile time. In this, a reference variable is used to call an overridden method of a superclass at run time. Method overriding is an example of runtime polymorphism. Let us look at the following code to understand how method overriding works:


class BowlerClass {
    void bowlingMethod() {
        System.out.println(" bowler ");
    }
}
class FastPacer extends BowlerClass {
    void bowlingMethod() {                  // overrides the superclass method
        System.out.println(" fast bowler ");
    }
    public static void main(String[] args) {
        BowlerClass obj = new FastPacer();  // superclass reference to a subclass object
        obj.bowlingMethod();                // resolved at runtime: prints " fast bowler "
    }
}

Compile time polymorphism: In Java, compile time polymorphism refers to a process in which a call to an overloaded method is resolved at compile time rather than at run time. Method overloading is an example of compile time polymorphism. Method Overloading is a feature that allows a class to have two or more methods having the same name but the arguments passed to the methods are different. Unlike method overriding, arguments can differ in:

  1. Number of parameters passed to a method
  2. Datatype of parameters
  3. Sequence of datatypes when passed to a method.

Let us look at the following code to understand how the method overloading works:

class Adder {
    static int add(int a, int b) {
        return a + b;
    }
    static double add(double a, double b) {     // overloaded: same name, different parameter types
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(Adder.add(11, 11));      // resolves to the int version at compile time: 22
        System.out.println(Adder.add(12.3, 12.6));  // resolves to the double version
    }
}

I hope you guys are clear with all the object-oriented programming concepts that we have discussed above, i.e. inheritance, encapsulation, abstraction and polymorphism. Now you can make your Java applications more secure, simple and reusable using Java OOPs concepts. Do read my next blog on Java Strings, where I will explain all about strings and their various methods and interfaces.

Now that you have understood the Object Oriented Programming concepts in Java, check out the Java training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. Edureka’s Java J2EE and SOA training and certification course is designed for students and professionals who want to be a Java Developer. The course is designed to give you a head start into Java programming and train you for both core and advanced Java concepts along with various Java frameworks like Hibernate & Spring.

Got a question for us? Please mention it in the comments section of this "Object Oriented Programming" blog and we will get back to you as soon as possible.


DevOps Tutorial : Introduction To DevOps


Out of keen interest in DevOps, I thought of coming up with a series of blogs that will educate you about the new culture being adopted in software development and help you understand what it is all about.

The DevOps Tutorial

This DevOps tutorial blog series will familiarize you with the DevOps methodology and the industry-wide tools required for DevOps certification. In this blog, I will take you through the following topics, which will form the base of the upcoming blogs:

  • What led DevOps to come into existence
  • Introduction to DevOps

Waterfall Model 

Let’s consider developing software in a traditional way using a Waterfall Model.

Waterfall Model - DevOps Tutorial - Edureka

In the above diagram you can see the phases it involves:

  • In phase 1 – The complete requirement is gathered and the SRS is developed
  • In phase 2 – The system is planned and designed using the SRS
  • In phase 3 – Implementation of the system takes place
  • In phase 4 – The system is tested and its quality is assured
  • In phase 5 – The system is deployed to the end users
  • In phase 6 – Regular maintenance of the system is done


Waterfall Model Challenges

The Waterfall model worked fine and served well for many years; however, it had some challenges. The following diagram highlights the challenges of the Waterfall model.

 Waterfall Model Challenges - DevOps Tutorial - Edureka

In the above diagram you can see that both Development and Operations had challenges in the Waterfall model. From the developers' point of view, there were two major challenges:

1. After development, the code deployment time was huge.

2. The pressure of work on old, pending and new code was high because development and deployment times were long.

On the other hand, Operations was also not completely satisfied. As per the above diagram, there were four major challenges they faced:

1. It was difficult to maintain ~100% uptime of the production environment.

2. Infrastructure automation tools were not very effective.

3. The number of servers to be monitored kept increasing with time, and with it the complexity.

4. It was very difficult to provide feedback and diagnose issues in the product.

The following diagram highlights the proposed solutions to the challenges of the Waterfall model.

Possible Solutions To Address The Challenges Faced With Waterfall Model - DevOps Tutorial - Edureka

 

In the above diagram, the probable solutions for the issues faced by Developers and Operations are highlighted in blue. These set the guidelines for an ideal software development strategy.

From the developers' point of view:

1. A system which enables code deployment without any delay or wait time.

2. A system where work happens on the current code itself, i.e. development sprints are short and well planned.

From the Operations point of view:

1. The system should have at least 99% uptime.

2. Tools and systems should be in place for easy administration.

3. An effective monitoring and feedback system should be there.

4. Better collaboration between Development and Operations, which is a common requirement for both teams.

I guess it's time we explore what DevOps is and how it overcomes these challenges. Check out the below video on What is DevOps before you go ahead.

What is DevOps? | DevOps Training – DevOps Introduction & Tools | DevOps Tutorial | Edureka

 

 

DevOps integrates development and operations teams to improve collaboration and productivity.

DevOps Lifecycle – DevOps Tutorial – Edureka

According to the DevOps culture, a single group of engineers (developers, system admins, QAs, testers, etc., turned into DevOps engineers) has end-to-end responsibility for the application (software): right from gathering the requirements, to development, testing, infrastructure deployment and application deployment, and finally monitoring and gathering feedback from the end users, and then implementing the changes again.

This is a never-ending cycle, and the logo of DevOps makes perfect sense to me. Just look at the above diagram – what could be a better symbol than infinity to symbolize DevOps?

Now let us see how DevOps takes care of the challenges faced by Development and Operations. The table below describes how DevOps addresses Dev challenges.

DevOps Addressing Dev Challenges - DevOps Tutorial - Edureka

DevOps Tutorial Table 1 – Above table states how DevOps solves Dev Challenges

Going further, the table below describes how DevOps addresses Ops challenges.

DevOps Addressing Ops Challenges - DevOps Tutorial - Edureka

DevOps Tutorial Table 2 – Above table states how DevOps solves Ops Challenges

However, you would still be wondering how to implement DevOps. To expedite and actualize the DevOps process, apart from culturally accepting it, one also needs various DevOps tools like Puppet, Jenkins, Git, Chef, Docker, Selenium, AWS, etc., to achieve automation at various stages. This helps in achieving Continuous Development, Continuous Integration, Continuous Testing, Continuous Deployment and Continuous Monitoring, so as to deliver quality software to the customer at a very fast pace.

Now take a look at the below DevOps diagram with various DevOps Tools closely and try to decode it.

DevOps Tools - DevOps Tutorial - Edureka

These tools have been categorized into various stages of DevOps. Hence it is important that I first tell you about the DevOps stages, and then talk more about the DevOps tools.

DevOps Lifecycle can be broadly broken down into the below DevOps Stages:

  • Continuous Development
  • Continuous Integration
  • Continuous Testing
  • Continuous Monitoring
  • Virtualization and Containerization

These stages are the building blocks to achieve DevOps as a whole.

This is the end of the first blog of The DevOps Tutorial series.

To know more about the DevOps stages, click here to visit the second blog – What is DevOps and Its Stages?

Got a question for us? Please mention it in the comments section and we will get back to you.

Top 50 Salesforce Interview Questions And Answers You Must Prepare In 2019


As of January 2019, Salesforce is the world's leading CRM service provider. They have more than a 40% market share in the cloud CRM space and dominate the overall CRM space with a market share of 19.7%. They were rated the world's #1 CRM for two consecutive years, and if the projected growth of Salesforce is anything to go by, the need for professionals with Salesforce training is only going to increase exponentially. That is what has prompted me to write a blog on the most frequently asked Salesforce interview questions.

Thanks to the knowledge and wisdom shared by some of our experts from the industry, I have shortlisted this definitive list of the top 50 Salesforce interview questions, which will help you breeze through your interview. Hopefully, this helps you land a top-notch job in the domain of your passion. In case you attended a Salesforce interview recently, we urge you to post any questions you faced. Our experts will be happy to answer them for you.


Top 50 Salesforce Interview Questions And Answers

This list of Salesforce interview questions is divided into 9 sections, each for different aspects of Salesforce.

  1. Salesforce fundamentals
  2. Declarative features
  3. Audit & reporting features
  4. Data modelling and data management
  5. Logic & process automation
  6. Software testing
  7. Debug & deployment tools
  8. Integration features
  9. Programmatic features

A. Salesforce Fundamentals – Salesforce Interview Questions

1. Can two users have the same profile? Can two profiles be assigned to the same user?

Profiles determine the level of access a user can have in a Salesforce org.

As far as the first part of the question is concerned, Yes. One profile can be assigned to any number of users. Take the example of a Sales or Service team in a company. The entire team will be assigned the same profile. The admin can create one profile: Sales Profile, which will have access to the Leads, Opportunities, Campaigns, Contacts and other objects deemed necessary by the company.

In this way, many users can be assigned the same profile. In case the team lead or manager needs access to additional records/objects, it can be done by assigning permission sets only to those users.

Answering the second part of the question, each user can only be assigned 1 profile.

2. What are Governor Limits in Salesforce?

In Salesforce, it is the Governor Limits which control how much data or how many records you can store in the shared databases. Why? Because Salesforce is based on the concept of multi-tenant architecture. In simpler words, Salesforce uses a single database to store the data of multiple clients/customers. The below image will help you relate to this concept.

multi tenant architecture - salesforce interview questions

To make sure no single client monopolizes the shared resources, Salesforce introduced the concept of Governor Limits which is strictly enforced by the Apex run-time engine.

Governor Limits are a Salesforce developer's biggest challenge. That is because if the Apex code ever exceeds a limit, the Apex runtime engine issues a run-time exception that cannot be handled. Hence, as a Salesforce developer, you have to be very careful while developing your application.

Different Governor Limits in Salesforce are:

  • Per-Transaction Apex Limits
  • Force.com Platform Apex Limits
  • Static Apex Limits
  • Size-Specific Apex Limits
  • Miscellaneous Apex Limits
  • Email Limits
  • Push Notification Limits

3. What is a sandbox org? What are the different types of sandboxes in Salesforce?

A sandbox is a copy of the production environment/ org, used for testing and development purposes. It’s useful because it allows development on Apex programming without disturbing the production environment.

When can you use it?
You can use it when you want to test a newly developed Force.com application or Visualforce page. You can develop and test it in the Sandbox org instead of doing it directly in production.

This way, you can develop the application without any hassle and then migrate the metadata and data (if applicable) to the production environment. Doing this in a non-production environment allows developers to freely test and experiment applications end to end.

Types of Sandboxes are:

  • Developer
  • Developer Pro
  • Partial Copy
  • Full

4. Can you edit an apex trigger/ apex class in production environment? Can you edit a Visualforce page in production environment?

No, it is not possible to edit apex classes and triggers directly in production environment.

It needs to be done first in Developer edition or testing org or in Sandbox org. Then, to deploy it in production, a user with Author Apex permission must deploy the triggers and classes using deployment tools.

However, Visualforce pages can be created and edited in both sandbox and in production.

Only if the page has to do something unique (e.g. use different values) would it have to be developed via a sandbox first.

5. What are the different data types that a standard field record name can have?

A standard field record name can have a data type of either auto-number or text, with a limit of 80 characters.

For generating auto numbers, the format needs to be specified while defining the field, and after that, for every record that is added, the number gets auto-generated. For example:-
Sr No-{1}
Sr No-{2}
Sr No-{3}

6. Why are Visualforce pages served from a different domain?

Visualforce pages are served from a different domain to improve security standards and block cross site scripting. Take a look at the highlighted portion in the below Visualforce page:-

sample visualforce page - salesforce interview questions

B. Declarative Features – Salesforce Interview Questions

7. What is WhoId and WhatId in activities?

WhoID refers to people. Typically: contacts or leads. Example: LeadID, ContactID

WhatID refers to objects. Example: AccountID, OpportunityID

8. What is the use of writing sharing rules? Can you use sharing rules to restrict data access?

Sharing rules are written to give edit access (public read and write) or public read-only access to certain individuals in a Salesforce org. A classic example is when only your managers or superiors need to be given extra access to your records in objects, as compared to your peers.

By default, all users in your organization will have organization-wide-default sharing settings of either Public Read Only or Private.
To give access to more records, which users do not own, we write sharing rules.
Example: Sharing rules are used to extend sharing access to users in public groups or roles. Hence, sharing rules are not as strict as organization-wide default settings. They allow greater access for those users.

As far as the second part of the question is concerned, the answer is no. We cannot use sharing rules to restrict data access. It is only used for allowing greater access to records.

9. What are the different types of email templates that can be created in Salesforce?

The different types of Email templates are listed in the below table:-

Text – All users can create or change this template.
HTML with letterhead – Only administrators and users having "Edit HTML Templates" permission can create this template, based on a letterhead.
Custom HTML – Administrators and users having "Edit HTML Templates" permission can create this template, without the need for a letterhead.
Visualforce – Only administrators and developers can create this template. Advanced functionality, like merging data from multiple records, is available only in this template.

C. Audit & Reporting Features – Salesforce Interview Questions

10. What is a bucket field in reports?

A bucket field lets you group related records together by ranges and segments, without the use of complex formulas and custom fields. Bucketing can thus be used to group, filter, or arrange report data. When you create a bucket field, you need to define multiple categories (buckets) that are used to group report values.

The advantage is that earlier, we had to create custom fields to group or segment certain data.

11. What are dynamic dashboards? Can dynamic dashboards be scheduled?

Before we understand dynamic dashboards, let us first understand static dashboards. Static dashboards are the basic dashboard types that will be visible to any user who has made a report out of his data. An example of this is what a Sales manager/ Marketing manager would be able to see on his Salesforce org. In other words, a normal dashboard shows data only from a single user’s perspective. Now comes the concept of dynamic dashboards.

Dynamic dashboards are used to display information which is tailored to a specific user. Let us consider the same example as above. In case the Sales manager wants to view the report generated specific to only one of his team members, then he can use dynamic dashboards.

You can use dynamic dashboards when you want to show user-specific data of a particular user, such as their personal quotas and sales, or number of case closures, or leads converted etc.
You can also use a normal/ static dashboard when you want to show regional or organization-wide data to a set of users, such as a particular region’s sales number, or a particular support team’s performance on case closures.

As far as the second part of the question is concerned, no we cannot schedule a dynamic dashboard. That is because whenever we open the dashboard, it will show the data generated in real-time.

12. What are the different types of reports available in Salesforce? Can we mass delete reports in Salesforce?

Salesforce Report Types

1. Tabular reports – Simple, Excel-type tables which provide a list of items with the grand total.
2. Summary reports – Similar to tabular reports, but with the added functionality of grouping rows, viewing subtotals and creating charts.
3. Matrix reports – Two-dimensional reports which allow you to group records both by row and by column.
4. Joined reports – Multiple blocks showing data from different reports, based on the same or different report types.

Another important point to note here is that, only Summary reports and Matrix reports can be fed as data source for dashboards. Tabular and Joined reports cannot be used as data source for dashboards.

Can we mass delete reports in Salesforce? Of course we can. The option to mass delete reports can be found under Data Management in Setup.

D. Data Modelling & Data Management – Salesforce Interview Questions

13. What are the different types of object relations in salesforce? How can you create them?

No list of Salesforce interview questions is complete without involving relationships between objects in Salesforce. Relationships in Salesforce can be used to establish links between two or more objects.

The different types of object relationships in Salesforce are:

  1. Master-Detail Relationship (1:n):- It is a parent-child relationship in which the master object controls the behavior of the dependent child object. It is a 1:n relationship, in which there can be only one parent but many children. The main concept you need to know is that, being the controlling object, the master field cannot be empty. If a record/field in the master object is deleted, the corresponding fields in the dependent object are also deleted; this is called a cascade delete. Dependent fields inherit the owner, sharing and security settings from the master. You can define master-detail relationships between two custom objects, or between a custom object and a standard object, as long as the standard object is the master in the relationship.
  2. Lookup Relationship (1:n):-
    Lookup relationships are used when you want to create a link between two objects, but without the dependency on the parent object. Similar to Master-Detail relationship, you can think of this as a form of parent-child relationship where there is only one parent, but many children i.e. 1:n relationship.
    The difference here is that, despite being the controlling field, deleting a record will not result in automatic deletion of the related records in the child object. Thus the records in the child object are not affected, and there is no cascade delete here. Neither do the child fields inherit the owner, sharing or security settings of the parent.
  3. Junction Relationship (Many-To-Many):-
    This kind of a relationship can exist when there is a need to create two master-detail relationships. Two master-detail relationships can be created by linking 3 custom objects. Here, two objects will be master objects and the third object will be dependent on both the objects. In simpler words, it will be a child object for both the master objects.

14. What happens to detail record when a master record is deleted? What happens to child record when a parent record is deleted?

In a Master-Detail relationship, when a master record is deleted, the detail record is deleted automatically (Cascade delete).

In a Lookup relationship, even if the parent record is deleted, the child record will not be deleted.

15. Can you have a roll up summary field in case of Master-Detail relationship?

Yes. You can have a roll-up summary in case of a master-detail relationship. But not in case of a lookup relationship.

A roll-up summary field is used to display a value in a master record based on the values of a set of fields in a detail record. The detail record must be related to the master through a master-detail relationship.

There are 4 calculations that you can do using roll-up summary field. You can count the number of detail records related to a master record. Or, you can calculate the sum, minimum value, or maximum value of a field in the detail records.

16. Explain the term “Data Skew” in Salesforce.

“Data skew” is a condition which you will encounter when working for a big client where there are over 10,000 records. When one single user owns that many records, we call that condition “ownership data skew”.

When such users perform updates, performance issues will be encountered because of “data skew”. This happens when a single user/ members of a single role own most of the records for a particular object.

17. Explain skinny table. What are the considerations for Skinny Table?

In Salesforce, skinny tables are used to access frequently used fields and to avoid joins. This largely improves performance. Skinny tables are highly effective, so much so that even when the source tables are modified, skinny tables will be in sync with source tables.

Considerations for skinny tables:

  • Skinny tables can contain a maximum of 100 columns.
  • Skinny tables cannot contain fields from other objects.
  • For full sandboxes: Skinny tables are copied to your Full sandbox organizations, as of the Summer ’15 release.

18. Which fields are automatically Indexed in Salesforce?

Only the following fields are automatically indexed in Salesforce:

  • Primary keys (Id, Name and Owner fields).
  • Foreign keys (lookup or master-detail relationship fields).
  • Audit dates (such as SystemModStamp).
  • Custom fields marked as an External ID or a unique field.

19. How to handle comma within a field while uploading using Data Loader?

In a Data Loader .CSV, if there is a comma in field content, you will have to enclose the contents within double quotation marks: ” “.
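For instance, an illustrative row where the company name contains a comma:

Id,Company,BillingCity
001,"Acme, Inc.",San Francisco

Without the quotation marks, Data Loader would read "Acme" and " Inc." as two separate fields.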

E. Logic & Process Automation – Salesforce Interview Questions

20. For which criteria in workflow “time dependent workflow action” cannot be created?

A time-dependent workflow action cannot be created for the criteria: "created, and every time it's edited".

21. What are the types of custom settings in Salesforce? What is the advantage of using custom settings?

There are two types of custom settings in Salesforce: List Custom Settings and Hierarchy Custom Settings.

List Custom Settings are a type of custom settings that provides a reusable set of static data that can be accessed across your organization irrespective of user/ profile.
Hierarchy Custom Settings are another type of custom settings that uses built-in hierarchical logic for “personalizing” settings for specific profiles or users.

The advantage of using custom settings is that it allows developers to create a custom set of access rules for various users and profiles.

22. How many active assignment rules can you have in a lead/ case?

Only one rule can be active at a time.

23. What are custom labels in Salesforce? What is the character limit of custom label?

Custom labels are custom text values that can be accessed from Apex classes or Visualforce pages. The values here can be translated into any language supported by Salesforce.
Their benefit is that they enable developers to create multilingual applications which automatically present information in a user's native language.

You can create up to 5,000 custom labels for your organization, and they can be up to 1,000 characters in length.
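In Apex, a custom label is referenced through System.Label; a minimal sketch, assuming an illustrative label named Welcome_Message:

String greeting = System.Label.Welcome_Message; // returns the translation for the running user's language
System.debug(greeting);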

24. What is the difference between a Role and Profile in Salesforce?

As mentioned in one of the previous Salesforce interview questions, a profile ultimately controls which records a user has access to in a Salesforce org. No user can work in a Salesforce org without being assigned a profile. A profile is therefore mandatory for every user.

A role, however, is not mandatory for every user. The primary function of the role/role hierarchy is that it allows higher-level users in the hierarchy to get access to records owned by lower-level users in the hierarchy. An example of that is sales managers getting access to records owned by sales reps, while their peers do not get access to them.

25. What are the examples of non-deterministic Force.com formula fields?

Before I mention some of the examples, let me give you an introduction to deterministic and non-deterministic formula fields. Formula fields whose value is static are referred to as deterministic fields, whereas formula fields whose value changes dynamically or has to be calculated on the fly are referred to as non-deterministic formula fields. A classic example of the latter is a formula returning the current date and time.

Some examples of non-deterministic fields in Force.com are:

  • Lookup fields
  • Formula fields whose reference spans over other entities
  • Fields having dynamic date functions like:- TODAY() or NOW()

F. Software Testing – Salesforce Interview Questions

26. Why do we need to write test classes? How to identify if a class is a test class?

Software developers from around the world will unanimously agree that writing test classes makes debugging more efficient. Why? That is because test classes help in creating robust and error-free code, be it Apex or any other programming language. Since unit tests are powerful in their own right, Salesforce requires you to write test classes in Apex code.

Why are they so powerful? Because test classes and test methods verify whether a particular piece of code is working properly or not. If that piece of code fails, then developers/testers can accurately locate the faulty code.

Test classes can be determined easily because every test class will be annotated with @isTest keyword. In fact, if we do not annotate a test class with @isTest, then it cannot be defined as a test class. Similarly, any method within a class which has the keyword testMethod, is a test method.

27. What is minimum test coverage required for trigger to deploy?

In Salesforce, if you want to deploy your code to production, then you must make sure that at least 75% of your Apex code is covered by unit tests. And all these tests must complete successfully.

G. Debug & Deployment Tools – Salesforce Interview Questions

28. What are the different ways of deployment in Salesforce?

You can deploy code in Salesforce using:

  1. Change Sets
  2. Eclipse with Force.com IDE
  3. Force.com Migration Tool – ANT/Java based
  4. Salesforce Package

H. Integration – Salesforce Interview Questions

29. What is an external ID in Salesforce? Which all field data types can be used as external IDs?

An external ID is a custom field which can be used as a unique identifier in a record. External IDs are mainly used while importing records/data. When importing records, one among the many fields in those records needs to be marked as an external ID (unique identifier).

An important point to note is that only custom fields can be used as External IDs. The fields that can be marked as external IDs are: Text, Number, E-Mail and Auto-Number.

30. How many callouts to external service can be made in a single Apex transaction?

Governor limits will restrict a single Apex transaction to make a maximum of 100 callouts to an HTTP request or an API call.

31. How can you expose an Apex class as a REST WebService in Salesforce?

You can expose your Apex class and methods so that external applications can access your code and your application through the REST architecture. This is done by defining your Apex class with the @RestResource annotation to expose it as a REST resource. You can then use global classes and a WebService callback method.

Invoking a custom Apex REST Web service method always uses system context. Consequently, the current user’s credentials are not used, and any user who has access to these methods can use their full power, regardless of permissions, field-level security, or sharing rules.

Developers who expose methods using the Apex REST annotations should therefore take care that they are not inadvertently exposing any sensitive data. Look at the below piece of code for instance:-

global class AccountPlan {
 webservice String area; 
 webservice String region; 
 //Define an object in apex that is exposed in apex web service
 global class Plan {
 webservice String name;
 webservice Integer planNumber;
 webservice Date planningPeriod;
 webservice Id planId;
 }
 webservice static Plan createAccountPlan(Plan vPlan) {
 //A plan maps to the Account object in salesforce.com. 
 //So need to map the Plan class object to Account standard object
 Account acct = new Account();
 acct.Name = vPlan.name;
 acct.AccountNumber = String.valueOf(vPlan.planNumber);
 insert acct;
 vPlan.planId=acct.Id;
 return vPlan;
 } }

I. Programmatic Features – Salesforce Interview Questions

32. What is the difference between a standard controller and a custom controller?

A standard controller in Apex inherits all the standard object properties and standard button functionality directly. It contains the same functionality and logic that is used for standard Salesforce pages.

Custom controller is an Apex class that implements all of the logic for a page without leveraging a standard controller. Custom Controllers are associated with Visualforce pages through the controller attribute.

33. How can we implement pagination in Visualforce?

To control the number of records displayed on each page, we use pagination. By default, a list controller returns 20 records on the page. To customize it, we can use a controller extension to set the pageSize. Take a look at the sample code below:-

<apex:page standardController="Account" recordSetvar="accounts">
 <apex:pageBlock title="Viewing Accounts">
 <apex:form id="theForm">
 <apex:pageBlockSection >
 <apex:dataList var="a" value="{!accounts}" type="1">
 {!a.name}
 </apex:dataList>
 </apex:pageBlockSection>
 <apex:panelGrid columns="2">
 <apex:commandLink action="{!previous}">Previous</apex:commandlink>
 <apex:commandLink action="{!next}">Next</apex:commandlink>
 </apex:panelGrid>
 </apex:form>
 </apex:pageBlock>
</apex:page>

34. How can you call a controller method from JavaScript?

To call a controller method (an Apex function) from JavaScript, you need to use the apex:actionFunction component.

Look at the below piece of code to understand how a controller method is called using apex:actionFunction (a minimal sketch; the controller and method names are illustrative):

<apex:page controller="MyController">
 <apex:form >
 <!-- exposes the Apex method as a JavaScript function named callfromJS -->
 <apex:actionFunction name="callfromJS" action="{!myControllerMethod}"/>
 </apex:form>
 <script>
 function JSmethodCallFromAnyAction()
 {
 callfromJS();
 }
 </script>
</apex:page>

35. How to get the UserID of all the currently logged in users using Apex code?

You can get the ID of the currently logged-in user by using the global function UserInfo.getUserId().
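For example, in anonymous Apex:

Id currentUserId = UserInfo.getUserId(); // ID of the user executing this code
System.debug(currentUserId);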

36. How many records can a select query return? How many records can a SOSL query return?

The Governor Limits enforces the following:-

Maximum number of records that can be retrieved by SOQL command: 50,000.

Maximum number of records that can be retrieved by SOSL command: 2,000.

37. What is an attribute tag? What is the syntax for including them?

An attribute tag is a definition of an attribute of a custom component and it can only be a child of a component tag.

Note that you cannot define attributes with names like id or rendered. These attributes are automatically created for all custom component definitions. The below piece of code shows the syntax for including them:

<apex:component>
 <apex:attribute name="myValue" description="This is the value for the component." type="String" required="true"/>
 <apex:attribute name="borderColor" description="This is color for the border." type="String" required="true"/>
 <h1 style="border:{!borderColor}">
 <apex:outputText value="{!myValue}"/>
 </h1>
</apex:component>

38. What are the three types of bindings used in Visualforce? What does each refer to?

There are three types of bindings used in Salesforce:-

  • Data bindings, which refer to the data set in the controller
  • Action bindings, which refer to action methods in the controller
  • Component bindings, which refer to other Visualforce components.

Data bindings and Action bindings are the most common and they will be used in every Visualforce page.

39. What are the different types of collections in Apex? What are maps in Apex?

Collections are a type of variable which can be used to store multiple records (data).

It is useful because Governor Limits restrict the number of records you can retrieve per transaction. Hence, collections can be used to store multiple records in a single variable defined as type collection and by retrieving data in the form of collections, Governor Limits will be in check. Collections are similar to how arrays work.

There are 3 collection types in Salesforce:

  • Lists
  • Maps
  • Sets

Maps are used to store data in the form of key-value pairs, where each unique key maps to a single value.
Syntax: Map<String, String> country_city = new Map<String, String>();
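A short illustrative usage of that map (the keys and values are made up):

Map<String, String> country_city = new Map<String, String>();
country_city.put('India', 'New Delhi');  // insert a key-value pair
System.debug(country_city.get('India')); // look up by key: prints New Delhi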

40. How can you embed a Visualflow in a Visualforce page?

  1. Find the flow’s unique name.

    1. From Setup, enter Flows in the Quick Find box, then select Flows.
    2. Click the name of the flow.
    3. Copy the unique name of the flow.
  2. From Setup, enter Visualforce Pages in the Quick Find box, then select Visualforce Pages.

  3. Define a new Visualforce page, or open an existing one.

  4. Add the <flow:interview> component somewhere between the <apex:page> tags.

  5. Set the name attribute to the unique name of the flow.

For example:

 <apex:page>
 <flow:interview name="flowuniquename"/>
 </apex:page>
  6. Click Save.

  7. Restrict which users can access the Visualforce page.

    1. Click Visualforce Pages.
    2. Click Security next to your Visualforce page.
    3. Move all the appropriate profiles from Available Profiles to Enabled Profiles by using the ‘add’ and ‘remove’ buttons.
    4. Click Save.
  8. Add the Visualforce page to your Force.com app by using a custom button, link, or Visualforce tab.

41. What is the use of “@future” annotation?

Future annotations are used to identify and execute methods asynchronously. If the method is annotated with “@future”, then it will be executed only when Salesforce has the available resources.

For example, you can use it while making an asynchronous web service callout to an external service. Without the annotation, the web service callout is made from the same thread that is executing the Apex code, and no additional processing occurs until that callout is complete (synchronous processing).
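A minimal sketch of such an asynchronous callout (the class, method and endpoint parameter are illustrative):

public class CalloutHelper {
 @future(callout=true) // callout=true permits web service callouts from the future method
 public static void makeCallout(String endpoint) {
 HttpRequest req = new HttpRequest();
 req.setEndpoint(endpoint);
 req.setMethod('GET');
 HttpResponse res = new Http().send(req); // runs asynchronously, off the original thread
 System.debug(res.getStatusCode());
 }
}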

42. What are the different methods of batch Apex class?

Database.Batchable interface contains three methods that must be implemented:

  1. Start method:
    global (Database.QueryLocator | Iterable<sObject>) start(Database.BatchableContext bc) {}
  2. Execute method:
    global void execute(Database.BatchableContext BC, list<P>){}
  3. Finish method:
    global void finish(Database.BatchableContext BC){}
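Putting the three together, a minimal batch class might look like this (the sObject and query are illustrative):

global class AccountBatch implements Database.Batchable<sObject> {
 global Database.QueryLocator start(Database.BatchableContext bc) {
 return Database.getQueryLocator('SELECT Id, Name FROM Account'); // defines the scope of records
 }
 global void execute(Database.BatchableContext bc, List<sObject> scope) {
 // process one chunk of records per invocation
 }
 global void finish(Database.BatchableContext bc) {
 // one-time post-processing, e.g. sending a notification email
 }
}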

43. What is a Visualforce component?

A Visualforce Component is either a predefined component (standard from component library) or a custom component that determines the user interface behavior. For example, if you want to send the text captured from the Visualforce page to an object in Salesforce, then you need to make use of Visualforce components. Example: <apex:detail>

44. What is Trigger.new?

Trigger.new is a command which returns the list of records that have been added recently to the sObjects. To be more precise, it returns those records which are yet to be saved to the database. Note that this sObject list is only available in insert and update triggers, and the records can only be modified in before triggers.

But just for your information, Trigger.old returns a list of the old versions of the sObject records. Note that this sObject list is only available in update and delete triggers.
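A minimal before-insert trigger sketch using Trigger.new (the trigger name and field value are illustrative):

trigger AccountTrigger on Account (before insert) {
 for (Account acc : Trigger.new) { // records about to be inserted, not yet in the database
 acc.Description = 'Stamped by trigger'; // before triggers may modify Trigger.new records
 }
}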

45. What all data types can a set store?

Sets can have any of the following data types:

  • Primitive types
  • Collections
  • sObjects
  • User-defined types
  • Built-in Apex types

46. What is an sObject type?

An sObject is any object that can be stored in the Force.com platform database. Apex allows the use of generic sObject abstract type to represent any object.

For example, Vehicle is a generic type and Car, Motor Bike all are concrete types of Vehicle.
In SFDC, sObject is generic and Account, Opportunity, CustomObject__c are its concrete type.
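For example (illustrative values):

sObject s = new Account(Name = 'Acme'); // a concrete Account held as a generic sObject
Account acc = (Account) s;              // cast back to the concrete type when needed
System.debug(acc.Name);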

47. What is the difference between SOQL and SOSL?

The differences are mentioned in the table below:

SOQL vs SOSL

  • SOQL (Salesforce Object Query Language) can search only one object at a time; SOSL (Salesforce Object Search Language) can search many objects at a time.
  • SOQL can query any type of field; SOSL can query only email, text or phone fields.
  • SOQL can be used in classes and triggers; SOSL can be used in classes, but not in triggers.
  • DML operations can be performed on SOQL query results, but not on SOSL search results.
  • SOQL returns records; SOSL returns fields.

48. What is an Apex transaction?

An Apex transaction represents a set of operations that are executed as a single unit. These operations include DML operations, which insert, update or delete records. All the DML operations in a transaction either complete successfully, or, if an error occurs even while saving a single record, the entire transaction is rolled back.

49. What is the difference between public and global class in Apex?

A global class is accessible across the Salesforce instance, irrespective of namespaces, whereas public classes are accessible only in the corresponding namespace.

50. What are getter methods and setter methods?

The get (getter) method is used to pass values from the controller to the VF page, whereas the set (setter) method is used to set the value back into the controller variable.

I hope this set of Salesforce interview questions will help you ace your job interview. As the next step for your career, check out the various certifications offered by Salesforce here: Salesforce Certifications. It will also help you to understand the job roles and chalk out a career path for yourself. 

Also, check out this video on the top 50 frequently asked Salesforce interview questions, which was delivered by an industry expert. He has shared his opinion on Salesforce job interviews and industry demand. Do take a look at it and let us know if this helped in your interview preparation.

Salesforce Interview Questions And Answers | Salesforce Tutorial | Edureka

This Salesforce Interview questions and answers tutorial will help you to prepare for Salesforce interviews. Learn about the most important Salesforce interview questions and answers and know what will set you apart in the interview process.
Check out our Salesforce Certification Training, which comes with instructor-led live training and real life project experience. Feel free to leave any questions you have in the comment box below.

Got a question for us? Please mention it in the comments section and we will get back to you.



Top 50 Hadoop Interview Questions You Must Prepare In 2019


Top 50 Hadoop Interview Questions for 2019

In this Hadoop interview questions blog, we will be covering all the frequently asked questions that will help you ace the interview with their best solutions. But before that, let me tell you how the demand is continuously increasing for Big Data and Hadoop experts. You can check out this skill report which talks about the top technical skills to master in 2019.

Following are a few stats that reflect the growth in the demand for Big Data & Hadoop certification quite accurately:

  • Big Data will drive $48.6 billion in annual spending by 2019- IDC.
  • Average salary of a Big Data Hadoop developer in the US is $135k- Indeed.com
  • Average annual salary in the United Kingdom is £66,250 – £66,750- itjobswatch.co.uk

I would like to draw your attention towards the Big Data revolution. Earlier, organizations were only concerned about operational data, which was less than 20% of the whole data. Later, they realized that analyzing the whole data would give them better business insights and decision-making capability. That was the time when big giants like Yahoo, Facebook, Google, etc. started adopting Hadoop and Big Data related technologies. In fact, nowadays one in every five companies is moving to Big Data analytics. Hence, the demand for jobs in Big Data and Hadoop is rising like anything. Therefore, if you want to boost your career, Hadoop and Spark are just the technologies you need. They would always give you a good start, either as a fresher or experienced.

Prepare with these top Hadoop interview questions to get an edge in the burgeoning Big Data market, where global and local enterprises, big or small, are looking for quality Big Data and Hadoop experts. This definitive list of top Hadoop interview questions will take you through questions and answers around Hadoop Cluster, HDFS, MapReduce, Pig, Hive and HBase. This blog is the gateway to your next Hadoop job.

In case you have come across a few difficult questions in a Hadoop interview and are still confused about the best answer, kindly put those questions in the comment section below. We will be happy to answer them.

Hadoop Interview Questions and Answers | Edureka

 

In the meantime, you can maximize the Big Data Analytics career opportunities that are sure to come your way by taking Hadoop online training with Edureka.  Click below to know more.

1. What are the basic differences between relational database and HDFS?

Here are the key differences between HDFS and relational database:

RDBMS vs. Hadoop

  • Data Types – RDBMS relies on structured data, and the schema of the data is always known. Hadoop can store any kind of data, be it structured, unstructured or semi-structured.
  • Processing – RDBMS provides limited or no processing capabilities. Hadoop allows us to process the data distributed across the cluster in a parallel fashion.
  • Schema on Read vs. Write – RDBMS is based on 'schema on write', where schema validation is done before loading the data. On the contrary, Hadoop follows a 'schema on read' policy.
  • Read/Write Speed – In RDBMS, reads are fast because the schema of the data is already known. In Hadoop, writes are fast because no schema validation happens during an HDFS write.
  • Cost – RDBMS is licensed software, so one has to pay for it. Hadoop is an open-source framework, so there is no need to pay for the software.
  • Best Fit Use Case – RDBMS is used for OLTP (Online Transactional Processing) systems. Hadoop is used for data discovery, data analytics or OLAP systems.

 

2. Explain “Big Data”. What are the five V’s of Big Data?

“Big data” is the term for a collection of data sets so large and complex that they become difficult to process using relational database management tools or traditional data processing applications. Big Data is difficult to capture, curate, store, search, share, transfer, analyze and visualize. It has emerged as an opportunity for companies: they can now successfully derive value from their data, and enhanced decision-making capabilities will give them a distinct advantage over their competitors.

♣ Tip: It will be a good idea to talk about the 5Vs in such questions, whether it is asked specifically or not!

  • Volume: The volume represents the amount of data, which is growing at an exponential rate, i.e. into Petabytes and Exabytes.
  • Velocity: Velocity refers to the rate at which data is growing, which is very fast. Data that was new yesterday is considered old today. Nowadays, social media is a major contributor to the velocity of growing data.
  • Variety: Variety refers to the heterogeneity of data types. In other words, the data that is gathered comes in a variety of formats like videos, audios, csv, etc. These various formats represent the variety of data.
  • Veracity: Veracity refers to data in doubt, or uncertainty of the available data due to data inconsistency and incompleteness. Available data can sometimes get messy and may be difficult to trust. With many forms of big data, quality and accuracy are difficult to control. Volume is often the reason behind the lack of quality and accuracy in the data.
  • Value: It is all well and good to have access to big data, but unless we can turn it into value, it is useless. By turning it into value, I mean: Is it adding to the benefits of the organization? Is the organization working on Big Data achieving a high ROI (Return On Investment)? Unless it adds to their profits, working on Big Data is useless.

As we know, Big Data is growing at an accelerating rate, so the factors associated with it are also evolving. To go through them and understand them in detail, I recommend you go through our Big Data Tutorial blog.

3. What are Hadoop and its components?

When “Big Data” emerged as a problem, Apache Hadoop evolved as a solution to it. Apache Hadoop is a framework which provides us various services or tools to store and process Big Data. It helps in analyzing Big Data and making business decisions out of it, which can’t be done efficiently and effectively using traditional systems.

♣ Tip: Now, while explaining Hadoop, you should also explain the main components of Hadoop, i.e.:

  • Storage unit– HDFS (NameNode, DataNode)
  • Processing framework– YARN (ResourceManager, NodeManager)

4. What are HDFS and YARN?

HDFS (Hadoop Distributed File System) is the storage unit of Hadoop. It is responsible for storing different kinds of data as blocks in a distributed environment. It follows master and slave topology.

♣ Tip: It is recommended to explain the HDFS components too i.e.

  • NameNode: NameNode is the master node in the distributed environment and it maintains the metadata information for the blocks of data stored in HDFS like block location, replication factors etc.
  • DataNode: DataNodes are the slave nodes, which are responsible for storing data in the HDFS. NameNode manages all the DataNodes.

YARN (Yet Another Resource Negotiator) is the processing framework in Hadoop, which manages resources and provides an execution environment to the processes.

♣ Tip: Similarly, as we did in HDFS, we should also explain the two components of YARN:   

  • ResourceManager: It receives the processing requests, and then passes the parts of requests to corresponding NodeManagers accordingly, where the actual processing takes place. It allocates resources to applications based on the needs.
  • NodeManager: NodeManager is installed on every DataNode and it is responsible for the execution of the task on every single DataNode.

If you want to learn in detail about HDFS & YARN go through Hadoop Tutorial blog.

5. Tell me about the various Hadoop daemons and their roles in a Hadoop cluster.

Generally, approach this question by first explaining the HDFS daemons, i.e. NameNode, DataNode and Secondary NameNode, then moving on to the YARN daemons, i.e. ResourceManager and NodeManager, and lastly explaining the JobHistoryServer.

  • NameNode: It is the master node which is responsible for storing the metadata of all the files and directories. It has information about blocks, that make a file, and where those blocks are located in the cluster.
  • Datanode: It is the slave node that contains the actual data.
  • Secondary NameNode: It periodically merges the changes (edit log) with the FsImage (Filesystem Image), present in the NameNode. It stores the modified FsImage into persistent storage, which can be used in case of failure of NameNode.
  • ResourceManager: It is the central authority that manages resources and schedules applications running on top of YARN.
  • NodeManager: It runs on slave machines, and is responsible for launching the application’s containers (where applications execute their part), monitoring their resource usage (CPU, memory, disk, network) and reporting these to the ResourceManager.
  • JobHistoryServer: It maintains information about MapReduce jobs after the Application Master terminates.

Hadoop HDFS Interview Questions

6. Compare HDFS with Network Attached Storage (NAS).

In this question, first explain NAS and HDFS, and then compare their features as follows:

  • Network-attached storage (NAS) is a file-level computer data storage server connected to a computer network providing data access to a heterogeneous group of clients. NAS can either be a hardware or software which provides services for storing and accessing files. Whereas Hadoop Distributed File System (HDFS) is a distributed filesystem to store data using commodity hardware.
  • In HDFS Data Blocks are distributed across all the machines in a cluster. Whereas in NAS data is stored on a dedicated hardware.
  • HDFS is designed to work with MapReduce paradigm, where computation is moved to the data. NAS is not suitable for MapReduce since data is stored separately from the computations.
  • HDFS uses commodity hardware, which is cost-effective, whereas NAS uses high-end storage devices, which are expensive.

7. List the difference between Hadoop 1 and Hadoop 2.

This is an important question and while answering this question, we have to mainly focus on two points i.e. Passive NameNode and YARN architecture.

  • In Hadoop 1.x, “NameNode” is the single point of failure. In Hadoop 2.x, we have Active and Passive “NameNodes”. If the active “NameNode” fails, the passive “NameNode” takes charge. Because of this, high availability can be achieved in Hadoop 2.x.
  • Also, in Hadoop 2.x, YARN provides a central resource manager. With YARN, you can now run multiple applications in Hadoop, all sharing a common resource. MRV2 is a particular type of distributed application that runs the MapReduce framework on top of YARN. Other tools can also perform data processing via YARN, which was a problem in Hadoop 1.x.

Hadoop 1.x vs. Hadoop 2.x

  • NameNode: In Hadoop 1.x, the NameNode is a Single Point of Failure; in Hadoop 2.x, there are Active & Passive NameNodes.
  • Processing: Hadoop 1.x uses MRV1 (Job Tracker & Task Tracker); Hadoop 2.x uses MRV2/YARN (ResourceManager & NodeManager).

8. What are active and passive “NameNodes”? 

In HA (High Availability) architecture, we have two NameNodes – Active “NameNode” and Passive “NameNode”.

  • Active “NameNode” is the “NameNode” which works and runs in the cluster.
  • Passive “NameNode” is a standby “NameNode”, which has similar data as active “NameNode”.

When the active “NameNode” fails, the passive “NameNode” replaces the active “NameNode” in the cluster. Hence, the cluster is never without a “NameNode” and so it never fails.

9. Why does one remove or add nodes in a Hadoop cluster frequently?

One of the most attractive features of the Hadoop framework is its utilization of commodity hardware. However, this leads to frequent “DataNode” crashes in a Hadoop cluster. Another striking feature of the Hadoop framework is the ease of scaling in accordance with the rapid growth in data volume. Because of these two reasons, one of the most common tasks of a Hadoop administrator is to commission (add) and decommission (remove) “DataNodes” in a Hadoop cluster.

Read this blog to get a detailed understanding on commissioning and decommissioning nodes in a Hadoop cluster.

10. What happens when two clients try to access the same file in the HDFS?

HDFS supports exclusive writes only.

When the first client contacts the “NameNode” to open the file for writing, the “NameNode” grants a lease to the client to create this file. When the second client tries to open the same file for writing, the “NameNode” will notice that the lease for the file is already granted to another client, and will reject the open request for the second client.

11. How does NameNode tackle DataNode failures?

The NameNode periodically receives a Heartbeat (signal) from each DataNode in the cluster, which implies that the DataNode is functioning properly.

A block report contains a list of all the blocks on a DataNode. If a DataNode fails to send a heartbeat message, after a specific period of time it is marked dead.

The NameNode replicates the blocks of dead node to another DataNode using the replicas created earlier.

12. What will you do when NameNode is down?

The NameNode recovery process involves the following steps to make the Hadoop cluster up and running:

  1. Use the file system metadata replica (FsImage) to start a new NameNode.
  2. Then, configure the DataNodes and clients so that they can acknowledge this new NameNode.
  3. Now the new NameNode will start serving the clients once it has completed loading the last checkpoint FsImage (for metadata information) and received enough block reports from the DataNodes.

However, on large Hadoop clusters this NameNode recovery process may consume a lot of time, and it becomes an even greater challenge in the case of routine maintenance. Therefore, we have the HDFS High Availability Architecture, which is covered in the HA architecture blog.

13. What is a checkpoint?

In brief, “Checkpointing” is a process that takes an FsImage and edit log and compacts them into a new FsImage. Thus, instead of replaying an edit log, the NameNode can load the final in-memory state directly from the FsImage. This is a far more efficient operation and reduces NameNode startup time. Checkpointing is performed by the Secondary NameNode.

14. How is HDFS fault tolerant? 

When data is stored over HDFS, the NameNode replicates the data to several DataNodes. The default replication factor is 3. You can change the replication factor as per your need. If a DataNode goes down, the NameNode will automatically copy the data to another node from the replicas and make the data available. This provides fault tolerance in HDFS.

15. Can NameNode and DataNode be commodity hardware?

The smart answer to this question would be: DataNodes are commodity hardware like personal computers and laptops, as they store data and are required in large numbers. But from your experience, you can tell that the NameNode is the master node, and it stores metadata about all the blocks stored in HDFS. It requires high memory (RAM) space, so the NameNode needs to be a high-end machine with good memory space.

16. Why do we use HDFS for applications having large data sets and not when there are a lot of small files? 

HDFS is more suitable for large amounts of data in a single file than for small amounts of data spread across multiple files. As you know, the NameNode stores the metadata about the file system in RAM. Therefore, the amount of RAM places a limit on the number of files in my HDFS file system. In other words, too many files will lead to the generation of too much metadata, and storing this metadata in RAM will become a challenge. As a rule of thumb, metadata for a file, block or directory takes 150 bytes.

17. How do you define “block” in HDFS? What is the default block size in Hadoop 1 and in Hadoop 2? Can it be changed? 

Blocks are nothing but the smallest continuous locations on your hard drive where data is stored. HDFS stores each file as blocks and distributes them across the Hadoop cluster. Files in HDFS are broken down into block-sized chunks, which are stored as independent units.

  • Hadoop 1 default block size: 64 MB
  • Hadoop 2 default block size:  128 MB

Yes, blocks can be configured. The dfs.block.size parameter can be used in the hdfs-site.xml file to set the size of a block in a Hadoop environment.
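For illustration, a minimal hdfs-site.xml entry setting a 128 MB block size might look like the snippet below (dfs.blocksize is the Hadoop 2.x property name; the older dfs.block.size spelling is deprecated but still accepted):

<property>
    <!-- block size in bytes: 128 MB = 134217728 -->
    <name>dfs.blocksize</name>
    <value>134217728</value>
</property>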

18. What does ‘jps’ command do? 

The ‘jps’ command helps us check whether the Hadoop daemons are running or not. It shows all the Hadoop daemons, i.e. namenode, datanode, resourcemanager, nodemanager, etc., that are running on the machine.

19. How do you define “Rack Awareness” in Hadoop?

Rack Awareness is the algorithm in which the “NameNode” decides how blocks and their replicas are placed, based on rack definitions to minimize network traffic between “DataNodes” within the same rack. Let’s say we consider replication factor 3 (default), the policy is that “for every block of data, two copies will exist in one rack, third copy in a different rack”. This rule is known as the “Replica Placement Policy”.

To know rack awareness in more detail, refer to the HDFS architecture blog. 

20. What is “speculative execution” in Hadoop? 

If a node appears to be executing a task slower, the master node can redundantly execute another instance of the same task on another node. Then, the task which finishes first will be accepted and the other one is killed. This process is called “speculative execution”.

21. How can I restart “NameNode” or all the daemons in Hadoop? 

This question can have two answers; we will discuss both. We can restart the NameNode by the following methods:

  1. You can stop the NameNode individually using the ./sbin/hadoop-daemon.sh stop namenode command and then start it again using the ./sbin/hadoop-daemon.sh start namenode command.
  2. To stop and start all the daemons, use ./sbin/stop-all.sh and then ./sbin/start-all.sh, which will stop all the daemons first and then start all of them.

These script files reside in the sbin directory inside the Hadoop directory.

22. What is the difference between an “HDFS Block” and an “Input Split”?

The “HDFS Block” is the physical division of the data, while the “Input Split” is the logical division of the data. HDFS divides data into blocks for storage, whereas for processing, MapReduce divides the data into input splits and assigns them to mapper functions.

23. Name the three modes in which Hadoop can run.

The three modes in which Hadoop can run are as follows:

  1. Standalone (local) mode: This is the default mode if we don’t configure anything. In this mode, all the components of Hadoop, such as NameNode, DataNode, ResourceManager and NodeManager, run as a single Java process. This mode uses the local filesystem.
  2. Pseudo-distributed mode: A single-node Hadoop deployment is considered to be running in pseudo-distributed mode. In this mode, all the Hadoop services, including both the master and the slave services, are executed on a single compute node.
  3. Fully distributed mode: A Hadoop deployment in which the Hadoop master and slave services run on separate nodes is said to be in fully distributed mode.

Hadoop MapReduce Interview Questions

24. What is “MapReduce”? What is the syntax to run a “MapReduce” program? 

It is a framework/programming model that is used for processing large data sets over a cluster of computers using parallel programming. The basic syntax to run a MapReduce program is hadoop jar <jar_file> <class_name> /input_path /output_path.
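To make this concrete, below is a minimal sketch of the classic WordCount job written against the Hadoop 2.x Java API (the class name and paths are illustrative, not part of the question):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: emits (word, 1) for every word in its input split.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer: sums the counts emitted for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // local aggregation (see the “Combiner” question below)
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // /input_path
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // /output_path
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

This would be packaged into a JAR and launched with the syntax above.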

If you have any doubt in MapReduce or want to revise your concepts you can refer this MapReduce tutorial.

25. What are the main configuration parameters in a “MapReduce” program?

The main configuration parameters which users need to specify in “MapReduce” framework are:

  • Job’s input locations in the distributed file system
  • Job’s output location in the distributed file system
  • Input format of data
  • Output format of data
  • Class containing the map function
  • Class containing the reduce function
  • JAR file containing the mapper, reducer and driver classes

26. State the reason why we can’t perform “aggregation” (addition) in the mapper. Why do we need the “reducer” for this?

This answer includes many points, so we will go through them sequentially.

  • We cannot perform “aggregation” (addition) in the mapper because sorting does not occur in the “mapper” function. Sorting occurs only on the reducer side, and without sorting, aggregation cannot be done.
  • During “aggregation”, we need the output of all the mapper functions, which may not be possible to collect in the map phase, as mappers may be running on different machines where the data blocks are stored.
  • And lastly, if we try to aggregate data at the mapper, it requires communication between all the mapper functions, which may be running on different machines. So, it will consume high network bandwidth and can cause network bottlenecks.

27. What is the purpose of “RecordReader” in Hadoop?

The “InputSplit” defines a slice of work, but does not describe how to access it. The “RecordReader” class loads the data from its source and converts it into (key, value) pairs suitable for reading by the “Mapper” task. The “RecordReader” instance is defined by the “Input Format”.

28. Explain “Distributed Cache” in a “MapReduce Framework”.

Distributed Cache is a facility provided by the MapReduce framework to cache files needed by applications. Once you have cached a file for your job, the Hadoop framework will make it available on each and every DataNode where your map/reduce tasks are running. You can then access the cached file as a local file in your Mapper or Reducer job.

29. How do “reducers” communicate with each other? 

This is a tricky question. The “MapReduce” programming model does not allow “reducers” to communicate with each other. “Reducers” run in isolation.

30. What does a “MapReduce Partitioner” do?

A “MapReduce Partitioner” makes sure that all the values of a single key go to the same “reducer”, thus allowing even distribution of the map output over the “reducers”. It redirects the “mapper” output to the “reducer” by determining which “reducer” is responsible for the particular key.

31. How will you write a custom partitioner?

A custom partitioner for a Hadoop job can be written easily by following the below steps (a short sketch follows the list):

  • Create a new class that extends the Partitioner class.
  • Override the getPartition method, which the MapReduce framework calls for every map output record.
  • Add the custom partitioner to the job using the Job#setPartitionerClass() method, or add it to the job as a config file.
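A minimal sketch, assuming the map output key is a Text country code and we want all “US” records routed to one reducer (the class name and key values are hypothetical):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class CountryPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        if (numReduceTasks == 1) {
            return 0; // only one reducer, nothing to route
        }
        if (key.toString().equals("US")) {
            return 0; // every "US" record goes to reducer 0
        }
        // spread all other keys evenly over the remaining reducers
        return 1 + (key.hashCode() & Integer.MAX_VALUE) % (numReduceTasks - 1);
    }
}

It is then registered in the driver with job.setPartitionerClass(CountryPartitioner.class);.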

32. What is a “Combiner”? 

A “Combiner” is a mini “reducer” that performs the local “reduce” task. It receives the input from the “mapper” on a particular “node” and sends the output to the “reducer”. “Combiners” help in enhancing the efficiency of “MapReduce” by reducing the quantum of data that is required to be sent to the “reducers”.

33. What do you know about “SequenceFileInputFormat”?

“SequenceFileInputFormat” is an input format for reading sequence files. A sequence file is a specific compressed binary file format which is optimized for passing data between the output of one “MapReduce” job and the input of some other “MapReduce” job.

Sequence files can be generated as the output of other MapReduce tasks and are an efficient intermediate representation for data that is passing from one MapReduce job to another.

Apache Pig Interview Questions

34. What are the benefits of Apache Pig over MapReduce? 

Apache Pig is a platform, developed by Yahoo, that is used to analyze large data sets by representing them as data flows. It is designed to provide an abstraction over MapReduce, reducing the complexity of writing a MapReduce program.

  • Pig Latin is a high-level data flow language, whereas MapReduce is a low-level data processing paradigm.
  • Without writing complex Java implementations in MapReduce, programmers can achieve the same implementations very easily using Pig Latin.
  • Apache Pig reduces the length of the code by approx 20 times (according to Yahoo). Hence, this reduces the development period by almost 16 times.
  • Pig provides many built-in operators to support data operations like joins, filters, ordering, sorting etc. Whereas to perform the same function in MapReduce is a humongous task.
  • Performing a Join operation in Apache Pig is simple. Whereas it is difficult in MapReduce to perform a Join operation between the data sets, as it requires multiple MapReduce tasks to be executed sequentially to fulfill the job.
  • In addition, Pig also provides nested data types like tuples, bags and maps, which are missing from MapReduce.

35. What are the different data types in Pig Latin?

Pig Latin can handle both atomic data types like int, float, long, double etc. and complex data types like tuple, bag and map.

Atomic data types: Atomic or scalar data types are the basic data types which are used in all the languages like string, int, float, long, double, char[], byte[].

Complex Data Types: Complex data types are Tuple, Map and Bag.

To know more about these data types, you can go through our Pig tutorial blog.

36. What are the different relational operations in “Pig Latin” you worked with?

The different relational operators are listed below (a short Pig Latin sketch using a few of them follows the list):

  1. FOREACH
  2. ORDER BY
  3. FILTER
  4. GROUP
  5. DISTINCT
  6. JOIN
  7. LIMIT
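For illustration, here is a minimal Pig Latin sketch using FILTER, GROUP and FOREACH (the file path and field names are hypothetical):

-- load a comma-separated transactions file
txns = LOAD '/data/txns.csv' USING PigStorage(',')
       AS (id:int, category:chararray, amount:double);
-- keep only the large transactions
big = FILTER txns BY amount > 100.0;
-- group by category and count each group
byCat = GROUP big BY category;
counts = FOREACH byCat GENERATE group AS category, COUNT(big) AS n;
DUMP counts;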

37. What is a UDF?

If some functions are unavailable in the built-in operators, we can programmatically create User Defined Functions (UDFs) to bring in those functionalities, using other languages like Java, Python, Ruby, etc., and embed them in the script file.

Apache Hive Interview Questions

38. What is “SerDe” in “Hive”?

Apache Hive is a data warehouse system, developed by Facebook, that is built on top of Hadoop and used for analyzing structured and semi-structured data. Hive abstracts the complexity of Hadoop MapReduce.

The “SerDe” interface allows you to instruct “Hive” about how a record should be processed. A “SerDe” is a combination of a “Serializer” and a “Deserializer”. “Hive” uses “SerDe” (and “FileFormat”) to read and write the table’s rows.

To know more about Apache Hive, you can go through this Hive tutorial blog.

39. Can the default “Hive Metastore” be used by multiple users (processes) at the same time?

“Derby database” is the default “Hive Metastore”. Multiple users (processes) cannot access it at the same time. It is mainly used to perform unit tests.

40. What is the default location where “Hive” stores table data?

The default location where Hive stores table data is inside HDFS in /user/hive/warehouse.

Apache HBase Interview Questions

41. What is Apache HBase?

HBase is an open source, multidimensional, distributed, scalable and a NoSQL database written in Java. HBase runs on top of HDFS (Hadoop Distributed File System) and provides BigTable (Google) like capabilities to Hadoop. It is designed to provide a fault-tolerant way of storing the large collection of sparse data sets. HBase achieves high throughput and low latency by providing faster Read/Write Access on huge datasets.

To know more about HBase you can go through our HBase tutorial blog.

42. What are the components of Apache HBase?

HBase has three major components, i.e. HMaster Server, HBase RegionServer and Zookeeper.

  • Region Server: A table can be divided into several regions. A group of regions is served to the clients by a Region Server.
  • HMaster: It coordinates and manages the Region Servers (similar to how the NameNode manages DataNodes in HDFS).
  • ZooKeeper: Zookeeper acts as a coordinator inside the HBase distributed environment. It helps in maintaining server state inside the cluster by communicating through sessions.

To know more, you can go through this HBase architecture blog.

43. What are the components of Region Server?

The components of a Region Server are:

  • WAL: Write Ahead Log (WAL) is a file attached to every Region Server inside the distributed environment. The WAL stores the new data that hasn’t been persisted or committed to the permanent storage.
  • Block Cache: The Block Cache resides at the top of the Region Server. It stores the frequently read data in memory.
  • MemStore: It is the write cache. It stores all the incoming data before committing it to the disk or permanent memory. There is one MemStore for each column family in a region.
  • HFile: HFile is stored in HDFS. It stores the actual cells on the disk.

44. Explain “WAL” in HBase?

Write Ahead Log (WAL) is a file attached to every Region Server inside the distributed environment. The WAL stores the new data that hasn’t been persisted or committed to the permanent storage. It is used in case of failure to recover the data sets.

45. Mention the differences between “HBase” and “Relational Databases”?

HBase is an open source, multidimensional, distributed, scalable and a NoSQL database written in Java. HBase runs on top of HDFS and provides BigTable like capabilities to Hadoop. Let us see the differences between HBase and relational database.

HBase vs. Relational Database

  • HBase is schema-less; a relational database is schema-based.
  • HBase is a column-oriented data store; a relational database is a row-oriented data store.
  • HBase is used to store de-normalized data; a relational database is used to store normalized data.
  • HBase contains sparsely populated tables; a relational database contains thin tables.
  • Automated partitioning is done in HBase; a relational database has no such provision or built-in support for partitioning.

Apache Spark Interview Questions

46. What is Apache Spark?

The answer to this question is, Apache Spark is a framework for real-time data analytics in a distributed computing environment. It executes in-memory computations to increase the speed of data processing.

It is 100x faster than MapReduce for large-scale data processing by exploiting in-memory computations and other optimizations.

47. Can you build “Spark” with any particular Hadoop version?

Yes, one can build “Spark” for a specific Hadoop version. Check out this blog to learn more about building YARN and HIVE on Spark. 

48. Define RDD.

RDD is the acronym for Resilient Distributed Datasets – a fault-tolerant collection of operational elements that run in parallel. The partitioned data in an RDD is immutable and distributed, which is a key feature of Apache Spark.

Oozie & ZooKeeper Interview Questions

49. What is Apache ZooKeeper and Apache Oozie?

Apache ZooKeeper coordinates various services in a distributed environment. It saves a lot of time by performing synchronization, configuration maintenance, grouping and naming.

Apache Oozie is a scheduler which schedules Hadoop jobs and binds them together as one logical work. There are two kinds of Oozie jobs:

  • Oozie Workflow: These are the sequential set of actions to be executed. You can assume it as a relay race. Where each athlete waits for the last one to complete his part.
  • Oozie Coordinator: These are the Oozie jobs which are triggered when the data is made available to it. Think of this as the response-stimuli system in our body. In the same manner, as we respond to an external stimulus, an Oozie coordinator responds to the availability of data and it rests otherwise.

50. How do you configure an “Oozie” job in Hadoop?

“Oozie” is integrated with the rest of the Hadoop stack supporting several types of Hadoop jobs such as “Java MapReduce”, “Streaming MapReduce”, “Pig”, “Hive” and “Sqoop”.

To understand “Oozie” in detail and learn how to configure an “Oozie” job, do check out this introduction to Apache Oozie blog.

Feeling overwhelmed with all the questions the interviewer might ask in your Hadoop interview? Now it is time to go through a series of Hadoop interview questions which covers different aspects of the Hadoop framework. It’s never too late to strengthen your basics. Learn Hadoop from industry experts while working with real-life use cases.

Got a question for us? Please mention it in the comments section and we will get back to you.

Top Hive Commands with Examples in HQL


In this blog post, let’s discuss top Hive commands with examples. These Hive commands are very important to set up the foundation for Hive Certification Training.


What is Hive?

Apache Hive is a data warehouse system built to work on Hadoop. It is used for querying and managing large datasets residing in distributed storage. Hive originated at Facebook before becoming an open-source project of Apache Hadoop. It provides a mechanism to project structure onto the data in Hadoop and to query that data using a SQL-like language called HiveQL (HQL).

Hive is used because the tables in Hive are similar to tables in a relational database. If you are familiar with SQL, it’s a cakewalk. Many users can simultaneously query the data using Hive-QL.

What is HQL?

Hive defines a simple SQL-like query language for querying and managing large datasets, called Hive-QL (HQL). It’s easy to use if you’re familiar with SQL. Hive also allows programmers who are familiar with MapReduce to plug in custom mappers and reducers to perform more sophisticated analysis.

Uses of Hive:

1. Hive is used to query data residing in distributed storage.

2. Hive provides tools to enable easy data extract/transform/load (ETL).

3. It provides structure on a variety of data formats.

4. By using Hive, we can access files stored in the Hadoop Distributed File System (HDFS) or in other data storage systems such as Apache HBase.

Limitations of Hive:

• Hive is not designed for Online Transaction Processing (OLTP); it is only used for Online Analytical Processing (OLAP).

• Hive supports overwriting or appending data, but not updates and deletes.

• In Hive, sub-queries are not supported.

Why is Hive used in spite of Pig?

The following are the reasons why Hive is used in spite of Pig’s availability:

  • Hive-QL is a declarative language like SQL; Pig Latin is a data-flow language.
  • Pig: a data-flow language and environment for exploring very large datasets.
  • Hive: a distributed data warehouse.

Components of Hive:

Metastore :

Hive stores the schema of the Hive tables in a Hive Metastore. The Metastore is used to hold all the information about the tables and partitions that are in the warehouse. By default, the Metastore runs in the same process as the Hive service, and the default Metastore is the Derby database.

SerDe :

A SerDe (Serializer/Deserializer) gives instructions to Hive on how to process a record.

Hive Commands :

Data Definition Language (DDL )

DDL statements are used to build and modify the tables and other objects in the database.

Example :

CREATE, DROP, TRUNCATE, ALTER, SHOW, DESCRIBE Statements.

Go to the Hive shell by giving the command sudo hive, and enter the command create database <database name>; to create a new database in Hive.

Create Hive database using Hive Commands

To list out the databases in Hive warehouse, enter the command ‘show databases’.

List Hive database using Hive Commands

The database is created in the default location of the Hive warehouse. In Cloudera, Hive databases are stored in /user/hive/warehouse.

List Hive database using Hive Commands

The command to use a database is USE <database name>;

Hive command to use the database
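Putting the commands above together, the session shown in the screenshots boils down to the following (retail is the example database name used below):

create database retail;
show databases;
use retail;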

Copy the input data to HDFS from the local file system by using the copyFromLocal command, e.g. hadoop fs -copyFromLocal <local path> <HDFS path>.

 Input data in Hive

Hive table

When we create a table in Hive, it is created in the default location of the Hive warehouse – “/user/hive/warehouse”. After creating the table, we can move the data from HDFS to the Hive table.

The following command creates a table in the location “/user/hive/warehouse/retail.db”.

Note : retail.db is the database created in the Hive warehouse.

Creating Database in Hive Warehouse.
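A hedged sketch of such a CREATE TABLE statement (the table and column names are illustrative; the txnrecords table is reused in the count and join examples later):

CREATE TABLE txnrecords (
    txnno INT,
    txndate STRING,
    custno INT,
    amount DOUBLE,
    category STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;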

Describe provides information about the schema of the table.

Describe - Hive Command

Data Manipulation Language (DML )

DML statements are used to retrieve, store, modify, delete, insert and update data in the database.

Example :

LOAD, INSERT Statements.

Syntax :

LOAD DATA [LOCAL] INPATH '<file path>' INTO TABLE <tablename>;

The Load operation is used to move data into the corresponding Hive table. If the keyword LOCAL is specified, the load command takes a path on the local file system. If the keyword LOCAL is not specified, we have to use the HDFS path of the file.

DML statement - Load Command

Here are some examples for the LOAD data LOCAL command

DML statement - Load Command
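For instance, hedged versions of both variants (the paths are illustrative):

LOAD DATA LOCAL INPATH '/home/cloudera/txns.csv' INTO TABLE txnrecords;  -- local file system path
LOAD DATA INPATH '/user/cloudera/input/txns.csv' INTO TABLE txnrecords;  -- HDFS path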

After loading the data into the Hive table, we can apply Data Manipulation Statements or aggregate functions to retrieve the data.

Example to count the number of records:

The count aggregate function is used to count the total number of records in a table.

‘create external’ Table :

The create external keyword is used to create a table and provide a location where the table will be created, so that Hive does not use its default location for the table. An EXTERNAL table points to any HDFS location for its storage, rather than the default storage.

Create External Command in Hive
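A hedged example (the location and names are illustrative):

CREATE EXTERNAL TABLE ext_txnrecords (
    txnno INT,
    category STRING,
    amount DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/cloudera/external/txns';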

Insert Command:

The insert command is used to load data into a Hive table. Inserts can be done to a table or a partition.

• INSERT OVERWRITE is used to overwrite the existing data in the table or partition.

• INSERT INTO is used to append data to the existing data in a table. (Note: the INSERT INTO syntax works from version 0.8 onward.)

Insert Command
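Hedged examples of both variants (the category_totals table is illustrative):

INSERT OVERWRITE TABLE category_totals
SELECT category, SUM(amount) FROM txnrecords GROUP BY category;

INSERT INTO TABLE category_totals
SELECT category, SUM(amount) FROM txnrecords GROUP BY category;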

Example for ‘Partitioned By’ and ‘Clustered By’ Command :

‘Partitioned By‘ is used to divide the table into partitions, and partitions can be further divided into buckets by using the ‘Clustered By‘ command.

Example for 'Partitioned By' and 'Clustered By' Command
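A hedged sketch of such a table definition (the names are illustrative):

CREATE TABLE txns_by_category (
    txnno INT,
    custno INT,
    amount DOUBLE
)
PARTITIONED BY (category STRING)
CLUSTERED BY (custno) INTO 4 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';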

When we insert data, Hive throws errors if the dynamic partition mode is strict and dynamic partitioning is not enabled. So we need to set the following parameters in the Hive shell.

set hive.exec.dynamic.partition=true;

This enables dynamic partitions; by default, it is false.

set hive.exec.dynamic.partition.mode=nonstrict;

Dynamic Partitions

Partitioning is done by category, and the partitions can be divided into buckets by using the ‘Clustered By’ command.

Partition

The ‘Drop Table’ statement deletes the data and metadata for a table. In the case of external tables, only the metadata is deleted.

Drop Table statement

Load the data using load data local inpath ‘aru.txt’ into table employee1, and then check the table using the select * from employee1 command.

To count the number of records in the table, use select count(*) from txnrecords;

Aggregation :

Select count (DISTINCT category) from tablename;

This command will count the different categories in the ‘cate’ table; here, there are 3 different categories.

Suppose there is another table, cate, where f1 is the field name for category.

Count the different category of 'cate' table.

Grouping :

The group command is used to group the result-set by one or more columns.

select category, sum(amount) from txnrecords group by category;

It calculates the total amount for each category.

Group command

The result of one query can be stored in another table:

create table newtablename as select * from oldtablename;

Group command

Join Command :

Here, one more table is created with the name ‘mailid’.

Join Command

Join Operation:

A join operation is performed to combine fields from two tables by using values common to each.

Join Operation
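A hedged example, assuming the second table is mailid(custno INT, email STRING) and the first is the txnrecords table from above:

SELECT t.custno, m.email, SUM(t.amount)
FROM txnrecords t JOIN mailid m ON (t.custno = m.custno)
GROUP BY t.custno, m.email;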

Left Outer Join:

The result of a left outer join (or simply left join) for tables A and B always contains all records of the “left” table (A), even if the join-condition does not find any matching record in the “right” table (B).

Left Outer Join

Right Outer Join:

A right outer join (or right join) closely resembles a left outer join, except with the treatment of the tables reversed. Every row from the “right” table (B) will appear in the joined table at least once.

Right Outer Join

Full Join:

The joined table will contain all records from both tables, and fill in NULLs for missing matches on either side.

Full Join
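Hedged sketches of the three outer-join variants on the same pair of tables:

SELECT t.custno, m.email FROM txnrecords t LEFT OUTER JOIN mailid m ON (t.custno = m.custno);
SELECT t.custno, m.email FROM txnrecords t RIGHT OUTER JOIN mailid m ON (t.custno = m.custno);
SELECT t.custno, m.email FROM txnrecords t FULL OUTER JOIN mailid m ON (t.custno = m.custno);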


Once done with Hive, we can use the quit; command to exit the Hive shell.

Exiting from Hive

Hive is just a part of the big puzzle called Big Data and Hadoop. Hadoop is much more than just Hive. Click below to see what other skills you should master in Hadoop.

Got a question for us? Please mention it in the comments section and we will get back to you.

Related Posts:

7 Ways Big Data Training Can Change Your Organization

Get Started with Big Data & Hadoop

Hive Data Models

Top 50 React Interview Questions You Must Prepare In 2019


If you are an aspiring front end developer and preparing for interviews, then this blog is for you. This blog on Top 50 React Interview Questions is the perfect guide for you to learn all the concepts required to clear a React interview. But before starting with the React Interview Questions, let’s take a quick look at React’s demand and status in the market.


Slowly and steadily, the JavaScript tools are firming their roots in the marketplace and the demand for React certification is increasing exponentially. Selecting the right technology for developing an application or website is becoming more challenging. Among all, React is considered as the fastest growing Javascript framework.

As of today, there are about 1,000 contributors on Github. Unique features like the Virtual DOM and reusable components grab the attention of front-end developers. Despite being just a library for the ‘View’ in MVC (Model-View-Controller), it gives strong competition to full-blown frameworks like Angular, Meteor, Vue, etc. Check out the below graph, which shows the trend of popular JS frameworks:

Job Trend - React Interview Questions - Edureka

React Interview Questions

So, here are the Top 50 React Interview Questions and Answers which are most likely to be asked by the interviewer. For your ease of access, I have categorized the React interview questions by topic.

General React – React Interview Questions

1.  Differentiate between Real DOM and Virtual DOM.

Real DOM vs Virtual DOM

Real DOM                                    Virtual DOM
1. It updates slow.                         1. It updates faster.
2. Can directly update HTML.                2. Can’t directly update HTML.
3. Creates a new DOM if element updates.    3. Updates the JSX if element updates.
4. DOM manipulation is very expensive.      4. DOM manipulation is very easy.
5. Too much memory wastage.                 5. No memory wastage.

2. What is React?

  • React is a front-end JavaScript library developed by Facebook in 2011.
  • It follows the component based approach which helps in building reusable UI components.
  • It is used for developing complex and interactive web and mobile UI.
  • Even though it was open-sourced only in 2015, it has one of the largest communities supporting it.

3. What are the features of React? 

Major features of React are listed below:

  1. It uses the virtual DOM instead of the real DOM.
  2. It uses server-side rendering.
  3. It follows uni-directional data flow or data binding.

4. List some of the major advantages of React.

Some of the major advantages of React are:

  1. It increases the application’s performance
  2. It can be conveniently used on the client as well as server side
  3. Because of JSX, code’s readability increases
  4. React is easy to integrate with other frameworks like Meteor, Angular, etc
  5. Using React, writing UI test cases become extremely easy

5. What are the limitations of React?

Limitations of React are listed below:

  1. React is just a library, not a full-blown framework
  2. Its library is very large and takes time to understand
  3. It can be little difficult for the novice programmers to understand
  4. Coding gets complex as it uses inline templating and JSX

React Interview Questions and Answers | Edureka

6. What is JSX?

JSX is a shorthand for JavaScript XML. This is a type of file used by React which utilizes the expressiveness of JavaScript along with HTML like template syntax. This makes the HTML file really easy to understand. This file makes applications robust and boosts its performance. Below is an example of JSX:

render(){
    return(
        <div>
            <h1> Hello World from Edureka!!</h1>
        </div>
    );
}

7. What do you understand by Virtual DOM? Explain its working.

A virtual DOM is a lightweight JavaScript object which originally is just the copy of the real DOM. It is a node tree that lists the elements, their attributes and content as Objects and their properties. React’s render function creates a node tree out of the React components. It then updates this tree in response to the mutations in the data model which is caused by various actions done by the user or by the system.
This Virtual DOM works in three simple steps.

  1. Whenever any underlying data changes, the entire UI is re-rendered in the Virtual DOM representation.
  2. Then the difference between the previous DOM representation and the new one is calculated.
  3. Once the calculations are done, the real DOM is updated with only the things that have actually changed.

8. Why can’t browsers read JSX?

Browsers can only read JavaScript objects, but JSX is not a regular JavaScript object. Thus, to enable a browser to read JSX, we first need to transform the JSX file into a JavaScript object using a JSX transformer like Babel and then pass it to the browser.

9. How different is React’s ES6 syntax when compared to ES5?

Syntax has changed from ES5 to ES6 in following aspects:

  1. require vs import
    // ES5
    var React = require('react');
    
    // ES6
    import React from 'react';
  2. export vs exports
    // ES5
    module.exports = Component;
    
    // ES6
    export default Component;
  3. component and function
    // ES5
    var MyComponent = React.createClass({
        render: function() {
            return <h3>Hello Edureka!</h3>;
        }
    });

    // ES6
    class MyComponent extends React.Component {
        render() {
            return <h3>Hello Edureka!</h3>;
        }
    }
  4. props
    // ES5
    var App = React.createClass({
        propTypes: { name: React.PropTypes.string },
        render: function() {
            return <h3>Hello, {this.props.name}!</h3>;
        }
    });

    // ES6
    class App extends React.Component {
        render() {
            return <h3>Hello, {this.props.name}!</h3>;
        }
    }
  5. state
    // ES5
    var App = React.createClass({
        getInitialState: function() {
            return { name: 'world' };
        },
        render: function() {
            return <h3>Hello, {this.state.name}!</h3>;
        }
    });

    // ES6
    class App extends React.Component {
        constructor() {
            super();
            this.state = { name: 'world' };
        }
        render() {
            return <h3>Hello, {this.state.name}!</h3>;
        }
    }

10. How is React different from Angular?

React vs Angular

TOPIC            REACT                    ANGULAR
1. ARCHITECTURE  Only the View of MVC     Complete MVC
2. RENDERING     Server-side rendering    Client-side rendering
3. DOM           Uses virtual DOM         Uses real DOM
4. DATA BINDING  One-way data binding     Two-way data binding
5. DEBUGGING     Compile time debugging   Runtime debugging
6. AUTHOR        Facebook                 Google
React Components – React Interview Questions

11. What do you understand from “In React, everything is a component.”

Components are the building blocks of a React application’s UI. These components split up the entire UI into small independent and reusable pieces. Then it renders each of these components independent of each other without affecting the rest of the UI.

12. Explain the purpose of render() in React.

Each React component must mandatorily have a render(). It returns a single React element, which is the representation of the native DOM component. If more than one HTML element needs to be rendered, they must be grouped together inside one enclosing tag such as <form>, <group>, <div>, etc. This function must be kept pure, i.e., it must return the same result each time it is invoked.

13. How can you embed two or more components into one?

We can embed components into one in the following way:

class MyComponent extends React.Component{
    render(){
        return(
            <div>
                <h1>Hello</h1>
                <Header/>
            </div>
        );
    }
}
class Header extends React.Component{
    render(){
        return <h1>Header Component</h1>;
    }
}
ReactDOM.render(
    <MyComponent/>, document.getElementById('content')
);

14. What is Props?

Props is the shorthand for Properties in React. They are read-only components which must be kept pure, i.e. immutable. They are always passed down from the parent to the child components throughout the application. A child component can never send a prop back to the parent component. This helps in maintaining the unidirectional data flow, and props are generally used to render dynamically generated data.

15. What is a state in React and how is it used?

States are the heart of React components. States are the source of data and must be kept as simple as possible. Basically, states are the objects which determine a component’s rendering and behavior. Unlike props, they are mutable and create dynamic and interactive components. They are accessed via this.state.

16. Differentiate between states and props.

States vs Props

Conditions                                        State   Props
1. Receive initial value from parent component    Yes     Yes
2. Parent component can change value              No      Yes
3. Set default values inside component            Yes     Yes
4. Changes inside component                       Yes     No
5. Set initial value for child components         Yes     Yes
6. Changes inside child components                No      Yes

17. How can you update the state of a component?

State of a component can be updated using this.setState().

class MyComponent extends React.Component {
    constructor() {
        super();
        this.state = {
            name: 'Maxx',
            id: '101'
        }
    }
    render() {
        setTimeout(() => {this.setState({name: 'Jaeha', id: '222'})}, 2000)
        return (
            <div>
                <h1>Hello {this.state.name}</h1>
                <h2>Your Id is {this.state.id}</h2>
            </div>
        );
    }
}
ReactDOM.render(
    <MyComponent/>, document.getElementById('content')
);

18. What is arrow function in React? How is it used?

Arrow functions are a more concise syntax for writing function expressions. They are also called ‘fat arrow‘ (=>) functions. These functions allow you to bind the context of the components properly, since in ES6 auto binding is not available by default. Arrow functions are mostly useful while working with higher order functions.

//General way
render() {    
    return(
        <MyInput onChange={this.handleChange.bind(this) } />
    );
}
//With Arrow Function
render() {  
    return(
        <MyInput onChange={ (e) => this.handleOnChange(e) } />
    );
}

19. Differentiate between stateful and stateless components.

Stateful vs Stateless

  • A stateful component stores info about the component’s state changes in memory, whereas a stateless component calculates the internal state of the components.
  • Stateful components have the authority to change state; stateless components do not.
  • Stateful components contain the knowledge of past, current and possible future changes in state; stateless components contain no knowledge of past, current or possible future state changes.
  • Stateless components notify stateful components about the requirement of a state change; the stateful components then send down the props, which the stateless components receive and treat as callback functions.

20. What are the different phases of React component’s lifecycle?

There are three different phases of React component’s lifecycle:

  1. Initial Rendering Phase: This is the phase when the component is about to start its life journey and make its way to the DOM.
  2. Updating Phase: Once the component gets added to the DOM, it can potentially update and re-render only when a prop or state change occurs. That happens only in this phase.
  3. Unmounting Phase: This is the final phase of a component’s life cycle, in which the component is destroyed and removed from the DOM.

21. Explain the lifecycle methods of React components in detail.

Some of the most important lifecycle methods are:

  1. componentWillMount() – Executed just before rendering takes place, both on the client and the server side.
  2. componentDidMount() – Executed on the client side only, after the first render.
  3. componentWillReceiveProps() – Invoked as soon as the props are received from the parent class and before another render is called.
  4. shouldComponentUpdate() – Returns a true or false value based on certain conditions. If you want your component to update, return true, else return false. By default, it returns true.
  5. componentWillUpdate() – Called just before rendering takes place in the DOM.
  6. componentDidUpdate() – Called immediately after rendering takes place.
  7. componentWillUnmount() – Called just before the component is unmounted from the DOM. It is used to clear up memory.

22. What is an event in React?

In React, events are the triggered reactions to specific actions like mouse hover, mouse click, key press, etc. Handling these events is similar to handling events on DOM elements. But there are some syntactical differences, like:

  1. Events are named using camel case instead of just using the lowercase.
  2. Events are passed as functions instead of strings.

The event argument contains a set of properties, which are specific to an event. Each event type contains its own properties and behavior which can be accessed via its event handler only.

23. How do you create an event in React?

class Display extends React.Component {
    show(evt) {
        // code
    }
    render() {
        // Render the div with an onClick prop (value is a function)
        return (
            <div onClick={this.show}>Click Me!</div>
        );
    }
}

24. What are synthetic events in React?

Synthetic events are the objects which act as a cross-browser wrapper around the browser’s native event. They combine the behavior of different browsers into one API. This is done to make sure that the events show consistent properties across different browsers.

25. What do you understand by refs in React?

Refs is the shorthand for References in React. It is an attribute which helps to store a reference to a particular React element or component, which will be returned by the component’s render() function. Refs are used to return references to a particular element or component returned by render(). They come in handy when we need DOM measurements or want to add methods to the components.

class ReferenceDemo extends React.Component{
     display() {
         const name = this.inputDemo.value;
         document.getElementById('disp').innerHTML = name;
     }
     render() {
         return(
             <div>
                 Name: <input type="text" ref={input => this.inputDemo = input} />
                 <button name="Click" onClick={this.display}>Click</button>
                 <h2>Hello <span id="disp"></span> !!!</h2>
             </div>
         );
     }
}

26. List some of the cases when you should use Refs.

Following are the cases when refs should be used:

  • When you need to manage focus, select text or media playback
  • To trigger imperative animations
  • Integrate with third-party DOM libraries

27. How do you modularize code in React?

We can modularize code by using the export and import properties. They help in writing the components separately in different files.

//ChildComponent.jsx
export default class ChildComponent extends React.Component {
    render() {
        return(
            <div>
                <h1>This is a child component</h1>
            </div>
        );
    }
}

//ParentComponent.jsx
import ChildComponent from './childcomponent.js';
class ParentComponent extends React.Component {
    render() {
        return(
            <div>
                <ChildComponent />
            </div>
        );
    }
}

28. How are forms created in React?

React forms are similar to HTML forms. But in React, the state is contained in the state property of the component and is only updated via setState(). Thus the elements can’t directly update their state and their submission is handled by a JavaScript function. This function has full access to the data that is entered by the user into a form.

handleSubmit(event) {
    alert('A name was submitted: ' + this.state.value);
    event.preventDefault();
}

render() {
    return (
        <form onSubmit={this.handleSubmit}>
            <label>
                Name:
                {/* handleChange (not shown) would update this.state.value via setState() */}
                <input type="text" value={this.state.value} onChange={this.handleChange} />
            </label>
            <input type="submit" value="Submit" />
        </form>
    );
}

29. What do you know about controlled and uncontrolled components?

Controlled vs Uncontrolled Components

  • Controlled components do not maintain their own state; uncontrolled components maintain their own state.
  • In controlled components, data is controlled by the parent component; in uncontrolled components, data is controlled by the DOM.
  • Controlled components take in the current values through props and notify changes via callbacks; uncontrolled components use refs to get their current values.

30. What are Higher Order Components(HOC)?

Higher Order Component is an advanced way of reusing the component logic. Basically, it’s a pattern that is derived from React’s compositional nature. HOC are custom components which wrap another component within it. They can accept any dynamically provided child component but they won’t modify or copy any behavior from their input components. You can say that HOC are ‘pure’ components.

31. What can you do with HOC?

HOC can be used for many tasks like:

  • Code reuse, logic and bootstrap abstraction
  • Render hijacking
  • State abstraction and manipulation
  • Props manipulation

32. What are Pure Components?

Pure components are the simplest and fastest components which can be written. They can replace any component which only has a render(). These components enhance the simplicity of the code and performance of the application.
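A minimal sketch using React.PureComponent (the component name is illustrative):

// re-renders only when a shallow comparison of props/state detects a change
class Greeting extends React.PureComponent {
    render() {
        return <h3>Hello, {this.props.name}!</h3>;
    }
}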

33. What is the significance of keys in React?

Keys are used for identifying unique Virtual DOM Elements with their corresponding data driving the UI. They help React to optimize the rendering by recycling all the existing elements in the DOM. These keys must be a unique number or string, using which React just reorders the elements instead of re-rendering them. This leads to increase in application’s performance.
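For example, a hedged sketch of keys on dynamically rendered list items (the todos array is hypothetical):

const listItems = todos.map((todo) =>
    // a stable, unique key lets React reorder items instead of re-rendering them
    <li key={todo.id}>{todo.text}</li>
);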

React Redux – React Interview Questions

34. What were the major problems with MVC framework?

Following are some of the major problems with MVC framework:

  • DOM manipulation was very expensive
  • Applications were slow and inefficient
  • There was huge memory wastage
  • Because of circular dependencies, a complicated model was created around models and views

35. Explain Flux.

Flux is an architectural pattern which enforces uni-directional data flow. It controls derived data and enables communication between multiple components using a central Store which has authority over all data. Any update to data throughout the application must occur here only. Flux provides stability to the application and reduces run-time errors.

36. What is Redux?

Redux is one of the hottest libraries for front-end development in today’s marketplace. It is a predictable state container for JavaScript applications and is used for managing the application’s entire state. Applications developed with Redux are easy to test and can run in different environments showing consistent behavior.

37. What are the three principles that Redux follows?

  1. Single source of truth: The state of the entire application is stored in an object/ state tree within a single store. The single state tree makes it easier to keep track of changes over time and debug or inspect the application.
  2. State is read-only: The only way to change the state is to trigger an action. An action is a plain JS object describing the change. Just like state is the minimal representation of data, the action is the minimal representation of the change to that data. 
  3. Changes are made with pure functions: In order to specify how the state tree is transformed by actions, you need pure functions. Pure functions are those whose return value depends solely on the values of their arguments.

38. What do you understand by “Single source of truth”?

Redux uses ‘Store’ for storing the application’s entire state in one place. So the state of all components is stored in the Store, and they receive updates from the Store itself. The single state tree makes it easier to keep track of changes over time and debug or inspect the application.

39. List down the components of Redux.

Redux is composed of the following components:

  1. Action – It’s an object that describes what happened.
  2. Reducer –  It is a place to determine how the state will change.
  3. Store – State/ Object tree of the entire application is saved in the Store.
  4. View – Simply displays the data provided by the Store.

40. Show how the data flows through Redux?

Data Flow in Redux - React Interview Questions - Edureka

In short, Redux follows a strict unidirectional flow: the view dispatches an action, the reducer computes the new state from the previous state and that action, the store saves the new state, and the view re-renders from the updated store.

41. How are Actions defined in Redux?

Actions in Redux must have a type property that indicates the type of ACTION being performed. They must be defined as a String constant, and you can add more properties to them as well. In Redux, actions are created using functions called Action Creators. Below is an example of an Action and an Action Creator:

function addTodo(text) {
    return {
        type: ADD_TODO,
        text
    };
}

42. Explain the role of Reducer.

Reducers are pure functions which specify how the application’s state changes in response to an action. A reducer takes in the previous state and an action, and returns a new state. It determines what sort of update needs to be done based on the type of the action, and then returns new values. If no work needs to be done, it returns the previous state as-is.
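A minimal reducer sketch, continuing the ADD_TODO action from the earlier answer (todoReducer is a hypothetical name):

const ADD_TODO = 'ADD_TODO';

function todoReducer(state = [], action) {
    switch (action.type) {
        case ADD_TODO:
            // Return a NEW state; the previous state is never mutated
            return [...state, { text: action.text }];
        default:
            // Nothing to do for this action: return the previous state as-is
            return state;
    }
}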

43. What is the significance of Store in Redux?

A store is a JavaScript object which can hold the application’s state and provide a few helper methods to access the state, dispatch actions and register listeners. The entire state/ object tree of an application is saved in a single store. As a result of this, Redux is very simple and predictable. We can pass middleware to the store to handle the processing of data as well as to keep a log of various actions that change the state of stores. All the actions return a new state via reducers.
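A short sketch of the store’s helper methods, reusing the todoReducer sketched above:

import { createStore } from 'redux';

const store = createStore(todoReducer);

// Register a listener, dispatch an action, then read the state
store.subscribe(() => console.log('State changed'));
store.dispatch({ type: 'ADD_TODO', text: 'Learn Redux' });
console.log(store.getState()); // [{ text: 'Learn Redux' }]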

44. How is Redux different from Flux?

Flux vs Redux

| Flux | Redux |
|---|---|
| The Store contains state and change logic | Store and change logic are separate |
| There are multiple stores | There is only one store |
| All the stores are disconnected and flat | Single store with hierarchical reducers |
| Has a singleton dispatcher | No concept of a dispatcher |
| React components subscribe to the store | Container components utilize connect |
| State is mutable | State is immutable |

45. What are the advantages of Redux?

Advantages of Redux are listed below:

  • Predictability of outcome – Since there is always one source of truth, i.e. the store, there is no confusion about how to sync the current state with actions and other parts of the application.
  • Maintainability – The code becomes easier to maintain with a predictable outcome and strict structure.
  • Server-side rendering – You just need to pass the store created on the server, to the client side. This is very useful for initial render and provides a better user experience as it optimizes the application performance.
  • Developer tools – From actions to state changes, developers can track everything going on in the application in real time.
  • Community and ecosystem – Redux has a huge community behind it which makes it even more captivating to use. A large community of talented individuals contribute to the betterment of the library and develop various applications with it.
  • Ease of testing – Redux’s code is mostly functions which are small, pure and isolated. This makes the code testable and independent.
  • Organization – Redux is precise about how code should be organized, this makes the code more consistent and easier when a team works with it.

React Router – React Interview Questions

46. What is React Router?

React Router is a powerful routing library built on top of React, which helps in adding new screens and flows to the application. This keeps the URL in sync with data that’s being displayed on the web page. It maintains a standardized structure and behavior and is used for developing single page web applications. React Router has a simple API.

47. Why is switch keyword used in React Router v4?

Although a <div> can be used to encapsulate multiple routes inside the Router, the <Switch> component is used when you want only a single route to be rendered amongst the several defined routes. When in use, <Switch> matches the typed URL against the defined routes in sequential order, and when the first match is found, it renders that route, thereby bypassing the remaining routes.

48. Why do we need a Router in React?

A Router is used to define multiple routes and when a user types a specific URL, if this URL matches the path of any ‘route’ defined inside the router, then the user is redirected to that particular route. So basically, we need to add a Router library to our app that allows creating multiple routes with each leading to us a unique view.

<Switch>
    <Route exact path='/' component={Home}/>
    <Route path='/posts/:id' component={Newpost}/>
    <Route path='/posts' component={Post}/>
</Switch>

49. List down the advantages of React Router.

Few advantages are:

  1. Just like how React is based on components, in React Router v4, the API is ‘All About Components’. A Router can be visualized as a single root component (<BrowserRouter>) in which we enclose the specific child routes (<Route>).
  2. No need to manually set History value: In React Router v4, all we need to do is wrap our routes within the <BrowserRouter> component.
  3. The packages are split: Three packages one each for Web, Native and Core. This supports the compact size of our application. It is easy to switch over based on a similar coding style.

50. How is React Router different from conventional routing?

Conventional Routing vs React Routing

| Topic | Conventional Routing | React Routing |
|---|---|---|
| Pages involved | Each view corresponds to a new file | Only a single HTML page is involved |
| URL changes | An HTTP request is sent to a server and the corresponding HTML page is received | Only the history attribute is changed |
| Feel | The user actually navigates across different pages for each view | The user is given the impression of navigating across different pages |

 

I hope this set of React Interview Questions and Answers will help you in preparing for your interviews. All the best!

If you want to get trained in React and wish to develop interesting UIs on your own, then check out the ReactJS with Redux Certification Training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe.

Got a question for us? Please mention it in the comments section and we will get back to you.

Top 50 Tableau Interview Questions You Must Prepare In 2019

“The art and practice of visualizing data is becoming ever more important in bridging the human-computer gap to mediate analytical insight in a meaningful way.” ―Edd Dumbill

In this Tableau interview questions blog, I have collected the most frequently asked questions by interviewers. These questions are collected after consulting with top industry experts in the field of data analytics and visualization. If you want to brush up on the Tableau basics, which I recommend you do before going ahead with these Tableau interview questions, take a look at this blog


In case you have attended a Tableau interview in the recent past, do paste those Tableau interview questions in the comments section and we’ll answer them ASAP. You can also comment below if you have any questions in your mind which you might face in your Tableau interview. In the meantime, maximize the data visualization career opportunities that are sure to come your way by taking Tableau online training with Edureka. You can even get a Tableau certification after the course. Click below to know more.

This Tableau interview questions blog is divided into the following parts:

Let’s begin this Tableau Interview Questions with Beginners level questions first.

Beginners Level Tableau Interview Questions

1. What is the difference between Traditional BI Tools and Tableau?

Traditional BI Tools vs Tableau

| Traditional BI Tools | Tableau |
|---|---|
| Architecture has hardware limitations | Does not have such dependencies |
| Based on a complex set of technologies | Based on associative search, which makes it dynamic and fast |
| Do not support in-memory, multi-thread, multi-core computing | Supports in-memory computing when used with advanced technologies |
| Have a predefined view of data | Uses predictive analysis for various business operations |

2. What is Tableau?

  • Tableau is a business intelligence software.
  • It allows anyone to connect to the respective data.
  • Visualizes and creates interactive, shareable dashboards.


3. What are the different Tableau Products and what is the latest version of Tableau?

Here is the Tableau Product family.

(i)Tableau Desktop:

It is a self-service business analytics and data visualization tool that anyone can use. It translates pictures of data into optimized queries. With Tableau Desktop, you can connect directly to data from your data warehouse for live, up-to-date data analysis. You can also perform queries without writing a single line of code. Import all your data into Tableau’s data engine from multiple sources and integrate it by combining multiple views in an interactive dashboard.

(ii)Tableau Server:

It is more of an enterprise level Tableau software. You can publish dashboards with Tableau Desktop and share them throughout the organization with web-based Tableau server. It leverages fast databases through live connections.

(iii)Tableau Online:

This is a hosted version of Tableau Server which helps make business intelligence faster and easier than before. You can publish Tableau dashboards with Tableau Desktop and share them with colleagues.

(iv)Tableau Reader:

It’s a free desktop application that enables you to open and view visualizations that are built in Tableau Desktop. You can filter and drill down the data, but you cannot edit or perform any other kind of interaction.

(v)Tableau Public:

This is free Tableau software which you can use to make visualizations, but you need to save your workbook or worksheets to the Tableau Public server, where they can be viewed by anyone.

4. What are the different datatypes in Tableau?

Tableau supports the following data types:

  • String
  • Number (whole and decimal)
  • Boolean
  • Date
  • Date & Time
  • Geographic values

5. What are Measures and Dimensions?

Measures are the numeric metrics or measurable quantities of the data, which can be analyzed by dimension tables. Measures are stored in a table that contains foreign keys referring uniquely to the associated dimension tables. The table supports data storage at the atomic level and thus allows a large number of records to be inserted at a time. For instance, a Sales table can have a product key, customer key, promotion key and items sold, referring to a specific event.

Dimensions are the descriptive attribute values for multiple dimensions of each attribute, defining multiple characteristics. A dimension table, having a reference to the product key from the fact table, can consist of product name, product type, size, color, description, etc.


6. What is the difference between .twb and .twbx extension?

  • A .twb is an XML document which contains all the selections and layout you have made in your Tableau workbook. It does not contain any data.
  • A .twbx is a ‘zipped’ archive containing a .twb and any external files such as extracts and background images.

7. What are the different types of joins in Tableau?

The joins in Tableau are the same as SQL joins: inner join, left join, right join and full outer join.

8. How many maximum tables can you join in Tableau?

You can join a maximum of 32 tables in Tableau.

9. What are the different connections you can make with your dataset?

We can either connect live to our data set or extract data onto Tableau.

  • Live: Connecting live to a data set leverages its computational processing and storage. New queries will go to the database and will be reflected as new or updated within the data.
  • Extract: An extract will make a static snapshot of the data to be used by Tableau’s data engine. The snapshot of the data can be refreshed on a recurring schedule as a whole or incrementally append data. One way to set up these schedules is via the Tableau server.

The benefit of Tableau extract over live connection is that extract can be used anywhere without any connection and you can build your own visualization without connecting to database.

10. What are shelves?

Shelves are named areas to the left and top of the view. You build views by placing fields onto the shelves. Some shelves are available only when you select certain mark types.

Shelves - Tableau Interview Questions - Edureka

11. What are sets?

Sets are custom fields that define a subset of data based on some conditions. A set can be based on a computed condition; for example, a set may contain customers with sales over a certain threshold. Computed sets update as your data changes. Alternatively, a set can be based on a specific data point in your view.

12. What are groups?

A group is a combination of dimension members that make higher level categories. For example, if you are working with a view that shows average test scores by major, you may want to group certain majors together to create major categories.

13. What is a hierarchical field?

A hierarchical field in Tableau is used for drilling down into data. It means viewing your data at a more granular level.

14. What is Tableau Data Server?

Tableau Data Server acts as a middleman between Tableau users and the data. It allows you to upload and share data extracts, preserve database connections, as well as reuse calculations and field metadata. This means any changes you make to the data set, calculated fields, parameters, aliases, or definitions can be saved and shared with others, allowing for a secure, centrally managed and standardized data set. Additionally, you can leverage your server’s resources to run queries on extracts without having to first transfer them to your local machine.

Intermediate Level Tableau Interview Questions

15. What is Tableau Data Engine?

Tableau Data Engine is a really cool feature in Tableau. It’s an analytical database designed to achieve instant query response and predictive performance, to integrate seamlessly into existing data infrastructure, and to avoid being limited to loading entire data sets into memory.

If you work with a large amount of data, it does take some time to import, create indexes and sort the data, but after that everything speeds up. Tableau Data Engine is not really an in-memory technology. The data is stored on disk after it is imported, and the RAM is hardly utilized.

16. What are the different filters in Tableau and how are they different from each other?

In Tableau, filters are used to restrict the data from database.

The different filters in Tableau are the Normal/Traditional filter, the Quick filter and the Context filter:

  • Normal Filter is used to restrict the data from the database based on a selected dimension or measure. A traditional filter can be created by simply dragging a field onto the ‘Filters’ shelf.
  • Quick filter is used to view the filtering options and filter each worksheet on a dashboard while changing the values dynamically (within the range defined) during run time.
  • Context Filter is used to filter the data that is transferred to each individual worksheet. When a worksheet queries the data source, it creates a temporary, flat table that it uses to compute the chart. This temporary table includes all values that are not filtered out by either the Custom SQL or the Context Filter.

17. How to create a calculated field in Tableau?

  • Click the drop down to the right of Dimensions on the Data pane and select “Create > Calculated Field” to open the calculation editor.
  • Name the new field and create a formula.

Take a look at the example below:

Calculated Field - Tableau Interview Questions - Edureka

18. What is a dual axis?

Dual Axis is an excellent phenomenon supported by Tableau that helps users view two scales of two measures in the same graph. Many websites like Indeed.com and others make use of dual axes to show the comparison between two measures and their growth rate over a specific set of years. Dual axes let you compare multiple measures at once, with two independent axes layered on top of one another. This is how it looks:

Dual Axis Graph - Tableau Interview Questions - Edureka

19. What is the difference between a tree map and heat map?

A heat map can be used for comparing categories with color and size. With heat maps, you can compare two different measures together.

Heat Map - Tableau Interview Questions - Edureka

A tree map also does the same, except it is considered a very powerful visualization as it can be used for illustrating hierarchical data and part-to-whole relationships.

Tree Map - Tableau Interview Questions - Edureka

20. What is disaggregation and aggregation of data?

The process of viewing numeric values or measures at higher and more summarized levels of the data is called aggregation. When you place a measure on a shelf, Tableau automatically aggregates the data, usually by summing it. You can easily determine the aggregation applied to a field because the function always appears in front of the field’s name when it is placed on a shelf. For example, Sales becomes SUM(Sales).  You can aggregate measures using Tableau only for relational data sources. Multidimensional data sources contain aggregated data only. In Tableau, multidimensional data sources are supported only in Windows.

According to Tableau, Disaggregating your data allows you to view every row of the data source which can be useful when you are analyzing measures that you may want to use both independently and dependently in the view. For example, you may be analyzing the results from a product satisfaction survey with the Age of participants along one axis. You can aggregate the Age field to determine the average age of participants or disaggregate the data to determine what age participants were most satisfied with the product.

21. What is the difference between joining and blending in Tableau?

  • The term joining is used when you combine data from the same source, for example, worksheets in an Excel file or tables in an Oracle database.
  • Blending, on the other hand, requires two completely defined data sources in your report.

22. What are Extracts and Schedules in Tableau server?

Data extracts are the first copies or subdivisions of the actual data from original data sources. Workbooks using data extracts are faster than those using live DB connections, since the extracted data is imported into the Tableau engine. After this extraction of data, users can publish the workbook, which also publishes the extracts to Tableau Server. However, the workbook and extracts won’t refresh unless users apply a scheduled refresh on the extract. Scheduled refreshes are scheduling tasks set for data extract refresh so that the extracts get refreshed automatically when a workbook with a data extract is published. This also removes the burden of republishing the workbook every time the concerned data gets updated.

23. How to view underlying SQL Queries in Tableau?

Viewing underlying SQL Queries in Tableau provides two options:

  • Create a Performance Recording to record performance information about the main events you interact with workbook. Users can view the performance metrics in a workbook created by Tableau.
    Help -> Settings and Performance -> Start Performance Recording
    Help -> Setting and Performance -> Stop Performance Recording.
  • Reviewing the Tableau Desktop logs located at C:\Users\My Documents\My Tableau Repository. For a live connection to the data source, you can check the log.txt and tabprotosrv.txt files. For an extract, check the tdeserver.txt file.

24. How to do Performance Testing in Tableau?

Performance testing is again an important part of implementing Tableau. This can be done by load testing Tableau Server with TabJolt, which is a “point and run” load generator created to perform QA. While TabJolt is not directly supported by Tableau, it has to be installed using other open source products.

25. Name the components of a Dashboard.

  • Horizontal – Horizontal layout containers allow the designer to group worksheets and dashboard components left to right across your page and edit the height of all elements at once.
  • Vertical – Vertical containers allow the user to group worksheets and dashboard components top to bottom down your page and edit the width of all elements at once.
  • Text – All textual fields.
  • Image Extract – A Tableau workbook is in XML format. In order to extract images, Tableau applies some codes to extract an image which can be stored in the XML.
  • Web [URL ACTION] – A URL action is a hyperlink that points to a Web page, file, or other web-based resource outside of Tableau. You can use URL actions to link to more information about your data that may be hosted outside of your data source. To make the link relevant to your data, you can substitute field values of a selection into the URL as parameters.

26. How to remove ‘All’ options from a Tableau auto-filter?

The auto-filter provides a feature of removing ‘All’ options by simply clicking the down arrow in the auto-filter heading. You can scroll down to ‘Customize’ in the dropdown and then uncheck the ‘Show “All” Value’ attribute. It can be activated by checking the field again.

27. How to add Custom Color to Tableau?

Adding a custom color refers to a power tool in Tableau. Restart your Tableau Desktop once you save the .tps file. From the Measures pane, drag the measure you want to color onto Color. From the color legend menu arrow, select Edit Colors. When the dialog box opens, select the palette drop-down list and customize it as per your requirement.

28. What is TDE file?

TDE is a Tableau Desktop file that has a .tde extension. It refers to a file that contains data extracted from external sources like MS Excel, MS Access or CSV files.
There are two aspects of TDE design that make them ideal for supporting analytics and data discovery:
  • Firstly, a TDE is a columnar store.
  • The second is how they are structured, which impacts how they are loaded into memory and used by Tableau. This is an important aspect of how TDEs are “architecture aware”. Architecture awareness means that TDEs use all parts of your computer’s memory, from RAM to hard disk, and put each part to work on what best fits its characteristics.

29. Mention whether you can create relational joins in Tableau without creating a new table?

Yes, one can create relational joins in Tableau without creating a new table.

30. How to automate reports?

You need to publish the report to Tableau Server. While publishing, you will find an option to schedule reports; you just need to select the time when you want to refresh the data.

31. What is Assume referential integrity?

In some cases, you can improve query performance by selecting the option to Assume Referential Integrity from the Data menu. When you use this option, Tableau will include the joined table in the query only if it is specifically referenced by fields in the view.

32. Explain when would you use Joins vs. Blending in Tableau?

If data resides in a single source, it is always desirable to use joins. When your data is not in one place, blending is the most viable way to create a left-join-like connection between your primary and secondary data sources.

33. What is default Data Blending Join?

Data blending is the ability to bring data from multiple data sources into one Tableau view, without the need for any special coding. A default blend is equivalent to a left outer join. However, by switching which data source is primary, or by filtering nulls, it is possible to emulate left, right and inner joins.

34. What do you understand by blended axis?

In Tableau, measures can share a single axis so that all the marks are shown in a single pane. Instead of adding rows and columns to the view, when you blend measures there is a single row or column and all of the values for each measure are shown along one continuous axis. We can blend multiple measures by simply dragging one measure or axis and dropping it onto an existing axis.

35. What is story in Tableau?

A story is a sheet that contains a sequence of worksheets or dashboards that work together to convey information. You can create stories to show how facts are connected, provide context, demonstrate how decisions relate to outcomes, or simply make a compelling case. Each individual sheet in a story is called a story point.

36. What is the difference between discrete and continuous in Tableau?

There are two types of data roles in Tableau – discrete and continuous.

  • Discrete data roles are values that are counted as distinct and separate and can only take individual values within a range. Examples: number of threads in a sheet, customer name or row ID or State. Discrete values are shown as blue pills on the shelves and blue icons in the data window.
  • Continuous data roles are used to measure continuous data and can take on any value within a finite or infinite interval. Examples: unit price, time and profit or order quantity. Continuous variables behave in a similar way in that they can take on any value. Continuous values are shown as green pills.

37. How to create stories in Tableau?

There are many ways to create story in Tableau. Each story point can be based on a different view or dashboard, or the entire story can be based on the same visualization, just seen at different stages, with different marks filtered and annotations added. You can use stories to make a business case or to simply narrate a sequence of events.

  • Click the New Story tab.
  • In the lower-left corner of the screen, choose a size for your story. Choose from one of the predefined sizes, or set a custom size, in pixels.
  • By default, your story gets its title from its sheet name. To edit it, double-click the title. You can also change your title’s font, color, and alignment. Click Apply to view your changes.
  • To start building your story, drag a sheet from the Story tab on the left and drop it into the center of the view
  • Click Add a caption to summarize the story point.
  • To highlight a key takeaway for your viewers, drag a text object over to the story worksheet and type your comment.
  • To further highlight the main idea of this story point, you can change a filter or sort on a field in the view, then save your changes by clicking Update above the navigator box.

38. What is the DRIVE Program Methodology?

Tableau Drive is a methodology for scaling out self-service analytics. Drive is based on best practices from successful enterprise deployments. The methodology relies on iterative, agile methods that are faster and more effective than traditional long-cycle deployment.

A cornerstone of this approach is a new model of partnership between business and IT.

39. How to use group in calculated field?

By adding the same calculation to ‘Group By’ clause in SQL query or creating a Calculated Field in the Data Window and using that field whenever you want to group the fields.

  • Using groups in a calculation. You cannot reference ad-hoc groups in a calculation.
  • Blend data using groups created in the secondary data source: Only calculated groups can be used in data blending if the group was created in the secondary data source.
  • Use a group in another workbook. You can easily replicate a group in another workbook by copy and pasting a calculation.

40. Mention what is the difference between published data sources and embedded data sources in Tableau?

The difference between published data source and embedded data source is that,

  • Published data source: It contains connection information that is independent of any workbook and can be used by multiple workbooks.
  • Embedded data source: It contains connection information and is associated with a workbook.

41. Mention what are different Tableau files?

Different Tableau files include:

  • Workbooks: Workbooks hold one or more worksheets and dashboards
  • Bookmarks: It contains a single worksheet and it’s an easy way to quickly share your work
  • Packaged Workbooks: It contains a workbook along with any supporting local file data and background images
  • Data Extraction Files: Extract files are a local copy of a subset or entire data source
  • Data Connection Files: It’s a small XML file with various connection information

Expert level Tableau Interview Questions

42. How to embed views onto Webpages?

You can embed interactive Tableau views and dashboards into web pages, blogs, wiki pages, web applications, and intranet portals. Embedded views update as the underlying data changes, or as their workbooks are updated on Tableau Server. Embedded views follow the same licensing and permission restrictions used on Tableau Server. That is, to see a Tableau view that’s embedded in a web page, the person accessing the view must also have an account on Tableau Server.

Alternatively, if your organization uses a core-based license on Tableau Server, a Guest account is available. This allows people in your organization to view and interact with Tableau views embedded in web pages without having to sign in to the server. Contact your server or site administrator to find out if the Guest user is enabled for the site you publish to.

You can do the following to embed views and adjust their default appearance:

  • Get the embed code provided with a view: The Share button at the top of each view includes embed code that you can copy and paste into your webpage. (The Share button doesn’t appear in embedded views if you change the showShareOptions parameter to false in the code.)
  • Customize the embed code: You can customize the embed code using parameters that control the toolbar, tabs, and more. For more information, see Parameters for Embed Code.
  • Use the Tableau JavaScript API: Web developers can use Tableau JavaScript objects in web applications. To get access to the API, documentation, code examples, and the Tableau developer community, see the Tableau Developer Portal.

43. Design a view in a map such that if user selects any state, the cities under that state has to show profit and sales.

According to your question you must have state, city, profit and sales fields in your dataset.

Step 1: Double click on the state field

Step 2: Drag the city and drop it into Marks card.

Step 3: Drag the sales and drop it into size.

Step 4: Drag profit and drop it into color.

Step 5: Click on size legend and increase the size.

Step 6: Right click on state field and select show quick filter.

Step 7:  Select any state now and check the view.

44. Think that I am using Tableau Desktop & have a live connection to Cloudera Hadoop data. I need to press F5 to refresh the visualization. Is there anyway to automatically refresh visualization every ‘x’ seconds instead of pressing F5?

Here is an example of refreshing the dashboard every 5 seconds.

All you need to do is replace the api src and server url with yours.

<!DOCTYPE html>
<html lang="en">
<head>
<title>Tableau JavaScript API</title>
<script type="text/javascript" src="http://servername/javascripts/api/tableau_v8.js"></script>
</head>
<body>
<div id="tableauViz"></div>
<script type="text/javascript">
var placeholderDiv = document.getElementById("tableauViz");
var url = "http://servername/t/311/views/Mayorscreenv5/Mayorscreenv2";
var options = {
    hideTabs: true,
    width: "100%",
    height: "1000px"
};
var viz = new tableauSoftware.Viz(placeholderDiv, url, options);
// Refresh the viz data every 5 seconds (5000 ms)
setInterval(function() { viz.refreshDataAsync(); }, 5000);
</script>
</body>
</html>

Some Additional Tricky Tableau Interview Questions

45. Suppose my license expires today, will users be able to view dashboards or workbooks which I published in the server earlier?

If your server license expires today, your username on the server will have the role ‘unlicensed’, which means you cannot access the content but others can. The site admin can change the ownership to another person so that the extracts do not fail.

46. Is Tableau software good for strategic acquisition?

Yes, for sure! It gives you data insight to an extent that other tools can’t. Moreover, it also helps you to plan, pinpoint anomalies and improve your processes for the betterment of your company.

47. Can we place an Excel file in a shared location and use it to develop a report and refresh it at regular intervals?

Yes, we can do it. But for better performance we should use Extract.

48. Can Tableau be installed on MacOS?

Yes, Tableau Desktop can be installed on both Mac and Windows operating systems.

49. What is the maximum no. of rows Tableau can utilize at one time?

Tableau is not restricted by the no. of rows in the table. Customers use Tableau to access petabytes of data because it only retrieves the rows and columns needed to answer your questions.

50. When publishing workbooks on Tableau Online, sometimes an error about needing to extract appears. Why does it happen occasionally?

This happens when a user is trying to publish a workbook that is connected to an internal server or a file stored on a local drive, such as a SQL server that is within a company’s network.

I hope these Tableau interview questions were helpful to you. I will be coming up with more blogs on Tableau for you all very soon.

If you wish to master Tableau, Edureka has a curated course on Tableau Training & Certification which covers various concepts of data visualization in depth, including conditional formatting, scripting, linking charts, dashboard integration, Tableau integration with R and more. It comes with 24*7 support to guide you throughout your learning period. New batches are starting soon.

Got a question for us? Please mention it in the comments section and we will get back to you at the earliest.

Top Angular Interview Questions You Must Prepare In 2019


In this Angular Interview Questions article, I am going to list some of the most important Angular Interview Questions and Answers which will set you apart in the interview process in 2019. Angular continues to dominate the arena of the Javascript framework and has proved itself to be a worthy investment for web developers who seek to fast-track their career. No surprises there, Angular is majorly known for its ability to create single-page web applications that encompass three critical components – speed, agility and a strong community backing it up. Angular is known as the Swiss army knife of frontend developers! This is the reason why Angular Certification is the most in-demand certification in scripting domain.


We have compiled a list of top Angular interview questions which are classified into 3 sections, namely:

As an Angular professional, it is essential to know the right buzzwords, learn the right technologies and prepare the right answers to commonly asked Angular Interview Questions. Here’s a definitive list of top Angular Interview Questions that will guarantee a breeze-through to the next level.

In case you attended any Angular interview recently, or have additional questions beyond what we covered, we encourage you to post them in our QnA Forum. Our expert team will get back to you at the earliest.  

So let’s get started with the first set of basic Angular Interview Questions.

Beginners Level – Angular Interview Questions

1. Differentiate between Angular and AngularJS.

| Feature | AngularJS | Angular |
|---|---|---|
| Architecture | Supports the MVC design model | Uses components and directives |
| Language | Recommended language: JavaScript | Recommended language: TypeScript |
| Expression Syntax | A specific ng directive is required for an image/property and an event | Uses () to bind an event and [] for property binding |
| Mobile Support | Doesn’t provide any mobile support | Provides mobile support |
| Routing | $routeprovider.when() is used for routing configs | @RouteConfig{(…)} is used for routing config |
| Dependency Injection | Doesn’t support the concept of Dependency Injection | Supports hierarchical Dependency Injection with unidirectional tree-based change detection |
| Structure | Less manageable | Simplified structure that makes the development and maintenance of large applications easier |
| Speed | Development effort and time are reduced with two-way data binding | Faster than AngularJS, with upgraded features |
| Support | No support or new updates are provided anymore | Active support and frequent new updates |

2. What is Angular?

Angular is an open-source front-end web framework. It is one of the most popular JavaScript frameworks that is mainly maintained by Google. It provides a platform for easy development of web-based applications and empowers the front end developers in curating cross-platform applications. It integrates powerful features like declarative templates, an end to end tooling, dependency injection and various other best practices that smoothens the development path.

3. What are the advantages of using Angular?

A few of the major advantages of using Angular framework are listed below:

  • It supports two-way data-binding
  • It follows MVC pattern architecture
  • It supports static template and Angular template
  • You can add a custom directive
  • It also supports RESTful services
  • Validations are supported
  • Client and server communication is facilitated
  • Support for dependency injection
  • Has strong features like Event Handlers, Animation, etc.

4. What is Angular mainly used for?

Angular is typically used for the development of SPA which stands for Single Page Applications. Angular provides a set of ready-to-use modules that simplify the development of single page applications. Not only this, with features like built-in data streaming, type safety, and a modular CLI,  Angular is regarded as a full-fledged web framework.

5. What are Angular expressions?

Angular expressions are code snippets that are usually placed in bindings such as {{ expression }}, similar to JavaScript expressions. These expressions are used to bind application data to HTML.

Syntax: {{ expression }}
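For example, an AngularJS-style sketch (firstName is an assumed model initialized inline just for illustration):

<div ng-app="" ng-init="firstName='John'">
    <p>The name is {{ firstName }}</p>
</div>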

6. What are templates in Angular?

Templates in Angular are written with HTML that contains Angular-specific elements and attributes. These templates are combined with information coming from the model and controller which are further rendered to provide the dynamic view to the user.

7. In Angular what is string interpolation?

String interpolation in Angular is a special syntax that uses template expressions within double curly {{ }} braces for displaying the component data. It is also known as moustache syntax. The JavaScript expressions are included within the curly braces to be executed by Angular and the relative output is then embedded into the HTML code. These expressions are usually updated and registered like watches, as a part of the digest cycle.

8. What is the difference between an Annotation and a Decorator in Angular?

Annotations in angular are “only” metadata set of the class using the Reflect Metadata library. They are used to create an “annotation” array. On the other hand, decorators are the design patterns that are used for separating decoration or modification of a class without actually altering the original source code.

9. What do you understand by controllers in Angular?

Controllers are JavaScript functions which provide data and logic to HTML UI. As the name suggests, they control how data flows from the server to HTML UI.
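A minimal AngularJS-style sketch (the module and controller names are just illustrations):

var app = angular.module('myApp', []);

// The controller supplies data and logic to the HTML UI via $scope
app.controller('GreetingController', function($scope) {
    $scope.greeting = 'Hello from the controller!';
});

It can then be attached in the view with <div ng-controller="GreetingController">{{ greeting }}</div>.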

10. What is scope in Angular?

Scope in Angular is an object that refers to the application model. It is an execution context for expressions. Scopes are arranged in a hierarchical structure which mimics the DOM structure of the application. Scopes can watch expressions and propagate events.

11. What are directives in Angular?

A core feature of Angular, directives are attributes that allow you to write new HTML syntax, specific to your application. They are essentially functions that execute when the Angular compiler finds them in the DOM.  The Angular directives are segregated into 3 parts:

  1. Component Directives
  2. Structural Directives
  3. Attribute Directives

12. What is data binding?

In Angular, data binding is one of the most powerful and important features that allow you to define the communication between the component and DOM(Document Object Model). It basically simplifies the process of defining interactive applications without having to worry about pushing and pulling data between your view or template and component. In Angular, there are four forms of data binding:

  1.  String Interpolation
  2. Property Binding
  3. Event Binding
  4. Two-Way Data Binding

13. What is the purpose of a filter in Angular?

Filters in Angular are used for formatting the value of an expression in order to display it to the user. These filters can be added to the templates, directives, controllers or services. Not just this, you can create your own custom filters. Using them, you can easily organize data in such a way that the data is displayed only if it fulfills certain criteria. Filters are added to the expressions by using the pipe character |, followed by a filter.
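For instance, a short sketch (price and items are assumed model values):

<!-- Format a number as currency -->
<p>{{ price | currency }}</p>

<!-- Filters can take arguments: show only the first 5 items -->
<li ng-repeat="item in items | limitTo:5">{{ item }}</li>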

14. What are the differences between Angular and jQuery?

| Features | jQuery | Angular |
|---|---|---|
| DOM Manipulation | Yes | Yes |
| RESTful API | No | Yes |
| Animation Support | Yes | Yes |
| Deep Linking Routing | No | Yes |
| Form Validation | No | Yes |
| Two-Way Data Binding | No | Yes |
| AJAX/JSONP | Yes | Yes |

15. What is a provider in Angular?

A provider is a configurable service in Angular. It is an instruction to the Dependency Injection system that provides information about the way to obtain a value for a dependency. It is an object that has a $get() method which is called to create a new instance of a service. A Provider can also contain additional methods and uses $provide in order to register new providers.

Intermediate Level – Angular Interview Questions

16. Does Angular support nested controllers?

Yes, Angular does support the concept of nested controllers. The nested controllers are needed to be defined in a hierarchical manner for using it in the View. 

17. How can you differentiate between Angular expressions and JavaScript expressions?

| Angular Expressions | JavaScript Expressions |
|---|---|
| They can contain literals, operators, and variables. | They can contain literals, operators, and variables. |
| They can be written inside HTML tags. | They can’t be written inside HTML tags. |
| They do not support conditionals, loops, and exceptions. | They do support conditionals, loops, and exceptions. |
| They support filters. | They do not support filters. |

18. List down the ways in which you can communicate between application modules using core Angular functionality.

Below are the most general ways for communicating between application modules using core Angular functionality :

  • Using events
  • Using services
  • By assigning models on $rootScope
  • Directly between controllers [$parent, $$childHead, $$nextSibling, etc.]
  • Directly between controllers [ControllerAs, or other forms of inheritance]

19. What is the difference between a service() and a factory()?

A service() in Angular is a function that is used for the business layer of the application. It operates as a constructor function and is invoked once at runtime using the ‘new’ keyword. A factory() is a function which works similarly to a service() but is much more powerful and flexible: factory() is a design pattern that helps in creating objects.
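A compact sketch of the difference, assuming an existing module named app (mathFactory and mathService are illustrative names):

// factory(): you build and return the object yourself
app.factory('mathFactory', function() {
    return {
        square: function(x) { return x * x; }
    };
});

// service(): Angular invokes the function as a constructor with 'new'
app.service('mathService', function() {
    this.square = function(x) { return x * x; };
});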

20. What is the difference between $scope and scope in Angular?

  • $scope in Angular is used for implementing the concept of dependency injection (DI); on the other hand, scope is used for directive linking.
  • $scope is the service provided by $scopeProvider, which can be injected into controllers, directives or other services, whereas scope can be anything such as a function parameter name, etc.

21. Explain the concept of scope hierarchy?

The $scope objects in Angular are organized into a hierarchy and are majorly used by views. It contains a root scope which can further contain scopes known as child scopes. One root scope can contain more than one child scopes. Here each view has its own $scope thus the variables set by its view controller will remain hidden to the other controllers. The Scope hierarchy generally looks like:

  • Root $scope
    • $scope for Controller 1
    • $scope for Controller 2
    • ..
    • $scope for Controller ‘n’

22. What is AOT?

AOT stands for Angular Ahead-of-Time compiler. It is used for pre-compiling the application components along with their templates during the build process. Angular applications which are compiled with AOT have a smaller launch time. Also, components of these applications can execute immediately, without needing any client-side compilation. Templates in these applications are embedded as code within their components. It reduces the need for downloading the Angular compiler, which saves you from a cumbersome task. The AOT compiler can discard unused directives, which are further thrown out using a tree-shaking tool.

23. Explain jQLite.

jQLite, also known as jQuery lite, is a subset of jQuery that contains its core features. It is packaged within Angular by default. It helps Angular to manipulate the DOM in a cross-browser compatible way. jQLite basically implements only the most commonly needed functionality, which results in a small footprint.

24. Explain the process of digest cycle in Angular?

The digest cycle in Angular is a process of monitoring the watchlist for keeping a track of changes in the value of the watch variable. In each digest cycle, Angular compares the previous and the new version of the scope model values. Generally, this process is triggered implicitly but you can activate it manually as well by using $apply().

Digest Cycle - Angular Interview Questions - Edureka
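A small sketch of triggering the digest cycle manually when a change happens outside Angular’s knowledge (e.g. inside a plain setTimeout; $scope.message is an assumed model):

setTimeout(function() {
    // The change happens outside Angular, so wrap it in $apply()
    // to force a digest cycle and update the bound view
    $scope.$apply(function() {
        $scope.message = 'Updated from outside Angular';
    });
}, 2000);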

25. What are the Angular Modules?

All Angular apps are modular and follow a modularity system known as NgModules. These are containers which hold a cohesive block of code dedicated specifically to an application domain, a workflow, or some closely related set of capabilities. These modules generally contain components, service providers, and other code files whose scope is defined by the containing NgModule. With modules, the code becomes more maintainable, testable, and readable. Also, all the dependencies of your application are generally defined in modules only.

26. On which types of the component can we create a custom directive?

Angular provides support to create custom directives for the following:

  • Element directives − Directive activates when a matching element is encountered.
  • Attribute − Directive activates when a matching attribute is encountered.
  • CSS − Directive activates when a matching CSS style is encountered.
  • Comment − Directive activates when a matching comment is encountered

27. What are the different types of filters in Angular?

Below are the various filters supported by Angular:

  • currency: Format a number to a currency format.
  • date: Format a date to a specified format.
  • filter: Select a subset of items from an array.
  • json: Format an object to a JSON string.
  • limitTo: Limits an array/string to a specified number of elements/characters.
  • lowercase: Format a string to lower case.
  • number: Format a number to a string.
  • orderBy: Orders an array by an expression.
  • uppercase: Format a string to upper case.

28. What is Dependency Injection in Angular? 

Dependency Injection (DI) is a software design pattern where the objects are passed as dependencies rather than hard-coding them within the component. The concept of Dependency Injection comes in handy when you are trying to separate the logic of object creation to that of its consumption. The ‘config’ operation makes use of DI that must be configured beforehand while the module gets loaded to retrieve the elements of the application. With this feature, a user can change dependencies as per his requirements.

29. Differentiate between one-way binding and two-way data binding.

In One-Way data binding, the View or the UI part does not update automatically whenever the data model changes. You need to manually write custom code in order to update it every time the view changes.

1 way data binding - Angular Interview Questions - Edureka

Whereas, in Two-way data binding, the View or the UI part is updated implicitly as soon as the data model changes. It is a synchronization process, unlike One-way data binding.

2 way data binding - Angular Interview Questions - Edureka

30. What are the lifecycle hooks for components and directives?

An Angular component has a discrete life-cycle which contains different phases as it transits through birth till death. In order to gain better control of these phases, we can hook into them using the following:

  • constructor: It is invoked when a component or directive is created by calling new on the class.
  • ngOnChanges: It is invoked whenever there is a change or update in any of the input properties of the component.
  • ngOnInit: It is invoked every time a given component is initialized. This hook is only once called in its lifetime after the first ngOnChanges.
  • ngDoCheck: It is invoked whenever the change detector of the given component is called. This allows you to implement your own change detection algorithm for the provided component.
  • ngOnDestroy: It is invoked right before the component is destroyed by Angular. You can use this hook in order to unsubscribe observables and detach event handlers for avoiding any kind of memory leaks.

31. What do you understand by dirty checking in Angular?

In Angular, the digest process is known as dirty checking. It is called so as it scans the entire scope for changes. In other words, it compares all the new scope model values with the previous scope values. Since all the watched variables are contained in a single loop (the digest cycle), any change/update in any of the variables leads to reassigning the rest of the watched variables present inside the DOM.

32. Differentiate between DOM and BOM.

| DOM | BOM |
|---|---|
| Stands for Document Object Model | Stands for Browser Object Model |
| Represents the contents of a web page | Works a level above the web page and includes browser attributes |
| All the objects are arranged in a tree structure and the document can be manipulated & accessed via the provided APIs only | All global JavaScript objects, variables & functions become members of the window object implicitly |
| Manipulates HTML documents | Accesses and manipulates the browser window |
| W3C recommended standard specifications | Each browser has its own implementation |

33. What is Transpiling in Angular?

Transpiling in Angular refers to the process of converting source code from one programming language to another. In Angular, this conversion is generally done from TypeScript to JavaScript. It is an implicit process and happens internally.

34. How to perform animation in Angular?

In order to perform animation in an Angular application, you need to include a special Angular library known as Animate Library and then refer to the ngAnimate module into your application or add the ngAnimate as a dependency inside your application module.

35. What is transclusion in Angular?

Transclusion in Angular allows you to shift the original children of a directive into a specific location within a new template. The ng-transclude directive indicates the insertion point for the transcluded DOM of the nearest parent directive that is using transclusion. Attribute directives like ng-transclude or ng-transclude-slot are mainly used for transclusion.

36. What are events in Angular?

Events in Angular are specific directives that help in customizing the behavior of various DOM events. Few of the events supported by Angular are listed below:

  • ng-click
  • ng-copy
  • ng-cut
  • ng-dblclick
  • ng-keydown
  • ng-keypress
  • ng-keyup
  • ng-mousedown
  • ng-mouseenter
  • ng-mouseleave
  • ng-mousemove
  • ng-mouseover
  • ng-mouseup
  • ng-blur

37. List some tools for testing angular applications?

  1. Karma
  2. Angular Mocks
  3. Mocha
  4. Browserify
  5. Sinon

38. How to create a service in Angular?

In Angular, a service is a substitutable object that is wired together using dependency injection. A service is created by registering it in the module it is going to be executed within. There are basically three ways in which a service can be created in Angular:

  • Factory
  • Service
  • Provider

39. What is a singleton pattern and where we can find it in Angular?

Singleton pattern in Angular is a great pattern which restricts a class from being used more than once. Singleton pattern in Angular is majorly implemented on dependency injection and in the services. Thus, if you use ‘new Object()’ without making it a singleton, then two different memory locations will be allocated for the same object. Whereas, if the object is declared as a singleton, in case it already exists in the memory then simply it will be reused.

40. What do you understand by REST in Angular?

REST stands for REpresentational State Transfer. REST is an API (Application Programming Interface) style that works on the HTTP request. In this, the requested URL pinpoints the data that needs to be processed. Further ahead, an HTTP method then identifies the specific operation that needs to be performed on that requested data. Thus, the APIs which follows this approach are known as RESTful APIs.

41. What is bootstrapping in Angular?

Bootstrapping in Angular is nothing but initializing, or starting the Angular app. Angular supports automatic and manual bootstrapping.

  • Automatic Bootstrapping: this is done by adding the ng-app directive to the root of the application, typically on the <html> or <body> tag, if you want Angular to bootstrap your application automatically. When Angular finds the ng-app directive, it loads the module associated with it and then compiles the DOM.
  • Manual Bootstrapping: Manual bootstrapping gives you more control over how and when to initialize your Angular application. It is useful when you want to perform any other operation before Angular wakes up and compiles the page.

42. What is the difference between a link and compile in Angular?

  • Compile function is used for template DOM Manipulation and to collect all the directives.
  • Link function is used for registering DOM listeners as well as instance DOM manipulation and is executed once the template has been cloned.

43. What do you understand by constants in Angular?

In Angular, constants are similar to services and are used to define global data. Constants are declared using the constant() method of a module. They are created using the constant dependency and can be injected anywhere into a controller or service.

44. What is the difference between a provider, a service and a factory in Angular?

  • Provider: a method using which you can pass a portion of your application into app.config.
  • Service: a method that is used to create a service instantiated with the ‘new’ keyword.
  • Factory: a method that is used for creating and configuring services. Here you create an object, add properties to it, return that same object, and pass the factory method into your controller.

45. What are Angular Global APIs?

Angular Global API is a combination of global JavaScript functions for performing various common tasks like:

  • Comparing objects
  • Iterating objects
  • Converting data

There are some common Angular Global API functions like:

  • angular.lowercase: Converts a string to lowercase.
  • angular.uppercase: Converts a string to uppercase.
  • angular.isString: Returns true if the current reference is a string.
  • angular.isNumber: Returns true if the current reference is a number.

Advanced Level – Angular Interview Questions

46. In Angular, describe how will you set, get and clear cookies?

For using cookies in Angular, you need to include the ngCookies module from angular-cookies.js.

To set Cookies – For setting the cookies in a key-value format, the ‘put’ method is used.

$cookies.put('nameOfCookie', 'cookieValue');

To get Cookies – For retrieving cookies, the ‘get’ method is used.

$cookies.get('nameOfCookie');

To clear Cookies – For removing cookies, the ‘remove’ method is used.

$cookies.remove('nameOfCookie');

47. If your data model is updated outside the ‘Zone’, explain the process of how you will update the view?

You can update your view using any of the following:

  1. ApplicationRef.prototype.tick(): It will perform change detection on the complete component tree.
  2. NgZone.prototype.run(): It will perform change detection on the entire component tree. Under the hood, run() calls tick() itself, taking a function as a parameter that is executed before the tick.
  3. ChangeDetectorRef.prototype.detectChanges(): It will launch the change detection on the current component and its children.

48. Explain ng-app directive in Angular.

ng-app directive is used to define Angular applications, which lets us use auto-bootstrap in an Angular application. It represents the root element of an Angular application and is generally declared near the <html> or <body> tag. Any number of ng-app directives can be defined within an HTML document, but just a single Angular application can be bootstrapped implicitly; the rest of the applications must be bootstrapped manually.

Example

<div ng-app="myApp" ng-controller="myCtrl">
First Name :
<input type="text" ng-model="firstName">
<br />
Last Name :
<input type="text" ng-model="lastName">
<br>
Full Name: {{firstName + " " + lastName }}
</div>

49. What is the process of inserting an embedded view from a prepared TemplateRef?

An embedded view is inserted by grabbing the template with @ViewChild as a TemplateRef and calling createEmbeddedView() on an injected ViewContainerRef, optionally passing a context object:

import { AfterViewChecked, Component, TemplateRef, ViewChild, ViewContainerRef } from '@angular/core';

@Component({
    selector: 'app-root',
    template: `
        <ng-template #template let-name="fromContext"><div>{{name}}</div></ng-template>
    `
})
export class AppComponent implements AfterViewChecked {
    @ViewChild('template', { read: TemplateRef }) _template: TemplateRef<any>;

    constructor(private vc: ViewContainerRef) { }

    ngAfterViewChecked() {
        // The second argument supplies the value for let-name in the template
        this.vc.createEmbeddedView(this._template, { fromContext: 'John' });
    }
}

50. How can you hide an HTML element just by a button click in angular?

An HTML element can be easily hidden using the ng-hide directive in conjunction with a controller that hides the element on a button click.

View

<div ng-controller="MyController">
  <button ng-click="hide()">Hide element</button>
  <p ng-hide="isHide">Hello World!</p>
</div>

Controller

angular.module('myApp', [])
.controller('MyController', function($scope) {
    $scope.isHide = false;
    $scope.hide = function() {
        $scope.isHide = true;
    };
});

So this brings us to the end of the Angular interview questions article. The topics that you learned in this Angular Interview Questions article are the most sought-after skill sets that recruiters look for in an Angular professional. This set of Angular Interview Questions will definitely help you ace your job interview. Good luck with your interview!

If you found this “Angular Interview Questions” relevant, check out the Angular Certification Training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. 

Got a question for us? Please mention it in the comments section of this Angular Interview Questions and we will get back to you.


AWS Certification – All you need to know

Each of the 5 AWS certifications comes with salaries in excess of $100,000 – Forbes

Over the last year or so, I’ve met a lot of people who have varying degrees of ambiguity around AWS certifications and which certification to choose for their specific nature of work or their career goals. This blog aims to demystify these ambiguities and provide clarity on what each AWS certification entails. For starters, let’s understand what AWS is and what certifications it provides:

Amazon Web Services (AWS), the popular cloud platform, houses a collection of cloud computing services that have opened up hot career prospects in the world of cloud computing. AWS has more than 70 services spanning a wide range, including compute, storage, networking, database, analytics, application services, deployment, management, mobile, developer tools and the Internet of Things. AWS also offers cloud certifications that assert your ability to operate on the cloud. There are five different certifications, and each of them opens the floodgates to enhanced career opportunities. Depending on your interest and career goals, you can choose to engage yourself in either of the two certification tracks prescribed by AWS. Let me now break them down for you, one by one.

AWS Certifications in 15 Minutes | Prepare for AWS Certifications | AWS Training | Edureka

AWS suite of certifications

AWS certifications are aligned to two broad streams – Solutions Architect and DevOps Engineer. The Solutions Architect stream is well defined; you should first bag the Solutions Architect – Associate certification followed by the Solutions Architect – Professional certification. But if you choose to certify yourself as an AWS DevOps Engineer, you have to clear either the Developer – Associate certification or the System Operations (SysOps) Administrator – Associate certification. Of course, you should choose based on your current job or the stream of specialization you aspire for.

Alternatively, AWS allows you to containerize its suite of certifications into three blocks – Solutions Architect, Developer and SysOps. While Solutions Architect is an independent stream in itself, a Developer and/or SysOps Associate certification can lead you to a DevOps Engineer certification.

AWS certification-career-progression

I have good news for you. The associate certifications of Solutions Architect and Developer share 50% content between them. So if you prepare for one, you’ve pretty much prepared half of the other as well. While most people I’ve met feel that Developer is the easiest to crack, Solutions Architect makes you a master of almost all of AWS’ services and helps you understand all key concepts. SysOps, on the other hand, is an ideal starting point if you are currently working as an infrastructure/system admin and/or are managing VMs, storage or networking in your current job.

AWS Certification Job Prospects

According to Forbes, AWS Certified Solution Architect – Associate is the number 1 certification program of 2016. In its global survey of top 15 certifications in 2016, Solutions Architect, with a median salary of $125,871, is the highest paying certification. Additionally, all of AWS’ certifications will help you earn salaries in excess of $100,000. Specific Solution Architect skills that recruiters look for include designing on AWS, selecting the appropriate AWS services for your business, ingress and egress of data to and from the AWS, estimating AWS costs and identifying cost-control measures for your organization.

Globally, there are more than 380,000 cloud computing jobs in the IT industry itself. With cloud computing pervading into almost every business vertical, the need for qualified and certified cloud professionals is ever-growing. AWS is currently leading the pack with most companies having invested in or planning to invest in AWS tools and services. This is a clear sign that good times are in store for you if you commit yourself to AWS.

Get Certified With Industry Level Projects & Fast Track Your Career

Which AWS Certification should you choose?

Depending on your interest, career goals and experience, you can decide which certification to go for. It is important to note that the Associate-level certifications do not require you to have any prior AWS experience. However, experts believe around 1 year of experience with any of the AWS tools will be a great help in cracking the exams faster. The Professional exams mandate 2 or more years of hands-on experience on AWS, but Amazon takes your word for it and does not require you to furnish any proof of experience. Let me now explain what you will achieve with each of these certifications:

aws certification-solutions-architect

Do you wish to get certified as an AWS Solutions Architect already? Our live, instructor-led course will help you prepare for the exam by mastering concepts around EC2, EIP, ELB, EBS, S3, and lots more. Check out full details here

aws certification-developer-associate

The Edureka course AWS Development Certification Training is specially curated to help you crack the AWS exam and become a certified developer. This live, instructor-led course helps you master concepts like cloud essentials, models, services and mastering AWS services and much more. Click here to know more

aws certification-sysops-associate

awscertification-architect-professional

aws cerification-devops-professional

Edureka tips and recommendations for AWS Certification

Now, here’s that part of the blog where I provide you a cheat sheet to ace your AWS certification. While each AWS certification has a different level of complexity, there are certain common tips that can help you become battle-ready.

  • Do remember that you need to pay a fee of $150 to enroll for the Associate exams and $300 to enroll for the Professional exams.
  • Questions are usually in multiple-choice format, with the pass percentage changing with every exam based on statistical analysis by Amazon.
  • An average Associate-level exam lasts about 80 minutes, contains around 60 questions, and has a pass percentage of around 60%.
  • For effective practice prior to taking the exam, we recommend you sign up with Amazon and activate the AWS Free Tier, which includes 750 hours of both Linux and Windows instances with 30 GB of EBS storage each month for one year for new AWS customers.
  • Utilize this space and time to get hands-on experience on AWS tools and operations. This is the most important aspect of the certifications, and the easiest way to understand core concepts.
  • Additionally, you can check out the Exam Blueprint mentioned in the AWS website for detailed information on specific exam modules.

I hope I’ve been able to help you de-clutter your head about AWS certifications. If you have attempted any of these exams in the recent past, I need to hear from you! Place your comments below and let’s connect.

If you have decided to prepare for an AWS certification, you should check out our courses on AWS Architect and AWS Development. Each of these courses is specially curated by industry experts, and come with live, real-life project experience and 24/7 technical support.

Related Posts:

AWS Career Opportunities

Deep dive into Amazon S3

10 Reasons Why Big Data Analytics is the Best Career Move


The Great White is considered to be the King of the Ocean. This is because the Great White is on top of its game. Imagine if you could be on top of the game in the ocean of Big Data!

Big Data is everywhere, and there is almost an urgent need to collect and preserve whatever data is being generated, for fear of missing out on something important. There is a huge amount of data floating around; what we do with it is all that matters right now. This is why Big Data Analytics is at the frontier of IT. Big Data Analytics has become crucial as it aids in improving business, decision making and providing the biggest edge over the competitors. This applies to organizations as well as professionals in the Analytics domain. For professionals who are skilled in Big Data Analytics, there is an ocean of opportunities out there.

Why Big Data Analytics is the Best Career move

If you are still not convinced by the fact that Big Data Analytics is one of the hottest skills, here are 10 more reasons for you to see the big picture.

1. Soaring Demand for Analytics Professionals:

Jeanne Harris, senior executive at Accenture Institute for High Performance, has stressed the significance of analytics professionals by saying, “…data is useless without the skill to analyze it.” There are more job opportunities in Big Data management and Analytics than there were last year and many IT professionals are prepared to invest time and money for the training.

The job trend graph for Big Data Analytics, from Indeed.com, proves that there is a growing trend for it and as a result there is a steady increase in the number of job opportunities.

Big Data Analytics Job Trend

Source: Indeed.com

The current demand for qualified data professionals is just the beginning. Srikanth Velamakanni, the Bangalore-based cofounder and CEO of CA-headquartered Fractal Analytics, states: “In the next few years, the size of the analytics market will evolve to at least one-third of the global IT market from the current one-tenth”.

Technology professionals who are experienced in Analytics are in high demand as organizations are looking for ways to exploit the power of Big Data. The number of job postings related to Analytics in Indeed and Dice has increased substantially over the last 12 months. Other job sites are showing similar patterns as well. This apparent surge is due to the increased number of organizations implementing Analytics and thereby looking for Analytics professionals.

In a study by QuinStreet Inc., it was found that the trend of implementing Big Data Analytics is zooming and is considered to be a high priority among U.S. businesses. A majority of the organizations are in the process of implementing it or actively planning to add this feature within the next two years.

You can watch the sample class recording of Edureka’s Big Data and Hadoop course here:

Hadoop Tutorial for Beginners | Hadoop Training | Edureka

 

2. Huge Job Opportunities & Meeting the Skill Gap:

The demand for Analytics skill is going up steadily but there is a huge deficit on the supply side. This is happening globally and is not restricted to any part of geography. In spite of Big Data Analytics being a ‘Hot’ job, there is still a large number of unfilled jobs across the globe due to shortage of required skill. A McKinsey Global Institute study states that the US will face a shortage of about 190,000 data scientists and 1.5 million managers and analysts who can understand and make decisions using Big Data by 2018.

To get in-depth knowledge on Data Science, you can enroll for live Data Science Certification Training by Edureka with 24/7 support and lifetime access.

India currently has the highest concentration of analytics professionals globally. In spite of this, the scarcity of data analytics talent is particularly acute, and demand for talent is expected to be on the higher side as more global organizations outsource their work.

According to Srikanth Velamakanni, co-founder and CEO of Fractal Analytics, there are two types of talent deficits: Data Scientists, who can perform analytics, and Analytics Consultants, who can understand and use data. The talent supply for these job titles, especially Data Scientists, is extremely scarce, and the demand is huge.

3. Salary Aspects:

Strong demand for Data Analytics skills is boosting the wages for qualified professionals and making Big Data pay big bucks for the right skill. This phenomenon is being seen globally where countries like Australia and the U.K are witnessing this ‘Moolah Marathon’.

According to the 2015 Skills and Salary Survey Report published by the Institute of Analytics Professionals of Australia (IAPA), the annual median salary for data analysts is $130,000, up four per cent from last year. Continuing the trend set in 2013 and 2014, the median respondent earns 184% of the Australian full-time median salary. The rising demand for analytics professionals is also reflected in IAPA’s membership, which has grown to more than 5000 members in Australia since its formation in 2006.

Randstad states that the annual pay hikes for Analytics professionals in India are on average 50% higher than for other IT professionals. According to The Indian Analytics Industry Salary Trend Report by Great Lakes Institute of Management, the average salaries for analytics professionals in India were up by 21% in 2015 as compared to 2014. The report also states that 14% of all analytics professionals get a salary of more than Rs. 15 lakh per annum.

A look at the salary trend for Big Data Analytics in the UK also indicates a positive and exponential growth. A quick search on Itjobswatch.co.uk shows a median salary of £62,500 in early 2016 for Big Data Analytics jobs, as compared to £55,000 in the same period in 2015. Also, a year-on-year median salary change of +13.63% is observed.

The table below looks at the statistics for Big Data Analytics skills in IT jobs advertised across the UK. Included is a guide to the salaries offered in IT jobs that have cited Big Data Analytics over the 3 months to 23 June 2016 with a comparison to the same period over the previous 2 years.


Analytics Job Titles and their Salaries

4. Big Data Analytics: A Top Priority in a lot of Organizations

According to the ‘Peer Research – Big Data Analytics’ survey, it was concluded that Big Data Analytics is one of the top priorities of the organizations participating in the survey, as they believe that it improves the performance of their organizations.

Big Data Analytics - A Top Priority in Organizations

Based on the responses, it was found that approximately 45% of those surveyed believe that Big Data Analytics will enable much more precise business insights, and 38% are looking to use Analytics to recognize sales and market opportunities. More than 60% of the respondents are depending on Big Data Analytics to boost the organization’s social media marketing abilities. The QuinStreet research based on their survey also backs the fact that Analytics is the need of the hour, where 77% of the respondents consider Big Data Analytics a top priority.

A survey by Deloitte, Technology in the Mid-Market; Perspectives and Priorities, reports that executives clearly see the value of analytics. Based on the survey, 65.2% of respondents are using some form of analytics that is helping their business needs. The image below clearly depicts their attitude and belief towards Big Data Analytics.

Big Data Analytics - A Top Priority in Organizations

5. Adoption of Big Data Analytics is Growing:

New technologies are now making it easier to perform increasingly sophisticated data analytics on very large and diverse datasets. This is evident from the report by The Data Warehousing Institute (TDWI). According to this report, more than a third of the respondents are currently using some form of advanced analytics on Big Data for Business Intelligence, Predictive Analytics and Data Mining tasks.

With Big Data Analytics providing an edge over the competition, the rate of implementation of the necessary Analytics tools has increased exponentially. In fact, most of the respondents of the  ‘Peer Research – Big Data Analytics’ survey reported that they already have a strategy setup for dealing with Big Data Analytics. And those who are yet to come up with a strategy are also in the process of planning for it.

Formal Strategy for Big Data Analytics

When it comes to Big Data Analytics tools, the adoption of the Apache Hadoop framework continues to be the popular choice. There are various commercial and open-source frameworks to choose from, and organizations are making the appropriate choice based on their requirements. Over half of the respondents have already deployed or are currently implementing a Hadoop distribution. Out of them, a quarter of the respondents have deployed an open-source framework, which is twice the number of organizations that have deployed a commercial distribution of the Hadoop framework.

Adoption of Big Data Analytics Tools

6. Analytics: A Key Factor in Decision Making

Analytics is a key competitive resource for many companies; there is no doubt about that. According to the ‘Analytics Advantage’ survey overseen by Tom Davenport, ninety-six percent of respondents feel that analytics will become more important to their organizations in the next three years. This is because there is a huge amount of data that is not being used and, at this point, only rudimentary analytics is being done. About forty-nine percent of the respondents strongly believe that analytics is a key factor in better decision-making capabilities. Another sixteen percent value it for enabling their key strategic initiatives.

Even though there is a fight for the title of ‘Greatest Benefit of Big Data Analytics’, one thing is undeniable and stands out the most: Analytics play an important role in driving business strategy and making effective business decisions.

Key Benefits of Big Data Analytics

Seventy-four percent of the respondents of the ‘Peer-Research Big Data Analytics Survey’ have agreed that Big Data Analytics adds value to their organization and provides vital information for making timely and effective business decisions of great importance. This is a clear indicator that Big Data Analytics is here to stay, and a career in it is one of the wisest decisions one can make.

7. The Rise of Unstructured and Semistructured Data Analytics:

The ‘Peer Research – Big Data Analytics’ survey clearly reports that there is huge growth when it comes to unstructured and semistructured data analytics. Eighty-four percent of the respondents have mentioned that the organizations they work for are currently processing and analyzing unstructured data sources, including weblogs, social media, e-mail, photos and video. The remaining respondents have indicated that steps are being taken to implement them in the next 12 to 18 months.

Rise of Unstructured Data

8. Big Data Analytics is Used Everywhere!

It is a given that there is a huge demand for Big Data Analytics owing to its awesome features. The tremendous growth is also due to the varied domains across which Analytics is being utilized. The image below depicts the job opportunities across various domains.

Big Data Analytics Across Domains

9. Surpassing Market Forecast / Predictions for Big Data Analytics:

Big Data Analytics topped a survey carried out by Nimbus Ninety as the most disruptive technology that will have the biggest influence in three years’ time. Added to this, there are more market forecasts that support it:

  • According to IDC, the Big Data Analytics market will reach $125 billion worldwide in 2015.
  • IIA states that Big Data Analytics tools will be the first line of defense, combining machine learning, text mining and ontology modeling to provide holistic and integrated security threat prediction, detection, and deterrence and prevention programs.
  • According to the survey ‘The Future of Big Data Analytics – Global Market and Technologies Forecast – 2015-2020’, Big Data Analytics Global Market will grow by 14.4% CAGR over this period.
  • The Big Data Analytics Global Market for Apps and Analytics Technology will grow by 28.2% CAGR, for Cloud Technology by 16.1% CAGR, for Computing Technology by 7.1% CAGR, and for NoSQL Technology by 18.9% CAGR over the entire 2015-2020 period.

10. Numerous Choices in Job Titles and Type of Analytics :

From a career point of view, there are many options available, in terms of domain as well as the nature of the job. Since Analytics is utilized in varied fields, there are numerous job titles for one to choose from.

  • Big Data Analytics Business Consultant
  • Big Data Analytics Architect
  • Big Data Engineer
  • Big Data Solution Architect
  • Big Data Analyst
  • Analytics Associate
  • Business Intelligence and Analytics Consultant
  • Metrics and Analytics Specialist

A career in Big Data Analytics runs deep, and one can choose from the 3 types of data analytics depending on the Big Data environment:

  • Prescriptive Analytics
  • Predictive Analytics
  • Descriptive Analytics.

A huge array of organizations like Ayata, IBM, Alteryx, Teradata, TIBCO, Microsoft, Platfora, ITrend, Karmasphere, Oracle, Opera, Datameer, Pentaho, Centrofuge, FICO, Domo, Quid, Saffron, Jaspersoft, GoodData, Bluefin Labs, Tracx, Panaroma Software, and countless more are utilizing Big Data Analytics for their business needs, and huge job opportunities are possible with them.

Conclusion:

Analytics, no matter how advanced, does not remove the need for human insight. On the contrary, there is a compelling need for skilled people who can understand data, think from the business point of view and come up with insights. For this very reason, technology professionals with Analytics skills are finding themselves in high demand as businesses look to harness the power of Big Data. A professional with analytical skills can master the ocean of Big Data and become a vital asset to an organization, boosting the business and their career.

Get-a-Structured-Course-on-Hadoop

Got a question for us? Please mention them in the comments section and we will get back to you.

Related Posts:

Top 10 Reasons To Learn Hadoop

Hadoop Tutorial for Beginners

Career in Big Data Analytics

Become Certified Big Data Professional

Top 100 Python Interview Questions You Must Prepare In 2019


Python Certification is one of the most sought-after skills in the programming domain. In this Python Interview Questions blog, I will introduce you to the most frequently asked questions in Python interviews. Our Python Interview Questions is the one-stop resource from where you can boost your interview preparation. We have 100+ questions on Python Programming basics which will help you with different expertise levels to reap the maximum benefit from our blog.

Edureka 2019 Tech Career Guide is out! Hottest job roles, precise learning paths, industry outlook & more in the guide. Download now.
We have compiled a list of top Python interview questions which are classified into 7 sections, namely:
Before moving ahead, you may go through the recording of Python Interview Questions where our instructor has shared his experience and expertise that will help you to crack any Python Interview:

Python Interview Questions And Answers | Python Training | Edureka

If you have other doubts regarding Python, feel free to post them in our QnA Forum. Our expert team will get back to you at the earliest.

Basic Python Interview Questions

Q1. What is the difference between list and tuples in Python?

LIST vs TUPLES

  • Lists are mutable, i.e. they can be edited. Tuples are immutable (tuples are lists which can’t be edited).
  • Lists are slower than tuples. Tuples are faster than lists.
  • List syntax: list_1 = [10, ‘Chelsea’, 20]. Tuple syntax: tup_1 = (10, ‘Chelsea’, 20)

Q2. What are the key features of Python?

  • Python is an interpreted language. That means that, unlike languages like C and its variants, Python does not need to be compiled before it is run. Other interpreted languages include PHP and Ruby.
  • Python is dynamically typed. This means that you don’t need to state the types of variables when you declare them. You can do things like x=111 and then x="I'm a string" without error.
  • Python is well suited to object-oriented programming in that it allows the definition of classes along with composition and inheritance. Python does not have access specifiers (like C++’s public, private).
  • In Python, functions are first-class objects. This means that they can be assigned to variables, returned from other functions and passed into functions. Classes are also first-class objects.
  • Writing Python code is quick, but running it is often slower than compiled languages. Fortunately, Python allows the inclusion of C-based extensions, so bottlenecks can be optimized away and often are. The numpy package is a good example of this: it’s really quite quick because a lot of the number crunching it does isn’t actually done by Python.
  • Python finds use in many spheres – web applications, automation, scientific modeling, big data applications and many more. It’s also often used as “glue” code to get other languages and components to play nice.

Q3. What type of language is python? Programming or scripting?

Ans: Python is capable of scripting, but in general sense, it is considered as a general-purpose programming language. To know more about Scripting, you can refer to the Python Scripting Tutorial.

Q4.How is Python an interpreted language?

Ans: An interpreted language is any programming language which is not in machine level code before runtime. Therefore, Python is an interpreted language.

Q5.What is pep 8?

Ans: PEP stands for Python Enhancement Proposal. It is a set of rules that specify how to format Python code for maximum readability.

Q6. How is memory managed in Python?

Ans: 

  1. Memory management in python is managed by Python private heap space. All Python objects and data structures are located in a private heap. The programmer does not have access to this private heap. The python interpreter takes care of this instead.
  2. The allocation of heap space for Python objects is done by Python’s memory manager. The core API gives access to some tools for the programmer to code.
  3. Python also has an inbuilt garbage collector, which recycles all the unused memory so that it can be made available to the heap space.

Q7. What is namespace in Python?

Ans: A namespace is a naming system used to make sure that names are unique to avoid naming conflicts.
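For instance, the same name can live in different namespaces without clashing — a minimal sketch:

x = 10                  # 'x' in the global namespace

def func():
    x = 20              # a different 'x' in the local namespace of func
    print(x)            # 20

func()
print(x)                # 10 -- the global x is untouched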

Q8. What is PYTHONPATH?

Ans: It is an environment variable which is used when a module is imported. Whenever a module is imported, PYTHONPATH is also looked up to check for the presence of the imported modules in various directories. The interpreter uses it to determine which module to load.

Q9. What are python modules? Name some commonly used built-in modules in Python?

Ans: Python modules are files containing Python code. This code can be functions, classes or variables. A Python module is a .py file containing executable code.

Some of the commonly used built-in modules are:

  • os
  • sys
  • math
  • random
  • datetime
  • JSON

Q10.What are local variables and global variables in Python?

Global Variables:

Variables declared outside a function or in global space are called global variables. These variables can be accessed by any function in the program.

Local Variables:

Any variable declared inside a function is known as a local variable. This variable is present in the local space and not in the global space.

Example:

a=2                       #Global Variable
def add():
    b=3                   #Local Variable
    c=a+b
    print(c)
add()

Output: 5

When you try to access the local variable outside the function add(), it will throw an error.

Q11. Is python case sensitive?

Ans: Yes. Python is a case sensitive language.

Q12.What is type conversion in Python?

Ans: Type conversion refers to the conversion of one data type into another.

int() – converts any data type into integer type

float() – converts any data type into float type

ord() – converts characters into integer

hex() – converts integers to hexadecimal

oct() – converts integer to octal

tuple() – This function is used to convert to a tuple.

set() – This function returns the type after converting to set.

list() – This function is used to convert any data type to a list type.

dict() – This function is used to convert a tuple of order (key,value) into a dictionary.

str() – Used to convert integer into a string.

complex(real,imag) – This function converts real numbers to complex(real,imag) numbers.

Q13. How to install Python on Windows and set path variable?

Ans: To install Python on Windows, follow the below steps:

  • Install python from this link: https://www.python.org/downloads/
  • After this, install it on your PC. Look for the location where Python has been installed on your PC by running, for example, where python at the command prompt.
  • Then go to advanced system settings, add a new variable, name it PYTHON_HOME and paste the copied path.
  • Look for the path variable, select its value and select ‘edit’.
  • Add a semicolon towards the end of the value if it’s not present and then type %PYTHON_HOME% 

Q14. Is indentation required in python?

Ans: Indentation is necessary for Python. It specifies a block of code. All code within loops, classes, functions, etc. is specified within an indented block. It is usually done using four space characters. If your code is not indented properly, it will not execute accurately and will throw errors as well.

Q15. What is the difference between Python Arrays and lists?

Ans: Arrays and lists, in Python, have the same way of storing data. But, arrays can hold elements of only a single data type whereas lists can hold elements of any data type.

Example:

import array as arr
My_Array=arr.array('i',[1,2,3,4])
My_list=[1,'abc',1.20]
print(My_Array)
print(My_list)

Output:

array(‘i’, [1, 2, 3, 4]) [1, ‘abc’, 1.2]

Q16. What are functions in Python?

Ans: A function is a block of code which is executed only when it is called. To define a Python function, the def keyword is used.

Example:

def Newfunc():
    print("Hi, Welcome to Edureka")
Newfunc()    # calling the function

Output: Hi, Welcome to Edureka

Q17.What is __init__?

Ans: __init__ is a method or constructor in Python. This method is automatically called to allocate memory when a new object/ instance of a class is created. All classes have the __init__ method.

Here is an example of how to use it.

class Employee:
    def __init__(self, name, age, salary):
        self.name = name
        self.age = age
        self.salary = salary
E1 = Employee("XYZ", 23, 20000)
# E1 is the instance of class Employee.
# __init__ allocates memory for E1.
print(E1.name)
print(E1.age)
print(E1.salary)

Output:

XYZ

23

20000

Q18.What is a lambda function?

Ans: An anonymous function is known as a lambda function. This function can have any number of parameters but can have only one expression.

Example:

a = lambda x,y : x+y
print(a(5, 6))

Output: 11

Q19. What is self in Python?

Ans: Self is an instance or an object of a class. In Python, this is explicitly included as the first parameter. However, this is not the case in Java where it’s optional.  It helps to differentiate between the methods and attributes of a class with local variables.

The self variable in the init method refers to the newly created object while in other methods, it refers to the object whose method was called.
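A minimal sketch of how self refers to the calling instance:

class Dog:
    def __init__(self, name):
        self.name = name          # attribute bound to this particular instance

    def speak(self):
        print(self.name, "says woof")

d = Dog("Rex")
d.speak()                         # equivalent to Dog.speak(d)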

Q20. How does break, continue and pass work?

  • Break: Allows loop termination when some condition is met, and the control is transferred to the next statement.
  • Continue: Allows skipping some part of a loop when some specific condition is met, and the control is transferred to the beginning of the loop.
  • Pass: Used when you need some block of code syntactically but want to skip its execution. This is basically a null operation; nothing happens when it is executed.
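A short sketch showing all three in action:

for i in range(6):
    if i == 2:
        continue        # skip the rest of this iteration
    if i == 4:
        break           # terminate the loop entirely
    print(i)            # prints 0, 1, 3

def todo():
    pass                # a syntactically required block that does nothing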

Q21. What does [::-1] do?

Ans: [::-1] is used to reverse the order of an array or a sequence.
For example:
import array as arr
My_Array=arr.array('i',[1,2,3,4,5])
My_Array[::-1]

Output: array(‘i’, [5, 4, 3, 2, 1])

[::-1] returns a reversed copy of ordered data structures such as an array or a list; the original array or list remains unchanged.

 

Q22. How can you randomize the items of a list in place in Python?

Ans: Consider the example shown below:

from random import shuffle
x = ['Keep', 'The', 'Blue', 'Flag', 'Flying', 'High']
shuffle(x)
print(x)

The output of the following code is as below.

['Flying', 'Keep', 'Blue', 'High', 'The', 'Flag']

Q23. What are python iterators?

Ans: Iterators are objects which can be traversed though or iterated upon.
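For example, iter() obtains an iterator from a list and next() advances it:

nums = [1, 2, 3]
it = iter(nums)         # get an iterator from the list
print(next(it))         # 1
print(next(it))         # 2
print(next(it))         # 3 -- one more next(it) would raise StopIteration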

Q24. How can you generate random numbers in Python?

Ans: Random module is the standard module that is used to generate a random number. The method is defined as:

import random
random.random()

The random.random() method returns a floating point number in the range [0, 1). The function generates random float numbers. The methods used with the random class are bound methods of hidden instances, and separate instances of the Random class can be created so that, for example, multi-threaded programs can give each thread its own independent generator. The other random generators that are used in this are:

  1. randrange(a, b): it chooses an integer in the range [a, b). It returns the elements by selecting them randomly from the specified range. It doesn’t build a range object.
  2. uniform(a, b): it chooses a floating point number in the range [a, b) and returns it.
  3. normalvariate(mean, sdev): it is used for the normal distribution, where mu is the mean and sdev is the sigma used for standard deviation.
  4. The Random class can be instantiated to create multiple independent random number generators.

Q25. What is the difference between range & xrange?

Ans: For the most part, xrange and range are the exact same in terms of functionality. They both provide a way to generate a list of integers for you to use, however you please. The only difference is that range returns a Python list object and xrange returns an xrange object.

This means that xrange doesn’t actually generate a static list at run-time like range does. It creates the values as you need them with a special technique called yielding. This technique is used with a type of object known as generators. That means that if you have a really gigantic range you’d like to generate a list for, say one billion, xrange is the function to use.

This is especially true if you have a really memory sensitive system such as a cell phone that you are working with, as range will use as much memory as it can to create your array of integers, which can result in a Memory Error and crash your program. It’s a memory hungry beast.
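A minimal Python 2 sketch of the difference (in Python 3, xrange is gone and range itself behaves lazily):

# Python 2
print range(5)       # [0, 1, 2, 3, 4] -- a full list built in memory
print xrange(5)      # xrange(5) -- values are produced on demand
for n in xrange(5):
    print n,         # 0 1 2 3 4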

Q26. How do you write comments in python?

Ans: Comments in Python start with a # character. However, alternatively at times, commenting is done using docstrings(strings enclosed within triple quotes).

Example:

#Comments in Python start like this
print("Comments in Python start with a #")

Output:  Comments in Python start with a #

Q27. What is pickling and unpickling?

Ans: The pickle module accepts any Python object, converts it into a string representation and dumps it into a file by using the dump function; this process is called pickling. The process of retrieving original Python objects from the stored string representation is called unpickling.
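A minimal sketch using the pickle module (the file name is illustrative):

import pickle

data = {'name': 'edureka', 'topics': ['python', 'ml']}

with open('data.pkl', 'wb') as f:
    pickle.dump(data, f)          # pickling: object -> byte stream on disk

with open('data.pkl', 'rb') as f:
    restored = pickle.load(f)     # unpickling: byte stream -> original object

print(restored)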

Q28. What are the generators in python?

Ans: Functions that return an iterable set of items are called generators.
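For example, a function containing yield returns a generator that produces values lazily:

def squares(n):
    for i in range(n):
        yield i * i        # execution pauses here between values

for s in squares(4):
    print(s)               # 0, 1, 4, 9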

Q29. How will you capitalize the first letter of string?

Ans: In Python, the capitalize() method capitalizes the first letter of a string. If the string already consists of a capital letter at the beginning, then, it returns the original string.
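Example:

stg = 'python interview'
print(stg.capitalize())

Output: Python interview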

Q30. How will you convert a string to all lowercase?

Ans: To convert a string to lowercase, lower() function can be used.

Example:

stg='ABCD'
print(stg.lower())

Output: abcd

Q31. How to comment multiple lines in python?

Ans: Multi-line comments appear in more than one line. All the lines to be commented are to be prefixed by a #. There is also a very good shortcut method to comment multiple lines: hold the Ctrl key and left-click in every place where you want a # character, then type a # just once. This will comment out all the lines where you introduced your cursor.

Q32.What are docstrings in Python?

Ans: Docstrings are not actually comments, but, they are documentation strings. These docstrings are within triple quotes. They are not assigned to any variable and therefore, at times, serve the purpose of comments as well.

Example:

"""
Using docstring as a comment.
This code divides 2 numbers
"""
x=8
y=4
z=x/y
print(z)

Output: 2.0

Q33. What is the purpose of is, not and in operators?

Ans: Operators are special functions. They take one or more values and produce a corresponding result.

is: returns true when both operands refer to the same object (Example: “a” is ‘a’)

not: returns the inverse of the boolean value

in: checks if some element is present in some sequence
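A quick sketch of all three operators:

a = [1, 2, 3]
b = a
print(b is a)            # True -- both names refer to the same object
print(not (0 in a))      # True -- 'not' inverts the membership test
print(2 in a)            # True -- 2 is present in the sequence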

Q34. What is the usage of help() and dir() function in Python?

Ans: Both the help() and dir() functions are accessible from the Python interpreter and are used for viewing a consolidated dump of built-in functions.

  1. Help() function: The help() function is used to display the documentation string and also facilitates you to see the help related to modules, keywords, attributes, etc.
  2. Dir() function: The dir() function is used to display the defined symbols.

Q35. Whenever Python exits, why isn’t all the memory de-allocated?

Ans: 

  1. Whenever Python exits, those Python modules which have circular references to other objects, or objects that are referenced from the global namespace, are not always de-allocated or freed.
  2. It is impossible to de-allocate those portions of memory that are reserved by the C library.
  3. On exit, because of having its own efficient clean up mechanism, Python would try to de-allocate/destroy every other object.

Q36. What is a dictionary in Python?

Ans: Dictionary is one of the built-in datatypes in Python. It defines a one-to-one relationship between keys and values. Dictionaries contain pairs of keys and their corresponding values, and are indexed by keys.

Let’s take an example:

The following example contains some keys. Country, Capital & PM. Their corresponding values are India, Delhi and Modi respectively.

dict={'Country':'India','Capital':'Delhi','PM':'Modi'}
print(dict['Country'])   # India
print(dict['Capital'])   # Delhi
print(dict['PM'])        # Modi

Q37. How can the ternary operators be used in python?

Ans: The Ternary operator is the operator that is used to show conditional statements. It consists of the true or false values with an expression that has to be evaluated.

Syntax:

The Ternary operator will be given as:

[on_true] if [expression] else [on_false]

Example:

x, y = 25, 50
big = x if x < y else y

Here, if x < y is true then big is assigned the value of x; otherwise big is assigned the value of y.

Q38. What does this mean: *args, **kwargs? And why would we use it?

Ans: We use *args when we aren’t sure how many arguments are going to be passed to a function, or if we want to pass a stored list or tuple of arguments to a function. **kwargs is used when we don’t know how many keyword arguments will be passed to a function, or it can be used to pass the values of a dictionary as keyword arguments. The identifiers args and kwargs are a convention, you could also use *bob and **billy but that would not be wise.
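A minimal sketch of both:

def demo(*args, **kwargs):
    print(args)          # positional arguments collected into a tuple
    print(kwargs)        # keyword arguments collected into a dict

demo(1, 2, 3, site='edureka', topic='python')
# (1, 2, 3)
# {'site': 'edureka', 'topic': 'python'}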

Q39. What does len() do?

Ans: It is used to determine the length of a string, a list, an array, etc.

Example:

stg='ABCD'
len(stg)

Q40. Explain split(), sub(), subn() methods of “re” module in Python.

Ans: To modify strings, Python’s “re” module provides 3 methods. They are:

  • split() – uses a regex pattern to “split” a given string into a list.
  • sub() – finds all substrings where the regex pattern matches and then replace them with a different string
  • subn() – it is similar to sub() and also returns the new string along with the no. of replacements.
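A short sketch of the three methods:

import re

text = "one1two2three"
print(re.split(r'\d', text))        # ['one', 'two', 'three']
print(re.sub(r'\d', '-', text))     # one-two-three
print(re.subn(r'\d', '-', text))    # ('one-two-three', 2)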

Q41. What are negative indexes and why are they used?

Ans: Sequences in Python are indexed, and indexes can be positive as well as negative numbers. Positive indexing uses ‘0’ as the first index and ‘1’ as the second index, and the process goes on like that.

The index for negative numbers starts from ‘-1’, which represents the last index in the sequence, with ‘-2’ as the penultimate index, and the sequence carries forward like the positive numbers.

The negative index is used to remove trailing characters, such as a newline, from a string by excluding the last character, as in S[:-1]. Negative indexes are also used to access elements of a sequence counted from the end.
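Example:

S = "edureka\n"
print(S[-1])       # the last character, here a newline
print(S[:-1])      # 'edureka' -- everything except the last character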

Q42. What are Python packages?

Ans: Python packages are namespaces containing multiple modules.

Q43.How can files be deleted in Python?

Ans: To delete a file in Python, you need to import the OS Module. After that, you need to use the os.remove() function.

Example:

import os
os.remove("xyz.txt")

Q44. What are the built-in types of python?

Ans: Built-in types in Python are as follows –

  • Integers
  • Floating-point
  • Complex numbers
  • Strings
  • Boolean
  • Built-in functions

Q45. What advantages do NumPy arrays offer over (nested) Python lists?

Ans: 

  1. Python’s lists are efficient general-purpose containers. They support (fairly) efficient insertion, deletion, appending, and concatenation, and Python’s list comprehensions make them easy to construct and manipulate.
  2. They have certain limitations: they don’t support “vectorized” operations like elementwise addition and multiplication, and the fact that they can contain objects of differing types means that Python must store type information for every element, and must execute type-dispatching code when operating on each element.
  3. NumPy is not just more efficient; it is also more convenient. You get a lot of vector and matrix operations for free, which sometimes allow one to avoid unnecessary work. And they are also efficiently implemented.
  4. A NumPy array is faster, and you get a lot built in with NumPy: FFTs, convolutions, fast searching, basic statistics, linear algebra, histograms, etc. 
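A small sketch of the vectorized operations that plain lists lack:

import numpy as np

a = np.array([1, 2, 3])
b = np.array([10, 20, 30])
print(a + b)       # [11 22 33] -- elementwise addition
print(a * b)       # [10 40 90] -- elementwise multiplication

# the same elementwise addition with plain lists needs an explicit loop:
print([x + y for x, y in zip([1, 2, 3], [10, 20, 30])])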

Q46. How to add values to a python array?

Ans: Elements can be added to an array using the append(), extend() and insert(i, x) functions.

Example:

import array as arr
a=arr.array('d', [1.1 , 2.1 ,3.1] )
a.append(3.4)
print(a)
a.extend([4.5,6.3,6.8])
print(a)
a.insert(2,3.8)
print(a)

Output:

array(‘d’, [1.1, 2.1, 3.1, 3.4])

array(‘d’, [1.1, 2.1, 3.1, 3.4, 4.5, 6.3, 6.8])

array(‘d’, [1.1, 2.1, 3.8, 3.1, 3.4, 4.5, 6.3, 6.8])

Q47. How to remove values from a python array?

Ans: Array elements can be removed using pop() or remove() method. The difference between these two functions is that the former returns the deleted value whereas the latter does not.

Example:

import array as arr
a=arr.array('d', [1.1, 2.2, 3.8, 3.1, 3.7, 1.2, 4.6])
print(a.pop())
print(a.pop(3))
a.remove(1.1)
print(a)

Output:

4.6

3.1

array(‘d’, [2.2, 3.8, 3.7, 1.2])

Q48. Does Python have OOps concepts?

Ans: Python is an object-oriented programming language. This means that any program can be solved in python by creating an object model. However, Python can be treated as procedural as well as structural language.

Q49. What is the difference between deep and shallow copy?

Ans: Shallow copy is used when a new instance type gets created; it keeps the values that are copied in the new instance. A shallow copy copies the reference pointers just like it copies the values. These references point to the original objects, so changes made to a nested member through either copy will also affect the original. Shallow copy allows faster execution of the program.

Deep copy copies the values as well as the objects they refer to; it doesn’t copy the reference pointers to the objects. A deep copy creates fully independent new objects, so changes made in the original copy won’t affect any other copy that uses the object. Deep copy makes execution of the program slower because it makes a full copy of each object that is referenced.
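A minimal sketch using the copy module:

import copy

original = [[1, 2], [3, 4]]
shallow = copy.copy(original)      # copies only the outer list
deep = copy.deepcopy(original)     # recursively copies the nested lists too

original[0][0] = 99
print(shallow[0][0])    # 99 -- the shallow copy shares the inner lists
print(deep[0][0])       # 1  -- the deep copy is fully independent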

Q50. How is Multithreading achieved in Python?

Ans: 

  1. Python has a multi-threading package but if you want to multi-thread to speed your code up, then it’s usually not a good idea to use it.
  2. Python has a construct called the Global Interpreter Lock (GIL). The GIL makes sure that only one of your ‘threads’ can execute at any one time. A thread acquires the GIL, does a little work, then passes the GIL onto the next thread.
  3. This happens very quickly so to the human eye it may seem like your threads are executing in parallel, but they are really just taking turns using the same CPU core.
  4. All this GIL passing adds overhead to execution. This means that if you want to make your code run faster then using the threading package often isn’t a good idea.
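A small experiment that illustrates the effect — on a typical CPython build the threaded version is no faster (exact timings vary by machine):

import threading
import time

def count(n):
    while n > 0:
        n -= 1

N = 10000000

start = time.time()
count(N)
count(N)
print("sequential:", time.time() - start)

start = time.time()
t1 = threading.Thread(target=count, args=(N,))
t2 = threading.Thread(target=count, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
print("two threads:", time.time() - start)   # usually no faster, due to the GIL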

Q51. What is the process of compilation and linking in python?

Ans: Compilation and linking allow new extensions to be compiled properly without any error, and linking can be done only once the extension passes the compile step. If dynamic loading is used, then it depends on the style that is being provided with the system. The python interpreter can be used to provide the dynamic loading of the configuration setup files and will rebuild the interpreter.

The steps that are required in this as:

  1. Create a file with any name and in any language that is supported by the compiler of your system. For example file.c or file.cpp
  2. Place this file in the Modules/ directory of the distribution which is getting used.
  3. Add a line in the file Setup.local that is present in the Modules/ directory.
  4. Run the file using spam file.o
  5. After a successful run of this rebuild the interpreter by using the make command on the top-level directory.
  6. If the file is changed then run rebuildMakefile by using the command as ‘make Makefile’.

Q52. What are Python libraries? Name a few of them.

Python libraries are a collection of Python packages. Some of the majorly used python libraries are – Numpy, Pandas, Matplotlib, Scikit-learn and many more.

Q53. What is split used for?

The split() method is used to separate a given string in Python.

Example:

a="edureka python"
print(a.split())

Output:  [‘edureka’, ‘python’]

Q54. How to import modules in python?

Modules can be imported using the import keyword.  You can import modules in three ways-

Example:


import array           #importing using the original module name
import array as arr    # importing using an alias name
from array import *    #imports everything present in the array module

OOPS Interview Questions

Q55. Explain Inheritance in Python with an example.

Ans: Inheritance allows one class to gain all the members (say attributes and methods) of another class. Inheritance provides code reusability and makes it easier to create and maintain an application. The class from which we are inheriting is called the super-class, and the class that inherits is called a derived/child class.

There are different types of inheritance supported by Python:

  1. Single Inheritance – where a derived class acquires the members of a single super class.
  2. Multi-level inheritance – a derived class d1 is inherited from base class base1, and d2 is inherited from d1.
  3. Hierarchical inheritance – from one base class you can inherit any number of child classes.
  4. Multiple inheritance – a derived class is inherited from more than one base class.
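A minimal sketch of the simplest case, single inheritance:

class Parent:
    def greet(self):
        print("Hello from Parent")

class Child(Parent):      # Child acquires greet() from Parent
    pass

Child().greet()           # Hello from Parent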

Q56. How are classes created in Python? 

Ans: Class in Python is created using the class keyword.

Example:

class Employee:
    def __init__(self, name):
        self.name = name
E1=Employee("abc")
print(E1.name)

Output: abc

Q57. What is monkey patching in Python?

Ans: In Python, the term monkey patch only refers to dynamic modifications of a class or module at run-time.

Consider the below example:

# m.py
class MyClass:
    def f(self):
        print("f()")

We can then run the monkey-patch testing like this:

import m

def monkey_f(self):
    print("monkey_f()")

m.MyClass.f = monkey_f
obj = m.MyClass()
obj.f()

The output will be as below:

monkey_f()

As we can see, we did make some changes in the behavior of f() in MyClass using the function we defined, monkey_f(), outside of the module m.

Q58. Does python support multiple inheritance?

Ans: Multiple inheritance means that a class can be derived from more than one parent classes. Python does support multiple inheritance, unlike Java.

Q59. What is Polymorphism in Python?

Ans: Polymorphism means the ability to take multiple forms. So, for instance, if the parent class has a method named ABC then the child class also can have a method with the same name ABC having its own parameters and variables. Python allows polymorphism.
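For example, two unrelated classes can define the same method name and be used interchangeably:

class Cat:
    def sound(self):
        return "meow"

class Dog:
    def sound(self):
        return "woof"

for animal in (Cat(), Dog()):
    print(animal.sound())    # the same call works on both types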

Q60. Define encapsulation in Python?

Ans: Encapsulation means binding the code and the data together. A Python class is an example of encapsulation.
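A minimal sketch bundling data with the methods that operate on it:

class Account:
    def __init__(self, balance):
        self._balance = balance       # leading underscore: treat as internal

    def deposit(self, amount):
        self._balance += amount

    def get_balance(self):
        return self._balance

acc = Account(100)
acc.deposit(50)
print(acc.get_balance())    # 150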

Q61. How do you do data abstraction in Python?

Ans: Data Abstraction is providing only the required details and hiding the implementation from the world. It can be achieved in Python by using interfaces and abstract classes.
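A minimal sketch using the abc module:

from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self):
        pass                 # implementation is hidden; subclasses must provide it

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side

print(Square(4).area())      # 16
# Shape() itself would raise TypeError: can't instantiate abstract class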

Q62.Does python make use of access specifiers?

Ans: Python does not restrict access to an instance variable or function. Python lays down the concept of prefixing the name of a variable, function or method with a single or double underscore to imitate the behavior of protected and private access specifiers.

Q63. How to create an empty class in Python? 

Ans: An empty class is a class that does not have any code defined within its block. It can be created using the pass keyword. However, you can create objects of this class outside the class itself. In Python, the pass command does nothing when it is executed; it’s a null statement.
For example-
class a:
    pass
obj=a()
obj.name="xyz"
print("Name = ",obj.name)

Output: 

Name =  xyz

Q64. What does an object() do?

Ans: It returns a featureless object that is a base for all classes. Also, it does not take any parameters.

Basic Python Programs

Q65. Write a program in Python to execute the Bubble sort algorithm.

def bs(a):             # a = name of list
    b=len(a)-1         # minus 1 because we always compare 2 adjacent values
                             
    for x in range(b):
        for y in range(b-x):
            if a[y]>a[y+1]:
                a[y],a[y+1]=a[y+1],a[y]
    return a
a=[32,5,3,6,7,54,87]
bs(a)

Output:  [3, 5, 6, 7, 32, 54, 87]

Q66. Write a program in Python to produce Star triangle.


def pyfunc(r):
    for x in range(r):
        print(' '*(r-x-1)+'*'*(2*x+1))    
pyfunc(9)

Output:

        *
       ***
      *****
     *******
    *********
   ***********
  *************
 ***************
*****************

Q67. Write a program to produce Fibonacci series in Python.

# Enter number of terms needed                   #0,1,1,2,3,5....
a=int(input("Enter the terms "))
f=0                                         #first element of series
s=1                                         #second element of series
if a<=0:
    print("The requested series is",f)
else:
    print(f,s,end=" ")
    for x in range(2,a):
        next=f+s
        print(next,end=" ")
        f=s
        s=next

 

Output: Enter the terms 5 0 1 1 2 3

Q68. Write a program in Python to check if a number is prime.

a=int(input("enter number"))     
if a>1:
    for x in range(2,a):
        if(a%x)==0:
            print("not prime")
            break
    else:
        print("Prime")
else:
    print("not prime")

Output:

enter number 3

Prime

Q69. Write a program in Python to check if a sequence is a Palindrome.

a=input("enter sequence")
b=a[::-1]
if a==b:
    print("palindrome")
else:
    print("Not a Palindrome")

Output:

enter sequence 323 palindrome

Q70. Write a one-liner that will count the number of capital letters in a file. Your code should work even if the file is too big to fit in memory.

Ans:  Let us first write a multiple line solution and then convert it to one-liner code.

with open(SOME_LARGE_FILE) as fh:
    count = 0
    for line in fh:                  # read line by line, so the whole file is never held in memory
        for character in line:
            if character.isupper():
                count += 1

We will now try to transform this into a single line.

with open(SOME_LARGE_FILE) as fh:
    count = sum(1 for line in fh for character in line if character.isupper())

Q71. Write a sorting algorithm for a numerical dataset in Python.

Ans: The following code can be used to sort a list in Python:

numbers = ["1", "4", "0", "6", "9"]
numbers = [int(i) for i in numbers]   # convert the strings to integers first
numbers.sort()
print(numbers)

Q72. Looking at the below code, write down the final values of A0, A1, …An.

A0 = dict(zip(('a','b','c','d','e'),(1,2,3,4,5)))
A1 = range(10)
A2 = sorted([i for i in A1 if i in A0])
A3 = sorted([A0[s] for s in A0])
A4 = [i for i in A1 if i in A3]
A5 = {i:i*i for i in A1}
A6 = [[i,i*i] for i in A1]
print(A0,A1,A2,A3,A4,A5,A6)

Ans: The following will be the final outputs of A0, A1, … A6

A0 = {'a': 1, 'c': 3, 'b': 2, 'e': 5, 'd': 4} # the order may vary
A1 = range(0, 10) 
A2 = []
A3 = [1, 2, 3, 4, 5]
A4 = [1, 2, 3, 4, 5]
A5 = {0: 0, 1: 1, 2: 4, 3: 9, 4: 16, 5: 25, 6: 36, 7: 49, 8: 64, 9: 81}
A6 = [[0, 0], [1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36], [7, 49], [8, 64], [9, 81]]

Python Libraries Interview Questions

Q73. Explain what Flask is and its benefits?

Ans: Flask is a web microframework for Python based on “Werkzeug, Jinja2 and good intentions”, released under the BSD license. Werkzeug and Jinja2 are two of its dependencies, which means it has little to no dependence on external libraries. This keeps the framework light, with few dependencies to update and fewer security bugs.

A session basically allows you to remember information from one request to another. In Flask, a session uses a signed cookie, so the user can look at the session contents and modify them. The user can modify the session only if they have the secret key Flask.secret_key.

Q74. Is Django better than Flask?

Ans: Django and Flask map the URL’s or addresses typed in the web browsers to functions in Python. 

Flask is much simpler compared to Django, but Flask does not do a lot for you, meaning you will need to specify the details, whereas Django does a lot for you and you would not need to do much work. Django consists of prewritten code which the user will need to analyze, whereas Flask lets users write their own code, therefore making it simpler to understand. Technically both are equally good and both come with their own pros and cons.

Q75. Mention the differences between Django, Pyramid and Flask.

Ans: 

  • Flask is a “microframework” primarily built for small applications with simpler requirements. In Flask, you often have to rely on external libraries, but it is ready to use out of the box.
  • Pyramid is built for larger applications. It provides flexibility and lets the developer use the right tools for their project. The developer can choose the database, URL structure, templating style and more. Pyramid is heavily configurable.
  • Django can also be used for larger applications just like Pyramid. It includes an ORM.

Q76. Discuss Django architecture.

Ans: Django MVT Pattern:

Figure: Python Interview Questions – Django Architecture

The developer provides the Model, the View and the Template, then just maps them to a URL, and Django does the magic of serving them to the user.

Q77. Explain how you can set up the Database in Django.

Ans: You edit the mysite/settings.py file; it is a normal Python module with module-level variables representing Django settings.

Django uses SQLite by default; this is easy for Django users, as it doesn’t require any other installation. If your database choice is different, you have to set the following keys in the DATABASES ‘default’ item to match your database connection settings.

  • Engine: you can change the database by using ‘django.db.backends.sqlite3’, ‘django.db.backends.mysql’, ‘django.db.backends.postgresql_psycopg2’, ‘django.db.backends.oracle’ and so on
  • Name: the name of your database. If you are using SQLite as your database, the database will be a file on your computer, and Name should be the full absolute path, including the file name, of that file.
  • If you are not choosing SQLite as your database, then settings like Password, Host, User, etc. must be added.

Django uses SQLite as a default database, it stores data as a single file in the filesystem. If you do have a database server—PostgreSQL, MySQL, Oracle, MSSQL—and want to use it rather than SQLite, then use your database’s administration tools to create a new database for your Django project. Either way, with your (empty) database in place, all that remains is to tell Django how to use it. This is where your project’s settings.py file comes in.

We will add the following lines of code to the settings.py file:

DATABASES = {
     'default': {
          'ENGINE' : 'django.db.backends.sqlite3',
          'NAME' : os.path.join(BASE_DIR, 'db.sqlite3'),
     }
}

Q78. Give an example how you can write a VIEW in Django?

Ans: This is how we can write a view in Django:

from django.http import HttpResponse
import datetime

def current_datetime(request):
     now = datetime.datetime.now()
     html = "<html><body>It is now %s</body></html>" % now
     return HttpResponse(html)

This view returns the current date and time as an HTML document.

Q79. Mention what the Django templates consist of.

Ans: The template is a simple text file. It can create any text-based format like XML, CSV, HTML, etc. A template contains variables that get replaced with values when the template is evaluated, and tags ({% tag %}) that control the logic of the template.

Figure: Python Interview Questions – Django Template

Q80. Explain the use of session in Django framework?

Ans: Django provides a session that lets you store and retrieve data on a per-site-visitor basis. Django abstracts the process of sending and receiving cookies, by placing a session ID cookie on the client side, and storing all the related data on the server side.

Figure: Python Interview Questions – Django Framework

So the data itself is not stored client side. This is nice from a security perspective.

Q81.  List out the inheritance styles in Django.

Ans: In Django, there are three possible inheritance styles:

  1. Abstract Base Classes: this style is used when you only want the parent class to hold information that you don’t want to type out for each child model.
  2. Multi-table Inheritance: this style is used if you are sub-classing an existing model and need each model to have its own database table.
  3. Proxy models: you can use this style if you only want to modify the Python-level behavior of the model, without changing the model’s fields.

Web Scraping – Python Interview Questions

Q82. How do you save an image locally using Python, when you already know its URL address?

Ans: We will use the following code to save an image locally from a URL address:

import urllib.request
urllib.request.urlretrieve("URL", "local-filename.jpg")

Q83. How can you Get the Google cache age of any URL or web page?

Ans: Use the following URL format:

http://webcache.googleusercontent.com/search?q=cache:URLGOESHERE

Be sure to replace “URLGOESHERE” with the proper web address of the page or site whose cache you want to retrieve and see the time for. For example, to check the Google Webcache age of edureka.co you’d use the following URL:

http://webcache.googleusercontent.com/search?q=cache:edureka.co

Q84. You are required to scrap data from IMDb top 250 movies page. It should only have fields movie name, year, and rating.

Ans: We will use the following lines of code:

from bs4 import BeautifulSoup
import requests

url = 'http://www.imdb.com/chart/top'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

tr = soup.findChildren("tr")
tr = iter(tr)
next(tr)                     # skip the header row

for movie in tr:
    title = movie.find('td', {'class': 'titleColumn'}).find('a').contents[0]
    year = movie.find('td', {'class': 'titleColumn'}).find('span', {'class': 'secondaryInfo'}).contents[0]
    rating = movie.find('td', {'class': 'ratingColumn imdbRating'}).find('strong').contents[0]
    row = title + ' - ' + year + ' ' + rating
    print(row)

The above code will help scrap data from IMDb’s top 250 list

Data Analysis – Python Interview Questions

Q85. What is map function in Python?

Ans: The map function executes the function given as the first argument on all the elements of the iterable given as the second argument. If the given function takes more than one argument, then as many iterables are passed.
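
For example:

print(list(map(lambda x: x * 2, [1, 2, 3])))             # [2, 4, 6]
print(list(map(lambda x, y: x + y, [1, 2], [10, 20])))   # two iterables for a two-argument function: [11, 22]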

Q86. Is python numpy better than lists?

Ans: We use python numpy array instead of a list because of the below three reasons:

  1. Less Memory
  2. Fast
  3. Convenient

For more information on these parameters, you can refer to this section – Numpy Vs List.

Q87. How to get indices of N maximum values in a NumPy array?

Ans: We can get the indices of N maximum values in a NumPy array using the below code:

import numpy as np
arr = np.array([1, 3, 2, 4, 5])
print(arr.argsort()[-3:][::-1])

Output

[4 3 1]

Q88. How do you calculate percentiles with Python/ NumPy?

Ans: We can calculate percentiles with the following code

import numpy as np
a = np.array([1,2,3,4,5])
p = np.percentile(a, 50) #Returns 50th percentile, e.g. median
print(p)

Output

3.0

Q89. What is the difference between NumPy and SciPy?

Ans: 

  1. In an ideal world, NumPy would contain nothing but the array data type and the most basic operations: indexing, sorting, reshaping, basic elementwise functions, et cetera.
  2. All numerical code would reside in SciPy. However, one of NumPy’s important goals is compatibility, so NumPy tries to retain all features supported by either of its predecessors.
  3. Thus NumPy contains some linear algebra functions, even though these more properly belong in SciPy. In any case, SciPy contains more fully-featured versions of the linear algebra modules, as well as many other numerical algorithms.
  4. If you are doing scientific computing with python, you should probably install both NumPy and SciPy. Most new features belong in SciPy rather than NumPy.

Q90. How do you make 3D plots/visualizations using NumPy/SciPy?

Ans: Like 2D plotting, 3D graphics is beyond the scope of NumPy and SciPy, but just as in the 2D case, packages exist that integrate with NumPy. Matplotlib provides basic 3D plotting in the mplot3d subpackage, whereas Mayavi provides a wide range of high-quality 3D visualization features, utilizing the powerful VTK engine.
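
A minimal mplot3d sketch (assuming Matplotlib and NumPy are installed):

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D   # registers the 3D projection

x = np.linspace(-2, 2, 50)
X, Y = np.meshgrid(x, x)
Z = X**2 + Y**2                           # a simple paraboloid surface

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z)
plt.show()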

Multiple Choice Questions (MCQ)

Q91. Which of the following statements create a dictionary? (Multiple Correct Answers Possible)

a) d = {}
b) d = {“john”:40, “peter”:45}
c) d = {40:”john”, 45:”peter”}
d) d = (40:”john”, 45:”50”)

Answer: a, b & c.

Dictionaries are created by specifying key-value pairs inside curly braces (and {} creates an empty dictionary). Option d is invalid syntax, since parentheses cannot hold key:value pairs.

Q92. Which one of these is floor division?

a) /
b) //
c) %
d) None of the mentioned

Answer: b) //

// is the floor division operator: it divides and rounds the result down to the nearest whole number. In Python 3, / always performs true division, so 5/2 = 2.5, whereas floor division gives 5//2 = 2.

Q93. What is the maximum possible length of an identifier?

a) 31 characters
b) 63 characters
c) 79 characters
d) None of the above

Answer: d) None of the above

Identifiers can be of any length.

Q94. Why are local variable names beginning with an underscore discouraged?

a) they are used to indicate private variables of a class
b) they confuse the interpreter
c) they are used to indicate global variables
d) they slow down execution

Answer: a) they are used to indicate a private variable of a class

As Python has no concept of private variables, leading underscores are used to indicate variables that must not be accessed from outside the class.

Q95. Which of the following is an invalid statement?

a) abc = 1,000,000
b) a b c = 1000 2000 3000
c) a,b,c = 1000, 2000, 3000
d) a_b_c = 1,000,000

Answer: b) a b c = 1000 2000 3000

Spaces are not allowed in variable names.

Q96. What is the output of the following?

try:
    if '1' != 1:
        raise "someError"
    else:
        print("someError has not occured")
except "someError":
    print ("someError has occured")

a) someError has occured
b) someError has not occured
c) invalid code
d) none of the above

Answer: c) invalid code

A new exception class must inherit from a BaseException. There is no such inheritance here.

Q97. Suppose list1 is [2, 33, 222, 14, 25], What is list1[-1] ?

a) Error
b) None
c) 25
d) 2

Answer: c) 25

The index -1 corresponds to the last index in the list.

Q98. To open a file c:\scores.txt for writing, we use

a) outfile = open(“c:\\scores.txt”, “r”)
b) outfile = open(“c:\\scores.txt”, “w”)
c) outfile = open(file = “c:\\scores.txt”, “r”)
d) outfile = open(file = “c:\\scores.txt”, “o”)

Answer: b) The path uses a double backslash (\\) to escape the backslash, and “w” indicates that the file is opened for writing.

Q99. What is the output of the following?

f = None

for i in range (5):
    with open("data.txt", "w") as f:
        if i > 2:
            break

print(f.closed)

a) True
b) False
c) None
d) Error

Answer: a) True 

The with statement, when used to open a file, guarantees that the file object is closed when the with block exits.

Q100. When will the else part of try-except-else be executed?

a) always
b) when an exception occurs
c) when no exception occurs
d) when an exception occurs into except block

Answer: c) when no exception occurs

The else part is executed when no exception occurs.
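
For example:

try:
    result = 10 / 2
except ZeroDivisionError:
    print("division failed")
else:
    print("no exception, result =", result)   # runs only because no exception occurred

Output:

no exception, result = 5.0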

I hope this set of Python Interview Questions will help you in preparing for your interviews. All the best!

Got a question for us? Please mention it in the comments section and we will get back to you at the earliest.

If you wish to learn Python and gain expertise in quantitative analysis, data mining, and the presentation of data to see beyond the numbers by transforming your career into Data Scientist role, check out our interactive, live-online Python Certification Training. You will use libraries like Pandas, Numpy, Matplotlib, Scipy, Scikit, Pyspark and master the concepts like Python machine learning, scripts, sequence, web scraping and big data analytics leveraging Apache Spark. The training comes with 24*7 support to guide you throughout your learning period.

Top AWS Architect Interview Questions In 2019

$
0
0

Why AWS Architect Interview Questions?

For the 7th straight year, Gartner placed Amazon Web Services in the “Leaders” quadrant. Also, Forbes reported that AWS Certified Solutions Architect leads the 15 top-paying IT certifications. Undoubtedly, the AWS Solution Architect position is one of the most sought after among IT jobs.

We at Edureka are committed to helping you upgrade your career in sync with industry requirements. That’s why we have created a list of AWS Architect Interview questions and answers that will most probably get asked during your interview. If you’ve attended an AWS Architect interview or have additional questions beyond what we have covered, we encourage you to post them in our QnA Forum. Our expert team will get back to you at the earliest.

In the meantime, you can maximize the Cloud computing career opportunities that are sure to come your way by taking AWS Architect online training with Edureka. You can write the AWS Architect certification exam after the course at edureka.


The AWS Solution Architect Role: With regards to AWS, a Solution Architect would design and define AWS architecture for existing systems, migrating them to cloud architectures as well as developing technical road-maps for future AWS cloud implementations. So, through this AWS Architect interview questions article, I will bring you top and frequently asked AWS interview questions.

Now in every section, we will start with basic AWS interview questions, and then move towards AWS interview questions and answers for experienced people, which are more technically challenging.


Section 1: What is Cloud Computing? Can you talk about and compare any two popular Cloud Service Providers?

For a detailed discussion on this topic, please refer our Cloud Computing blog. Following is the comparison between two of the most popular Cloud Service Providers:

Amazon Web Services Vs Microsoft Azure

  • Initiation: AWS – 2006; Azure – 2010
  • Market Share: AWS – 4x; Azure – x
  • Implementation: AWS – fewer options; Azure – more experimentation possible
  • Features: AWS – widest range of options; Azure – good range of options
  • App Hosting: AWS – not as good as Azure; Azure – better
  • Development: AWS – varied & great features; Azure – varied & great features
  • IaaS Offerings: AWS – good market hold; Azure – better offerings than AWS

1. Try this AWS scenario-based interview question. I have some private servers on my premises, and I have also distributed some of my workload on the public cloud. What is this architecture called?

  1. Virtual Private Network
  2. Private Cloud
  3. Virtual Private Cloud
  4. Hybrid Cloud

Answer D.

Explanation: This type of architecture would be a hybrid cloud. Why? Because we are using both the public cloud and your on-premises servers, i.e. the private cloud. To make this hybrid architecture easy to use, wouldn’t it be better if your private and public cloud were all on the same network (virtually)? This is established by including your public cloud servers in a virtual private cloud, and connecting this virtual cloud with your on-premises servers using a VPN (Virtual Private Network).

Section 2: Amazon EC2 Interview Questions

For a detailed discussion on this topic, please refer our EC2 AWS blog.

2. What does the following command do with respect to the Amazon EC2 security groups?

ec2-create-group CreateSecurityGroup

  1. Groups the user created security groups into a new group for easy access.
  2. Creates a new security group for use with your account.
  3. Creates a new group inside the security group.
  4. Creates a new rule inside the security group.

Answer B.

Explanation: A security group is just like a firewall; it controls the traffic in and out of your instance, in AWS terms the inbound and outbound traffic. The command mentioned is pretty straightforward: it says create security group, and it does just that. Moving along, once your security group is created, you can add different rules to it. For example, if you have an RDS instance, to access it you have to add the public IP address of the machine from which you want to access the instance to its security group.
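
The same steps can be sketched in Python with boto3 (the group name, description and CIDR below are placeholders):

import boto3

ec2 = boto3.client('ec2')

# Create a new security group for the account (add VpcId=... if not using the default VPC)
sg = ec2.create_security_group(GroupName='my-sg', Description='demo security group')

# Add an inbound rule, e.g. allow MySQL/RDS access from one public IP
ec2.authorize_security_group_ingress(
    GroupId=sg['GroupId'],
    IpProtocol='tcp', FromPort=3306, ToPort=3306, CidrIp='203.0.113.10/32'
)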

3. Here is an AWS scenario-based interview question. You have a video trans-coding application. The videos are processed according to a queue. If the processing of a video is interrupted on one instance, it is resumed on another instance. Currently there is a huge backlog of videos that needs to be processed; for this you need to add more instances, but you need these instances only until your backlog is reduced. Which of these would be an efficient way to do it?

You should be using an On Demand instance for the same. Why? First of all, the workload has to be processed now, meaning it is urgent, secondly you don’t need them once your backlog is cleared, therefore Reserved Instance is out of the picture, and since the work is urgent, you cannot stop the work on your instance just because the spot price spiked, therefore Spot Instances shall also not be used. Hence On-Demand instances shall be the right choice in this case.

4. You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost effective way.

Which of the following will meet your requirements?

  1. Spot Instances
  2. Reserved instances
  3. Dedicated instances
  4. On-Demand instances

Answer: A

Explanation: Since the work we are addressing here is not continuous, a reserved instance shall be idle at times, same goes with On Demand instances. Also it does not make sense to launch an On Demand instance whenever work comes up, since it is expensive. Hence Spot Instances will be the right fit because of their low rates and no long term commitments.

5. How is stopping and terminating an instance different from each other?

Starting, stopping and terminating are the three states in an EC2 instance, let’s discuss them in detail:

  • Stopping and Starting an instance: When an instance is stopped, the instance performs a normal shutdown and then transitions to a stopped state. All of its Amazon EBS volumes remain attached, and you can start the instance again at a later time. You are not charged for additional instance hours while the instance is in a stopped state.
  • Terminating an instance: When an instance is terminated, the instance performs a normal shutdown, then the attached Amazon EBS volumes are deleted unless the volume’s deleteOnTermination attribute is set to false. The instance itself is also deleted, and you can’t start the instance again at a later time.

6. If I want my instance to run on a single-tenant hardware, which value do I have to set the instance’s tenancy attribute to?

  1. Dedicated
  2. Isolated
  3. One
  4. Reserved

Answer A.

Explanation: The instance’s tenancy attribute should be set to Dedicated. The rest of the values are invalid.

7. When will you incur costs with an Elastic IP address (EIP)?

  1. When an EIP is allocated.
  2. When it is allocated and associated with a running instance.
  3. When it is allocated and associated with a stopped instance.
  4. Costs are incurred regardless of whether the EIP is associated with a running instance.

Answer C.

Explanation: You are not charged if only one Elastic IP address is attached to your running instance. But you do get charged in the following conditions:

  • When you use more than one Elastic IPs with your instance.
  • When your Elastic IP is attached to a stopped instance.
  • When your Elastic IP is not attached to any instance.

8. How is a Spot instance different from an On-Demand instance or Reserved Instance?

First of all, let’s understand that Spot Instance, On-Demand instance and Reserved Instances are all models for pricing. Moving along, spot instances provide the ability for customers to purchase compute capacity with no upfront commitment, at hourly rates usually lower than the On-Demand rate in each region. Spot instances are just like bidding, the bidding price is called Spot Price. The Spot Price fluctuates based on supply and demand for instances, but customers will never pay more than the maximum price they have specified. If the Spot Price moves higher than a customer’s maximum price, the customer’s EC2 instance will be shut down automatically. But the reverse is not true, if the Spot prices come down again, your EC2 instance will not be launched automatically, one has to do that manually.  In Spot and On demand instance, there is no commitment for the duration from the user side, however in reserved instances one has to stick to the time period that he has chosen.

9. Are the Reserved Instances available for Multi-AZ Deployments?

  1. Multi-AZ Deployments are only available for Cluster Compute instances types
  2. Available for all instance types
  3. Only available for M3 instance types
  4. Not Available for Reserved Instances

Answer B.

Explanation: Reserved Instances is a pricing model, which is available for all instance types in EC2.

10. How to use the processor state control feature available on the  c4.8xlarge instance?

The processor state control consists of 2 states:

  • The C state – Sleep state varying from c0 to c6. C6 being the deepest sleep state for a processor
  • The P state – Performance state p0 being the highest and p15 being the lowest possible frequency.

Now, why the C state and the P state? Processors have cores, and these cores need thermal headroom to boost their performance. Since all the cores are on the processor, the temperature should be kept at an optimal level so that all the cores can perform at their highest.

Now how will these states help in that? If a core is put into sleep state it will reduce the overall temperature of the processor and hence other cores can perform better. Now the same can be  synchronized with other cores, so that the processor can boost as many cores it can by timely putting other cores to sleep, and thus get an overall performance boost.

Concluding, the C and P state can be customized in some EC2 instances like the c4.8xlarge instance and thus you can customize the processor according to your workload.

How to do it? You can refer this tutorial for the same.

11. What kind of network performance parameters can you expect when you launch instances in cluster placement group?

The network performance depends on the instance type and network performance specification, if launched in a placement group you can expect up to

  • 10 Gbps in a single-flow,
  • 20 Gbps in multiflow i.e full duplex
  • Network traffic outside the placement group will be limited to 5 Gbps(full duplex).

12. To deploy a 4 node cluster of Hadoop in AWS which instance type can be used?

First, let’s understand what actually happens in a Hadoop cluster. The Hadoop cluster follows a master-slave concept: the master machine processes all the data, while slave machines store the data and act as data nodes. Since all the storage happens at the slaves, a higher-capacity hard disk is recommended for them, and since the master does all the processing, a higher RAM and a much better CPU are required. Therefore, you can select the configuration of your machines depending on your workload. E.g., in this case a c4.8xlarge instance will be preferred for the master machine, whereas for the slave machines we can select the i2.large instance. If you don’t want to deal with configuring your instances and installing a Hadoop cluster manually, you can straight away launch an Amazon EMR (Elastic MapReduce) instance, which automatically configures the servers for you. You dump your data to be processed in S3, EMR picks it up from there, processes it, and dumps it back into S3.

13. Where do you think an AMI fits, when you are designing an architecture for a solution?

AMIs(Amazon Machine Images) are like templates of virtual machines and an instance is derived from an AMI. AWS offers pre-baked AMIs which you can choose while you are launching an instance, some AMIs are not free, therefore can be bought from the AWS Marketplace. You can also choose to create your own custom AMI which would help you save space on AWS. For example if you don’t need a set of software on your installation, you can customize your AMI to do that. This makes it cost efficient, since you are removing the unwanted things.

14. How do you choose an Availability Zone?

Let’s understand this through an example, consider there’s a company which has user base in India as well as in the US.

Let us see how we will choose the region for this use case :

Figure: AWS Architect Interview Questions – AWS Regions

So, with reference to the above figure the regions to choose between are, Mumbai and North Virginia. Now let us first compare the pricing, you have hourly prices, which can be converted to your per month figure. Here North Virginia emerges as a winner. But, pricing cannot be the only parameter to consider. Performance should also be kept in mind hence, let’s look at latency as well. Latency basically is the time that a server takes to respond to your requests i.e the response time. North Virginia wins again!

So concluding, North Virginia should be chosen for this use case.

15. Is one Elastic IP address enough for every instance that I have running?

Depends! Every instance comes with its own private and public address. The private address is associated exclusively with the instance and is returned  to Amazon EC2 only when it is stopped or terminated. Similarly, the public address is associated exclusively with the instance until it is stopped or terminated. However, this can be replaced by the Elastic IP address, which stays with the instance as long as the user doesn’t manually detach it. But what if you are hosting multiple websites on your EC2 server, in that case you may require more than one Elastic IP address.

16. What are the best practices for Security in Amazon EC2?

There are several best practices to secure Amazon EC2. A few of them are given below:

  • Use AWS Identity and Access Management (IAM) to control access to your AWS resources.
  • Restrict access by only allowing trusted hosts or networks to access ports on your instance.
  • Review the rules in your security groups regularly, and ensure that you apply the principle of least privilege – only open up permissions that you require.
  • Disable password-based logins for instances launched from your AMI. Passwords can be found or cracked, and are a security risk.

Section 3: Amazon Storage

17. Another scenario based interview question. You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which method will ensure that all objects uploaded to the bucket are set to public read?

  1. Set permissions on the object to public read during upload.
  2. Configure the bucket policy to set all objects to public read.
  3. Use AWS Identity and Access Management roles to set the bucket to public read.
  4. Amazon S3 objects default to public read, so no action is needed.

Answer B.

Explanation: Rather than making changes to every object, it’s better to set the policy for the whole bucket. IAM is used to give more granular permissions, and since this is a public-facing website, all objects should be readable by everyone, which a single bucket policy handles in one place.
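
A bucket policy of that kind can be sketched with boto3 (the bucket name is a placeholder):

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadForStaticAssets",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-static-assets/*"   # placeholder bucket name
    }]
}

s3 = boto3.client('s3')
s3.put_bucket_policy(Bucket='my-static-assets', Policy=json.dumps(policy))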

18. A customer wants to leverage Amazon Simple Storage Service (S3) and Amazon Glacier as part of their backup and archive infrastructure. The customer plans to use third-party software to support this integration. Which approach will limit the access of the third party software to only the Amazon S3 bucket named “company-backup”?

  1. A custom bucket policy limited to the Amazon S3 API in three Amazon Glacier archive “company-backup”
  2. A custom bucket policy limited to the Amazon S3 API in “company-backup”
  3. A custom IAM user policy limited to the Amazon S3 API for the Amazon Glacier archive “company-backup”.
  4. A custom IAM user policy limited to the Amazon S3 API in “company-backup”.

Answer D.

Explanation: Taking a cue from the previous question, this use case involves more granular permissions, hence IAM would be used here.

19. Can S3 be used with EC2 instances, if yes, how?

Yes, it can be used for instances with root devices backed by local instance storage. By using Amazon S3, developers have access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. In order to execute systems in the Amazon EC2 environment, developers use the tools provided to load their Amazon Machine Images (AMIs) into Amazon S3 and to move them between Amazon S3 and Amazon EC2.

Another use case could be for websites hosted on EC2 to load their static content from S3.

For a detailed discussion on S3, please refer our S3 AWS blog.

20. A customer implemented AWS Storage Gateway with a gateway-cached volume at their main office. An event takes the link between the main and branch office offline. Which methods will enable the branch office to access their data?

  1. Restore by implementing a lifecycle policy on the Amazon S3 bucket.
  2. Make an Amazon Glacier Restore API call to load the files into another Amazon S3 bucket within four to six hours.
  3. Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot.
  4. Create an Amazon EBS volume from a gateway snapshot, and mount it to an Amazon EC2 instance.

Answer C.

Explanation: The fastest way to do it would be launching a new storage gateway instance. Why? Since time is the key factor that drives every business, troubleshooting this problem would take more time. Rather, we can just restore the previous working state of the storage gateway on a new instance.

21. When you need to move data over long distances using the internet, for instance across countries or continents to your Amazon S3 bucket, which method or service will you use?

  1. Amazon Glacier
  2. Amazon CloudFront
  3. Amazon Transfer Acceleration
  4. Amazon Snowball

Answer C.

Explanation: You would not use Snowball, because for now, the Snowball service does not support cross-region data transfer, and since we are transferring across countries, Snowball cannot be used. Transfer Acceleration is the right choice here, as it speeds up your data transfer using optimized network paths and Amazon’s content delivery network, by up to 300% compared to normal data transfer speed.
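
Transfer Acceleration is enabled per bucket; a boto3 sketch with a placeholder bucket name:

import boto3

s3 = boto3.client('s3')
s3.put_bucket_accelerate_configuration(
    Bucket='my-bucket',
    AccelerateConfiguration={'Status': 'Enabled'}
)
# Uploads then go through the <bucket>.s3-accelerate.amazonaws.com endpoint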

22. How can you speed up data transfer in Snowball?

The data transfer can be increased in the following way:

  • By performing multiple copy operations at one time i.e. if the workstation is powerful enough, you can initiate multiple cp commands each from different terminals, on the same Snowball device.
  • Copying from multiple workstations to the same snowball.
  • Transferring large files or creating batches of small files, as this will reduce the encryption overhead.
  • Eliminating unnecessary hops i.e. make a setup where the source machine(s) and the snowball are the only machines active on the switch being used, this can hugely improve performance.

Section 4: AWS VPC

23. If you want to launch Amazon Elastic Compute Cloud (EC2) instances and assign each instance a predetermined private IP address you should:

  1. Launch the instance from a private Amazon Machine Image (AMI).
  2. Assign a group of sequential Elastic IP address to the instances.
  3. Launch the instances in the Amazon Virtual Private Cloud (VPC).
  4. Launch the instances in a Placement Group.

Answer C.

Explanation: The best way of connecting to your cloud resources (for ex- ec2 instances) from your own data center (for eg- private cloud) is a VPC. Once you connect your datacenter to the VPC in which your instances are present, each instance is assigned a private IP address which can be accessed from your datacenter. Hence, you can access your public cloud resources, as if they were on your own network.

24. Can I connect my corporate datacenter to the Amazon Cloud?

Yes, you can do this by establishing a VPN(Virtual Private Network) connection between your company’s network and your VPC (Virtual Private Cloud), this will allow you to interact with your EC2 instances as if they were within your existing network.

25. Is it possible to change the private IP addresses of an EC2 while it is running/stopped in a VPC?

Primary private IP address is attached with the instance throughout its lifetime and cannot be changed, however secondary private addresses can be unassigned, assigned or moved between interfaces or instances at any point.

26. Why do you make subnets?

  1. Because there is a shortage of networks
  2. To efficiently utilize networks that have a large no. of hosts.
  3. Because there is a shortage of hosts.
  4. To efficiently utilize networks that have a small no. of hosts.

Answer B.

Explanation: If there is a network which has a large no. of hosts, managing all these hosts can be a tedious job. Therefore we divide this network into subnets (sub-networks) so that managing these hosts becomes simpler.

27. Which of the following is true?

  1. You can attach multiple route tables to a subnet
  2. You can attach multiple subnets to a route table
  3. Both A and B
  4. None of these.

Answer B.

Explanation: Route tables are used to route network packets, so having multiple route tables in a subnet would lead to confusion as to where a packet should go. Therefore, there is only one route table per subnet, and since a route table can have any number of records or entries, attaching multiple subnets to a route table is possible.

28. In CloudFront what happens when content is NOT present at an Edge location and a request is made to it?

  1. An Error “404 not found” is returned
  2. CloudFront delivers the content directly from the origin server and stores it in the cache of the edge location
  3. The request is kept on hold till content is delivered to the edge location
  4. The request is routed to the next closest edge location

Answer B. 

Explanation: CloudFront is a content delivery system, which caches data to the nearest edge location from the user, to reduce latency. If data is not present at an edge location, the first time the data may get transferred from the original server, but from the next time, it will be served from the cached edge.

29. If I’m using Amazon CloudFront, can I use Direct Connect to transfer objects from my own data center?

Yes. Amazon CloudFront supports custom origins including origins from outside of AWS. With AWS Direct Connect, you will be charged with the respective data transfer rates.

30. If my AWS Direct Connect fails, will I lose my connectivity?

If a backup AWS Direct connect has been configured, in the event of a failure it will switch over to the second one. It is recommended to enable Bidirectional Forwarding Detection (BFD) when configuring your connections to ensure faster detection and failover. On the other hand, if you have configured a backup IPsec VPN connection instead, all VPC traffic will failover to the backup VPN connection automatically. Traffic to/from public resources such as Amazon S3 will be routed over the Internet. If you do not have a backup AWS Direct Connect link or a IPsec VPN link, then Amazon VPC traffic will be dropped in the event of a failure.

Section 5: Amazon Database

31. If I launch a standby RDS instance, will it be in the same Availability Zone as my primary?

  1. Only for Oracle RDS types
  2. Yes
  3. Only if it is configured at launch
  4. No

Answer D.

Explanation: No, since the purpose of having a standby instance is to avoid an infrastructure failure (if it happens), therefore the standby instance is stored in a different availability zone, which is a physically different independent infrastructure.

32. When would I prefer Provisioned IOPS over Standard RDS storage?

  1. If you have batch-oriented workloads
  2. If you use production online transaction processing (OLTP) workloads.
  3. If you have workloads that are not sensitive to consistent performance
  4. All of the above

Answer A.

Explanation: Provisioned IOPS delivers high IO rates, but on the other hand it is expensive as well. Batch-processing workloads do not require manual intervention and enable full utilization of systems; therefore Provisioned IOPS will be preferred for batch-oriented workloads.

33. How is Amazon RDS, DynamoDB and Redshift different?

  • Amazon RDS is a database management service for relational databases; it manages patching, upgrading, backing up of data, etc. of databases for you without your intervention. RDS is a database management service for structured data only.
  • DynamoDB, on the other hand, is a NoSQL database service, NoSQL deals with unstructured data.
  • Redshift, is an entirely different service, it is a data warehouse product and is used in data analysis.

34. If I am running my DB Instance as a Multi-AZ deployment, can I use the standby DB Instance for read or write operations along with primary DB instance?

  1. Yes
  2. Only with MySQL based RDS
  3. Only for Oracle RDS instances
  4. No

Answer D.

Explanation: No, Standby DB instance cannot be used with primary DB instance in parallel, as the former is solely used for standby purposes, it cannot be used unless the primary instance goes down.

35. Your company’s branch offices are all over the world, they use a software with a multi-regional deployment on AWS, they use MySQL 5.6 for data persistence.

The task is to run an hourly batch process and read data from every region to compute cross-regional reports which will be distributed to all the branches. This should be done in the shortest time possible. How will you build the DB architecture in order to meet the requirements?

  1. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region
  2. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region
  3. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region
  4. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region

Answer A.

Explanation: For this we will take an RDS instance as a master, because it will manage our database for us and since we have to read from every region, we’ll put a read replica of this instance in every region where the data has to be read from. Option C is not correct since putting a read replica would be more efficient than putting a snapshot, a read replica can be promoted if needed  to an independent DB instance, but with a Db snapshot it becomes mandatory to launch a separate DB Instance.

36. Can I run more than one DB instance for Amazon RDS for free?

Yes. You can run more than one Single-AZ Micro database instance, that too for free! However, any use exceeding 750 instance hours, across all Amazon RDS Single-AZ Micro DB instances, across all eligible database engines and regions, will be billed at standard Amazon RDS prices. For example: if you run two Single-AZ Micro DB instances for 400 hours each in a single month, you will accumulate 800 instance hours of usage, of which 750 hours will be free. You will be billed for the remaining 50 hours at the standard Amazon RDS price.

For a detailed discussion on this topic, please refer our RDS AWS blog.

37. Which AWS services will you use to collect and process e-commerce data for near real-time analysis?

  1. Amazon ElastiCache
  2. Amazon DynamoDB
  3. Amazon Redshift
  4. Amazon Elastic MapReduce

Answer B,C.

Explanation: DynamoDB is a fully managed NoSQL database service. DynamoDB can therefore be fed any type of unstructured data, which can be data from e-commerce websites as well, and later an analysis can be done on it using Amazon Redshift. We are not using Elastic MapReduce, since a near real-time analysis is needed.

38. Can I retrieve only a specific element of the data, if I have a nested JSON data in DynamoDB?

Yes. When using the GetItem, BatchGetItem, Query or Scan APIs, you can define a Projection Expression to determine which attributes should be retrieved from the table. Those attributes can include scalars, sets, or elements of a JSON document.

39. A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company’s requirements?

  1. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone
  2. Amazon RDS for MySQL with Multi-AZ
  3. Amazon ElastiCache
  4. Amazon DynamoDB

Answer B.

Explanation: The application requires complex queries and table joins, which call for a relational database, so DynamoDB is ruled out. Amazon RDS for MySQL with Multi-AZ provides the required high availability while remaining fully managed, which suits a company with limited staff.

40. What happens to my backups and DB Snapshots if I delete my DB Instance?

When you delete a DB instance, you have an option of creating a final DB snapshot, if you do that you can restore your database from that snapshot. RDS retains this user-created DB snapshot along with all other manually created DB snapshots after the instance is deleted, also automated backups are deleted and only manually created DB Snapshots are retained.

41. Which of the following use cases are suitable for Amazon DynamoDB? Choose 2 answers

  1. Managing web sessions.
  2. Storing JSON documents.
  3. Storing metadata for Amazon S3 objects.
  4. Running relational joins and complex updates.

Answer B,C.

Explanation: DynamoDB is well suited to unstructured and semi-structured data, such as JSON documents and metadata for Amazon S3 objects (managing web sessions is another common fit). Running relational joins and complex updates is not suitable for DynamoDB; that workload belongs in a relational database.

42. How can I load my data to Amazon Redshift from different data sources like Amazon RDS, Amazon DynamoDB and Amazon EC2?

You can load the data in the following two ways:

  • You can use the COPY command to load data in parallel directly to Amazon Redshift from Amazon EMR, Amazon DynamoDB, or any SSH-enabled host.
  • AWS Data Pipeline provides a high performance, reliable, fault tolerant solution to load data from a variety of AWS data sources. You can use AWS Data Pipeline to specify the data source, desired data transformations, and then execute a pre-written import script to load your data into Amazon Redshift.
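
For instance, the COPY path can be sketched from Python with psycopg2 (all connection details, table, bucket and IAM role names are placeholders):

import psycopg2

conn = psycopg2.connect(host='my-cluster.example.redshift.amazonaws.com',
                        port=5439, dbname='dev', user='admin', password='***')
cur = conn.cursor()
# Load CSV files from S3 into a Redshift table; Redshift ingests the files in parallel
cur.execute("""
    COPY sales
    FROM 's3://my-bucket/sales/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
    CSV;
""")
conn.commit()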

43. Your application has to retrieve data from your user’s mobile every 5 minutes and the data is stored in DynamoDB, later every day at a particular time the data is extracted into S3 on a per user basis and then your application is later used to visualize the data to the user. You are asked to optimize the architecture of the backend system to lower cost, what would you recommend?

  1. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.
  2. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
  3. Introduce Amazon Elasticache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.
  4. Write data directly into an Amazon Redshift cluster replacing both Amazon DynamoDB and Amazon S3.

Answer C.

Explanation: Since our work requires the data to be extracted and analyzed, one would normally optimize this process with provisioned IO, but since that is expensive, using ElastiCache to cache the results in memory can reduce the provisioned read throughput and hence reduce cost without affecting the performance.

44. You are running a website on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2 answers)

  1. Deploy ElastiCache in-memory cache running in each availability zone
  2. Implement sharding to distribute load to multiple RDS MySQL instances
  3. Increase the RDS MySQL Instance size and Implement provisioned IOPS
  4. Add an RDS MySQL read replica in each availability zone

Answer A,C.

Explanation: Since the site does a lot of reads and writes, provisioned IO may become expensive. But we need high performance as well, so the data can be cached using ElastiCache, which serves the frequently read data. As for RDS, since read contention is happening, the instance size should be increased and provisioned IOPS should be introduced to increase the performance.

45. A startup is running a pilot deployment of around 100 sensors to measure street noise and air quality in urban areas for 3 months. It was noted that every month around 4GB of sensor data is generated. The company uses a load balanced auto scaled layer of EC2 instances and a RDS database with 500 GB standard storage. The pilot was a success and now they want to deploy at least  100K sensors which need to be supported by the backend. You need to store the data for at least 2 years to analyze it. Which setup of the following would you prefer?

  1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
  2. Ingest data into a DynamoDB table and move old data to a Redshift cluster
  3. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage
  4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS

Answer C.
Explanation: A Redshift cluster would be preferred because it is easy to scale, and the work is done in parallel across the nodes, which is perfect for a bigger workload like our use case. Since the 100-sensor pilot generates 4 GB of data per month, two years of data comes to about 96 GB; scaling from 100 sensors to 100K, a 1000-fold increase, turns that into roughly 96 TB. Hence option C is the right answer.

Section 6: AWS Auto Scaling, AWS Load Balancer

46. Suppose you have an application where you have to render images and also do some general computing. From the following  services which service will best fit your need?

  1. Classic Load Balancer
  2. Application Load Balancer
  3. Both of them
  4. None of these

Answer B.

Explanation: You will choose an application load balancer, since it supports path based routing, which means it can take decisions based on the URL, therefore if your task needs image rendering it will route it to a different instance, and for general computing it will route it to a different instance.

47. What is the difference between Scalability and Elasticity?

Scalability is the ability of a system to increase its hardware resources to handle the increase in demand. It can be done by increasing the hardware specifications or increasing the processing nodes.

Elasticity is the ability of a system to handle increase in the workload by adding additional hardware resources when the demand increases(same as scaling) but also rolling back the scaled resources, when the resources are no longer needed. This is particularly helpful in Cloud environments, where a pay per use model is followed.

48. How will you change the instance type for instances which are running in your application tier and are using Auto Scaling. Where will you change it from the following areas?

  1.   Auto Scaling policy configuration
  2.   Auto Scaling group
  3.   Auto Scaling tags configuration
  4.   Auto Scaling launch configuration

Answer D.

Explanation: Auto Scaling tags configuration is used to attach metadata to your instances; to change the instance type, you have to use the Auto Scaling launch configuration.

49. You have a content management system running on an Amazon EC2 instance that is approaching 100% CPU utilization. Which option will reduce load on the Amazon EC2 instance?

  1.   Create a load balancer, and register the Amazon EC2 instance with it
  2.   Create a CloudFront distribution, and configure the Amazon EC2 instance as the origin
  3.   Create an Auto Scaling group from the instance using the CreateAutoScalingGroup action
  4.   Create a launch configuration from the instance using the CreateLaunchConfigurationAction

Answer B.

Explanation: A CloudFront distribution caches the content at edge locations, so most requests are served from the cache instead of hitting the instance, which directly reduces its load. Registering the single instance with a load balancer does not reduce the work that instance has to do, and a launch configuration or an Auto Scaling group alone does not redistribute the existing load.

50. When should I use a Classic Load Balancer and when should I use an Application load balancer?

A Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances, while an Application Load Balancer is ideal for microservices or container-based architectures where there is a need to route traffic to multiple services or load balance across multiple ports on the same EC2 instance.

For a detailed discussion on Auto Scaling and Load Balancer, please refer our EC2 AWS blog.

51. What does Connection draining do?

  1.  Terminates instances which are not in use.
  2.  Re-routes traffic from instances which are to be updated or failed a health check.
  3.  Re-routes traffic from instances which have more workload to instances which have less workload.
  4.  Drains all the connections from an instance, with one click.

Answer B.

Explanation: Connection draining is an ELB feature. When an instance is to be taken out of service, for example because it failed a health check or has to be patched with a software update, connection draining lets the in-flight requests complete while routing all new traffic to the other healthy instances.

52. When an instance is unhealthy, it is terminated and replaced with a new one, which of the following services does that?

  1.  Sticky Sessions
  2.  Fault Tolerance
  3.  Connection Draining
  4.  Monitoring

Answer B.

Explanation: When ELB detects that an instance is unhealthy, it starts routing incoming traffic to the other healthy instances in the region. If all the instances in a region become unhealthy, and you have instances in some other availability zone or region, your traffic is directed to them. Once your instances become healthy again, traffic is routed back to the original instances.

53. What are lifecycle hooks used for in AutoScaling?

  1.   They are used to do health checks on instances
  2.  They are used to put an additional wait time to a scale in or scale out event.
  3.  They are used to shorten the wait time to a scale in or scale out event
  4.  None of these

Answer B.

Explanation: Lifecycle hooks are used for putting a wait time before any lifecycle action, i.e. launching or terminating an instance, happens. The purpose of this wait time can be anything, from extracting log files before terminating an instance to installing the necessary software on an instance before launching it.

54. A user has setup an Auto Scaling group. Due to some issue the group has failed to launch a single instance for more than 24 hours. What will happen to Auto Scaling in this condition?

  1. Auto Scaling will keep trying to launch the instance for 72 hours
  2. Auto Scaling will suspend the scaling process
  3. Auto Scaling will start an instance in a separate region
  4. The Auto Scaling group will be terminated automatically

Answer B.

Explanation: Auto Scaling allows you to suspend and then resume one or more of the Auto Scaling processes in your Auto Scaling group. This can be very useful when you want to investigate a configuration problem or other issue with your web application, and then make changes to your application, without triggering the Auto Scaling process.

Section 7: CloudTrail, Route 53

55. You have an EC2 Security Group with several running EC2 instances. You changed the Security Group rules to allow inbound traffic on a new port and protocol, and then launched several new instances in the same Security Group. The new rules apply:

  1. Immediately to all instances in the security group.
  2. Immediately to the new instances only.
  3. Immediately to the new instances, but old instances must be stopped and restarted before the new rules apply.
  4. To all instances, but it may take several minutes for old instances to see the changes.

Answer A.

Explanation: Any rule specified in an EC2 Security Group applies immediately to all the instances, irrespective of when they are launched before or after adding a rule.

 

56. To create a mirror image of your environment in another region for disaster recovery, which of the following AWS resources do not need to be recreated in the second region? ( Choose 2 answers )

  1. Route 53 Record Sets
  2. Elastic IP Addresses (EIP)
  3. EC2 Key Pairs
  4. Launch configurations
  5. Security Groups

Answer A.

Explanation: Route 53 record sets are common assets, so there is no need to replicate them, since Route 53 is valid across regions.

57. A customer wants to capture all client connection information from his load balancer at an interval of 5 minutes, which of the following options should he choose for his application?

  1. Enable AWS CloudTrail for the loadbalancer.
  2. Enable access logs on the load balancer.
  3. Install the Amazon CloudWatch Logs agent on the load balancer.
  4. Enable Amazon CloudWatch metrics on the load balancer.

Answer B.

Explanation: Elastic Load Balancing access logs capture detailed information about each client connection to the load balancer and can be published at 5-minute intervals, which matches this requirement. CloudTrail, by contrast, records API calls made against the load balancer, not client connections, so access logs are the right choice for this use case.

58. A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits. Which of the following will meet the Customer requirement?

  1. Enable AWS CloudTrail to audit all Amazon S3 bucket access.
  2. Enable server access logging for all required Amazon S3 buckets.
  3. Enable the Requester Pays option to track access via AWS Billing
  4. Enable Amazon S3 event notifications for Put and Post.

Answer A.

Explanation: AWS CloudTrail has been designed for logging and tracking API calls, and it is available for Amazon S3, so it can be used for this security and access audit use case.

59. Which of the following are true regarding AWS CloudTrail? (Choose 2 answers)

  1. CloudTrail is enabled globally
  2. CloudTrail is enabled on a per-region and service basis
  3. Logs can be delivered to a single Amazon S3 bucket for aggregation.
  4. CloudTrail is enabled for all available services within a region.

Answer B,C.

Explanation: CloudTrail is enabled on a per-region basis and is not enabled for all the services within a region, so option B is correct. The logs can also be delivered to a single S3 bucket for aggregation, hence C is correct as well.

60. What happens if CloudTrail is turned on for my account but my Amazon S3 bucket is not configured with the correct policy?

CloudTrail files are delivered according to S3 bucket policies. If the bucket is not configured or is misconfigured, CloudTrail might not be able to deliver the log files.

61. How do I transfer my existing domain name registration to Amazon Route 53 without disrupting my existing web traffic?

You will first need to get a list of the DNS record data for your domain name. It is generally available in the form of a “zone file” that you can get from your existing DNS provider. Once you receive the DNS record data, you can use Route 53’s Management Console or simple web-services interface to create a hosted zone that will store the DNS records for your domain name, and then follow its transfer process. This includes steps such as updating the name servers for your domain name to the ones associated with your hosted zone. To complete the process, you have to contact the registrar with whom you registered your domain name and follow their transfer process. As soon as your registrar propagates the new name server delegations, your DNS queries will start to get answered.
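As a rough illustration, the hosted-zone step can also be performed from the AWS CLI. Below is a minimal sketch, assuming the hypothetical domain example.com (the caller reference is just any unique string you choose):

aws route53 create-hosted-zone --name example.com --caller-reference my-migration-2019

The command returns the four name servers assigned to the new hosted zone, which are the ones you then supply to your registrar.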

Section 8: AWS SQS, AWS SNS, AWS SES, AWS Elastic Beanstalk

62. Which of the following services would you not use to deploy an app?

  1. Elastic Beanstalk
  2. Lambda
  3. Opsworks
  4. CloudFormation

Answer B.

Explanation: Lambda is used for running serverless applications. It can be used to deploy functions triggered by events. When we say serverless, we mean without you worrying about the computing resources running in the background. Lambda is not designed for deploying full applications which are publicly accessed.

63. How does Elastic Beanstalk apply updates?

  1. By having a duplicate ready with updates before swapping.
  2. By updating on the instance while it is running
  3. By taking the instance down in the maintenance window
  4. Updates should be installed manually

Answer A.

Explanation: Elastic Beanstalk prepares a duplicate copy of the instance before updating the original instance, and routes your traffic to the duplicate instance. In case your updated application fails, it will switch back to the original instance, so the users of your application experience no downtime.

64. How is AWS Elastic Beanstalk different from AWS OpsWorks?

AWS Elastic Beanstalk is an application management platform, while OpsWorks is a configuration management platform. Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications developed with Java, .NET, PHP, Node.js, Python, Ruby, Go and Docker. Customers upload their code and Elastic Beanstalk automatically handles the deployment. The application will be ready to use without any infrastructure or resource configuration.

In contrast, AWS Opsworks is an integrated configuration management platform for IT administrators or DevOps engineers who want a high degree of customization and control over operations.

65. What happens if my application stops responding to requests in beanstalk?

AWS Beanstalk applications have a system in place for avoiding failures in the underlying infrastructure. If an Amazon EC2 instance fails for any reason, Beanstalk will use Auto Scaling to automatically launch a new instance. Beanstalk can also detect if your application is not responding on the custom link, even though the infrastructure appears healthy. It will be logged as an environment event (e.g. a bad version was deployed) so you can take appropriate action.

For a detailed discussion on this topic, please refer our AWS Lambda blog.

Section 9: AWS OpsWorks, AWS KMS

66. How is AWS OpsWorks different from AWS CloudFormation?

OpsWorks and CloudFormation both support application modelling, deployment, configuration, management and related activities. Both support a wide variety of architectural patterns, from simple web applications to highly complex applications. AWS OpsWorks and AWS CloudFormation differ in abstraction level and areas of focus.

AWS CloudFormation is a building block service which enables customers to manage almost any AWS resource via a JSON-based domain-specific language. It provides foundational capabilities for the full breadth of AWS, without prescribing a particular model for development and operations. Customers define templates and use them to provision and manage AWS resources, operating systems and application code.

In contrast, AWS OpsWorks is a higher level service that focuses on providing highly productive and reliable DevOps experiences for IT administrators and ops-minded developers. To do this, AWS OpsWorks employs a configuration management model based on concepts such as stacks and layers, and provides integrated experiences for key activities like deployment, monitoring, auto-scaling, and automation. Compared to AWS CloudFormation, AWS OpsWorks supports a narrower range of application-oriented AWS resource types including Amazon EC2 instances, Amazon EBS volumes, Elastic IPs, and Amazon CloudWatch metrics.

67. I created a key in Oregon region to encrypt my data in North Virginia region for security purposes. I added two users to the key and an external AWS account. I wanted to encrypt an object in S3, so when I tried, the key that I just created was not listed.  What could be the reason?  

  1. External aws accounts are not supported.
  2.  AWS S3 cannot be integrated with KMS.
  3. The Key should be in the same region.
  4. New keys take some time to reflect in the list.

Answer C.

Explanation: The key created and the data to be encrypted should be in the same region. Hence the approach taken here to secure the data is incorrect.

68.  A company needs to monitor the read and write IOPS for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS services can accomplish this?

  1. Amazon Simple Email Service
  2. Amazon CloudWatch
  3. Amazon Simple Queue Service
  4. Amazon Route 53

Answer B.

Explanation: Amazon CloudWatch is a cloud monitoring tool, which makes it the right service for the mentioned use case. The other options listed here serve different purposes; for example, Route 53 is used for DNS services. Therefore, CloudWatch is the apt choice.

69. What happens when one of the resources in a stack cannot be created successfully in AWS OpsWorks?

When an event like this occurs, the “automatic rollback on error” feature kicks in, which causes all the AWS resources which were created successfully up to the point where the error occurred to be deleted. This is helpful since it does not leave behind any erroneous data and ensures that stacks are either created fully or not created at all. It is useful in events where you may accidentally exceed your limit of Elastic IP addresses, or you may not have access to an EC2 AMI that you are trying to run, etc.

70. What automation tools can you use to spinup servers?

Any of the following tools can be used:

  • Roll your own scripts and use the AWS API tools. Such scripts could be written in Bash, Perl or another language of your choice.
  • Use a configuration management and provisioning tool like Puppet or Chef. You can also use a tool like Scalr.
  • Use a managed solution such as RightScale.

Overwhelmed with all these questions?

We at edureka! are here to help you with every step on your journey to becoming an AWS Solutions Architect. Besides these AWS Architect Interview Questions, we have come up with a curriculum which covers exactly what you would need to crack the Solution Architect exam! You can have a look at the course details for AWS training here.

I hope you enjoyed these AWS Interview Questions. The topics that you learnt in this AWS Architect Interview Questions blog are the most sought-after skill sets that recruiters look for in an AWS Solution Architect Professional. I have tried touching upon AWS interview questions and answers for freshers, and you will also find AWS interview questions for people with 3-5 years of experience. However, for a more detailed study on AWS, you can refer our AWS Tutorial.

Got a question for us? Please mention it in the comments section of this AWS Architect Interview Questions and we will get back to you.

Related Posts:

Get started with AWS Architect Certification Training

Cloud Computing with AWS

Setting Up A Multi Node Cluster In Hadoop 2.X


Multi Node Cluster in Hadoop 2.x

From our previous blog in the Hadoop Tutorial Series, we learnt how to set up a Hadoop Single Node Cluster. Now, I will show how to set up a Hadoop Multi Node Cluster. A Multi Node Cluster in Hadoop contains two or more DataNodes in a distributed Hadoop environment. This is practically used in organizations to store and analyze their petabytes and exabytes of data. Learning to set up a multi node cluster gets you one step closer to your much-needed Hadoop certification.

Here, we are taking two machines – master and slave. On both the machines, a DataNode will be running.

Let us start with the setup of Multi Node Cluster in Hadoop.

Prerequisites

  • CentOS 6.5
  • Hadoop-2.7.3
  • JAVA 8
  • SSH

Setup of Multi Node Cluster in Hadoop

We have two machines (master and slave) with IP:

Master IP: 192.168.56.102

Slave IP: 192.168.56.103

STEP 1: Check the IP address of all machines.

Command: ip addr show (you can use the ifconfig command as well)

Master IP Address - Hadoop Multi Node Cluster - Edureka

Slave IP Address - Hadoop Multi Node Cluster - Edureka

STEP 2: Disable the firewall restrictions.

Command: service iptables stop

Command: sudo chkconfig iptables off

Resolve Firewall Issues - Hadoop Multi Node Cluster - Edureka

STEP 3: Open the hosts file and add the master and slave nodes with their respective IP addresses.

Command: sudo nano /etc/hosts

The same entries will be present in the hosts files on both the master and slave machines.
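For example, with the IP addresses used in this blog, both hosts files would contain the following two lines (master and slave are the hostnames used throughout this setup):

192.168.56.102 master
192.168.56.103 slave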

Open hosts file - Hadoop Multi Node Cluster - Edureka

Master's hosts file configuration - Hadoop Multi Node Cluster - Edureka

STEP 4: Restart the sshd service.

Command: service sshd restart

ssh Service Restart - Hadoop Multi Node Cluster - Edureka

 

STEP 5: Create the SSH Key in the master node. (Press enter button when it asks you to enter a filename to save the key).

Command: ssh-keygen -t rsa -P ""

Generating ssh key on master node - Hadoop Multi Node Cluster - Edureka

STEP 6: Copy the generated ssh key to master node’s authorized keys.

Command: cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

ssh Authorizing localhost - Hadoop Multi Node Cluster - Edureka

STEP 7: Copy the master node’s ssh key to slave’s authorized keys.

Command: ssh-copy-id -i $HOME/.ssh/id_rsa.pub edureka@slave

Copy master node key to slave - Hadoop Multi Node Cluster - Edureka

STEP 8: Click here to download the Java 8 Package. Save this file in your home directory.

STEP 9: Extract the Java Tar File on all nodes.

Command: tar -xvf jdk-8u101-linux-i586.tar.gz

Extract java - Hadoop Multi Node Cluster - Edureka

STEP 10: Download the Hadoop 2.7.3 Package on all nodes.

Command: wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.3/hadoop-2.7.3.tar.gz

Download Hadoop tar - Hadoop Multi Node Cluster - Edureka

 

 

STEP 11: Extract the Hadoop tar File on all nodes.

Command: tar -xvf hadoop-2.7.3.tar.gz

Extract Hadoop tar file - Hadoop Multi Node Cluster - Edureka

STEP 12: Add the Hadoop and Java paths in the bash file (.bashrc) on all nodes.

Open the .bashrc file. Now, add the Hadoop and Java paths as shown below:

Command:  sudo gedit .bashrc

Open bashrc - Hadoop Multi Node Cluster - Edureka

bash file - Hadoop Multi Node Cluster - Edureka
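For reference, the entries typically look like the following. This is a sketch assuming both archives were extracted into the home directory of the user edureka; adjust the paths to match your system:

# Java environment
export JAVA_HOME=/home/edureka/jdk1.8.0_101
export PATH=$PATH:$JAVA_HOME/bin

# Hadoop environment
export HADOOP_HOME=/home/edureka/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin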

Then, save the bash file and close it.

For applying all these changes to the current Terminal, execute the source command.

Command: source .bashrc

Source bash - Hadoop Multi Node Cluster - Edureka

To make sure that Java and Hadoop have been properly installed on your system and can be accessed through the Terminal, execute the java -version and hadoop version commands.

Command: java -version

java version - Hadoop Multi Node Cluster - Edureka

Command: hadoop version

hadoop version - Hadoop Multi Node Cluster - Edureka

Now edit the configuration files in hadoop-2.7.3/etc/hadoop directory.

STEP 13: Create the masters file and edit it as follows on both master and slave machines:

Command: sudo gedit masters

masters - Hadoop Multi Node Cluster - Edureka

STEP 14: Edit slaves file in master machine as follows:

Command: sudo gedit /home/edureka/hadoop-2.7.3/etc/hadoop/slaves

master node slaves file - Hadoop Multi Node Cluster - Edureka

STEP 15: Edit slaves file in slave machine as follows:

Command: sudo gedit /home/edureka/hadoop-2.7.3/etc/hadoop/slaves

slave node slaves file - Hadoop Multi Node Cluster - Edureka
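To summarize what goes into these files, here is a sketch based on this two-machine setup, where a DataNode runs on both machines:

# masters (on both machines): the host running the SecondaryNameNode
master

# slaves (on the master machine): one DataNode hostname per line
master
slave

# slaves (on the slave machine):
slave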

STEP 16: Edit core-site.xml on both master and slave machines as follows:

Command: sudo gedit /home/edureka/hadoop-2.7.3/etc/hadoop/core-site.xml

open core-site - Hadoop Multi Node Cluster - Edureka

 

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>

STEP 17: Edit hdfs-site.xml on the master machine as follows:

Command: sudo gedit /home/edureka/hadoop-2.7.3/etc/hadoop/hdfs-site.xml

open hdfs-site Hadoop Multi Node Cluster - Edureka

 

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/edureka/hadoop-2.7.3/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/edureka/hadoop-2.7.3/datanode</value>
  </property>
</configuration>

STEP 18: Edit hdfs-site.xml on slave machine as follows:

Command: sudo gedit /home/edureka/hadoop-2.7.3/etc/hadoop/hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/edureka/hadoop-2.7.3/datanode</value>
  </property>
</configuration>

STEP 19: Copy mapred-site.xml from the template in the configuration folder and then edit mapred-site.xml on both master and slave machines as follows:

Command: cp mapred-site.xml.template mapred-site.xml

Command: sudo gedit /home/edureka/hadoop-2.7.3/etc/hadoop/mapred-site.xml

copy mapred-site from template - Hadoop Multi Node Cluster - Edureka


 

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

STEP 20: Edit yarn-site.xml on both master and slave machines as follows:

Command: sudo gedit /home/edureka/hadoop-2.7.3/etc/hadoop/yarn-site.xml

open yarn-site - Hadoop Multi Node Cluster - Edureka

 

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

STEP 21: Format the namenode (Only on master machine).

Command: hadoop namenode -format

Namenode format - Hadoop Multi Node Cluster - Edureka

STEP 22: Start all daemons (Only on master machine).

Command: ./sbin/start-all.sh

start-all daemon - Hadoop Multi Node Cluster - Edureka

STEP 23: Check all the daemons running on both master and slave machines.

Command: jps

On master

jps - Hadoop Multi Node Cluster - Edureka

On slave

jps slave - Hadoop Multi Node Cluster - Edureka
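If everything went well, the jps output should look roughly like the following sketch (real jps output prefixes each line with a process ID, omitted here; a DataNode runs on both machines in this setup):

# On master:
NameNode
SecondaryNameNode
ResourceManager
DataNode
NodeManager
Jps

# On slave:
DataNode
NodeManager
Jps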

 

At last, open the browser and go to master:50070/dfshealth.html on your master machine. This will give you the NameNode interface. Scroll down and check the number of live nodes: if it is 2, you have successfully set up a multi node Hadoop cluster. If it’s not 2, you might have missed one of the steps mentioned above. But no need to worry; you can go back, verify all the configurations again to find the issues, and then correct them.

web UI - Hadoop Multi Node Cluster - Edureka

 

Here, we have only 2 DataNodes. If you want, you can add more DataNodes according to your needs, refer our blog on Commissioning and Decommissioning Nodes in a Hadoop Cluster.

I hope you have successfully installed a Hadoop Multi Node Cluster. If you are facing any problem, you can comment below and we will reply shortly. In our next blog of the Hadoop Tutorial Series, you will learn some important HDFS commands and you can start playing with Hadoop.

Now that you have understood how to install a Hadoop Multi Node Cluster, check out the Hadoop training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. The Edureka Big Data Hadoop Certification Training course helps learners become experts in HDFS, Yarn, MapReduce, Pig, Hive, HBase, Oozie, Flume and Sqoop using real-time use cases in the Retail, Social Media, Aviation, Tourism and Finance domains.

Got a question for us? Please mention it in the comments section and we will get back to you.

Top DevOps Interview Questions You Must Prepare In 2019


Are you a DevOps engineer or thinking of getting into DevOps? Well then, the future is yours. Top research firm Forrester declared 2018 as the ‘Year of Enterprise DevOps’ and estimates that 50% of organisations around the world are implementing DevOps.

In this blog, I have listed out dozens of possible questions that interviewers ask potential DevOps hires. This list has been crafted based on the know-how of Edureka instructors who are industry experts and the experiences of nearly 30,000 Edureka DevOps learners from 60 countries.

Edureka 2019 Tech Career Guide is out! Hottest job roles, precise learning paths, industry outlook & more in the guide. Download now.

The crucial thing to understand is that DevOps is not merely a collection of technologies but rather a way of thinking, a culture. DevOps requires a cultural shift that merges operations with development and demands a linked toolchain of technologies to facilitate collaborative change. Since the DevOps philosophy is still at a very nascent stage, application of DevOps as well as the bandwidth required to adapt and collaborate, varies from organization to organization. However, you can develop a portfolio of DevOps skills that can present you as a perfect candidate for any type of organization.

If you want to develop your DevOps skills in a thoughtful, structured manner and get certified as a DevOps Engineer, we would be glad to help. Once you finish the Edureka DevOps certification course, we promise that you will be able to handle a variety of DevOps roles in the industry.

What are the requirements to become a DevOps Engineer?

When looking to fill out DevOps roles, organisations look for a clear set of skills. The most important of these are:

  • Experience with infrastructure automation tools like Chef, Puppet, Ansible, SaltStack or Windows PowerShell DSC.
  • Fluency in web languages like Ruby, Python, PHP or Java.
  • Interpersonal skills that help you communicate and collaborate across teams and roles.

If you have the above skills, then you are ready to start preparing for your DevOps interview! If not, don’t worry – our DevOps certification course will help you master DevOps.

In order to structure the questions below, I put myself in your shoes. Most of the answers in this blog are written from your perspective, i.e. someone who is a potential DevOps expert.

If you have attended DevOps interviews or have any additional questions you would like answered, please do mention them in our Q&A Forum. We’ll get back to you at the earliest.

DevOps Interview Questions and Answers | Edureka

Top DevOps Interview Questions

These are the top questions you might face in a DevOps job interview:

General DevOps Interview Questions

This category will include questions that are not related to any particular DevOps stage. Questions here are meant to test your understanding about DevOps rather than focusing on a particular tool or a stage.

Q1. What are the fundamental differences between DevOps & Agile?

The differences between the two are listed down in the table below.

DevOps vs Agile

Features | DevOps | Agile
Agility | Agility in both Development & Operations | Agility in only Development
Processes/Practices | Involves processes such as CI, CD, CT, etc. | Involves practices such as Agile Scrum, Agile Kanban, etc.
Key Focus Area | Timeliness & quality have equal priority | Timeliness is the main priority
Release Cycles/Development Sprints | Smaller release cycles with immediate feedback | Smaller release cycles
Source of Feedback | Feedback is from self (monitoring tools) | Feedback is from customers
Scope of Work | Agility & need for Automation | Agility only

Q2. What is the need for DevOps?

According to me, this answer should start by explaining the general market trend: instead of releasing big sets of features, companies are trying to see if small features can be delivered to their customers through a series of release trains. This has many advantages like quick feedback from customers, better quality of software etc., which in turn leads to high customer satisfaction. To achieve this, companies are required to:

  1. Increase deployment frequency
  2. Lower failure rate of new releases
  3. Shorten the lead time between fixes
  4. Achieve a faster mean time to recovery in the event of a new release crashing

DevOps fulfills all these requirements and helps in achieving seamless software delivery. You can give examples of companies like Etsy, Google and Amazon which have adopted DevOps to achieve levels of performance that were unthinkable even five years ago. They are doing tens, hundreds or even thousands of code deployments per day while delivering world class stability, reliability and security.

If I have to test your knowledge on DevOps, you should know the difference between Agile and DevOps. The next question is directed towards that.

Q3. How is DevOps different from Agile / SDLC?

I would advise you to go with the below explanation:

Agile is a set of values and principles about how to produce, i.e. develop, software. Example: if you have some ideas and you want to turn those ideas into working software, you can use the Agile values and principles as a way to do that. But that software might only be working on a developer’s laptop or in a test environment. You want a way to quickly, easily and repeatably move that software into production infrastructure, in a safe and simple way. To do that, you need DevOps tools and techniques.

You can summarize by saying Agile software development methodology focuses on the development of software but DevOps on the other hand is responsible for development as well as deployment of the software in the safest and most reliable way possible. Here’s a blog that will give you more information on the evolution of DevOps.

Now remember, you have included DevOps tools in your previous answer so be prepared to answer some questions related to that.

Q4. Which are the top DevOps tools? Which tools have you worked on?

The most popular DevOps tools are mentioned below:

  • Git : Version Control System tool
  • Jenkins : Continuous Integration tool
  • Selenium : Continuous Testing tool
  • Puppet, Chef, Ansible : Configuration Management and Deployment tools
  • Nagios : Continuous Monitoring tool
  • Docker : Containerization tool

You can also mention any other tool if you want, but make sure you include the above tools in your answer.
The second part of the answer has two possibilities:

  1.  If you have experience with all the above tools, then you can say that you have worked on all these tools for developing good quality software and deploying it easily, frequently and reliably.
  2.  If you have experience only with some of the above tools, then mention those tools and say that you specialize in them and have an overview of the rest of the tools.

Our DevOps certification course includes hands-on training on the most popular DevOps tools. Find out when the next batch starts.

Q5. How do all these tools work together?

Given below is a generic logical flow where everything gets automated for seamless delivery. However, this flow may vary from organization to organization as per the requirement.

  1. Developers develop the code, and this source code is managed by Version Control System tools like Git.
  2. Developers send this code to the Git repository, and any change made in the code is committed to this repository.
  3. Jenkins pulls this code from the repository using the Git plugin and builds it using tools like Ant or Maven.
  4. Configuration management tools like Puppet deploy and provision the testing environment, and then Jenkins releases this code on the test environment, on which testing is done using tools like Selenium.
  5. Once the code is tested, Jenkins sends it for deployment on the production server (even the production server is provisioned and maintained by tools like Puppet).
  6. After deployment, it is continuously monitored by tools like Nagios.
  7. Docker containers provide a testing environment to test the build features.

devops tools - devops interview questions

Q6. What are the advantages of DevOps?

For this answer, you can use your past experience and explain how DevOps helped you in your previous job. If you don’t have any such experience, then you can mention the below advantages.

Technical benefits:

  • Continuous software delivery
  • Less complex problems to fix
  • Faster resolution of problems

Business benefits:

  • Faster delivery of features
  • More stable operating environments
  • More time available to add value (rather than fix/maintain)

Q7. What is the most important thing DevOps helps us achieve?

According to me, the most important thing that DevOps helps us achieve is to get the changes into production as quickly as possible while minimizing risks in software quality assurance and compliance. This is the primary objective of DevOps. Learn more in this DevOps tutorial blog.
However, you can add many other positive effects of DevOps. For example, clearer communication and better working relationships between teams i.e. both the Ops team and Dev team collaborate together to deliver good quality software which in turn leads to higher customer satisfaction.

Q8. Explain with a use case where DevOps can be used in industry/ real-life.

There are many industries that are using DevOps, so you can mention any of those use cases; you can also refer to the below example:
Etsy is a peer-to-peer e-commerce website focused on handmade or vintage items and supplies, as well as unique factory-manufactured items. Etsy struggled with slow, painful site updates that frequently caused the site to go down. This affected sales for the millions of Etsy users who sold goods through the online marketplace and risked driving them to competitors.
With the help of a new technical management team, Etsy transitioned from its waterfall model, which produced four-hour full-site deployments twice weekly, to a more agile approach. Today, it has a fully automated deployment pipeline, and its continuous delivery practices have reportedly resulted in more than 50 deployments a day with fewer disruptions.

Q9. Explain your understanding and expertise on both the software development side and the technical operations side of an organization you have worked with in the past.

For this answer, share your past experience and try to explain how flexible you were in your previous job. You can refer the below example:
DevOps engineers almost always work in a 24/7 business-critical online environment. I was adaptable to on-call duties and was available to take up real-time, live-system responsibility. I successfully automated processes to support continuous software deployments. I have experience with public/private clouds, tools like Chef or Puppet, scripting and automation with tools like Python and PHP, and a background in Agile.

Q10. What are the anti-patterns of DevOps?

A pattern is a common usage that is generally followed. If a pattern commonly adopted by others does not work for your organization and you continue to blindly follow it, you are essentially adopting an anti-pattern. There are several myths about DevOps. Some of them include:

  • DevOps is a process
  • Agile equals DevOps
  • We need a separate DevOps group
  • Devops will solve all our problems
  • DevOps means Developers Managing Production
  • DevOps is Development-driven release management
    1. DevOps is not development driven.
    2. DevOps is not IT Operations driven.
  • We can’t do DevOps – We’re Unique
  • We can’t do DevOps – We’ve got the wrong people

Version Control System (VCS) Interview Questions

Now let’s look at some interview questions on VCS. If you would like to get hands-on training on a VCS like Git, it’s included in our DevOps Certification course.

Q1. What is Version control?

This is probably the easiest question you will face in the interview. My suggestion is to first give a definition of Version Control: it is a system that records changes to a file or set of files over time so that you can recall specific versions later. Version control systems consist of a central shared repository where teammates can commit changes to a file or set of files. Then you can mention the uses of version control.

Version control allows you to:

  • Revert files back to a previous state.
  • Revert the entire project back to a previous state.
  • Compare changes over time.
  • See who last modified something that might be causing a problem.
  • See who introduced an issue and when it was introduced.

Q2. What are the benefits of using version control?

I will suggest you to include the following advantages of version control:

  1. With Version Control System (VCS), all the team members are allowed to work freely on any file at any time. VCS will later allow you to merge all the changes into a common version.
  2. All the past versions and variants are neatly packed up inside the VCS. When you need it, you can request any version at any time and you’ll have a snapshot of the complete project right at hand.
  3. Every time you save a new version of your project, your VCS requires you to provide a short description of what was changed. Additionally, you can see what exactly was changed in the file’s content. This allows you to know who has made what change in the project.
  4. A distributed VCS like Git allows all the team members to have complete history of the project so if there is a breakdown in the central server you can use any of your teammate’s local Git repository.

Q3. Describe branching strategies you have used.

This question is asked to test your branching experience, so tell them about how you have used branching in your previous job and what purpose it serves. You can refer to the below points:

  • Feature branching
    A feature branch model keeps all of the changes for a particular feature inside of a branch. When the feature is fully tested and validated by automated tests, the branch is then merged into master.
  • Task branching
    In this model each task is implemented on its own branch with the task key included in the branch name. It is easy to see which code implements which task, just look for the task key in the branch name.
  • Release branching
    Once the develop branch has acquired enough features for a release, you can clone that branch to form a Release branch. Creating this branch starts the next release cycle, so no new features can be added after this point, only bug fixes, documentation generation, and other release-oriented tasks should go in this branch. Once it is ready to ship, the release gets merged into master and tagged with a version number. In addition, it should be merged back into develop branch, which may have progressed since the release was initiated.

In the end, tell them that branching strategies vary from one organization to another, and that you know basic branching operations like delete, merge, checking out a branch, etc.

Q4. Which VCS tool you are comfortable with?

You can just mention the VCS tool that you have worked on like this: “I have worked on Git and one major advantage it has over other VCS tools like SVN is that it is a distributed version control system.”
Distributed VCS tools do not necessarily rely on a central server to store all the versions of a project’s files. Instead, every developer “clones” a copy of a repository and has the full history of the project on their own hard drive.

Q5. What is Git?

I will suggest that you attempt this question by first explaining about the architecture of git as shown in the below diagram. You can refer to the explanation given below:

  • Git is a Distributed Version Control system (DVCS). It can track changes to a file and allows you to revert back to any particular change.
  • Its distributed architecture provides many advantages over other Version Control Systems (VCS) like SVN. One major advantage is that it does not rely on a central server to store all the versions of a project’s files. Instead, every developer “clones” a copy of a repository (shown in the diagram below as “Local repository”) and has the full history of the project on his hard drive, so that when there is a server outage, all you need for recovery is one of your teammates’ local Git repositories.
  • There is a central cloud repository as well, where developers can commit changes and share them with other teammates; as you can see in the diagram, all collaborators commit changes to the “Remote repository”.

git architecture - devops interview questions

Q6. Explain some basic Git commands?

Below are some basic Git commands:

git commands - devops interview questions

Q7. In Git how do you revert a commit that has already been pushed and made public?

There can be two answers to this question, so make sure that you include both, because either of the below options can be used depending on the situation:

  • Remove or fix the bad file in a new commit and push it to the remote repository. This is the most natural way to fix an error. Once you have made the necessary changes to the file, commit it to the remote repository. For that I will use
    git commit -m "commit message"
  • Create a new commit that undoes all changes that were made in the bad commit. To do this I will use the command below; a worked example follows this list.
    git revert <name of bad commit>
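For example, a typical sequence for the second option looks like this (the commit hash here is hypothetical):

git revert 4f2a1c9        # creates a new commit that reverses the bad commit
git push origin master    # publishes the fix without rewriting public history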

Q8. How do you squash last N commits into a single commit?

There are two options to squash last N commits into a single commit. Include both of the below mentioned options in your answer:

  • If you want to write the new commit message from scratch, use the following command (a concrete example follows this list):
    git reset --soft HEAD~N &&
    git commit
  • If you want to start editing the new commit message with a concatenation of the existing commit messages, then you need to extract those messages and pass them to git commit. For that I will use
    git reset --soft HEAD~N &&
    git commit --edit -m"$(git log --format=%B --reverse HEAD..HEAD@{1})"
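As a concrete example of the first option, squashing the last 3 commits (N = 3) with a fresh message would look like this (the message is hypothetical):

git reset --soft HEAD~3
git commit -m "Implement login feature"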

Q9. What is Git bisect? How can you use it to determine the source of a (regression) bug?

I will suggest you to first give a small definition of Git bisect: it is used to find the commit that introduced a bug by using binary search. The command for Git bisect is
git bisect <subcommand> <options>
Now since you have mentioned the command above, explain what this command will do. This command uses a binary search algorithm to find which commit in your project’s history introduced a bug. You use it by first telling it a “bad” commit that is known to contain the bug, and a “good” commit that is known to be from before the bug was introduced. Then Git bisect picks a commit between those two endpoints and asks you whether the selected commit is “good” or “bad”. It continues narrowing down the range until it finds the exact commit that introduced the change.
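A typical bisect session looks roughly like the sketch below (the tag v1.0 is a hypothetical known-good point):

git bisect start
git bisect bad            # the current HEAD is known to contain the bug
git bisect good v1.0      # the last commit known to be bug-free
# test the commit Git checks out for you, then mark it:
git bisect good           # or: git bisect bad
# repeat until Git reports the first bad commit, then clean up:
git bisect reset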

Q10. What is Git rebase and how can it be used to resolve conflicts in a feature branch before merge?

According to me, you should start by saying that git rebase is a command which takes the local commits that are ahead of another branch and replays them on top of that branch, so that the history of your current branch is rewritten to sit on top of the rebased branch.
Now, once you have defined Git rebase, it is time for an example to show how it can be used to resolve conflicts in a feature branch before merge: if a feature branch was created from master, and since then the master branch has received new commits, Git rebase can be used to move the feature branch to the tip of master.
The command effectively will replay the changes made in the feature branch at the tip of master, allowing conflicts to be resolved in the process. When done with care, this will allow the feature branch to be merged into master with relative ease and sometimes as a simple fast-forward operation.

Q11. How do you configure a Git repository to run code sanity checking tools right before making commits, and preventing them if the test fails?

I will suggest you to first give a small introduction to sanity checking: a sanity or smoke test determines whether it is possible and reasonable to continue testing.
Now explain how to achieve this: it can be done with a simple script tied to the pre-commit hook of the repository. The pre-commit hook is triggered right before a commit is made, even before you are required to enter a commit message. In this script one can run other tools, such as linters, and perform sanity checks on the changes being committed into the repository.
Finally, give an example; you can refer to the below script:
#!/bin/sh
# Pre-commit hook: block the commit if any staged .go file is not gofmt-formatted
files=$(git diff --cached --name-only --diff-filter=ACM | grep '\.go$')
if [ -z "$files" ]; then
exit 0
fi
unfmtd=$(gofmt -l $files)
if [ -z "$unfmtd" ]; then
exit 0
fi
echo "Some .go files are not fmt'd"
exit 1
This script checks whether any .go file that is about to be committed needs to be passed through the standard Go source code formatting tool gofmt. By exiting with a non-zero status, the script effectively prevents the commit from being applied to the repository.

Q12. How do you find a list of files that has changed in a particular commit?

For this answer, instead of just telling the command, explain what exactly this command will do. To get a list of the files that have changed in a particular commit, use the command
git diff-tree -r {hash}
Given the commit hash, this will list all the files that were changed or added in that commit. The -r flag makes the command list individual files, rather than collapsing them into root directory names only.
You can also include the below point; although it is totally optional, it will help in impressing the interviewer.
The output will also include some extra information, which can be easily suppressed by including two flags:
git diff-tree --no-commit-id --name-only -r {hash}
Here --no-commit-id will suppress the commit hashes from appearing in the output, and --name-only will only print the file names, instead of their paths.

Q13. How do you setup a script to run every time a repository receives new commits through push?

There are three ways to configure a script to run every time a repository receives new commits through push: one needs to define either a pre-receive, an update, or a post-receive hook, depending on when exactly the script needs to be triggered.

  • Pre-receive hook in the destination repository is invoked when commits are pushed to it. Any script bound to this hook will be executed before any references are updated. This is a useful hook to run scripts that help enforce development policies.
  • Update hook works in a similar manner to pre-receive hook, and is also triggered before any updates are actually made. However, the update hook is called once for every commit that has been pushed to the destination repository.
  • Finally, post-receive hook in the repository is invoked after the updates have been accepted into the destination repository. This is an ideal place to configure simple deployment scripts, invoke some continuous integration systems, dispatch notification emails to repository maintainers, etc.

Hooks are local to every Git repository and are not versioned. Scripts can either be created within the hooks directory inside the “.git” directory, or they can be created elsewhere and links to those scripts can be placed within the directory.
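As an illustration, a minimal post-receive hook might look like the sketch below, saved as hooks/post-receive in the destination repository and made executable (it assumes the mail utility is available, and the notification address is hypothetical):

#!/bin/sh
# post-receive is fed one line per updated ref: <old-sha> <new-sha> <ref-name>
while read oldrev newrev refname; do
  echo "Push to $refname: $oldrev -> $newrev" | mail -s "Repository updated" dev-team@example.com
done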

Q14. How will you know in Git if a branch has already been merged into master?

I will suggest you to include both of the below mentioned commands:
git branch --merged lists the branches that have been merged into the current branch.
git branch --no-merged lists the branches that have not been merged.

Continuous Integration questions

Now, let’s look at Continuous Integration interview questions:

Q1. What is meant by Continuous Integration?

I will advise you to begin this answer by giving a small definition of Continuous Integration (CI). It is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.
I suggest that you explain how you have implemented it in your previous job. You can refer the below given example:

Jenkins standalone architecture - devops interview questions

In the diagram shown above:

  1. Developers check out code into their private workspaces.
  2. When they are done with it they commit the changes to the shared repository (Version Control Repository).
  3. The CI server monitors the repository and checks out changes when they occur.
  4. The CI server then pulls these changes and builds the system and also runs unit and integration tests.
  5. The CI server will now inform the team of the successful build.
  6. If the build or tests fails, the CI server will alert the team.
  7. The team will try to fix the issue at the earliest opportunity.
  8. This process keeps on repeating.

Q2. Why do you need a Continuous Integration of Dev & Testing?

For this answer, you should focus on the need of Continuous Integration. My suggestion would be to mention the below explanation in your answer:
Continuous Integration of Dev and Testing improves the quality of software, and reduces the time taken to deliver it, by replacing the traditional practice of testing after completing all development. It allows Dev team to easily detect and locate problems early because developers need to integrate code into a shared repository several times a day (more frequently). Each check-in is then automatically tested.

Q3. What are the success factors for Continuous Integration?

Here you have to mention the requirements for Continuous Integration. You could include the following points in your answer:

  • Maintain a code repository
  • Automate the build
  • Make the build self-testing
  • Everyone commits to the baseline every day
  • Every commit (to baseline) should be built
  • Keep the build fast
  • Test in a clone of the production environment
  • Make it easy to get the latest deliverables
  • Everyone can see the results of the latest build
  • Automate deployment

Q4. Explain how you can move or copy Jenkins from one server to another?

I will approach this task by copying the jobs directory from the old server to the new one. There are multiple ways to do that; I have mentioned them below:
You can:

  • Move a job from one installation of Jenkins to another by simply copying the corresponding job directory.
  • Make a copy of an existing job by making a clone of a job directory by a different name.
  • Rename an existing job by renaming a directory. Note that if you change a job name you will need to change any other job that tries to call the renamed job.

Q5. Explain how you can create a backup and copy files in Jenkins?

Answer to this question is really direct. To create a backup, all you need to do is to periodically back up your JENKINS_HOME directory. This contains all of your build jobs configurations, your slave node configurations, and your build history. To create a back-up of your Jenkins setup, just copy this directory. You can also copy a job directory to clone or replicate a job or rename the directory.
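For instance, a simple cron-friendly backup could be a sketch like the following, assuming JENKINS_HOME is /var/lib/jenkins (the default on many Linux installs; adjust the paths for your setup):

# archive the entire Jenkins home directory, date-stamped
tar -czf /backup/jenkins-$(date +%F).tar.gz /var/lib/jenkins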

Q6. Explain how you can setup Jenkins job?

My approach to this answer will be to first mention how to create a Jenkins job: go to the Jenkins top page, select "New Job", then choose "Build a free-style software project".
Then you can tell the elements of this freestyle job:

  • Optional SCM, such as CVS or Subversion where your source code resides.
  • Optional triggers to control when Jenkins will perform builds.
  • Some sort of build script that performs the build (ant, maven, shell script, batch file, etc.) where the real work happens.
  • Optional steps to collect information out of the build, such as archiving the artifacts and/or recording javadoc and test results.
  • Optional steps to notify other people/systems with the build result, such as sending e-mails, IMs, updating the issue tracker, etc.

Q7. Mention some of the useful plugins in Jenkins.

Below, I have mentioned some important Plugins:

  • Maven 2 project
  • Amazon EC2
  • HTML publisher
  • Copy artifact
  • Join
  • Green Balls

These Plugins, I feel are the most useful plugins. If you want to include any other Plugin that is not mentioned above, you can add them as well. But, make sure you first mention the above stated plugins and then add your own.

Q8. How will you secure Jenkins?

The way I secure Jenkins is mentioned below. If you have any other way of doing it, please mention it in the comments section below:

  • Ensure global security is on.
  • Ensure that Jenkins is integrated with my company’s user directory with appropriate plugin.
  • Ensure that matrix-based or project-based matrix security is enabled to fine-tune access.
  • Automate the process of setting rights/privileges in Jenkins with custom version controlled script.
  • Limit physical access to Jenkins data/folders.
  • Periodically run security audits on same.

Jenkins is one of the many popular tools that are used extensively in DevOps. Edureka’s DevOps Certification course will provide you hands-on training with Jenkins and high quality guidance from industry experts. Give it a look:

Here’s what Shyam Verma, one of our DevOps learners had to say about our DevOps Training course:

 

Continuous Testing Interview Questions:

Now let’s move on to the Continuous Testing questions.

Q1. What is Continuous Testing?

I will advise you to follow the below mentioned explanation:
Continuous Testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with the latest build. In this way, each build is tested continuously, allowing development teams to get fast feedback so that they can prevent those problems from progressing to the next stage of the software delivery life-cycle. This dramatically speeds up a developer’s workflow as there’s no need to manually rebuild the project and re-run all tests after making changes.

Q2. What is Automation Testing?

Automation testing, or Test Automation, is the process of automating a manual process to test the application/system under test. Automation testing involves the use of separate testing tools which let you create test scripts that can be executed repeatedly and don’t require any manual intervention.

Q3. What are the benefits of Automation Testing?

I have listed down some advantages of automation testing. Include these in your answer and you can add your own experience of how Continuous Testing helped your previous company:

  • Supports execution of repeated test cases
  • Aids in testing a large test matrix
  • Enables parallel execution
  • Encourages unattended execution
  • Improves accuracy thereby reducing human generated errors
  • Saves time and money

Q4. How to automate Testing in DevOps lifecycle?

I have mentioned a generic flow below which you can refer to:
In DevOps, developers are required to commit all the changes made in the source code to a shared repository. Continuous Integration tools like Jenkins will pull the code from this shared repository every time a change is made and deploy it for Continuous Testing, which is done by tools like Selenium, as shown in the below diagram.
In this way, any change in the code is continuously tested, unlike in the traditional approach.

automate testing - devops interview questions

Q5. Why is Continuous Testing important for DevOps?

You can answer this question by saying, “Continuous Testing allows any change made in the code to be tested immediately. This avoids the problems created by having “big-bang” testing left to the end of the cycle such as release delays and quality issues. In this way, Continuous Testing facilitates more frequent and good quality releases.”

Want all your DevOps questions answered while you learn? Want 24×7 lifetime support and permanent access to all course material? Make sure you check out our DevOps Certification program.

Q6. What are the key elements of Continuous Testing tools?

Key elements of Continuous Testing are:

  • Risk Assessment: It covers risk mitigation tasks, technical debt, quality assessment and test coverage optimization to ensure the build is ready to progress to the next stage.
  • Policy Analysis: It ensures all processes align with the organization’s evolving business needs and that compliance demands are met.
  • Requirements Traceability: It ensures true requirements are met and rework is not required. An object assessment is used to identify which requirements are at risk, working as expected or require further validation.
  • Advanced Analysis: It uses automation in areas such as static code analysis, change impact analysis and scope assessment/prioritization to prevent defects in the first place and accomplish more within each iteration.
  • Test Optimization: It ensures tests yield accurate outcomes and provide actionable findings. Aspects include Test Data Management, Test Optimization Management and Test Maintenance.
  • Service Virtualization: It ensures access to real-world testing environments. Service virtualization enables access to a virtual form of the required testing stages, cutting the wasted time on test environment setup and availability.

Q7. Which Testing tool are you comfortable with and what are the benefits of that tool?

Here mention the testing tool that you have worked with and accordingly frame your answer. I have mentioned an example below:
I have worked on Selenium to ensure high quality and more frequent releases.

Some advantages of Selenium are:

  • It is free and open source
  • It has a large user base and helping communities
  • It has cross-browser compatibility (Firefox, Chrome, Internet Explorer, Safari etc.)
  • It has great platform compatibility (Windows, Mac OS, Linux etc.)
  • It supports multiple programming languages (Java, C#, Ruby, Python, Perl etc.)
  • It has fresh and regular repository developments
  • It supports distributed testing

Q8. What are the Testing types supported by Selenium?

Selenium supports two types of testing:
Regression Testing: It is the act of retesting a product around an area where a bug was fixed.
Functional Testing: It refers to the testing of software features (functional points) individually.

Q9. What is Selenium IDE?

My suggestion is to start this answer by defining Selenium IDE. It is an integrated development environment for Selenium scripts. It is implemented as a Firefox extension, and allows you to record, edit, and debug tests. Selenium IDE includes the entire Selenium Core, allowing you to easily and quickly record and play back tests in the actual environment that they will run in.
Now include some advantages in your answer. With autocomplete support and the ability to move commands around quickly, Selenium IDE is the ideal environment for creating Selenium tests no matter what style of tests you prefer.

Q10. What is the difference between Assert and Verify commands in Selenium?

I have mentioned differences between Assert and Verify commands below:

  • Assert command checks whether the given condition is true or false. Let’s say we assert whether the given element is present on the web page or not. If the condition is true, then the program control will execute the next test step. But, if the condition is false, the execution would stop and no further test would be executed.
  • Verify command also checks whether the given condition is true or false. Irrespective of the condition being true or false, the program execution doesn’t halt, i.e. any failure during verification would not stop the execution and all the test steps would be executed.

Q11. How to launch Browser using WebDriver?

The following syntax can be used to launch a browser:
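// Each line below is an alternative; use one, together with the matching org.openqa.selenium driver import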
WebDriver driver = new FirefoxDriver();
WebDriver driver = new ChromeDriver();
WebDriver driver = new InternetExplorerDriver();

Q12. When should I use Selenium Grid?

For this answer, my suggestion would be to give a small definition of Selenium Grid. It can be used to execute same or different test scripts on multiple platforms and browsers concurrently to achieve distributed test execution. This allows testing under different environments and saving execution time remarkably.

Learn Automation testing and other DevOps concepts in live instructor-led online classes in our DevOps Certification course.

Configuration Management Interview Questions

Now let’s check how much you know about Configuration Management.

Q1. What are the goals of Configuration management processes?

The purpose of Configuration Management (CM) is to ensure the integrity of a product or system throughout its life-cycle by making the development or deployment process controllable and repeatable, therefore creating a higher quality product or system. The CM process allows orderly management of system information and system changes for purposes such as to:

  • Revise capability
  • Improve performance, reliability or maintainability
  • Extend life
  • Reduce cost
  • Reduce risk and liability
  • Correct defects

Q2. What is the difference between Asset management and Configuration Management?

Given below are a few differences between Asset Management and Configuration Management:

[Image: Asset Management vs Configuration Management]

Q3. What is the difference between an Asset and a Configuration Item?

According to me, you should first explain Asset. An asset has a financial value along with a depreciation rate attached to it. IT assets are just a sub-set of it. Anything and everything that has a cost and that the organization uses for its asset value calculation and related benefits in tax calculation falls under Asset Management, and such an item is called an asset.
Configuration Item on the other hand may or may not have financial values assigned to it. It will not have any depreciation linked to it. Thus, its life would not be dependent on its financial value but will depend on the time till that item becomes obsolete for the organization.

Now you can give an example that can showcase the similarity and differences between both:
1) Similarity:
Server – It is both an asset as well as a CI.
2) Difference:
Building – It is an asset but not a CI.
Document – It is a CI but not an asset.

Q4. What do you understand by “Infrastructure as code”? How does it fit into the DevOps methodology? What purpose does it achieve?

Infrastructure as Code (IAC) is a type of IT infrastructure that operations teams can automatically manage and provision through code, rather than using a manual process.
To achieve faster deployments, companies treat infrastructure like software: as code that can be managed with DevOps tools and processes. These tools let you make infrastructure changes more easily, rapidly, safely and reliably.

Q5. Which among Puppet, Chef, SaltStack and Ansible is the best Configuration Management (CM) tool? Why?

This depends on the organization’s needs, so mention a few points about all of those tools:
Puppet is the oldest and most mature CM tool. Puppet is a Ruby-based Configuration Management tool, but while it has some free features, much of what makes Puppet great is only available in the paid version. Organizations that don’t need a lot of extras will find Puppet useful, but those needing more customization will probably need to upgrade to the paid version.
Chef is written in Ruby, so it can be customized by those who know the language. It also includes free features, plus it can be upgraded from open source to enterprise-level if necessary. On top of that, it’s a very flexible product.
Ansible is a very secure option since it uses Secure Shell. It’s a simple tool to use, but it does offer a number of other services in addition to configuration management. It’s very easy to learn, so it’s perfect for those who don’t have a dedicated IT staff but still need a configuration management tool.
SaltStack is a Python-based open-source CM tool made for larger businesses, but its learning curve is fairly low.

Q6. What is Puppet?

I will advise you to first give a small definition of Puppet. It is a Configuration Management tool which is used to automate administration tasks.
Now you should describe its architecture and how Puppet manages its Agents. Puppet has a Master-Slave architecture in which the Slave first sends a Certificate signing request to the Master, and the Master signs that Certificate in order to establish a secure connection between Puppet Master and Puppet Slave. The Puppet Slave then sends a request to the Puppet Master, and the Puppet Master pushes the configuration to the Slave.
Refer to the diagram below that explains the above description.

[Image: Puppet Master-Slave architecture]

Q7. Before a client can authenticate with the Puppet Master, its certs need to be signed and accepted. How will you automate this task?

The easiest way is to enable auto-signing in puppet.conf.
Do mention that this is a security risk. If you still want to do this:

  • Firewall your Puppet Master – restrict port tcp/8140 to only networks that you trust.
  • Create a Puppet Master for each ‘trust zone’, and only include the trusted nodes in that Puppet Master’s manifest.
  • Never use a full wildcard such as *.

Q8. Describe the most significant gain you made from automating a process through Puppet.

For this answer, I will suggest you explain your past experience with Puppet. You can refer to the below example:
I automated the configuration and deployment of Linux and Windows machines using Puppet. In addition to shortening the processing time from one week to 10 minutes, I used the roles and profiles pattern and documented the purpose of each module in a README to ensure that others could update the module using Git. The modules I wrote are still being used, but they’ve been improved by my teammates and members of the community.

Q9. Which open source or community tools do you use to make Puppet more powerful?

Over here, you need to mention the tools and how you have used those tools to make Puppet more powerful. Below is one example for your reference:
Changes and requests are ticketed through Jira and we manage requests through an internal process. Then, we use Git and Puppet’s Code Manager app to manage Puppet code in accordance with best practices. Additionally, we run all of our Puppet changes through our continuous integration pipeline in Jenkins using the beaker testing framework.

Q10. What are Puppet Manifests?

It is a very important question so make sure you go in a correct flow. According to me, you should first define Manifests. Every node (or Puppet Agent) has got its configuration details in Puppet Master, written in the native Puppet language. These details are written in the language which Puppet can understand and are termed as Manifests. They are composed of Puppet code and their filenames use the .pp extension.
Now give an example. You can write a manifest in Puppet Master that creates a file and installs Apache on all Puppet Agents (Slaves) connected to the Puppet Master.

Q11. What is Puppet Module and How it is different from Puppet Manifest?

For this answer, you can go with the below mentioned explanation:
A Puppet Module is a collection of Manifests and data (such as facts, files, and templates), and they have a specific directory structure. Modules are useful for organizing your Puppet code, because they allow you to split your code into multiple Manifests. It is considered best practice to use Modules to organize almost all of your Puppet Manifests.

Puppet programs are called Manifests which are composed of Puppet code and their file names use the .pp extension.

Q12. What is Facter in Puppet?

You are expected to answer what exactly Facter does in Puppet so according to me, you should say, “Facter gathers basic information (facts) about Puppet Agent such as hardware details, network settings, OS type and version, IP addresses, MAC addresses, SSH keys, and more. These facts are then made available in Puppet Master’s Manifests as variables.”

Q13. What is Chef?

Begin this answer by defining Chef. It is a powerful automation platform that transforms infrastructure into code. Chef is a tool for which you write scripts that are used to automate processes. What processes? Pretty much anything related to IT.
Now you can explain the architecture of Chef, it consists of:

  • Chef Server: The Chef Server is the central store of your infrastructure’s configuration data. The Chef Server stores the data necessary to configure your nodes and provides search, a powerful tool that allows you to dynamically drive node configuration based on data.
  • Chef Node: A Node is any host that is configured using Chef-client. Chef-client runs on your nodes, contacting the Chef Server for the information necessary to configure the node. Since a Node is a machine that runs the Chef-client software, nodes are sometimes referred to as “clients”.
  • Chef Workstation: A Chef Workstation is the host you use to modify your cookbooks and other configuration data.

[Image: Chef architecture]

Q14. What is a resource in Chef?

My suggestion is to first define Resource. A Resource represents a piece of infrastructure and its desired state, such as a package that should be installed, a service that should be running, or a file that should be generated.
You should explain about the functions of Resource for that include the following points:

  • Describes the desired state for a configuration item.
  • Declares the steps needed to bring that item to the desired state.
  • Specifies a resource type such as package, template, or service.
  • Lists additional details (also known as resource properties), as necessary.
  • Are grouped into recipes, which describe working configurations.

Q15. What do you mean by recipe in Chef?

For this answer, I will suggest you to use the above mentioned flow: first define Recipe. A Recipe is a collection of Resources that describes a particular configuration or policy. A Recipe describes everything that is required to configure part of a system.
After the definition, explain the functions of Recipes by including the following points:

  • Install and configure software components.
  • Manage files.
  • Deploy applications.
  • Execute other recipes.

Q16. How does a Cookbook differ from a Recipe in Chef?

The answer to this is pretty direct. You can simply say, “a Recipe is a collection of Resources, and primarily configures a software package or some piece of infrastructure. A Cookbook groups together Recipes and other information in a way that is more manageable than having just Recipes alone.”

Q17. What happens when you don’t specify a Resource’s action in Chef?

My suggestion is to first give a direct answer: when you don’t specify a resource’s action, Chef applies the default action.
Now explain this with an example, the below resource:

file 'C:\Users\Administrator\chef-repo\settings.ini' do
content 'greeting=hello world'
end
is the same as the below resource:
file 'C:\Users\Administrator\chef-repo\settings.ini' do
action :create
content 'greeting=hello world'
end
because :create is the file Resource's default action.

Q18. What is Ansible module?

Modules are considered to be the units of work in Ansible. Each module is mostly standalone and can be written in a standard scripting language such as Python, Perl, Ruby, Bash, etc. One of the guiding properties of modules is idempotency, which means that even if an operation is repeated multiple times, e.g. upon recovery from an outage, it will always place the system into the same state.

Q19. What are playbooks in Ansible?

Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process. Playbooks are designed to be human-readable and are developed in a basic text language.
At a basic level, playbooks can be used to manage configurations of and deployments to remote machines.

Q20. How do I see a list of all of the ansible_ variables?

Ansible by default gathers “facts” about the machines under management, and these facts can be accessed in Playbooks and in templates. To see a list of all of the facts that are available about a machine, you can run the “setup” module as an ad-hoc action:
ansible hostname -m setup
This will print out a dictionary of all of the facts that are available for that particular host.

Q21. How can I set deployment order for applications?

WebLogic Server 8.1 allows you to select the load order for applications. See the Application MBean Load Order attribute in Application. WebLogic Server deploys server-level resources (first JDBC and then JMS) before deploying applications. Applications are deployed in this order: connectors, then EJBs, then Web Applications. If the application is an EAR, the individual components are loaded in the order in which they are declared in the application.xml deployment descriptor.

Q22. Can I refresh static components of a deployed application without having to redeploy the entire application?

Yes, you can use weblogic.Deployer to specify a component and target a server, using the following syntax:
java weblogic.Deployer -adminurl http://admin:7001 -name appname -targets server1,server2 -deploy jsps/*.jsp

Q23. How do I turn the auto-deployment feature off?

The auto-deployment feature checks the applications folder every three seconds to determine whether there are any new applications or any changes to existing applications and then dynamically deploys these changes.

The auto-deployment feature is enabled for servers that run in development mode. To disable auto-deployment feature, use one of the following methods to place servers in production mode:

  • In the Administration Console, click the name of the domain in the left pane, then select the Production Mode checkbox in the right pane.
  • At the command line, include the following argument when starting the domain’s Administration Server:
    -Dweblogic.ProductionModeEnabled=true
Note that production mode is set for all WebLogic Server instances in a given domain.

Q24. When should I use the external_stage option?

Set -external_stage using weblogic.Deployer if you want to stage the application yourself, and prefer to copy it to its target by your own means.

Ansible and Puppet are two of the most popular configuration management tools among DevOps engineers. Learn them and more in our DevOps Masters Program designed by industry experts to certify you as a DevOps Engineer!

Continuous Monitoring Interview Questions

Let’s test your knowledge on Continuous Monitoring.

Q1. Why is Continuous monitoring necessary?

I will suggest you to go with the below mentioned flow:
Continuous Monitoring allows timely identification of problems or weaknesses and quick corrective action that helps reduce the expenses of an organization. Continuous monitoring provides a solution that addresses three operational disciplines known as:

  • continuous audit
  • continuous controls monitoring
  • continuous transaction inspection

Q2. What is Nagios?

You can answer this question by first mentioning that Nagios is one of the monitoring tools. It is used for Continuous monitoring of systems, applications, services, business processes etc. in a DevOps culture. In the event of a failure, Nagios can alert technical staff of the problem, allowing them to begin remediation processes before outages affect business processes, end-users, or customers. With Nagios, you don’t have to explain why an unseen infrastructure outage affects your organization’s bottom line.
Now once you have defined what is Nagios, you can mention the various things that you can achieve using Nagios.

By using Nagios you can:

  • Plan for infrastructure upgrades before outdated systems cause failures.
  • Respond to issues at the first sign of a problem.
  • Automatically fix problems when they are detected.
  • Coordinate technical team responses.
  • Ensure your organization’s SLAs are being met.
  • Ensure IT infrastructure outages have a minimal effect on your organization’s bottom line.
  • Monitor your entire infrastructure and business processes.

This completes the answer to this question. Further details like advantages etc. can be added as per the direction where the discussion is headed.

Q3. How does Nagios work?

I will advise you to follow the below explanation for this answer:
Nagios runs on a server, usually as a daemon or service. Nagios periodically runs plugins residing on the same server; these plugins contact hosts or servers on your network or on the internet. One can view the status information using the web interface. You can also receive email or SMS notifications if something happens.

The Nagios daemon behaves like a scheduler that runs certain scripts at certain moments. It stores the results of those scripts and will run other scripts if these results change.

Now expect a few questions on Nagios components like Plugins, NRPE, etc.

Q4. What are Plugins in Nagios?

Begin this answer by defining Plugins. They are scripts (Perl scripts, Shell scripts, etc.) that can run from a command line to check the status of a host or service. Nagios uses the results from Plugins to determine the current status of hosts and services on your network.
Once you have defined Plugins, explain why we need Plugins. Nagios will execute a Plugin whenever there is a need to check the status of a host or service. Plugin will perform the check and then simply returns the result to Nagios. Nagios will process the results that it receives from the Plugin and take the necessary actions.

Q5. What is NRPE (Nagios Remote Plugin Executor) in Nagios?

For this answer, give a brief definition of NRPE. The NRPE addon is designed to allow you to execute Nagios plugins on remote Linux/Unix machines. The main reason for doing this is to allow Nagios to monitor “local” resources (like CPU load, memory usage, etc.) on remote machines. Since these local resources are not usually exposed to external machines, an agent like NRPE must be installed on the remote Linux/Unix machines.

I will advise you to explain the NRPE architecture on the basis of the diagram shown below. The NRPE addon consists of two pieces:

  • The check_nrpe plugin, which resides on the local monitoring machine.
  • The NRPE daemon, which runs on the remote Linux/Unix machine.

There is a SSL (Secure Socket Layer) connection between monitoring host and remote host as shown in the diagram below.

[Image: NRPE architecture]

Q6. What do you mean by passive check in Nagios?

According to me, the answer should start by explaining Passive checks. They are initiated and performed by external applications/processes and the Passive check results are submitted to Nagios for processing.
Then explain the need for passive checks. They are useful for monitoring services that are asynchronous in nature and cannot be monitored effectively by polling their status on a regularly scheduled basis. They can also be used for monitoring services that are located behind a firewall and cannot be checked actively from the monitoring host.

Q7. When Does Nagios Check for external commands?

Make sure that you stick to the question during your explanation so I will advise you to follow the below mentioned flow. Nagios check for external commands under the following conditions:

  • At regular intervals specified by the command_check_interval option in the main configuration file or,
  • Immediately after event handlers are executed. This is in addition to the regular cycle of external command checks and is done to provide immediate action if an event handler submits commands to Nagios.

Q8. What is the difference between Active and Passive check in Nagios?

For this answer, first point out the basic difference between Active and Passive checks. The major difference is that Active checks are initiated and performed by Nagios, while Passive checks are performed by external applications.
If your interviewer is looking unconvinced with the above explanation then you can also mention some key features of both Active and Passive checks:
Passive checks are useful for monitoring services that are:

  • Asynchronous in nature and cannot be monitored effectively by polling their status on a regularly scheduled basis.
  • Located behind a firewall and cannot be checked actively from the monitoring host.

The main features of Active checks are as follows:

  • Active checks are initiated by the Nagios process.
  • Active checks are run on a regularly scheduled basis.

Q9. How does Nagios help with Distributed Monitoring?

The interviewer will be expecting an answer related to the distributed architecture of Nagios. So, I suggest that you answer it in the below mentioned format:
With Nagios you can monitor your whole enterprise by using a distributed monitoring scheme in which local slave instances of Nagios perform monitoring tasks and report the results back to a single master. You manage all configuration, notification, and reporting from the master, while the slaves do all the work. This design takes advantage of Nagios’s ability to utilize passive checks i.e. external applications or processes that send results back to Nagios. In a distributed configuration, these external applications are other instances of Nagios.

Q10. Explain Main Configuration file of Nagios and its location?

First mention what this main configuration file contains and its function. The main configuration file contains a number of directives that affect how the Nagios daemon operates. This config file is read by both the Nagios daemon and the CGIs (the CGI configuration file specifies the location of the main configuration file).
Now you can tell where it is present and how it is created. A sample main configuration file is created in the base directory of the Nagios distribution when you run the configure script. The default name of the main configuration file is nagios.cfg. It is usually placed in the etc/ subdirectory of your Nagios installation (i.e. /usr/local/nagios/etc/).

Q11. Explain how Flap Detection works in Nagios?

I will advise you to first explain Flapping. Flapping occurs when a service or host changes state too frequently, which generates a lot of problem and recovery notifications.
Once you have defined Flapping, explain how Nagios detects Flapping. Whenever Nagios checks the status of a host or service, it will check to see if it has started or stopped flapping. Nagios follows the below given procedure to do that:

  • Storing the results of the last 21 checks of the host or service
  • Analyzing the historical check results to determine where state changes/transitions occur
  • Using the state transitions to determine a percent state change value (a measure of change) for the host or service
  • Comparing the percent state change value against low and high flapping thresholds

A host or service is determined to have started flapping when its percent state change first exceeds a high flapping threshold. A host or service is determined to have stopped flapping when its percent state change goes below a low flapping threshold.

Q12. What are the three main variables that affect recursion and inheritance in Nagios?

According to me the proper format for this answer should be:
First name the variables and then a small explanation of each of these variables:

  • Name
  • Use
  • Register

Then give a brief explanation for each of these variables. Name is a placeholder that is used by other objects. Use defines the “parent” object whose properties should be used. Register can have a value of 0 (indicating it’s only a template) or 1 (an actual object). The register value is never inherited.

Q13. What is meant by saying Nagios is Object Oriented?

Answer to this question is pretty direct. I will answer this by saying, “One of the features of Nagios is its object configuration format, in which you can create object definitions that inherit properties from other object definitions, hence the name. This simplifies and clarifies relationships between various components.”

Q14. What is State Stalking in Nagios?

I will advise you to first give a small introduction on State Stalking. It is used for logging purposes. When Stalking is enabled for a particular host or service, Nagios will watch that host or service very carefully and log any changes it sees in the output of check results.
Depending on the discussion between you and interviewer you can also add, “It can be very helpful in later analysis of the log files. Under normal circumstances, the result of a host or service check is only logged if the host or service has changed state since it was last checked.”

Want to get trained in monitoring tools like Nagios? Want to get certified as a DevOps Engineer? Make sure you check out our DevOps Masters Program.

Containerization and Virtualization Interview Questions

Let’s see how much you know about containers and VMs.

Q1. What are containers?

My suggestion is to explain the need for containerization first: containers are used to provide a consistent computing environment, from a developer’s laptop to a test environment, and from a staging environment into production.
Now give a definition of containers, a container consists of an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to run it, bundled into one package. Containerizing the application platform and its dependencies removes the differences in OS distributions and underlying infrastructure.

[Image: Containers]

Q2. What are the advantages that Containerization provides over virtualization?

Below are the advantages of containerization over virtualization:

  • Containers provide real-time provisioning and scalability but VMs provide slow provisioning
  • Containers are lightweight when compared to VMs
  • VMs have limited performance when compared to containers
  • Containers have better resource utilization compared to VMs

Q3. How exactly are containers (Docker in our case) different from hypervisor virtualization (vSphere)? What are the benefits?

Given below are some differences. Make sure you include these differences in your answer:

[Image: Docker vs vSphere]

Q4. What is Docker image?

I suggest that you go with the below mentioned flow:
Docker image is the source of a Docker container. In other words, Docker images are used to create containers. Images are created with the build command, and they’ll produce a container when started with run. Images are stored in a Docker registry such as registry.hub.docker.com. Because they can become quite large, images are designed to be composed of layers of other images, allowing a minimal amount of data to be sent when transferring images over the network.
Tip: Be aware of Dockerhub in order to answer questions on pre-available images.

Q5. What is Docker container?

This is a very important question so just make sure you don’t deviate from the topic. I advise you to follow the below mentioned format:
Docker containers include the application and all of its dependencies but share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud.

Now explain how to create a Docker container, Docker containers can be created by either creating a Docker image and then running it or you can use Docker images that are present on the Dockerhub.
Docker containers are basically runtime instances of Docker images.

Q6. What is Docker hub?

Answer to this question is pretty direct. Docker hub is a cloud-based registry service which allows you to link to code repositories, build your images and test them, stores manually pushed images, and links to Docker cloud so you can deploy images to your hosts. It provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.

Q7. How is Docker different from other container technologies?

According to me, below points should be there in your answer:
Docker containers are easy to deploy in a cloud. Docker can get more applications running on the same hardware than other technologies, it makes it easy for developers to quickly create ready-to-run containerized applications, and it makes managing and deploying applications much easier. You can even share containers with your applications.
If you have some more points to add you can do that, but make sure the above explanation is there in your answer.

Q8. What is Docker Swarm?

You should start this answer by explaining Docker Swarm. It is native clustering for Docker which turns a pool of Docker hosts into a single, virtual Docker host. Since Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.
I will also suggest you to include some supported tools:

  • Dokku
  • Docker Compose
  • Docker Machine
  • Jenkins

Q9. What is Dockerfile used for?

This answer according to me should begin by explaining the use of Dockerfile. Docker can build images automatically by reading the instructions from a Dockerfile.
Now I suggest you give a small definition of Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession.

Now expect a few questions to test your experience with Docker.

Q10. Can I use json instead of yaml for my compose file in Docker?

You can use json instead of yaml for your compose file. To use a json file with compose, specify the filename, for example:
docker-compose -f docker-compose.json up

Q11. Tell us how you have used Docker in your past position?

Explain how you have used Docker to help rapid deployment. Explain how you have scripted Docker and used Docker with other tools like Puppet, Chef or Jenkins. If you have no past practical experience in Docker and have past experience with other tools in similar space, be honest and explain the same. In this case, it makes sense if you can compare other tools to Docker in terms of functionality.

Q12. How to create Docker container?

I will suggest you to give a direct answer to this. We can use Docker image to create Docker container by using the below command:
docker run -t -i <image name> <command name>
This command will create and start a container.

You should also add: if you want to check the list of all containers along with their status on a host, use the below command:
docker ps -a

Q13. How to stop and restart the Docker container?

In order to stop the Docker container you can use the below command:
docker stop <container ID>
Now to restart the Docker container you can use:
docker restart <container ID>

Q14. How far do Docker containers scale?

Large web deployments like Google and Twitter, and platform providers such as Heroku and dotCloud all run on container technology, at a scale of hundreds of thousands or even millions of containers running in parallel.

Q15. What platforms does Docker run on?

I will start this answer by saying Docker runs on only Linux and Cloud platforms and then I will mention the below vendors of Linux:

  • Ubuntu 12.04, 13.04 et al
  • Fedora 19/20+
  • RHEL 6.5+
  • CentOS 6+
  • Gentoo
  • ArchLinux
  • openSUSE 12.3+
  • CRUX 3.0+

Cloud:

  • Amazon EC2
  • Google Compute Engine
  • Microsoft Azure
  • Rackspace

Note that Docker does not run on Windows or Mac.

Q16. Do I lose my data when the Docker container exits?

You can answer this by saying, no I won’t lose my data when the Docker container exits. Any data that your application writes to disk gets preserved in its container until you explicitly delete the container. The file system for the container persists even after the container halts.

And, that’s it!

I hope these questions help you crack your DevOps interview. If you have any more questions for us, do mention them in the comments section and we will get back to you.

If you’re looking to build a strong resume for a DevOps role, check out the blog we’ve written for you.

You can also reach out to us on social on Facebook, Twitter, Instagram, YouTube, LinkedIn and even Pinterest!

Want to get started on your DevOps Learning journey? There’s no better place to do it than Edureka. Our Ridiculous Commitment guarantees it!

Want to get certified as a DevOps Lead Engineer and be recognised as a DevOps expert? Make sure you check out our DevOps Masters Program.

All the best for your interview!

Got a question for us? Please mention it in the comments section and we will get back to you.

Related Posts:-

Get started with DevOps

All You Need to Know About DevOps

10 Reasons to learn DevOps

100+ Java Interview Questions You Must Prepare In 2019


Java Interview Questions

In this Java Interview Questions blog, I am going to list some of the most important Java Interview Questions and Answers which will set you apart in the interview process. Java is used by approximately 10 million developers worldwide to develop applications for 15 billion devices supporting Java. It is also used to create applications for trending technologies like Big Data as well as household devices like mobiles and DTH boxes. Hence, today, Java is used everywhere! This is the reason why Java Certification is the most in-demand certification in the programming domain.

Edureka 2019 Tech Career Guide is out! Hottest job roles, precise learning paths, industry outlook & more in the guide. Download now.
We have compiled a list of top Java interview questions which are classified into 7 sections, namely:
  1. Basic Interview Questions
  2. OOPs Interview Questions
  3. JDBC Interview Questions
  4. Spring Interview Questions
  5. Hibernate Interview Questions
  6. JSP Interview Questions
  7. Exception and thread Interview Questions

Java Interview Questions and Answers | Edureka


As a Java professional, it is essential to know the right buzzwords, learn the right technologies and prepare the right answers to commonly asked Java Interview Questions. Here’s a definitive list of top Java Interview Questions that will guarantee a breeze-through to the next level.

In case you attended any Java interview recently, or have additional questions beyond what we covered, we encourage you to post them in our QnA Forum. Our expert team will get back to you at the earliest.  

So let’s get started with the first set of basic Java Interview Questions.

Basic Java Interview Questions

Q1. Explain JDK, JRE and JVM?

JDK vs JRE vs JVM

  • JDK: It stands for Java Development Kit. It is the tool necessary to compile, document and package Java programs. It contains JRE + development tools.
  • JRE: It stands for Java Runtime Environment. JRE refers to a runtime environment in which Java bytecode can be executed. It’s an implementation of the JVM which physically exists.
  • JVM: It stands for Java Virtual Machine. It is an abstract machine. It is a specification that provides a run-time environment in which Java bytecode can be executed. JVM follows three notations: Specification, Implementation, and Runtime Instance.

Q2. Explain public static void main(String args[]) in Java.

main() in Java is the entry point for any Java program. It is always written as public static void main(String[] args).

  • public: Public is an access modifier, which is used to specify who can access this method. Public means that this Method will be accessible by any Class.
  • static: It is a keyword in Java which indicates that main() is class-based. main() is made static in Java so that it can be accessed without creating an instance of a Class. In case main is not made static, the compiler will throw an error, as main() is called by the JVM before any objects are made, and only static methods can be directly invoked via the class.
  • void: It is the return type of the method. Void defines the method which will not return any value.
  • main: It is the name of the method which is searched by JVM as a starting point for an application with a particular signature only. It is the method where the main execution occurs.
  • String args[]: It is the parameter passed to the main method.

Q3. Why is Java platform independent?

Java is called platform independent because its bytecode can run on any system, irrespective of the underlying operating system.

Q4. Why is Java not 100% object-oriented?

Java is not 100% object-oriented because it makes use of eight primitive data types (boolean, byte, char, int, float, double, long and short) which are not objects.

Q5. What are wrapper classes in Java?

Wrapper classes convert the Java primitives into the reference types (objects). Every primitive data type has a class dedicated to it. These are known as wrapper classes because they “wrap” the primitive data type into an object of that class. For example, int is wrapped by Integer, char by Character, and boolean by Boolean.
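
For example, a minimal sketch of wrapping and unwrapping a primitive:

public class WrapperDemo {
  public static void main(String[] args) {
    int primitive = 42;
    Integer wrapped = Integer.valueOf(primitive); // explicit boxing
    Integer autoBoxed = primitive;                // autoboxing (since Java 5)
    int unwrapped = wrapped.intValue();           // explicit unboxing
    System.out.println(wrapped + " " + autoBoxed + " " + unwrapped);
  }
}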

Q6. What are constructors in Java?

In Java, constructor refers to a block of code which is used to initialize an object. It must have the same name as that of the class. Also, it has no return type and it is automatically called when an object is created.

There are two types of constructors:

  1. Default Constructor: In Java, a default constructor is the one which does not take any inputs. In other words, default constructors are the no-argument constructors which will be created by default in case no other constructor is defined by the user. Its main purpose is to initialize the instance variables with the default values. Also, it is majorly used for object creation. 
  2. Parameterized Constructor: The parameterized constructor in Java is the constructor which is capable of initializing the instance variables with the provided values. In other words, the constructors which take arguments are called parameterized constructors (see the sketch after this list).
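
Here is a minimal sketch of both types (the Employee class and its field are only illustrative):

class Employee {
  String name;

  // No-argument constructor: initializes the field with a default value
  Employee() {
    this.name = "Unknown";
  }

  // Parameterized constructor: initializes the field with the provided value
  Employee(String name) {
    this.name = name;
  }

  public static void main(String[] args) {
    System.out.println(new Employee().name);          // Unknown
    System.out.println(new Employee("Edureka").name); // Edureka
  }
}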

Q7. What is singleton class in Java and how can we make a class singleton?

Singleton class is a class of which only one instance can be created at any given time, in one JVM. A class can be made singleton by making its constructor private, as sketched below.
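
A minimal sketch (eager initialization is just one of several ways to write a singleton):

class Singleton {
  private static final Singleton INSTANCE = new Singleton();

  // Private constructor prevents instantiation from outside the class
  private Singleton() { }

  public static Singleton getInstance() {
    return INSTANCE;
  }
}

Every call to Singleton.getInstance() returns the same object.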

Q8. What is the difference between ArrayList and Vector in Java?

  • ArrayList is not synchronized, whereas Vector is synchronized.
  • ArrayList is fast as it’s non-synchronized, whereas Vector is slow as it is thread-safe.
  • If an element is inserted into a full ArrayList, it increases its array size by 50%, whereas Vector defaults to doubling the size of its array.
  • ArrayList does not define the increment size, whereas Vector defines the increment size.
  • ArrayList can only use an Iterator for traversing, whereas Vector can use both Enumeration and Iterator for traversing.

Q9. What is the difference between equals() and == in Java?

equals() is a method defined in the Object class in Java and is used for checking the equality of two objects as defined by business logic.

“==” or the equality operator in Java is a binary operator provided by the Java programming language and is used to compare primitives and object references. public boolean equals(Object o) is the method provided by the Object class; its default implementation uses the == operator to compare two objects. The method can be overridden, as in the String class, where equals() compares the values of two objects.
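
A small sketch of the difference:

public class EqualsDemo {
  public static void main(String[] args) {
    String a = new String("edureka");
    String b = new String("edureka");
    System.out.println(a == b);      // false: two distinct objects, references differ
    System.out.println(a.equals(b)); // true: the character content is the same
  }
}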

Q10. What are the differences between Heap and Stack Memory in Java?

The major difference between Heap and Stack memory are:

  • Memory: Stack memory is used only by one thread of execution, whereas heap memory is used by all the parts of the application.
  • Access: Stack memory can’t be accessed by other threads, whereas objects stored in the heap are globally accessible.
  • Memory Management: The stack follows a LIFO manner to free memory, whereas heap memory management is based on the generation associated with each object.
  • Lifetime: Stack memory exists until the end of execution of the thread, whereas heap memory lives from the start till the end of application execution.
  • Usage: Stack memory only contains local primitive and reference variables to objects in heap space, whereas whenever an object is created, it’s always stored in the heap space.

Q11. What is a package in Java? List down various advantages of packages.

Packages in Java, are the collection of related classes and interfaces which are bundled together. By using packages, developers can easily modularize the code and optimize its reuse. Also, the code within the packages can be imported by other classes and reused. Below I have listed down a few of its advantages:

  • Packages help in avoiding name clashes
  • They provide easier access control on the code
  • Packages can also contain hidden classes which are not visible to the outer classes and only used within the package
  • Creates a proper hierarchical structure which makes it easier to locate the related classes

Q12. Why are pointers not used in Java?

Java doesn’t use pointers because they are unsafe and increase the complexity of the program. Since Java is known for its simplicity of code, adding the concept of pointers would be contradictory. Moreover, since the JVM is responsible for implicit memory allocation, pointers are discouraged in Java in order to avoid direct access to memory by the user.

Q13. What is JIT compiler in Java?

JIT stands for Just-In-Time compiler in Java. It is a program that helps in converting the Java bytecode into instructions that are sent directly to the processor. By default, the JIT compiler is enabled in Java and is activated whenever a Java method is invoked. The JIT compiler then compiles the bytecode of the invoked method into native machine code, compiling it “just in time” to execute. Once the method has been compiled, the JVM summons the compiled code of that method directly rather than interpreting it. This is why it is often responsible for the performance optimization of Java applications at the run time.

Q14. What are access modifiers in Java?

In Java, access modifiers are special keywords which are used to restrict the access of a class, constructor, data member and method in another class. Java supports four types of access modifiers:

  1. Default
  2. Private
  3. Protected
  4. Public
Modifier Default Private Protected Public
Same class YES YES YES YES
Same Package subclass YES NO YES YES
Same Package non-subclass YES NO YES YES
Different package subclass NO NO YES YES
Different package non-subclass NO NO NO YES

Q15. Define a Java Class.

A class in Java is a blueprint which includes all your data.  A class contains fields (variables) and methods to describe the behavior of an object. Let’s have a look at the syntax of a class.

class Abc {
  // member variables (class body)
  // methods
}

Q16. What is an object in Java and how is it created?

An object is a real-world entity that has a state and behavior. An object has three characteristics:

  1. State
  2. Behavior
  3. Identity

An object is created using the ‘new’ keyword. For example:

ClassName obj = new ClassName();

Q17. What is Object Oriented Programming?

Object-oriented programming, popularly known as OOPs, is a programming model or approach where the programs are organized around objects rather than logic and functions. In other words, OOP mainly focuses on the objects that are required to be manipulated instead of logic. This approach is ideal for large and complex programs that need to be actively updated or maintained.

Q18. What are the main concepts of OOPs in Java?

Object-Oriented Programming or OOPs is a programming style that is associated with concepts like:

  1. Inheritance: Inheritance is a process where one class acquires the properties of another.
  2. Encapsulation: Encapsulation in Java is a mechanism of wrapping up the data and code together as a single unit.
  3. Abstraction: Abstraction is the methodology of hiding the implementation details from the user and only providing the functionality to the users. 
  4. Polymorphism: Polymorphism is the ability of a variable, function or object to take multiple forms.

Q19. What is the difference between a local variable and an instance variable?

In Java, a local variable is typically used inside a method, constructor, or a block and has only local scope. Thus, this variable can be used only within the scope of a block. The best benefit of having a local variable is that other methods in the class won’t be even aware of that variable.

Example

if(x > 100)
{
String test = "Edureka";
}

 

Whereas, an instance variable in Java is a variable which is bound to its object itself. These variables are declared within a class, but outside a method. Every object of that class will create its own copy of the variable while using it. Thus, any changes made to the variable won’t reflect in any other instances of that class and will be bound to that particular instance only.

class Test{
public String EmpName;
public int empAge;
}

Q20. Differentiate between the constructors and methods in Java?

  • Methods are used to represent the behavior of an object, whereas constructors are used to initialize the state of an object.
  • Methods must have a return type, whereas constructors do not have any return type.
  • Methods need to be invoked explicitly, whereas constructors are invoked implicitly.
  • No default method is provided by the compiler, whereas a default constructor is provided by the compiler if the class has none.
  • A method name may or may not be the same as the class name, whereas a constructor name must always be the same as the class name.

Q21. What is final keyword in Java?

final is a special keyword in Java that is used as a non-access modifier. A final variable can be used in different contexts such as:

  • final variable

When the final keyword is used with a variable then its value can’t be changed once assigned. In case no value has been assigned to the final variable, then a value can be assigned to it only through the class constructor.

  • final method

When a method is declared final then it can’t be overridden by the inheriting class.

  • final class

When a class is declared as final in Java, it can’t be extended by any subclass, but it can extend other classes (see the sketch below).
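
A minimal sketch covering all three uses (the class and method names are only illustrative):

final class Constants {              // final class: cannot be extended
  static final double PI = 3.14159;  // final variable: cannot be reassigned
}

class Shape {
  final double area(double radius) { // final method: cannot be overridden by subclasses
    return Constants.PI * radius * radius;
  }
}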

Q22. What is the difference between break and continue statements?

  • break can be used in switch and loop (for, while, do while) statements, whereas continue can be used only with loop statements.
  • break causes the switch or loop statement to terminate the moment it is executed, whereas continue doesn’t terminate the loop but causes the loop to jump to the next iteration.
  • break terminates the innermost enclosing loop or switch immediately, whereas a continue within a loop nested inside a switch will cause the next loop iteration to execute.
Example break:
for (int i = 0; i < 5; i++)
{
if (i == 3)
{
break;
}
System.out.println(i);
}
Example continue:
for (int i = 0; i < 5; i++)
{
if(i == 2)
{
continue;
}
System.out.println(i);
}

Q23. What is an infinite loop in Java? Explain with an example.

An infinite loop is an instruction sequence in Java that loops endlessly when a terminating condition is never met. This type of loop can be the result of a programming error or may also be a deliberate action based on the application behavior. An infinite loop will terminate automatically once the application exits.

For example:

public class InfiniteForLoopDemo
{
public static void main(String[] arg) {
for(;;)
System.out.println("Welcome to Edureka!");
// To terminate this program press ctrl + c in the console.
}
}

 

Q24. What is the difference between this() and super() in Java?

In Java, super() and this() are special keywords that are used to call a constructor.

  • this() represents the current instance of a class, whereas super() represents the current instance of the parent/base class.
  • this() is used to call the default constructor of the same class, whereas super() is used to call the default constructor of the parent/base class.
  • this() is used to access methods of the current class, whereas super() is used to access methods of the base class.
  • this() is used for pointing to the current class instance, whereas super() is used for pointing to the superclass instance.
  • Both must be the first line of a constructor block.

Q25. What is Java String Pool?

Java String pool refers to a collection of Strings which are stored in heap memory. In this, whenever a new object is created, String pool first checks whether the object is already present in the pool or not. If it is present, then the same reference is returned to the variable else new object will be created in the String pool and the respective reference will be returned.

[Image: Java String Pool]
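
A small sketch of the pool behavior:

public class StringPoolDemo {
  public static void main(String[] args) {
    String s1 = "edureka";             // placed in the String pool
    String s2 = "edureka";             // reuses the pooled object
    String s3 = new String("edureka"); // forces a new object on the heap
    System.out.println(s1 == s2);          // true: same pooled reference
    System.out.println(s1 == s3);          // false: different objects
    System.out.println(s1 == s3.intern()); // true: intern() returns the pooled reference
  }
}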

Q26. Differentiate between static and non-static methods in Java.

  • A static method requires the static keyword before the method name, whereas a non-static method does not.
  • A static method is called using the class (ClassName.methodName), whereas a non-static method is called like any general method, on an instance.
  • A static method can’t access any non-static instance variables or methods, whereas a non-static method can access both the static and non-static members of the class.

Q27. What is constructor chaining in Java?

In Java, constructor chaining is the process of calling one constructor from another with respect to the current object. Constructor chaining is possible only through inheritance, where a subclass constructor is responsible for invoking the superclass’ constructor first. There could be any number of classes in the constructor chain. Constructor chaining can be achieved in two ways (see the sketch after the list):

  1. Within the same class using this()
  2. From base class using super()
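
A minimal sketch of both ways (the class names are only illustrative):

class Vehicle {
  Vehicle() {
    System.out.println("Vehicle()");
  }
}

class Car extends Vehicle {
  Car() {
    super();         // chains to the superclass constructor
    System.out.println("Car()");
  }

  Car(String model) {
    this();          // chains to Car() within the same class
    System.out.println("Car(" + model + ")");
  }

  public static void main(String[] args) {
    new Car("Audi"); // prints: Vehicle(), Car(), Car(Audi)
  }
}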

Q28. Difference between String, StringBuilder, and StringBuffer.

  • Storage Area: String objects are stored in the constant String pool, whereas StringBuilder and StringBuffer objects are stored in the heap area.
  • Mutability: String is immutable, whereas StringBuilder and StringBuffer are mutable.
  • Thread Safety: String is thread-safe (being immutable) and StringBuffer is thread-safe (its methods are synchronized), whereas StringBuilder is not thread-safe.
  • Performance: StringBuilder is the fastest for repeated modifications; StringBuffer is slower due to synchronization; String is the slowest, since every modification creates a new object.

Q29. What is a classloader in Java?

The Java ClassLoader is a subset of JVM (Java Virtual Machine) that is responsible for loading the class files. Whenever a Java program is executed, it is first loaded by the classloader. Java provides three built-in classloaders (a small demo follows the list):

  1. Bootstrap ClassLoader
  2. Extension ClassLoader
  3. System/Application ClassLoader
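
For example, a quick way to see these classloaders in action (core classes report null because the Bootstrap loader is implemented natively):

public class ClassLoaderDemo {
  public static void main(String[] args) {
    // String is a core class, loaded by the Bootstrap ClassLoader (printed as null)
    System.out.println(String.class.getClassLoader());
    // Your own classes are loaded by the System/Application ClassLoader
    System.out.println(ClassLoaderDemo.class.getClassLoader());
  }
}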

Q30. Why are Java Strings immutable in nature?

In Java, String objects are immutable in nature, which simply means that once a String object is created its state cannot be modified. Whenever you try to update its value, instead of updating that particular object, Java creates a new String object. Java String objects are immutable as String objects are generally cached in the String pool. Since String literals are usually shared between multiple clients, action from one client might affect the rest. Immutability enhances the security, caching, synchronization, and performance of the application.

Q31. What is the difference between an array and an array list?

  • An array cannot contain values of different data types, whereas an ArrayList can contain values of different data types.
  • An array’s size must be defined at the time of declaration, whereas an ArrayList’s size can be changed dynamically.
  • With arrays you need to specify the index in order to add data, whereas with an ArrayList there is no need to specify the index.
  • Arrays are not type parameterized, whereas ArrayLists are type parameterized.
  • Arrays can contain primitive data types as well as objects, whereas ArrayLists can contain only objects; no primitive data types are allowed.

Q32. What is a Map in Java?

In Java, Map is an interface of the java.util package which maps unique keys to values. The Map interface is not a subset of the main Collection interface and thus it behaves a little differently from the other collection types. Below are a few of the characteristics of the Map interface (illustrated in the sketch after the list):

  1. A Map doesn’t contain duplicate keys.
  2. Each key can map to at most one value.
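
A small sketch of both characteristics, using HashMap (one common implementation):

import java.util.HashMap;
import java.util.Map;

public class MapDemo {
  public static void main(String[] args) {
    Map<String, Integer> ages = new HashMap<>();
    ages.put("Alice", 30);
    ages.put("Bob", 25);
    ages.put("Alice", 31);                 // duplicate key: the old value is replaced
    System.out.println(ages.get("Alice")); // 31 - each key maps to at most one value
    System.out.println(ages.size());       // 2 - no duplicate keys
  }
}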

Q33. What is collection class in Java? List down its methods and interfaces.

In Java, the collection is a framework that acts as an architecture for storing and manipulating a group of objects. Using Collections you can perform various tasks like searching, sorting, insertion, manipulation, deletion, etc. Java collection framework includes the following:

  • Interfaces
  • Classes
  • Methods

The below image shows the complete hierarchy of the Java Collection.

[Image: Java Collections framework hierarchy]

 

In case you are facing any challenges with these java interview questions, please comment on your problems in the section below.

OOPS Java Interview Questions

Q1. What is Polymorphism?

Polymorphism is briefly described as “one interface, many implementations”. Polymorphism is a characteristic of being able to assign a different meaning or usage to something in different contexts – specifically, to allow an entity such as a variable, a function, or an object to have more than one form. There are two types of polymorphism:

  1. Compile time polymorphism
  2. Run time polymorphism

Compile time polymorphism is method overloading, whereas runtime polymorphism is achieved using inheritance and interfaces.

Q2. What is runtime polymorphism or dynamic method dispatch?

In Java, runtime polymorphism or dynamic method dispatch is a process in which a call to an overridden method is resolved at runtime rather than at compile-time. In this process, an overridden method is called through the reference variable of a superclass. Let’s take a look at the example below to understand it better.

class Car {
  void run() {
    System.out.println("Car is running");
  }
}
class Audi extends Car {
  void run() {
    System.out.println("Audi is running safely with 100 km");
  }
  public static void main(String args[]) {
    Car b = new Audi(); // upcasting
    b.run();            // resolved at runtime: calls Audi's run()
  }
}

Q3. What is abstraction in Java?

Abstraction refers to the quality of dealing with ideas rather than events. It basically deals with hiding the details and showing the essential things to the user. Thus you can say that abstraction in Java is the process of hiding the implementation details from the user and revealing only the functionality to them. Abstraction can be achieved in two ways:

  1. Abstract Classes (0-100% of abstraction can be achieved)
  2. Interfaces (100% of abstraction can be achieved)

Q4. What do you mean by an interface in Java?

An interface in Java is a blueprint of a class or you can say it is a collection of abstract methods and static constants. In an interface, each method is public and abstract but it does not contain any constructor. Thus, interface basically is a group of related methods with empty bodies. Example:

public interface Animal {
  public void eat();
  public void sleep();
  public void run();
}

Q5. What is the difference between abstract classes and interfaces?

  • An abstract class can provide complete, default code and/or just the details that have to be overridden, whereas an interface cannot provide any code at all, just the signature.
  • A class may extend only one abstract class, whereas a class may implement several interfaces.
  • An abstract class can have non-abstract methods, whereas all methods of an interface are abstract.
  • An abstract class can have instance variables, whereas an interface cannot have instance variables.
  • An abstract class can have any visibility: public, private, protected, whereas an interface’s visibility must be public (or) none.
  • If we add a new method to an abstract class then we have the option of providing a default implementation, and therefore all the existing code might work properly; whereas if we add a new method to an interface then we have to track down all the implementations of the interface and define an implementation for the new method.
  • An abstract class can contain constructors, whereas an interface cannot contain constructors.
  • Abstract classes are fast, whereas interfaces are slow as they require extra indirection to find the corresponding method in the actual class.

Q6. What is inheritance in Java?

Inheritance in Java is the concept where the properties of one class can be inherited by the other. It helps to reuse the code and establish a relationship between different classes. Inheritance is performed between two types of classes:

  1. Parent class (Super or Base class)
  2. Child class (Subclass or Derived class)

A class which inherits the properties is known as Child Class whereas a class whose properties are inherited is known as Parent class.

Q7. What are the different types of inheritance in Java?

Java supports four types of inheritance which are:

  1. Single Inheritance: In single inheritance, one class inherits the properties of another i.e there will be only one parent as well as one child class.
  2. Multilevel Inheritance: When a class is derived from a class which is also derived from another class, i.e. a class having more than one parent class but at different levels, such type of inheritance is called Multilevel Inheritance.
  3. Hierarchical Inheritance: When a class has more than one child classes (subclasses) or in other words, more than one child classes have the same parent class, then such kind of inheritance is known as hierarchical.
  4. Hybrid Inheritance: Hybrid inheritance is a combination of two or more types of inheritance.

Q8. What is method overloading and method overriding?

Method Overloading :

  • In Method Overloading, methods of the same class share the same name, but each method must have a different number of parameters, or parameters of different types or order.
  • Method Overloading is to “add” or “extend” more to the method’s behavior.
  • It is a compile-time polymorphism.
  • The methods must have a different signature.
  • It may or may not need inheritance in Method Overloading.

Let’s take a look at the example below to understand it better.

class Adder {
    static int add(int a, int b) {
        return a + b;
    }
    static double add(double a, double b) {
        return a + b;
    }
    public static void main(String args[]) {
        System.out.println(Adder.add(11, 11));       // calls the int version
        System.out.println(Adder.add(12.3, 12.6));   // calls the double version
    }
}

Method Overriding:  

  • In Method Overriding, the subclass has the same method with the same name and exactly the same number and type of parameters and same return type as a superclass.
  • Method Overriding is to “Change” existing behavior of the method.
  • It is a run time polymorphism.
  • The methods must have the same signature.
  • It always requires inheritance in Method Overriding.

Let’s take a look at the example below to understand it better.

class Car {
    void run() {
        System.out.println("Car is running");
    }
}
class Audi extends Car {
    void run() {
        System.out.println("Audi is running safely at 100 km/h");
    }
    public static void main(String args[]) {
        Car b = new Audi();
        b.run();    // the overriding Audi version executes
    }
}

Q9. Can you override a private or static method in Java?

You cannot override a private or static method in Java. If you create a method with the same signature and return type for a static method in the child class, it will hide the superclass method; this is known as method hiding. Similarly, you cannot override a private method in a subclass because it is not accessible there; what you can do is create another private method with the same name in the child class. Let's take a look at the example below to understand it better (the static methods are left package-visible so the hiding is observable):

class Base {
    static void display() {
        System.out.println("Static or class method from Base");
    }
    public void print() {
        System.out.println("Non-static or instance method from Base");
    }
}
class Derived extends Base {
    static void display() {
        System.out.println("Static or class method from Derived");
    }
    public void print() {
        System.out.println("Non-static or instance method from Derived");
    }
}
public class Test {
    public static void main(String args[]) {
        Base obj = new Derived();
        obj.display();   // method hiding: Base's version runs (resolved at compile time)
        obj.print();     // overriding: Derived's version runs (resolved at runtime)
    }
}

Q10. What is multiple inheritance? Is it supported by Java?

MultipleInheritance - Java Interview Questions - Edureka

When a child class inherits properties from more than one class, it is known as multiple inheritance. Java does not allow a class to extend multiple classes.

The problem with multiple inheritance is that if multiple parent classes define a method with the same name, the compiler cannot decide which version the child class should inherit.

Therefore, Java doesn't support multiple inheritance of classes. The problem is commonly referred to as the Diamond Problem.
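Note that a class may still implement multiple interfaces, and since Java 8 interfaces can carry default methods; there, the compiler forces you to resolve the diamond ambiguity explicitly. A minimal sketch (the interface and class names are illustrative):

interface A {
    default void greet() { System.out.println("Hello from A"); }
}
interface B {
    default void greet() { System.out.println("Hello from B"); }
}
class C implements A, B {
    // Without this override the class would not compile, because the
    // compiler cannot pick between A.greet() and B.greet() on its own.
    @Override
    public void greet() {
        A.super.greet();   // explicitly choose A's version
    }
}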

Q11. What is encapsulation in Java?

Encapsulation is a mechanism where you bind your data (variables) and code (methods) together as a single unit. Here, the data is hidden from the outer world and can be accessed only via the current class's methods. This helps in protecting the data from any unnecessary modification. We can achieve encapsulation in Java by the two steps below (a short sketch follows the list):

  • Declaring the variables of a class as private.
  • Providing public setter and getter methods to modify and view the values of the variables.
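A minimal sketch of both steps, with the Account class and balance field being illustrative:

public class Account {
    private double balance;   // hidden from the outside world

    public double getBalance() {
        return balance;
    }

    public void setBalance(double balance) {
        if (balance >= 0) {   // the class controls how its data changes
            this.balance = balance;
        }
    }
}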

Q12. What is an association?

Association is a relationship where all objects have their own lifecycle and there is no owner. Let's take the example of Teacher and Student: multiple students can associate with a single teacher, and a single student can associate with multiple teachers, but there is no ownership between the objects and both have their own lifecycle. These relationships can be one-to-one, one-to-many, many-to-one, and many-to-many.

Q13. What do you mean by aggregation?

An aggregation is a specialized form of association where all objects have their own lifecycle, but there is ownership, and a child object cannot belong to another parent object. Let's take the example of Department and Teacher: a single teacher cannot belong to multiple departments, but if we delete the department, the teacher object will not be destroyed.

Q14. What is composition in Java?

Composition is again a specialized form of aggregation, and we can call it a "death" relationship. It is a strong type of aggregation: child objects do not have their own lifecycle, and if the parent object is deleted, all child objects are deleted as well. Let's take the example of the relationship between a house and its rooms. A house can contain multiple rooms, a room has no independent life, and a room cannot belong to two different houses. If we delete the house, its rooms are deleted automatically.

Q15. What is a marker interface?

A marker interface can be defined as an interface having no data members or member functions. In simpler terms, an empty interface is called a marker interface. The most common examples of marker interfaces in Java are Serializable, Cloneable, etc. A marker interface can be declared as follows.

public interface Serializable{
}

Q16. What is object cloning in Java?

Object cloning in Java is the process of creating an exact copy of an object. It basically means the ability to create an object with the same state as the original object. To achieve this, Java provides the clone() method, which creates a new instance of the class of the current object and initializes all its fields with the exact contents of the corresponding fields. To use clone(), the marker interface java.lang.Cloneable must be implemented to avoid a CloneNotSupportedException at runtime. One thing you must note is that Object.clone() is a protected method, so you need to override it.
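A minimal sketch of the usual pattern, assuming a simple class whose fields are primitives or immutable (so the default shallow copy is enough); the Employee name is illustrative:

class Employee implements Cloneable {      // marker interface, else CloneNotSupportedException
    int id;
    String name;

    Employee(int id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    protected Object clone() throws CloneNotSupportedException {
        return super.clone();              // field-by-field (shallow) copy
    }
}

It would be used as Employee copy = (Employee) original.clone();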

Q17. What is a copy constructor in Java?

A copy constructor is a member function that is used to initialize an object using another object of the same class. Unlike C++, Java does not generate a copy constructor for you: object references are passed by value, so objects are never copied implicitly, and if you need a copy you must write the constructor yourself.
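A minimal sketch of a hand-written copy constructor; the Point class is illustrative:

class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    Point(Point other) {   // copy constructor: initialize from an existing Point
        this.x = other.x;
        this.y = other.y;
    }
}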

Q18. What is a constructor overloading in Java?

In Java, constructor overloading is a technique of adding any number of constructors to a class each having a different parameter list. The compiler uses the number of parameters and their types in the list to differentiate the overloaded constructors.

class Demo {
    int i;
    public Demo(int a) {
        i = a;
    }
    public Demo(int a, int b) {
        // body
    }
}

In case you are facing any challenges with these java interview questions, please comment on your problems in the section below. Apart from this Java Interview Questions Blog, if you want to get trained from professionals on this technology, you can opt for a structured training from edureka!

Servlets Interview Questions  

Q1. What is a servlet?

  • Java Servlet is a server-side technology that extends the capability of web servers by providing support for dynamic responses and data persistence.
  • The javax.servlet and javax.servlet.http packages provide interfaces and classes for writing our own servlets.
  • All servlets must implement the javax.servlet.Servlet interface, which defines the servlet lifecycle methods. When implementing a generic service, we can extend the GenericServlet class provided with the Java Servlet API. The HttpServlet class provides methods, such as doGet() and doPost(), for handling HTTP-specific services.
  • Most of the time, web applications are accessed using the HTTP protocol, and that's why we mostly extend HttpServlet. The Servlet API hierarchy is shown in the image below, and a minimal servlet sketch follows it.

Servlet - Java Interview Questions - Edureka
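A minimal servlet sketch using the API described above; the class name and message are illustrative:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<h1>Hello from a servlet</h1>");
    }
}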

Q2. What are the differences between Get and Post methods?

Get | Post
A limited amount of data can be sent, because data is sent in the header. | A large amount of data can be sent, because data is sent in the body.
Not secured, because data is exposed in the URL bar. | Secured, because data is not exposed in the URL bar.
Can be bookmarked. | Cannot be bookmarked.
Idempotent. | Non-idempotent.
More efficient and more widely used than Post. | Less efficient and less widely used.

Q3. What is Request Dispatcher?

The RequestDispatcher interface is used to forward the request to another resource, which can be an HTML page, a JSP, or another servlet in the same application. We can also use it to include the content of another resource in the response (a usage sketch follows the method signatures below).

There are two methods defined in this interface:

1. void forward(ServletRequest request, ServletResponse response)

2. void include(ServletRequest request, ServletResponse response)

ForwardMethod - Java Interview Questions - Edureka

IncludeMethod - Java Interview Questions - Edureka
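A minimal usage sketch inside a servlet; the servlet name and the /welcome.jsp path are illustrative:

import java.io.IOException;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LoginServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        RequestDispatcher rd = request.getRequestDispatcher("/welcome.jsp");
        rd.forward(request, response);   // hand the request over to welcome.jsp
        // rd.include(request, response) would instead embed its output here
    }
}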

Q4. What are the differences between forward() method and sendRedirect() methods?

forward() method | sendRedirect() method
forward() sends the same request to another resource. | sendRedirect() always sends a new request, because it uses the URL bar of the browser.
forward() works at server side. | sendRedirect() works at client side.
forward() works within the server only. | sendRedirect() works within and outside the server.

Q5. What is the life-cycle of a servlet?

There are 5 stages in the lifecycle of a servlet:

LifeCycleServlet - Java Interview Questions - Edureka

  1. Servlet is loaded
  2. Servlet is instantiated
  3. Servlet is initialized
  4. Service the request
  5. Servlet is destroyed

Q6. How does cookies work in Servlets?

  • Cookies are text data sent by the server to the client, and they get saved on the client's local machine.
  • The Servlet API provides cookie support through the javax.servlet.http.Cookie class, which implements the Serializable and Cloneable interfaces.
  • The HttpServletRequest getCookies() method is provided to get the array of cookies from a request; since there is no point in adding a cookie to a request, there are no methods to set or add a cookie to a request.
  • Similarly, the HttpServletResponse addCookie(Cookie c) method is provided to attach a cookie to the response header; there are no getter methods for cookies on the response. A short read/write sketch follows.
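A minimal read/write sketch; the cookie name and value are illustrative:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CookieDemoServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Attach a cookie to the response.
        Cookie userCookie = new Cookie("username", "edureka");
        userCookie.setMaxAge(60 * 60);            // live for one hour
        response.addCookie(userCookie);

        // Read cookies that arrived with the request (null on the first visit).
        Cookie[] cookies = request.getCookies();
        if (cookies != null) {
            for (Cookie c : cookies) {
                if ("username".equals(c.getName())) {
                    response.getWriter().println("Welcome back, " + c.getValue());
                }
            }
        }
    }
}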

Q7. What are the differences between ServletContext vs ServletConfig?

The difference between ServletConfig and ServletContext is summarized in the table below.

ServletConfig | ServletContext
Represents a single servlet | Represents the whole web application running on a particular JVM, common to all servlets
Like a local parameter associated with a particular servlet | Like a global parameter associated with the whole application
A name-value pair defined inside the servlet section of web.xml, so it has servlet-wide scope | Defined outside the servlet tag in web.xml, so it has application-wide scope
getServletConfig() is used to get the config object | getServletContext() is used to get the context object
Example: a user's shopping cart is specific to a particular user, so ServletConfig fits | Example: the MIME type of a file or application-wide session information is stored using the ServletContext object

Q8. What are the different methods of session management in servlets?

A session is a conversational state between client and server, and it can consist of multiple requests and responses between them. Since both HTTP and the web server are stateless, the only way to maintain a session is to pass some unique information about the session (a session id) between the server and client in every request and response.

Some of the common ways of session management in servlets are listed below (the Session Management API approach is sketched after the list):

  1. User Authentication
  2. HTML Hidden Field
  3. Cookies
  4. URL Rewriting
  5. Session Management API

SessionManagement - Java Interview Questions - Edureka
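A minimal sketch of the Session Management API approach, as a fragment that would live inside a servlet's doGet or doPost; the attribute name is illustrative:

// First request: create the session and store conversational state.
HttpSession session = request.getSession();   // creates a session if none exists
session.setAttribute("user", "edureka");

// A later request from the same client: read the state back.
String user = (String) session.getAttribute("user");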

In case you are facing any challenges with these java interview questions, please comment your problems in the section below. Apart from this Java Interview Questions Blog, if you want to get trained from professionals on this technology, you can opt for a structured training from edureka! Click below to know more.

JDBC Interview Questions 

1. What is JDBC Driver?

JDBC Driver is a software component that enables a Java application to interact with a database. There are 4 types of JDBC drivers:

  1. JDBC-ODBC bridge driver
  2. Native-API driver (partially java driver)
  3. Network Protocol driver (fully java driver)
  4. Thin driver (fully java driver)

2. What are the steps to connect to a database in Java?

  • Registering the driver class
  • Creating a connection
  • Creating a statement
  • Executing queries
  • Closing the connection
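A minimal end-to-end sketch of those five steps, assuming a MySQL database; the driver class, URL, credentials, and table name are illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");                  // 1. register the driver
        Connection con = DriverManager.getConnection(            // 2. create a connection
                "jdbc:mysql://localhost:3306/testdb", "user", "password");
        Statement stmt = con.createStatement();                  // 3. create a statement
        ResultSet rs = stmt.executeQuery(                        // 4. execute a query
                "SELECT id, name FROM employees");
        while (rs.next()) {
            System.out.println(rs.getInt("id") + " " + rs.getString("name"));
        }
        con.close();                                             // 5. close the connection
    }
}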

3. What are the JDBC API components?

The java.sql package contains interfaces and classes for JDBC API.

Interfaces:

  • Connection
  • Statement
  • PreparedStatement
  • ResultSet
  • ResultSetMetaData
  • DatabaseMetaData
  • CallableStatement etc.

Classes:

  • DriverManager
  • Blob
  • Clob
  • Types
  • SQLException etc.

4. What is the role of JDBC DriverManager class?

The DriverManager class manages the registered drivers. It can be used to register and unregister drivers, and it provides a factory method that returns an instance of Connection.

5. What is JDBC Connection interface?

The Connection interface maintains a session with the database and can be used for transaction management. It provides factory methods that return instances of Statement, PreparedStatement, CallableStatement, and DatabaseMetaData.

ConnectionInterface - Java Interview Questions - Edureka

6.  What is the purpose of JDBC ResultSet interface?

The ResultSet object maintains a cursor pointing to a row of a table. It can be used to move the cursor and retrieve information from the database.

7. What is JDBC ResultSetMetaData interface?

The ResultSetMetaData interface returns information about a table, such as the total number of columns, column names, column types, etc.

8. What is JDBC DatabaseMetaData interface?

The DatabaseMetaData interface returns information about the database, such as the username, driver name, driver version, number of tables, number of views, etc.

9. What do you mean by batch processing in JDBC?

Batch processing helps you group related SQL statements into a batch and execute them together instead of executing each query individually. Executing multiple queries in one batch reduces the number of round trips to the database and makes performance faster.
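A minimal sketch, as a fragment assuming con is an open java.sql.Connection; the employees table and values are illustrative:

con.setAutoCommit(false);                 // group the batch into one transaction
Statement stmt = con.createStatement();
stmt.addBatch("INSERT INTO employees VALUES (1, 'Amit')");
stmt.addBatch("INSERT INTO employees VALUES (2, 'Priya')");
int[] counts = stmt.executeBatch();       // one round trip instead of two
con.commit();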

10. What is the difference between execute, executeQuery, executeUpdate?

Statement execute(String query) is used to execute any SQL query. It returns TRUE if the result is a ResultSet, such as when running SELECT queries, and FALSE when there is no ResultSet object, such as when running INSERT or UPDATE queries. We can use getResultSet() to get the ResultSet and getUpdateCount() to retrieve the update count.

Statement executeQuery(String query) is used to execute SELECT queries and returns the ResultSet. The ResultSet returned is never null, even if there are no records matching the query. When executing SELECT queries we should use executeQuery, so that if someone tries to execute an INSERT/UPDATE statement it will throw java.sql.SQLException with the message "executeQuery method can not be used for update".

Statement executeUpdate(String query) is used to execute INSERT/UPDATE/DELETE (DML) statements or DDL statements that return nothing. The output is an int equal to the row count for DML statements; for DDL statements, the output is 0.

You should use the execute() method only when you are not sure about the type of statement; otherwise, use executeQuery or executeUpdate.

Q11. What do you understand by JDBC Statements?

JDBC statements are basically the statements used to send SQL commands to the database and retrieve data back from it. JDBC provides methods like execute(), executeUpdate(), and executeQuery() to interact with the database.

JDBC supports 3 types of statements:

  1. Statement: Used for general purpose access to the database and executes a static SQL query at runtime.
  2. PreparedStatement: Used to provide input parameters to the query during execution (see the sketch after this list).
  3. CallableStatement: Used to access the database stored procedures and helps in accepting runtime parameters.
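A minimal PreparedStatement sketch, as a fragment assuming con is an open java.sql.Connection; the query and parameter value are illustrative:

PreparedStatement ps = con.prepareStatement(
        "SELECT name FROM employees WHERE id = ?");
ps.setInt(1, 42);                   // bind the first (and only) parameter
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    System.out.println(rs.getString("name"));
}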

In case you are facing any challenges with these java interview questions, please comment your problems in the section below. Apart from this Java Interview Questions Blog, if you want to get trained from professionals on this technology, you can opt for a structured training from edureka!

Spring Interview Questions 

Q1. What is Spring?

Wikipedia defines the Spring framework as “an application framework and inversion of control container for the Java platform. The framework’s core features can be used by any Java application, but there are extensions for building web applications on top of the Java EE platform.” Spring is essentially a lightweight, integrated framework that can be used for developing enterprise applications in java.

Q2. Name the different modules of the Spring framework.

Some of the important Spring Framework modules are:

  • Spring Context – for dependency injection.
  • Spring AOP – for aspect oriented programming.
  • Spring DAO – for database operations using DAO pattern
  • Spring JDBC – for JDBC and DataSource support.
  • Spring ORM – for ORM tools support such as Hibernate
  • Spring Web Module – for creating web applications.
  • Spring MVC – Model-View-Controller implementation for creating web applications, web services etc.

SpringFramework - Java Interview Questions - Edureka

Q3. List some of the important annotations in annotation-based Spring configuration.

The important annotations are (a usage sketch follows the list):

  • @Required
  • @Autowired
  • @Qualifier
  • @Resource
  • @PostConstruct
  • @PreDestroy

Q4. Explain Bean in Spring and List the different Scopes of Spring bean.

Beans are objects that form the backbone of a Spring application. They are managed by the Spring IoC container. In other words, a bean is an object that is instantiated, assembled, and managed by a Spring IoC container.

There are five Scopes defined in Spring beans.

SpringBean - Java Interview Questions - Edureka

  • Singleton: Only one instance of the bean will be created for each container. This is the default scope for the spring beans. While using this scope, make sure spring bean doesn’t have shared instance variables otherwise it might lead to data inconsistency issues because it’s not thread-safe.
  • Prototype: A new instance will be created every time the bean is requested.
  • Request: This is same as prototype scope, however it’s meant to be used for web applications. A new instance of the bean will be created for each HTTP request.
  • Session: A new bean will be created for each HTTP session by the container.
  • Global-session: This is used to create global session beans for Portlet applications.

Q5. Explain the role of DispatcherServlet and ContextLoaderListener.

DispatcherServlet is basically the front controller in the Spring MVC application as it loads the spring bean configuration file and initializes all the beans that have been configured. If annotations are enabled, it also scans the packages to configure any bean annotated with @Component, @Controller, @Repository or @Service annotations.

DispatcherServlet - Java Interview Questions - Edureka

ContextLoaderListener, on the other hand, is the listener that starts up and shuts down Spring's root WebApplicationContext. Some of its important functions include tying the lifecycle of the ApplicationContext to the lifecycle of the ServletContext and automating the creation of the ApplicationContext.

ContextLoader - Java Interview Questions - Edureka

Q6. What are the differences between constructor injection and setter injection?

No. | Constructor Injection | Setter Injection
1) | No partial injection | Partial injection is possible
2) | Doesn't override the setter property | Overrides the constructor property if both are defined
3) | Creates a new instance if any modification occurs | Doesn't create a new instance if you change the property value
4) | Better for many properties | Better for a few properties

Q7. What is autowiring in Spring? What are the autowiring modes?

Autowiring enables the programmer to inject the bean automatically. We don’t need to write explicit injection logic. Let’s see the code to inject bean using dependency injection.

  1. <bean id="emp" class="com.javatpoint.Employee" autowire="byName" />

The autowiring modes are given below:

No. | Mode | Description
1) | no | The default mode; autowiring is not enabled.
2) | byName | Injects the bean based on the property name, using the setter method.
3) | byType | Injects the bean based on the property type, using the setter method.
4) | constructor | Injects the bean using the constructor.

Q8. How to handle exceptions in Spring MVC Framework?

Spring MVC Framework provides the following ways to help us achieve robust exception handling.

Controller Based:

We can define exception handler methods in our controller classes. All we need is to annotate these methods with @ExceptionHandler annotation.

Global Exception Handler:

Exception Handling is a cross-cutting concern and Spring provides @ControllerAdvice annotation that we can use with any class to define our global exception handler.

HandlerExceptionResolver implementation: 

For generic exceptions, most of the time we serve static pages. Spring Framework provides the HandlerExceptionResolver interface that we can implement to create a global exception handler. The reason behind this additional way to define a global exception handler is that the Spring framework also provides default implementation classes that we can define in our spring bean configuration file to get the framework's exception handling benefits.
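A minimal sketch of the first two approaches; the controller, the OrderNotFoundException type, and the view names are illustrative, not from the original article:

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

// Controller-based: handles exceptions thrown by this controller only.
@Controller
public class OrderController {
    @ExceptionHandler(OrderNotFoundException.class)
    public String handleOrderNotFound(OrderNotFoundException ex) {
        return "orderError";        // logical view name
    }
}

// Global: @ControllerAdvice applies across all controllers.
@ControllerAdvice
public class GlobalExceptionHandler {
    @ExceptionHandler(Exception.class)
    public String handleAnything(Exception ex) {
        return "genericError";
    }
}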

Q9. What are some of the important Spring annotations which you have used?

Some of the Spring annotations that I have used in my project are:

@Controller – for controller classes in Spring MVC project.

@RequestMapping – for configuring URI mapping in controller handler methods. This is a very important annotation, so you should go through Spring MVC RequestMapping Annotation Examples

@ResponseBody – for sending Object as response, usually for sending XML or JSON data as response.

@PathVariable – for mapping dynamic values from the URI to handler method arguments.

@Autowired – for autowiring dependencies in spring beans.

@Qualifier – with @Autowired annotation to avoid confusion when multiple instances of bean type is present.

@Service – for service classes.

@Scope – for configuring the scope of the spring bean.

@Configuration, @ComponentScan and @Bean – for java based configurations.

AspectJ annotations for configuring aspects and advices , @Aspect, @Before, @After, @Around, @Pointcut, etc.

Q10. How to integrate Spring and Hibernate Frameworks?

We can use the Spring ORM module to integrate the Spring and Hibernate frameworks. If you are using Hibernate 3+, where the SessionFactory provides the current session, you should avoid the HibernateTemplate and HibernateDaoSupport classes; it is better to use the DAO pattern with dependency injection for the integration.

Also, Spring ORM provides support for using Spring declarative transaction management, so you should utilize that rather than going for hibernate boiler-plate code for transaction management. 

Q11. Name the types of transaction management that Spring supports.

Two types of transaction management are supported by Spring. They are:

  1. Programmatic transaction management: In this, the transaction is managed with the help of programming. It provides you extreme flexibility, but it is very difficult to maintain.
  2. Declarative transaction management: In this, transaction management is separated from the business code. Only annotations or XML based configurations are used to manage the transactions.

In case you are facing any challenges with these java interview questions, please comment your problems in the section below. Apart from this Java Interview Questions Blog, if you want to get trained from professionals on this technology, you can opt for structured training from edureka!

Hibernate Interview Questions

1. What is Hibernate Framework?

Object-relational mapping (ORM) is the programming technique to map application domain model objects to relational database tables. Hibernate is a Java-based ORM tool that provides a framework for mapping application domain objects to relational database tables and vice versa.

Hibernate provides a reference implementation of the Java Persistence API, which makes it a great choice as an ORM tool, with the benefits of loose coupling. We can use the Hibernate persistence API for CRUD operations. The Hibernate framework provides the option to map plain old Java objects to traditional database tables using JPA annotations as well as XML-based configuration.

Similarly, Hibernate configurations are flexible and can be done from an XML configuration file as well as programmatically.

2. What are the important benefits of using Hibernate Framework?

Some of the important benefits of using hibernate framework are:

  1. Hibernate eliminates all the boiler-plate code that comes with JDBC and takes care of managing resources, so we can focus on business logic.
  2. The Hibernate framework provides support for XML as well as JPA annotations, which makes our code implementation-independent.
  3. Hibernate provides a powerful query language (HQL) that is similar to SQL. However, HQL is fully object-oriented and understands concepts like inheritance, polymorphism, and association.
  4. Hibernate is an open source project from the Red Hat community and is used worldwide. This makes it a better choice than others because the learning curve is small, there is plenty of online documentation, and help is easily available in forums.
  5. Hibernate is easy to integrate with other Java EE frameworks; it's so popular that the Spring Framework provides built-in support for integrating Hibernate with Spring applications.
  6. Hibernate supports lazy initialization using proxy objects and performs actual database queries only when required.
  7. The Hibernate cache helps us get better performance.
  8. For database vendor-specific features, Hibernate is suitable because we can also execute native SQL queries.

Overall, Hibernate is the best choice in the current market for an ORM tool: it contains all the features that you will ever need in an ORM tool.

3. Explain Hibernate architecture.

HibernateArchitecture - Java Interview Questions - Edureka

4. What are the differences between get and load methods?

The differences between get() and load() methods are given below.

No. | get() | load()
1) | Returns null if the object is not found. | Throws ObjectNotFoundException if the object is not found.
2) | get() always hits the database. | load() doesn't hit the database immediately; it returns a proxy.
3) | Returns the real object, not a proxy. | Returns a proxy object.
4) | Should be used if you are not sure about the existence of the instance. | Should be used if you are sure that the instance exists.

5. What are the advantages of Hibernate over JDBC?

Some of the important advantages of Hibernate framework over JDBC are:

  1. Hibernate removes a lot of the boiler-plate code that comes with the JDBC API; the code looks cleaner and more readable.
  2. Hibernate supports inheritance, associations, and collections. These features are not present in the JDBC API.
  3. Hibernate implicitly provides transaction management; in fact, most queries can't be executed outside a transaction. In the JDBC API, we need to write code for transaction management using commit and rollback.
  4. The JDBC API throws SQLException, which is a checked exception, so we need to write a lot of try-catch block code. Most of the time it's redundant in every JDBC call and used for transaction management. Hibernate wraps JDBC exceptions and throws JDBCException or HibernateException, which are unchecked, so we don't need to write code to handle them. Hibernate's built-in transaction management removes the usage of try-catch blocks.
  5. Hibernate Query Language (HQL) is more object-oriented and closer to the Java programming language. For JDBC, we need to write native SQL queries.
  6. Hibernate supports caching, which is better for performance; JDBC queries are not cached, hence performance is low.
  7. Hibernate provides an option through which we can create database tables too; for JDBC, tables must already exist in the database.
  8. Hibernate configuration helps us use a JDBC-like connection as well as a JNDI DataSource for the connection pool. This is a very important feature in enterprise applications and is completely missing in the JDBC API.
  9. Hibernate supports JPA annotations, so the code is independent of the implementation and easily replaceable with other ORM tools. JDBC code is very tightly coupled with the application.

In case you are facing any challenges with these Java interview questions, please comment on your problems in the section below. Apart from this Java Interview Questions Blog, if you want to get trained from professionals on this technology, you can opt for structured training from edureka!

Java Interview Questions: JSP

1. What are the life-cycle methods for a jsp?

Method | Description
public void jspInit() | Invoked only once, same as the init() method of a servlet.
public void _jspService(ServletRequest request, ServletResponse response) throws ServletException, IOException | Invoked for each request, same as the service() method of a servlet.
public void jspDestroy() | Invoked only once, same as the destroy() method of a servlet.

2. What are the JSP implicit objects?

JSP provides 9 implicit objects by default. They are as follows:

Object | Type
out | JspWriter
request | HttpServletRequest
response | HttpServletResponse
config | ServletConfig
session | HttpSession
application | ServletContext
pageContext | PageContext
page | Object
exception | Throwable

3. What are the differences between include directive and include action?

include directive | include action
Includes the content at page translation time. | Includes the content at request time.
Includes the original content of the page, so the page size increases at runtime. | Doesn't include the original content; rather, it invokes the include() method of a vendor-provided class.
Better for static pages. | Better for dynamic pages.

4. How to disable caching on back button of the browser?

<%
response.setHeader("Cache-Control", "no-store");
response.setHeader("Pragma", "no-cache");
response.setHeader("Expires", "0");   // prevents caching at the proxy server
%>

5. What are the different tags provided in JSTL?

There are 5 types of JSTL tags.

  1. core tags
  2. sql tags
  3. xml tags
  4. internationalization tags
  5. functions tags

6. How to disable session in JSP?

  1. <%@ page session="false" %>

7.  How to delete a Cookie in a JSP?

The following code explains how to delete a cookie in a JSP:

// Suppose the cookie was created earlier like this:
Cookie mycook = new Cookie("mycook", "value1");
response.addCookie(mycook);

// To delete it, send a cookie with the same name and a max age of zero:
Cookie killmycook = new Cookie("mycook", "");
killmycook.setMaxAge(0);
killmycook.setPath("/");
response.addCookie(killmycook);

8. Explain the jspDestroy() method.

The jspDestroy() method is invoked from the javax.servlet.jsp.JspPage interface whenever a JSP page is about to be destroyed. It can be overridden to perform cleanup, such as closing a database connection.

9.  How is JSP better than Servlet technology?

JSP is a server-side technology that makes content generation simple. JSPs are document-centric, whereas servlets are programs. A Java Server Page can contain fragments of Java code that execute and instantiate Java classes, but these fragments occur inside an HTML template file. JSP provides the framework for the development of a web application.

10. Why should we not configure JSP standard tags in web.xml?

We don't need to configure JSP standard tags in web.xml because when the container loads the web application and finds the TLD files, it automatically configures them to be used directly in the application's JSP pages. We just need to include them in the JSP page using the taglib directive.

11. How will you use JSP EL in order to get the HTTP method name?

Using the pageContext JSP EL implicit object, you can get the request object reference and use the dot operator to retrieve the HTTP method name in the JSP page. The JSP EL code for this purpose looks like ${pageContext.request.method}.

In case you are facing any challenges with these java interview questions, please comment on your problems in the section below. Apart from this Java Interview Questions Blog, if you want to get trained from professionals on this technology, you can opt for structured training from edureka!

Exception and Thread Java Interview Questions

Q1. What is the difference between Error and Exception?

An error is an irrecoverable condition occurring at runtime, such as an OutOfMemoryError. Such JVM errors cannot be repaired at runtime: although an error can be caught in a catch block, the execution of the application will come to a halt and is not recoverable.

Exceptions, on the other hand, are conditions that occur because of bad input, human error, etc. For example, a FileNotFoundException will be thrown if the specified file does not exist, and a NullPointerException will occur if you try to use a null reference. In most cases it is possible to recover from an exception (for example, by giving the user feedback to enter proper values).

Q2. How can you handle Java exceptions?

There are five keywords used to handle exceptions in Java (a combined sketch follows the list):

  1. try
  2. catch
  3. finally
  4. throw
  5. throws
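A combined sketch showing all five keywords in one place; the method and messages are illustrative:

public class KeywordDemo {
    // 'throws' declares the exception; 'throw' raises it explicitly.
    static void validate(int age) throws IllegalArgumentException {
        if (age < 18) {
            throw new IllegalArgumentException("Not eligible");
        }
    }

    public static void main(String[] args) {
        try {                                       // 'try' wraps the risky code
            validate(15);
        } catch (IllegalArgumentException e) {      // 'catch' handles it
            System.out.println("Caught: " + e.getMessage());
        } finally {                                 // 'finally' always runs
            System.out.println("Validation attempted");
        }
    }
}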

Q3. What are the differences between Checked Exception and Unchecked Exception?

Checked Exception

  • The classes that extend the Throwable class, except RuntimeException and Error, are known as checked exceptions.
  • Checked exceptions are checked at compile-time.
  • Example: IOException, SQLException etc.

Unchecked Exception

  • The classes that extend RuntimeException are known as unchecked exceptions. 
  • Unchecked exceptions are not checked at compile-time.
  • Example: ArithmeticException, NullPointerException etc.

Q4. What purpose do the keywords final, finally, and finalize fulfill? 

Final:

Final is used to apply restrictions on classes, methods, and variables. A final class can't be inherited, a final method can't be overridden, and a final variable's value can't be changed. Let's take a look at the example below to understand it better.

class FinalVarExample {
    public static void main(String args[]) {
        final int a = 10;   // final variable
        a = 50;             // compile-time error: the value can't be changed
    }
}

Finally

Finally is used to place important code; it will be executed whether the exception is handled or not. Let's take a look at the example below to understand it better.

class FinallyExample {
    public static void main(String args[]) {
        try {
            int x = 100;
        }
        catch (Exception e) {
            System.out.println(e);
        }
        finally {
            System.out.println("finally block is executing");
        }
    }
}

Finalize

Finalize is used to perform clean up processing just before the object is garbage collected. Let’s take a look at the example below to understand it better.

class FinalizeExample {
    public void finalize() {
        System.out.println("Finalize is called");
    }
    public static void main(String args[]) {
        FinalizeExample f1 = new FinalizeExample();
        FinalizeExample f2 = new FinalizeExample();
        f1 = null;
        f2 = null;
        System.gc();   // requests garbage collection; finalize() may then run
    }
}

 Q5. What are the differences between throw and throws? 

throw keyword | throws keyword
throw is used to explicitly throw an exception. | throws is used to declare an exception.
A checked exception cannot be propagated with throw alone. | A checked exception can be propagated with throws.
throw is followed by an instance. | throws is followed by a class name.
throw is used within a method body. | throws is used with the method signature.
You cannot throw multiple exceptions at once. | You can declare multiple exceptions, e.g. public void method() throws IOException, SQLException.

Q6. What is exception hierarchy in java?

The hierarchy is as follows:

Throwable is the parent class of all exception classes, and it has two direct subclasses: Exception and Error. Exceptions are further divided into checked exceptions and unchecked exceptions (RuntimeExceptions), whereas errors are further classified into Virtual Machine errors and Assertion errors.

ExceptionHierarchy - Java Interview Questions - Edureka

Q7. How to create a custom Exception?

To create your own exception, extend the Exception class or any of its subclasses (a fuller sketch follows the examples below).

  • class New1Exception extends Exception { }               // creates a checked exception
  • class NewException extends IOException { }              // creates a checked exception
  • class NewException extends NullPointerException { }     // creates an unchecked exception
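A slightly fuller sketch of a custom checked exception carrying a message; the class names are illustrative:

class InsufficientBalanceException extends Exception {
    public InsufficientBalanceException(String message) {
        super(message);              // the message that getMessage() will return
    }
}

class BankAccount {
    private double balance = 100.0;

    void withdraw(double amount) throws InsufficientBalanceException {
        if (amount > balance) {
            throw new InsufficientBalanceException("Balance too low: " + balance);
        }
        balance -= amount;
    }
}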

Q8. What are the important methods of Java Exception Class?

Exception and all of its subclasses don't provide any specific methods; all of the methods are defined in the base class Throwable.

  1. String getMessage() – This method returns the message String of the Throwable; the message can be provided while creating the exception through its constructor.
  2. String getLocalizedMessage() – This method is provided so that subclasses can override it to give a locale-specific message to the calling program. Throwable's implementation of this method simply uses getMessage() to return the exception message.
  3. synchronized Throwable getCause() – This method returns the cause of the exception, or null if the cause is unknown.
  4. String toString() – This method returns information about the Throwable in String format; the returned String contains the name of the Throwable class and the localized message.
  5. void printStackTrace() – This method prints the stack trace information to the standard error stream. It is overloaded, and we can pass a PrintStream or PrintWriter as an argument to write the stack trace information to a file or stream.

Q9. What are the differences between processes and threads?

 | Process | Thread
Definition | An executing instance of a program is called a process. | A thread is a subset of a process.
Communication | Processes must use inter-process communication to communicate with sibling processes. | Threads can directly communicate with other threads of their process.
Control | Processes can only exercise control over child processes. | Threads can exercise considerable control over threads of the same process.
Changes | Any change in the parent process does not affect child processes. | Any change in the main thread may affect the behavior of the other threads of the process.
Memory | Run in separate memory spaces. | Run in a shared memory space.
Controlled by | Processes are controlled by the operating system. | Threads are controlled by the programmer in a program.
Dependence | Processes are independent. | Threads are dependent.

Q10. What is a finally block? Is there a case when finally will not execute?

A finally block always executes its set of statements. It is always associated with a try block and runs regardless of whether an exception occurs or not.
Yes, finally will not be executed if the program exits, either by calling System.exit() or by a fatal error that causes the process to abort.

Q11. What is synchronization?

Synchronization refers to multi-threading: a synchronized block of code can be executed by only one thread at a time. As Java supports the execution of multiple threads, two or more threads may access the same fields or objects. Synchronization is the process that keeps all concurrent threads in execution in sync, and it avoids memory-consistency errors caused by inconsistent views of shared memory. When a method is declared synchronized, the thread holds the monitor for that method's object; if another thread is executing the synchronized method, the calling thread is blocked until the first thread releases the monitor. A sketch follows the figure below.

Synchronization - Java Interview Questions - Edureka
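A minimal sketch of a synchronized method guarding shared state; the Counter class is illustrative:

class Counter {
    private int count = 0;

    // Only one thread at a time can execute these methods on the same
    // Counter instance; other threads block on the object's monitor.
    public synchronized void increment() {
        count++;
    }

    public synchronized int value() {
        return count;
    }
}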

 Q12. Can we write multiple catch blocks under single try block? 

Yes, we can have multiple catch blocks under a single try block, but the approach should go from specific to general. Let's understand this with a programmatic example.


public class Example {
    public static void main(String args[]) {
        try {
            int a[] = new int[10];
            a[10] = 10 / 0;    // the division throws ArithmeticException first
        }
        catch (ArithmeticException e) {
            System.out.println("Arithmetic exception in first catch block");
        }
        catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("Array index out of bounds in second catch block");
        }
        catch (Exception e) {
            System.out.println("Any exception in third catch block");
        }
    }
}

Q13. What are the important methods of Java Exception Class?

The methods are defined in the base class Throwable. Some of the important methods of the Java exception class are stated below.

  1. String getMessage() – This method returns the message String about the exception. The message can be provided through the exception's constructor.
  2. public StackTraceElement[] getStackTrace() – This method returns an array containing each element on the stack trace. The element at index 0 represents the top of the call stack, whereas the last element represents the method at the bottom of the call stack.
  3. synchronized Throwable getCause() – This method returns the cause of the exception, as represented by a Throwable object, or null if the cause is unknown.
  4. String toString() – This method returns the information in String format; the returned String contains the name of the Throwable class and the localized message.
  5. void printStackTrace() – This method prints the stack trace information to the standard error stream.

Q14. What is OutOfMemoryError in Java?

OutOfMemoryError is a subclass of java.lang.Error that generally occurs when the JVM runs out of memory.

Q15. What is a Thread?

A thread is the smallest piece of programmed instructions that can be executed independently by a scheduler. In Java, every program has at least one thread, known as the main thread, which is created by the JVM when the program starts its execution. The main thread is used to invoke the main() method of the program.

Q16. What are the two ways to create a thread?

In Java, threads can be created in the following two ways:- 

  • By implementing the Runnable interface.
  • By extending the Thread class. (Both approaches are sketched below.)
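A minimal sketch of both approaches; the class names are illustrative:

// 1. Implementing Runnable (preferred: the class stays free to extend something else).
class MyTask implements Runnable {
    public void run() {
        System.out.println("Running via Runnable");
    }
}

// 2. Extending Thread.
class MyThread extends Thread {
    public void run() {
        System.out.println("Running via a Thread subclass");
    }
}

public class ThreadDemo {
    public static void main(String[] args) {
        new Thread(new MyTask()).start();
        new MyThread().start();
    }
}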

Q17. What are the different types of garbage collectors in Java?

Garbage collection in Java is a mechanism that provides implicit memory management. In Java, you can create objects dynamically using the new keyword, and every object created consumes some memory. Once the job is done and there are no more references left to an object, garbage collection destroys the object and frees the memory it occupied. Java provides four types of garbage collectors:

  • Serial Garbage Collector
  • Parallel Garbage Collector
  • CMS Garbage Collector
  • G1 Garbage Collector

In case you are facing any challenges with these java interview questions, please comment your problems in the section below. Apart from this Java Interview Questions Blog, if you want to get trained from professionals on this technology, you can opt for structured training from edureka!

So this brings us to the end of the Java interview questions blog. The topics that you learned in this Java Interview Questions blog are the most sought-after skill sets that recruiters look for in a Java Professional. These set of Java Interview Questions will definitely help you ace your job interview. Good luck with your interview!

Check out the Java Training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. Besides these Java interview questions, we have come up with a curriculum designed for students and professionals who want to become Java Developers. The course is designed to give you a head start into Java programming and train you in both core and advanced Java concepts, along with various Java frameworks like Hibernate & Spring.

Got a question for us? Please mention it in the comments section of this “Java Interview Questions” and we will get back to you as soon as possible.


Flutter vs React Native – Which One You Should Learn?

The mobile application industry has experienced a major fork – Android and iOS. Polished and robust mobile applications increase engagement and keep businesses booming. Today, companies are trying to save resources by adopting a cross-platform approach to mobile application development. In this particular blog, I will be comparing the two hottest frameworks in the market for cross-platform mobile applications i.e Flutter vs React Native.

Here is a quick summary of the comparison covered in this blog –

Flutter | React Native
Uses Dart | Uses JavaScript
Installation requires extra steps, e.g. setting the PATH | Installed easily through NPM
Detailed and easy-to-follow documentation | Documentation lacks a lot of vital information
Complete and independent architecture | Architecture depends on bridges, resulting in poorer performance
Rich in features; the API has everything you need | Heavy reliance on third-party libraries
Productivity decreases with complexity | Encourages developer productivity
Smaller community than React | Huge and active community
Inbuilt testing support through modules | Testing is done through third-party applications
Release automation well documented | Release automation also dependent on third-party applications
Inbuilt CI/CD support | Can be set up through third parties

What is Flutter?

Flutter Logo - Flutter vs React Native - edureka


Flutter is Google's response to the cross-platform development problem discussed above. Google had been pouring resources into Flutter's development for quite a few years before releasing it to the public in 2017, during its Shanghai keynote.

Flutter can be used to create mobile applications for iOS and Android, quickly and efficiently. The major factor that convinced several developers to shift to Flutter, is that the project will have a single codebase. Despite the single codebase, the Flutter framework provides enough flexibility to embrace the differences in both the platforms.

If you want to learn more about Flutter, you could check out my Flutter Tutorial.

What is React Native?

React Logo - Flutter vs React Native - edureka

React Native is another cross-platform mobile application development framework. It has been around longer than Flutter and therefore has a bigger community too. React Native was created by Jordan Walke, a software developer at Facebook. He drew major inspiration from XHP, an HTML component framework for PHP. It was first implemented for the Facebook news feed in 2011 and later in the Instagram application.

Now that we have a brief idea about the lineage and usage of both frameworks, let us begin the battle between the two: Flutter vs React Native.

Flutter vs React Native – Programming Language

Flutter is built upon the Dart language, which was also made by Google. The language is considered niche in the developer community but is in no way a tough language. If you have any experience with object-oriented programming, then learning Dart will be a cakewalk. A thorough guide, along with examples, can easily be found in the official documentation.

React Native, on the other hand, uses JavaScript as its base language. Over the years, JavaScript has gained immense popularity due to its easy learning curve and widespread usage. If someone is well versed with JavaScript, they can start developing applications using React Native without wasting much time getting accustomed to the framework.

Therefore, we can say that in terms of programming language, React Native takes the point as it is much easier to get into, compared to Dart and Flutter.

Flutter vs React Native - Language - edureka

Flutter vs React Native – Installation

Installation is a big part when comparing two frameworks. Installation should be easy, and devoid of complicated configuration processes. Above that, the whole installation process should be well documented in the official documentation.

Flutter can be installed by downloading the binaries for your platform from the official page and then setting a few path variables. Meanwhile, React Native can be easily downloaded using NPM. Developers with a JavaScript background will find it extremely simple to get React Native up and running. On the macOS side of things, React Native has to be downloaded using Homebrew.

Both Flutter and React Native lack one-liner installation with native package managers for a specific OS, but Flutter installation seems to require extra steps for adding the binary to PATH and downloading it from the source code. Hence, my point goes to React Native again. 

Flutter vs React Native - Installation - edureka

Flutter vs React Native – Documentation

Prior to beginning a project, there are a lot of essential steps that include configuration of the framework in use along with its peripheral requirements. This can include trivial tasks like setting up an IDE for proper syntax highlighting. Even trivial tasks like these can become extremely hard to figure out without proper documentation.

Flutter makes up for its infancy as a framework and choice of niche language for development with a well structured, detailed and easy to understand documentation. If one were to religiously follow the documentation, no stone would be left unturned. The getting started guide provides intricate instructions on IDE setup, project configuration, etc. Flutter also has a command line based tool called Flutter Doctor, which guides the user through the whole configuration and setup process by showing what has been installed, and the elements that still need to be installed.

React Native doesn't even come close to Flutter when it comes to structured documentation. Comparing the 'beginner's guide' of both, we find that React Native assumes a lot about how much the developer knows and hence seems to lack essential information. For example, there is little to no information on the Xcode command line tools; the documentation jumps directly to the step of creating a new project.

Flutter vs React Native - Documentation - edureka

Flutter vs React Native – Architecture

When making an educated decision on the choice of development framework, an important criterion is always architecture. Having a working knowledge of architecture often helps in performing custom tweaks that might be needed for the project. Above that, the architecture should be independent i.e. it shouldn’t rely much on third-party services to act as a whole.

Flutter uses the Dart framework which has most of the components inbuilt. So, it’s bigger in size and often does not require the bridge to communicate with the native modules. The Dart framework uses Skia C++ engine which has all the protocols, compositions and channels. The architecture of the Flutter engine is explained in a detailed manner in their GitHub Wiki. In short, Flutter has everything needed for app development in the Flutter engine itself.

React Native architecture heavily relies on JS runtime environment architecture, also known as JavaScript Bridge. The JavaScript code is compiled into native code at runtime. React Native uses the Flux architecture from Facebook. There is a detailed article on the core architecture of React Native. In short, React Native uses the JavaScript Bridge to communicate with the native modules.

Flutter engine has most of the native components in the framework itself and it doesn’t always need a bridge to communicate with the native components. React Native, however, uses the JavaScript Bridge to communicate with native modules, which results in poor performance.

Flutter vs React Native - Architecture - edureka

Flutter vs React Native – Features and API Components

Choosing a cross-platform approach to develop an application comes with a few compromises, the biggest one being no direct access to native features like Bluetooth, various sensors, etc. Most cross-platform frameworks provide supplemental APIs that give access to these features.

In the case of Flutter, its API is rich in everything that is needed to access these native features. The framework is bundled with UI rendering components, device API access, navigation, testing, state management and loads of libraries. If you happen to choose Flutter as your framework, then you will find everything you need in Flutter itself, without relying on third-party applications.

React Native provides the bare minimum when it comes to features and API components. With React Native, a developer is just provided with UI rendering and device access modules. For native features, React Native is heavily dependent on third-party libraries and modules. This clearly makes Flutter the winner when it comes to Features and API.

Flutter vs React Native - Features - edureka

Flutter vs React Native – Developer Productivity

The framework of choice should always encourage a developer to be productive. This ensures the delivery of a quality application in a stipulated period of time.

Flutter is a relatively new framework. While the easier stuff can be achieved in almost no time, as your application gets more complicated it becomes tedious for some developers to keep up. If the developer is not accustomed to Dart, he/she might have to spend a good chunk of time researching and learning how to implement the things they want.

On the other hand, React Native is based on JavaScript and hence thrives from the resources available. Also, since JavaScript is considerably easier to learn, any new feature that might need to be implemented, can be easily learned and quickly implemented.

So, React Native is the way to go if your focus is developer productivity.

Flutter vs React Native - Developer Productivity - edureka

Flutter vs React Native – Community Support

An active community means quicker bug reports, more creative feature suggestions and most of all, a good chance at troubleshooting complicated doubts.

Flutter, being the newer of the two frameworks, obviously has a smaller community than React Native. Nonetheless, the Flutter community is going through tremendous growth. With the backing of a company like Google, Flutter is also garnering the trust of many to implement it in their projects. At this moment, React Native surely has a much more diverse community that even hosts international meet-ups. Though, given enough time, I think Flutter will emerge with an equally thriving and active community too.

Flutter vs React Native - community - edureka

Flutter vs React Native – Testing 

Proper testing of an application on all fronts is imperative to its success. If a framework comes with inbuilt testing modules and libraries, the job becomes easy to orchestrate and can be efficiently executed.

Flutter, being the more feature-rich framework of the two, also comes with testing modules. These help with unit testing, widget testing and integration testing. Above that, the usage of these modules is clearly explained in the documentation. React Native isn't as expansive when it comes to testing: it provides a few unit testing functionalities through JavaScript frameworks, and snapshot testing can be done using tools like Jest. For other sorts of testing, applications built using React Native depend heavily on third-party tools like Appium.

Ergo, Flutter is much more convenient if the testing of your application is a major concern.

Flutter vs React Native - Testing - edureka

Flutter vs React Native – Release Automation Support

Releasing a native application to the respective application store portals – Play Store for Android, and App Store for iOS, is a tedious process. With a cross-platform application, it can become a grueling task and automating the process can be tons of help.

Flutter has a strong command line interface. We can create a binary of the app by using the command line tools and following the instructions in Flutter documentation for building and releasing Android and iOS apps. On top of this, Flutter has officially documented the deployment process with Fastlane.

React Native, natively (haha) doesn’t support any sort of release automation. The process found in its documentation is a completely manual one. Nonetheless, third-party tools like Fastlane can be used to automate the build and release process.

Flutter vs React Native - Release Automation - edureka

Flutter vs React Native – CI/CD Support

The DevOps cycle has brought about a major change in how applications are released and then maintained so that they comply with industry standards. An essential process in the DevOps cycle is continuous integration/continuous delivery or CI/CD in short.

Flutter has a section on Continuous Integration and Testing which includes links to external sources. However, Flutter’s rich command line interface allows us to set up CI/CD easily. React Native doesn’t have any official documentation on setting up CI/CD. However, there are some articles which describe CI/CD for React Native apps.

Flutter vs React Native - CICD - edureka

React Native and Flutter both have their pros and cons, but Flutter comes out as the winner in this match. Many industry experts have actually predicted that Flutter will bring about a revolution in the mobile application development industry. Given the comparison we performed on various parameters, we can conclude that the prediction has a good chance of becoming a reality. All we have to do is wait and watch!

If you have any doubts regarding this blog, feel free to post a comment in the comment section of this “Flutter vs React Native” blog, and we will get back to you. Also, feel free to share where you agree and disagree with my point of view.

What is Enumeration in Java? A Beginners Guide


Enumeration is a set of named constants that helps you define your own data type. When you can identify the type of a variable in the program, it becomes easy to define it. So, an enum is used when you already know all the possible values at compile time. In this article, I will tell you how to define Enumeration in Java with the help of examples.

I will be covering the following topics in this article:

Let’s get started!

What is Enumeration in Java?

Enumeration is basically a list of named constants. In Java, it defines a class type. It can have constructors, methods and instance variables. It is created using the enum keyword. By default, each enumeration constant is public, static and final. Even though enumeration defines a class type and has constructors, you do not need to instantiate an enum using the new keyword. Enumeration variables are used and declared in much the same way as primitive variables.

Now let’s get into the details of Enumeration and understand its syntax and declaration.

Defining Enumeration in Java

An enum can be declared either outside a class or inside a class, but not inside a method. Let’s take a small example to understand its declaration. First, I will tell you how to declare an enum outside a class.

1. Declaring an Enumeration in Java outside a Class

enum Directions{ // enum keyword is used instead of class keyword
NORTH, SOUTH, EAST, WEST;
}
public class enumDeclaration {
public static void main(String[] args) {
Directions d1 = Directions.EAST; // new keyword is not required to create a new object reference
System.out.println(d1);
}
}

Output:

EAST

2. Declaring an Enumeration in Java inside a Class

public class enumDeclaration {
enum Directions{
NORTH, SOUTH, EAST, WEST;
}
public static void main(String[] args) {
Directions d1 = Directions.EAST; // new keyword is not required to create a new object reference
System.out.println(d1);
}
}

Output:

EAST

The first line inside the enum type should be the list of constants. After that, you can define methods, variables, and constructors. Basically, an enum represents a group of named constants.

Note: 

  • Enum basically improves type safety.
  • It can be used conveniently in switch statements.
  • An enum can be easily traversed.
  • An enum can have fields, constructors and methods.
  • An enum can implement many interfaces but cannot extend any class, because it internally extends the Enum class (see the sketch below).
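To illustrate the last point, here is a minimal sketch of an enum implementing an interface (the interface and enum names are illustrative):

interface Printable {
void print();
}
enum Level implements Printable {
LOW, MEDIUM, HIGH;
public void print() { //implementing the interface method
System.out.println("Level: " + name()); //name() is inherited from java.lang.Enum
}
}
class InterfaceDemo {
public static void main(String[] args) {
Level.HIGH.print(); //prints "Level: HIGH"
}
}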

Now that you know how to declare and use enum in your program, let’s understand how to implement this with switch case statements.

Enumeration using the Switch statement

An enumeration value can also be used to control a switch statement. All the case statements must use constants from the same enum as the one used by the switch statement. The below example demonstrates this.

package Edureka;
import java.util.*;
enum Directions {
NORTH,
SOUTH,
EAST,
WEST
}
public class Test1{
public static void main(String[] args) {
Directions d= Directions.SOUTH;
switch(d) { //the names of the enumeration constants are used without their enumeration type
case NORTH: //only constants defined under enum Directions can be used
System.out.println("North direction");
break;
case SOUTH:
System.out.println("South direction");
break;
case EAST:
System.out.println("East direction");
break;
case WEST:
System.out.println("West direction");
break;
}
}
}

 

Output:

South direction

I hope you understood how to implement a switch statement using an enum. Now let’s move further and understand the values() and valueOf() methods and the difference between them.

values() and valueOf() Methods

values(): When you create an enum, the Java compiler internally adds the values() method. This method returns an array containing all the values of the enum.

Syntax:

public static enum-type[ ] values()

valueOf(): This method returns the enumeration constant whose name matches the string passed as an argument.

Syntax:

public static enum-type valueOf (String str)

Now let’s write a program to understand these methods in a more detailed manner.

enum Colors{
black, red, blue, pink, white
}
class Test {
public static void main(String args[]) {
Colors c;
System.out.println("All constants of enum type Colors are:");
Colors cArray[] = Colors.values(); //returns an array of constants of type Colors
for(Colors a : cArray) //using foreach loop
System.out.println(a);
c = Colors.valueOf("red");
System.out.println("I Like" + c);
}
}

Output:

All constants of enum type Colors are:
black
red
blue
pink
white
I like red

That’s how you can use the values() method to return an array containing all the constants of the enum, and valueOf() to return the enumeration constant matching a given name. I hope you understood this concept.

Now let’s move further and understand the implementation of Enumeration in Java with the constructor, instance variable and method.

Enumeration with Constructor, instance variable and Method

Basically, an enumeration can contain a constructor, and it is executed once for each enum constant at the time of enum class loading. Not just that, an enum can define concrete methods too. Let’s write some code to understand enumeration with a constructor, an instance variable and a method.

enum Student {
mack(11), Birdie(10), Son(13), Victor(9);
private int age; //variable defined in enum Student
int getage() { return age; } //method defined in enum Student
Student(int age) //constructor defined in enum; enum constructors cannot be public
{ this.age= age; }
}
class EnumDemo {
public static void main( String args[] ) {
Student S;
System.out.println("Age of Victor is " +Student.Victor.getage()+ "years");
}
}

Output:

Age of Victor is 9 years

Here, the constructor is called once for each enumeration constant when the enum class is loaded, initializing the age variable with the value specified in parentheses next to each constant. So, that’s how it works.

This brings us to the end of the article on Enumeration in Java.  I hope you found it informative.

Check out the Java Training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. We are here to help you with every step of your journey to becoming a Java Developer, with a curriculum designed for students and professionals who want to pursue that path.

Got a question for us? Please mention it in the comments section of this “Enumeration in Java ”article and we will get back to you as soon as possible.

Everything You Need To Know About Instances In AWS


AWS has become a lifesaver for many organizations like Adobe Systems, 21st Century Fox, AirAsia, Airbnb and a lot more. One of the biggest reasons behind the success stories of these organizations is using AWS as their infrastructure. What does Amazon provide that has grabbed so many users’ attention? Funny question, because they have a lot to serve, and one of their offerings is Amazon Elastic Compute Cloud (EC2). This article on Instances in AWS gives a detailed explanation of Amazon EC2.

Agenda for this article:

Introduction To Instances In AWS

In school, when I had just started learning Linux, I had labs that provided systems with Linux OS. I’m sure none of us really had Linux on our personal laptops and neither did I. Every time I had to practice, I had to use the school labs. A similar situation occurred while learning MySql or for a matter of fact any new technology.

We wanted to concentrate on learning how to work with the technology instead of spending time on the “setting up” part. How I wish I was introduced to Amazon EC2 back then. Never knew getting Linux OS on my personal system would be as easy as launching an EC2 instance.

Developers face a very similar issue. They would rather spend their time developing instead of setting up the development environment. That’s when Instances in AWS came to their rescue. If the developers needed an environment with Linux and MySql, all they had to do was launch an instance. Honestly, Amazon provides a machine image for almost every requirement.

Instances in AWS are basically virtual environments. These virtual environments are isolated from the underlying base OS. It’s an on-demand service, i.e. a user can rent the virtual server (instance) on an hourly basis and deploy their applications on it. EC2 instances are highly scalable, meaning you can scale up or scale down dynamically based on your requirement. Using EC2 instances as your cloud computing environment eliminates the need to invest in hardware and software dependencies.

Types Of Instances In AWS

Amazon provides a wide range of instances to fit different use cases. You can select the one that suits you the best. Let’s have a look at different types of instances that’ll help you select your perfect match.

General Purpose Instances

It’s the most widely used instance type. It’s mainly used for web servers and running deployment environments for mobile or gaming applications. It’s perfect if you’re a newbie. General purpose instances include – A1, M5, M5a, M4, T3, T3a, T2

Compute Optimized Instances

Compute optimized instance types are perfect when you have to prioritize raw compute power, as for gaming servers, scientific modeling, high-performance web servers, and media transcoding. They are faster but more expensive (cost is based on memory, CPU, instance storage, network, and EBS bandwidth). Compute optimized instances include – C5, C5n, C4

Memory Optimized Instances

These instances are ideal for memory-intensive applications, such as real-time big data analytics, high-performance databases, etc. Memory optimized instances include – R5, R5a, R4, X1e, X1, Z1d, High Memory

Accelerated Computing Instances

Accelerated computing instances use separate Graphics Processing Units (GPUs) or Field Programmable Gate Arrays (FPGAs) for graphics-intensive calculations. Accelerated computing instances include – P3, P2, G3, F1

Storage Optimized Instances

These kinds of instances provide high sequential read/write throughput for large data sets. These instances are used when a user requires high SSD storage. Storage optimized instances include – I3, I3en, D2, H1

Let’s further have a look at features of instances in AWS to relate to the hype.

Features Of Instances In AWS

  1. Instances are launched from Amazon Machine Images (AMIs), which are pre-configured templates that wrap everything you need for your server, including the operating system.
  2. Multiple options for configuring your instance memory, CPU, storage and networking, making various Instance Types available for you.
  3. Very secure login as it uses key pairs.
  4. It provides storage volumes called Instance Store Volumes for temporarily storing your instance data. These volumes are deleted when the instance stops.
  5. Provides persistent storage for your data using Amazon EBS Volumes.
  6. Provides multiple physical locations for your resources. These resources could be anything – instances, Amazon EBS Volumes, etc. This is possible because of Regions and Availability Zones.
  7. It provides a firewall that lets you specify ports, protocols, and source IP range that reaches your instance. This is configured using the Security Groups.
  8. By default, the instances you create have dynamic IP. You can provide your instance with a static IP using Elastic IP Addresses.
  9. You can provide metadata for your instances in the form of Tags.
  10. It lets you configure your network such that it’s isolated from the rest of the AWS Cloud using Virtual Private Clouds (VPCs).

Demo: Launching a Free Tier (t2.micro) EC2 Instance

Go to the AWS Management Console and type EC2. Click on the EC2 service that pops up.

ec2 - Instances In AWS - Edureka

You’ll see EC2 dashboard. Click on Launch Instance to launch it.

launch - Instances In AWS - Edureka

You’ll then be prompted to select an AMI. AMI stands for Amazon Machine Image. Select the one that you need, and be careful what you choose, as these instances can cost money. If you’re a beginner, it’s better to go for a Free Tier one. For this demo, I’ve selected Ubuntu Server 18.04.

ami - Instances In AWS - Edureka

Next step is to choose an instance type. You have different options as I’ve mentioned earlier. For this demo, I’ve selected a free tier one(t2.micro).

t2.micro - Instances In AWS - Edureka

Go ahead and click on Next: Configure Instance Details. Add the number of instances you wish to create, the network and subnet you’d like. I’ve selected from the available Network and Subnet and kept the rest as it is.

configure - Instances In AWS - Edureka

Once you’re done configuring your instance, go ahead and click on Next: Add Storage. I’ve used the 8GiB General Purpose SSD.

storage - Instances In AWS - Edureka

Once you’ve added storage, click on Next: Add Tags. Tags are basically meta descriptions for your instance. If you wish to add one, click on Add Tag at the bottom left.

tags - Instances In AWS - Edureka

Once you’re done adding tags, click on Next: Configure Security Group. Add rules by clicking on Add Rule.

secgroup - Instances In AWS - Edureka

Once you’re done adding security groups, click on Review and Launch.

review - Instances In AWS - Edureka

Review your instance details and click on Launch. You’ll see the launch status like in the image below.

launchstatus - Instances In AWS - Edureka

Click on the Instance ID as highlighted in the above image.

launched - Instances In AWS - Edureka

Yayyy! Your instance is in the running state. This is just the initial stage. Once you’ve created an instance, you can use it with other AWS services and do a lot of things, from running a small Docker image to handling an entire organization’s deployment.
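If you’d rather script the same launch than click through the console, here is a minimal sketch using the AWS SDK for Java (v1); the AMI ID and key-pair name are placeholders you would replace with your own values:

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.InstanceType;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;
public class LaunchDemo {
public static void main(String[] args) {
AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient(); //credentials and region come from the default provider chain
RunInstancesRequest request = new RunInstancesRequest()
.withImageId("ami-xxxxxxxx") //placeholder AMI ID - use the one you selected
.withInstanceType(InstanceType.T2Micro) //free tier eligible
.withMinCount(1)
.withMaxCount(1)
.withKeyName("my-key-pair"); //placeholder key pair name
RunInstancesResult result = ec2.runInstances(request);
String id = result.getReservation().getInstances().get(0).getInstanceId();
System.out.println("Launched instance: " + id);
}
}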

This brings us to the end of Instances In AWS article. I hope this blog was helpful. For more such blogs, visit “Edureka Blog“.

If you wish to learn more about Cloud Computing and build a career in Cloud Computing, then check out our Cloud Computing Courses which comes with instructor-led live training and real-life project experience. This training will help you understand Cloud Computing in depth and help you achieve mastery over the subject.

Got a doubt? Please mention it in the comments section or post it on Edureka Community and we will get back to you. At Edureka Community we have more than 1,00,000+ tech-fanatics ready to help.

Swing In Java : Know How To Create GUI With Examples


Swing in Java is a part of the Java Foundation Classes (JFC); it is lightweight and platform independent. It is used for creating window-based applications. It includes components like buttons, scroll bars, text fields etc. Putting together all these components makes a graphical user interface. In this article, we will go through the concepts involved in the process of building applications using Swing in Java. Following are the concepts discussed in this article:

What is Swing In Java?

Swing in Java is a lightweight GUI toolkit which has a wide variety of widgets for building optimized window-based applications. It is a part of the JFC (Java Foundation Classes). It is built on top of the AWT API and entirely written in Java. Unlike AWT, it is platform independent and has lightweight components.

It becomes easier to build applications since we already have GUI components like buttons, checkboxes etc. This is helpful because we do not have to start from scratch.

Container Class

Any class which has other components in it is called a container class. For building GUI applications, at least one container class is necessary.

Following are the three types of container classes:

  1. Panel – It is used to organize components onto a window

  2. Frame – A fully functioning window with icons and a title

  3. Dialog – It is like a pop-up window, but not fully functional like the frame

Difference Between AWT and Swing

  • AWT is platform dependent; Swing is platform independent.
  • AWT does not follow MVC; Swing follows MVC.
  • AWT has fewer components; Swing has more powerful components.
  • AWT does not support pluggable look and feel; Swing does.
  • AWT components are heavyweight; Swing components are lightweight.

Java Swing Class Hierarchy

hierarchy-swing in java-edureka

Explanation: All the components in Swing, like JButton, JComboBox, JList and JLabel, are inherited from the JComponent class and can be added to the container classes. Containers are the windows, like frames and dialog boxes. Basic Swing components are the building blocks of any GUI application. Methods like setLayout override the default layout of a container. Containers like JFrame and JDialog can only add components to themselves. Following are a few components with examples to help you understand how to use them.

JButton Class

It is used to create a labelled button. With an ActionListener attached, the button triggers an action when it is pushed (see the sketch after the example below). It inherits the AbstractButton class and is platform independent.

Example:


import javax.swing.*;
public class example{
public static void main(String args[]) {
JFrame a = new JFrame("example");
JButton b = new JButton("click me");
b.setBounds(40,90,85,20);
a.add(b);
a.setSize(300,300);
a.setLayout(null);
a.setVisible(true);
}
}

Output:
JButton - Java Swing - Edureka
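The example above only displays the button. To make it react to clicks, you can attach an ActionListener, as in this minimal sketch:

import javax.swing.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
public class example{
public static void main(String args[]) {
JFrame a = new JFrame("example");
JButton b = new JButton("click me");
b.setBounds(40,90,85,20);
b.addActionListener(new ActionListener() { //actionPerformed runs every time the button is pushed
public void actionPerformed(ActionEvent e) {
System.out.println("Button clicked!");
}
});
a.add(b);
a.setSize(300,300);
a.setLayout(null);
a.setVisible(true);
}
}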

JTextField Class

It inherits the JTextComponent class and is used to allow editing of a single line of text.

Example:

import javax.swing.*;
public class example{
public static void main(String args[]) {
JFrame a = new JFrame("example");
JTextField b = new JTextField("edureka");
b.setBounds(50,100,200,30);
a.add(b);
a.setSize(300,300);
a.setLayout(null);
a.setVisible(true);
}
}

Output:
output-java swing-edureka

JScrollBar Class

It is used to add scroll bars, both horizontal and vertical.

Example:

import javax.swing.*;
class example{
example(){
JFrame a = new JFrame("example");
JScrollBar b = new JScrollBar();
b.setBounds(90,90,40,90);
a.add(b);
a.setSize(300,300);
a.setLayout(null);
a.setVisible(true);
}
public static void main(String args[]){
new example();
}
}

Output:
output - java swing- edureka

JPanel Class

It inherits the JComponent class and provides space onto which any other component can be attached.

import java.awt.*;
import javax.swing.*;
public class Example{
Example(){
JFrame a = new JFrame("example");
JPanel p = new JPanel();
p.setBounds(40,70,200,200);
JButton b = new JButton("click me");
b.setBounds(60,50,80,40);
p.add(b);
a.add(p);
a.setSize(400,400);
a.setLayout(null);
a.setVisible(true);
}
public static void main(String args[])
{
new Example();
}
}

Output:

JMenu Class

It inherits the JMenuItem class and is a pull-down menu component which is displayed from the menu bar.

import javax.swing.*;
class Example{
JMenu menu;
JMenuItem a1,a2;
Example()
{
JFrame a = new JFrame("Example");
menu = new JMenu("options");
JMenuBar m1 = new JMenuBar();
a1 = new JMenuItem("example");
a2 = new JMenuItem("example1");
menu.add(a1);
menu.add(a2);
m1.add(menu);
a.setJMenuBar(m1);
a.setSize(400,400);
a.setLayout(null);
a.setVisible(true);
}
public static void main(String args[])
{
new Example();
}
}

Output:

JList Class

It inherits the JComponent class; an object of the JList class represents a list of text items.

import javax.swing.*;
public class Example
{
Example(){
JFrame a  = new JFrame("example");
DefaultListModel<String> l = new DefaultListModel<>();
l.addElement("first item");
l.addElement("second item");
JList<String> b = new JList<>(l);
b.setBounds(100,100,75,75);
a.add(b);
a.setSize(400,400);
a.setVisible(true);
a.setLayout(null);
}
public static void main(String args[])
{
new Example();
}
}

Output:

JLabel Class

It is used for placing text in a container. It also inherits the JComponent class.

import javax.swing.*;
public class Example{
public static void main(String args[])
{
JFrame a = new JFrame("example");
JLabel b1;
b1 = new JLabel("edureka");
b1.setBounds(40,40,90,20);
a.add(b1);
a.setSize(400,400);
a.setLayout(null);
a.setVisible(true);
}
}

Output:

JComboBox Class

It inherits the JComponent class and is used to show a pop-up menu of choices.

import javax.swing.*;
public class Example{
JFrame a;
Example(){
a = new JFrame("example");
String courses[] = { "core java","advance java", "java servlet"};
JComboBox<String> c = new JComboBox<>(courses);
c.setBounds(40,40,90,20);
a.add(c);
a.setSize(400,400);
a.setLayout(null);
a.setVisible(true);
}
public static void main(String args[])
{
new Example();
}
}  

Output:

Layout Manager

To arrange the components inside a container we use the layout manager. Following are several layout managers:

  1. Border layout

  2. Flow layout

  3. GridBag layout

Border Layout

The default layout manager for every JFrame is BorderLayout. It places components in up to five regions: top, bottom, left, right and center.

BorderLayout-Java Swing-Edureka
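Here’s a minimal sketch of placing buttons in each of the five regions (the content pane of a JFrame already uses BorderLayout by default):

import javax.swing.*;
import java.awt.*;
public class BorderExample{
public static void main(String args[]) {
JFrame frame = new JFrame("BorderLayout example");
frame.add(new JButton("NORTH"), BorderLayout.NORTH); //top
frame.add(new JButton("SOUTH"), BorderLayout.SOUTH); //bottom
frame.add(new JButton("EAST"), BorderLayout.EAST); //right
frame.add(new JButton("WEST"), BorderLayout.WEST); //left
frame.add(new JButton("CENTER"), BorderLayout.CENTER);
frame.setSize(300,300);
frame.setVisible(true);
}
}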

Flow Layout

FlowLayout simply lays the components in a row one after the other; it is the default layout manager for every JPanel.

Flowlayout-swing in java-edureka

GridBag Layout

GridBagLayout places the components in a grid, which allows a component to span more than one cell, as the sketch below shows.

Gridlayout-Swing in Java- edureka
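Since GridBagLayout is the most flexible (and most verbose) of the three, here is a minimal sketch showing a component spanning two cells:

import javax.swing.*;
import java.awt.*;
public class GridBagExample{
public static void main(String args[]) {
JFrame frame = new JFrame("GridBagLayout example");
frame.setLayout(new GridBagLayout());
GridBagConstraints c = new GridBagConstraints();
c.gridx = 0; c.gridy = 0;
frame.add(new JButton("Cell (0,0)"), c);
c.gridx = 1;
frame.add(new JButton("Cell (1,0)"), c);
c.gridx = 0; c.gridy = 1;
c.gridwidth = 2; //this button spans two cells
c.fill = GridBagConstraints.HORIZONTAL;
frame.add(new JButton("Spans two cells"), c);
frame.setSize(300,300);
frame.setVisible(true);
}
}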

Example: Chat Frame

import javax.swing.*;
import java.awt.*;
class Example {
    public static void main(String args[]) {

      
        JFrame frame = new JFrame("Chat Frame");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setSize(400, 400);

        JMenuBar ob = new JMenuBar();
        JMenu ob1 = new JMenu("FILE");
        JMenu ob2 = new JMenu("Help");
        ob.add(ob1);
        ob.add(ob2);
        JMenuItem m11 = new JMenuItem("Open");
        JMenuItem m22 = new JMenuItem("Save as");
        ob1.add(m11);
        ob1.add(m22);

        
        JPanel panel = new JPanel(); // the panel is not visible in output
        JLabel label = new JLabel("Enter Text");
        JTextField tf = new JTextField(10); // accepts upto 10 characters
        JButton send = new JButton("Send");
        JButton reset = new JButton("Reset");
        panel.add(label); // components added using FlowLayout
        panel.add(tf);
        panel.add(send);
        panel.add(reset);
        JTextArea ta = new JTextArea();

        frame.getContentPane().add(BorderLayout.SOUTH, panel);
        frame.getContentPane().add(BorderLayout.NORTH, ob); // the menu bar goes at the top
        frame.getContentPane().add(BorderLayout.CENTER, ta);
        frame.setVisible(true);
    }
}

This is a simple example for creating a GUI using swing in Java.

In this article, we have discussed Swing in Java and the hierarchy of Java Swing classes. With all the components that come with Swing in Java, it becomes easier to build optimized GUI applications. Java is a structured programming language, and with the increasing demand for it, it is extremely important to master all the concepts in Java programming. To kick-start your learning and become an expert in Java programming, enroll in Edureka’s Java certification program.

Got a question for us? please mention this in the comments section of this ‘Swing In Java’ article and we will get back to you as soon as possible.

What Is JSP In Java? Know All About Java Web Applications


Wondering what JSP is and how it is used? Well, you have landed at the right place! Java Server Pages, commonly termed JSP, is one of the Java web technologies. It is a server-side technology basically used for creating web applications. Let me discuss the concept of JSP in depth with you.

In this article, I will be covering the following pointers:

Beginning by simplifying the concept of JSP technology, let me introduce you to the basics.

JSP technology is widely used to develop JSP pages. It creates web content which consists of both dynamic and static components.

Now, let me explain what exactly is a JSP page!

What is a JSP page?

A JSP page is a text document. It contains two types of text: static content and dynamic content. Static content can be expressed in any text-based format, say, HTML, whereas the dynamic content consists of Java code. JSP technology combines the static content with the Java code, hence making it a dynamic web page. The file extension for the source file of a JSP page is .jsp. The extension for the source file of a fragment of a JSP page is .jspf.

Now that you are familiar with the concept of JSP pages and JSP technology, let’s proceed and understand the features of JSP!

Features of JSP technology

1. Easy coding

JSP allows tag-based programming. Hence, there is no need for expertise in the Java language. HTML tags are easy to use, hence the code is easily readable.

2. Interactive web pages

JSP enables the building of dynamic web pages that are able to interact with users in a real-time environment.

3. Easy connection to a database

It allows us an easy connection to a database, since the JSP page runs on and connects through the server.

After studying about the features, let’s move further and look out for the lifecycle of a JSP page.

Lifecycle of a JSP page

JSp Life Cycle - JSP in Java - Edureka

Let me explain the steps involved in the above-shown diagram.

1. JSP page translation:

A Java servlet file is created from the JSP source file. In the translation phase, the container validates the correctness of the JSP pages and tag files.

2. JSP page compilation:

The created Java servlet file is compiled into a Java servlet class.

3. Class loading:

The Java servlet class that was compiled from the JSP source is now loaded into the container.

4. Execution phase:

In the execution phase, the container creates one or more instances of this class in response to requests. The JspPage interface declares jspInit() and jspDestroy(). JSP provides the special interface HttpJspPage for JSP pages serving HTTP requests; this interface declares _jspService().

5. Initialization:

The jspInit() method is called immediately after the instance is created.

6. jspDestroy() execution:

This method is called when the JSP is destroyed. With this call, the servlet completes its purpose and becomes eligible for garbage collection. This ends the JSP life cycle.

There are certain life cycle methods provided in JSP: jspInit(), _jspService() and jspDestroy(), explained above.
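As an illustration, a JSP page can override jspInit() and jspDestroy() inside a declaration block; a minimal sketch:

<%!
public void jspInit() {
System.out.println("JSP page initialized"); // runs once, right after the page's servlet instance is created
}
public void jspDestroy() {
System.out.println("JSP page destroyed"); // runs once, just before the page is taken out of service
}
%>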

Learning about the lifecycle is important. It gives you an insight into the actual functioning. Now, let us see and understand the syntax used in creating a JSP page.

Syntax of JSP

The syntax for the following in JSP:

1. JSP Expression

<%= expression %>

Example:

<%= marks1 + marks2 %>

2. Declaration Tag

<%! Dec var %>

Example:

<%! int var = 50; %>

3. JSP scriptlet

<% java code %>

Here, you can insert the respective Java code.
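For example, a scriptlet mixing Java with the page’s static HTML might look like this (the variable names are illustrative):

<%
int marks1 = 40;
int marks2 = 35;
int total = marks1 + marks2;
%>
<p>Total marks: <%= total %></p>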

4. JSP comments

<%-- JSP Comments --%>

Now that we are all acquainted with the syntax of JSP, let me brief you on the term ‘Java servlet’.

What is a servlet?

Java servlets were the first attempt to get access to the full power of Java in web applications. They are written in Java. For more information, kindly go through the ‘Introduction to Java Servlets’ blog!

Now, let me show you a code which will teach you to create a JSP page.

A Simple JSP page

<HTML>
<HEAD>
<TITLE>A Web Page</TITLE>
</HEAD>
<BODY>
<% out.println ("JSP in JAVA") ; %>
</BODY>
</HTML>

As you can see in the above code, a JSP page is created easily. This easy approach has helped JSP take off so well. Simple HTML tags have been used, along with an additional element: <% out.println ("JSP in JAVA") ; %>. This element is called a scriptlet! It embeds Java code inside the HTML of the JSP page.

Moving further, let us dive in and learn how to run a JSP page.

How to run a JSP page

Execution of JSP involves several steps. These are mentioned below:

  1. Firstly, create an HTML file, say, ana.html, from here a request will be sent to the server.

  2. Secondly, create a .jsp file, say, ana1.jsp, this would tackle the request of the user.

  3. Thirdly, create a project folder structure.

  4. Now, you need to create an XML file and then a WAR file.

  5. After that, start Tomcat.

  6. Finally, you are ready to run the application.

On executing the above-written code in the JSP file, the output looks as shown below:

JSP in Java Output - JSP in Java - Edureka

With this, we have reached towards the end of this article. I hope the content you read was informative and helpful. We will keep exploring the Java world with more topics. Stay tuned!

Check out the Java training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. Edureka’s Java J2EE and SOA training and certification course is designed for students and professionals who want to be a Java Developer. The course is designed to give you a head start into Java programming and train you for both core and advanced Java concepts along with various Java frameworks like Hibernate & Spring.

Got a question for us? Please mention it in the comments section of this “JSP in Java” blog and we will get back to you as soon as possible.
