Wednesday 30 November 2016

What is .Net Framework and .Net Framework Architecture

.NET FRAMEWORK VERSIONS:

The development of .Net started in the late 1990s. The first version of the framework was launched in 2000 as 1.0 Beta (trial), and it was officially released to the market as 1.0 RTM (Release To Manufacturing) in 2002.

2000  ——  .Net Framework 1.0 Beta

2002  ——  .Net Framework 1.0 RTM

2003  ——  .Net Framework 1.1

2005  ——  .Net Framework 2.0

2006  ——  .Net Framework 3.0

2007  ——  .Net Framework 3.5

2010  ——  .Net Framework 4.0

2012  ——  .Net Framework 4.5

2013  ——  .Net Framework 4.5.1

2014  ——  .Net Framework 4.5.2

2015  ——  .Net Framework 4.6

.NET FRAMEWORK ARCHITECTURE:

Following is the figure of .Net Framework Architecture:

MOBILITY MODELS

Node mobility in MIRACLE is governed by two classes named BMPosition and GMPosition, both derived from the Position class. The former implements a Basic Movement (BM) model and the latter a Gauss-Markov (GM) model. 
These are the models shipped with the MIRACLE release, but you are free to create your own! 

Saturday 26 November 2016

STATIC KEYWORD

If the keyword static is present before a variable or method, then it is called a static variable or static method respectively. The static keyword is mainly used for memory management in Java. A variable declared with the static keyword belongs to the class rather than to any object of the class. Note that although main() can be compiled without the static keyword, the JVM will not use a non-static main() as the program's entry point.
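As a small illustrative sketch (the Counter class here is a made-up example, not from any library): the static field is shared by all objects of the class, and the static method is called through the class itself.

```java
// A sketch of static vs. instance members. Counter is a hypothetical class.
class Counter {
    static int created = 0;   // one copy, shared by the whole class
    int id;                   // one copy per object

    Counter() {
        created++;            // updates the shared, class-level variable
        id = created;
    }

    static int getCreated() { // static method: callable without any object
        return created;
    }
}

public class StaticDemo {
    public static void main(String[] args) {
        new Counter();
        new Counter();
        // Accessed through the class, not through an object:
        System.out.println(Counter.getCreated()); // prints 2
    }
}
```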

Friday 25 November 2016

CONSTRUCTORS IN JAVA

A constructor is a special type of method in a class that is used to initialise an object; that is, a constructor is called automatically when an object of the class is created. A constructor is mainly used to initialise the values of the variables in the class.
The constructor has some special properties:
  • A constructor cannot have any return type, not even void.
  • The name of the constructor must be the same as the class name.

The most common types of constructors in Java programming are:
Default constructor and
Parameterised constructor
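A minimal sketch of both kinds (the Box class is a hypothetical example):

```java
// A sketch of the two common constructor kinds; Box is a made-up class.
class Box {
    int width, height;

    // Default (no-argument) constructor
    Box() {
        width = 1;
        height = 1;
    }

    // Parameterised constructor
    Box(int w, int h) {
        width = w;
        height = h;
    }
}

public class ConstructorDemo {
    public static void main(String[] args) {
        Box a = new Box();        // default constructor runs automatically
        Box b = new Box(3, 4);    // parameterised constructor
        System.out.println(a.width + " " + b.height); // prints "1 4"
    }
}
```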

Thursday 24 November 2016

Types of MANET

Vehicular Ad-hoc Networks (VANETs) are used for communication among cars and between cars and roadside equipment. For example, consider a university bus system in which the buses are connected: the buses travel to different parts of a city to pick up or drop off students, and together they form an ad-hoc network. 

Tuesday 22 November 2016

METHOD OVERLOADING IN JAVA

When a Java program contains more than one method with the same name but with a different number or type of parameters, the method is said to be overloaded.
Suppose we have a function named sum, i.e. sum(); then sum can be overloaded by using a different number and type of parameters.
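For instance, sum() can be overloaded as follows (MathUtil is a hypothetical class name used for illustration):

```java
// Overloading sum() by parameter count and by parameter type.
class MathUtil {
    static int sum(int a, int b)          { return a + b; }
    static int sum(int a, int b, int c)   { return a + b + c; } // different count
    static double sum(double a, double b) { return a + b; }     // different type
}

public class OverloadDemo {
    public static void main(String[] args) {
        System.out.println(MathUtil.sum(2, 3));     // calls the (int, int) version
        System.out.println(MathUtil.sum(1, 2, 3));  // calls the three-argument version
        System.out.println(MathUtil.sum(1.5, 2.5)); // calls the (double, double) version
    }
}
```

The compiler picks the version whose parameter list matches the arguments at the call site.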

Monday 21 November 2016

Just In Time (JIT) Compilation in JVM

The Just In Time (JIT) compilation in Java is essentially a process that improves the performance of Java applications at run time. The JIT compiler is a component of the Java Runtime Environment (JRE) and is enabled by default. JIT is a form of dynamic compilation, whereas javac performs static compilation. With a static compiler, the input code is translated once; unless you make changes to your original source code and recompile, the output remains the same.

ARCHITECTURE OF JAVA LANGUAGE

Compilation and Interpretation:
Java is both a compiled and an interpreted language. First the Java compiler converts the Java source code into byte code that can be executed on any platform. Then the Java Virtual Machine, also called the JVM, comes into the picture and interprets (or JIT-compiles) the byte code into machine code for the underlying platform.

Saturday 19 November 2016

JAVA BUZZWORDS

The features of Java are also called Java buzzwords. Some of the most important ones are given below:
Simple :
Java is a very simple and easy-to-understand language. One reason we say Java is simple is that it is based on C++, while complex features like pointers are not present in Java, which makes it simpler and easier to understand. In place of a destructor, the Java garbage collector reclaims memory automatically.

Friday 18 November 2016

.Net Framework and Development of .Net Framework

.Net Framework

  1. .Net Framework is software required to run .Net applications on any machine; it masks the functionalities of the operating system and executes the compiled code of the .Net languages, i.e. CIL code, under its control by providing features like:
  • Portability (Platform Independency)
  • Security
  • Automatic Memory Management
     2. In case of platform dependent languages like C, C++ etc., the compiled code (Machine Code)


CLOUD COMPUTING

There is a lot of buzz these days about cloud computing, but I see many technologists who still have the wrong intuition about it. Let me throw some light on it.

Cloud computing is a category of computing solutions in which a technology and/or service lets users access computing resources on demand, as needed, whether the resources are physical or virtual, dedicated or shared, and no matter how they are accessed.

PLATFORM INDEPENDENCY FEATURE IN JAVA

Platform independence means that a program written on one system or platform can be executed on any other platform. Object-oriented programming, on which Java is based, grew out of the concept of encapsulation, which means wrapping up data and the functions that operate on it together.
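A minimal sketch of encapsulation in Java (the Account class is a hypothetical example): the data is private and can only be reached through the methods wrapped around it.

```java
// A sketch of encapsulation: data and the functions that operate on it
// are wrapped together in one class, and the data is hidden from outside.
class Account {
    private int balance = 0;          // data: not accessible directly from outside

    void deposit(int amount) {        // function bundled with the data
        if (amount > 0) balance += amount;
    }

    int getBalance() {                // controlled read access to the data
        return balance;
    }
}

public class EncapsulationDemo {
    public static void main(String[] args) {
        Account acc = new Account();
        acc.deposit(100);
        acc.deposit(-5);              // rejected by the check inside deposit()
        System.out.println(acc.getBalance()); // prints 100
    }
}
```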

DATA TYPES IN JAVA

The data type of a variable is an attribute that tells what kind of values the variable can have.
Java supports eight primitive types under the following four categories:

  1. Integers: The integer data types are used to store whole numbers, i.e. numbers without a fractional part.
                  The integer sub-types are:                 
      • byte: The size of byte is 1 byte.
      • short: The size of short is 2 bytes.
      • int: The size of int is 4 bytes.
      • long: The size of long is 8 bytes.
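These sizes can be checked from the SIZE constants of the standard wrapper classes, which give each type's width in bits:

```java
// Printing the sizes of the four integer types; the SIZE constants are in bits,
// so dividing by 8 gives the size in bytes.
public class IntSizes {
    public static void main(String[] args) {
        System.out.println("byte:  " + Byte.SIZE / 8 + " byte");    // 1
        System.out.println("short: " + Short.SIZE / 8 + " bytes");  // 2
        System.out.println("int:   " + Integer.SIZE / 8 + " bytes"); // 4
        System.out.println("long:  " + Long.SIZE / 8 + " bytes");   // 8
    }
}
```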

What is .Net? Why .Net?

.Net is a product of Microsoft designed to be Platform Independent (Portable) and Object-Oriented (Security and Re-usability) which can be used in the development of various kinds of applications like:

  • Desktop Applications
  • Character User Interface (CUI)
  • Graphical User Interface (GUI)
  • Web Applications
  • Mobile Applications

To develop the above applications, we are provided with the following things using .Net:

Wednesday 16 November 2016

EXECUTION OF JAVA PROGRAM

Execution of Java program can take place in two ways that are as follows:

  • Static Loading and
  • Dynamic Loading
In static loading, a block of code is loaded into memory before it is executed; that is, the code is loaded into RAM (Random Access Memory) whether or not it ever gets executed. Static loading takes place when executing structural programming languages like the C programming language, which follows a top-down approach.


In dynamic loading, on the other hand, a block of code is loaded into memory only when it needs to be executed. Java program execution follows dynamic loading.
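A minimal sketch of dynamic loading in Java: Class.forName loads the named class at the moment the call runs, not at program start. The class name used here, java.util.ArrayList, is just a standard-library example.

```java
// A sketch of dynamic loading: the class is loaded only when first referenced.
public class DynamicLoadDemo {
    public static void main(String[] args) throws Exception {
        // java.util.ArrayList is loaded into memory at this point, not before
        Class<?> c = Class.forName("java.util.ArrayList");
        Object list = c.getDeclaredConstructor().newInstance();
        System.out.println(c.getName());                    // prints "java.util.ArrayList"
        System.out.println(list instanceof java.util.List); // prints "true"
    }
}
```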
The following steps are followed while executing Java program:

KEYWORDS AND IDENTIFIERS IN JAVA

The keywords of a programming language characterise that language. A Java program can contain any number of keywords, and each keyword has its own meaning. Keywords are reserved words that cannot be used as variable names or identifiers. These special words were defined at the time the Java programming language was designed.

What is Platform, Platform Dependent Applications and Platform Independent Applications?

Platform:

  1. It is an environment under which an application executes.
  2. A platform is a combination of operating system and microprocessor.
Fig: Platform

Platform Dependent Applications:

Applications developed using languages that existed in the market before 1995 are platform-dependent applications, i.e. an application developed targeting one operating system cannot execute on another operating system.
Example: C++ language (Windows operating system)

Source Code –> Compiler –> Machine Code (.exe)

Tuesday 15 November 2016

MAP REDUCE

Map Reduce is a technique for processing the huge data stored in the Hadoop distributed file system. The MapReduce algorithm contains two important tasks, namely Map and Reduce. The Map component takes a data set and converts it into another data set in which individual elements are split into key/value pairs. Then the reducer comes into the picture; its task is to take the output from the maps as input and combine those inputs to generate the final output. The number of maps is equal to the number of input splits.
There are basically four input formats for a file:
1              TextInputFormat
2              KeyValueTextInputFormat
3              SequenceFileInputFormat
4              SequenceFileAsTextInputFormat

TextInputFormat is the default format; the other three are explicitly specified in the driver code so that the record reader understands them. If the file format is TextInputFormat, the record reader reads one line at a time from its corresponding input split and converts it into a (byte offset, entire line) key/value pair. If the file format is KeyValueTextInputFormat, it splits each line into key and value on the tab character.
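The two phases described above can be sketched in plain Java. This is not the Hadoop API; it is a minimal in-memory analogue (class and method names are made up for illustration) showing how map emits (word, 1) pairs and reduce sums the values per key:

```java
import java.util.*;

// Not the Hadoop API: a minimal in-memory analogue of the Map and Reduce phases.
public class MiniMapReduce {
    // Map phase: split a line of text into (word, 1) pairs
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String word : line.split("\\s+")) {
            pairs.add(new AbstractMap.SimpleEntry<>(word, 1));
        }
        return pairs;
    }

    // Reduce phase: sum the values for each key
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            counts.merge(p.getKey(), p.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] splits = { "big data big", "data big" }; // two input splits
        List<Map.Entry<String, Integer>> all = new ArrayList<>();
        for (String s : splits) all.addAll(map(s));       // one map call per split
        System.out.println(reduce(all));                  // prints {big=3, data=2}
    }
}
```

In real Hadoop, the framework additionally sorts and shuffles the pairs between the two phases and runs one map task per input split.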

 


Architecture of Map Reduce

FEATURES AND ARCHITECTURE OF HDFS

FEATURES:

1.      It is suitable for the distributed storage and processing.
2.      To interact with HDFS, there is a command line interface.
3.      The built-in servers of name node and data node help users to easily check the status of cluster.
4.      Streaming access to file system data.
5.      HDFS provides file permissions and authentication.



For accessing data stored in HDFS, Map-Reduce comes into the picture, which is discussed after HDFS. The architecture of HDFS is shown below-





HDFS (HADOOP DISTRIBUTED FILE SYSTEM)

HDFS is an important component of Hadoop. HDFS is a specially designed file system for storing huge data-sets on a cluster of commodity hardware with streaming access patterns. Here commodity hardware refers to cheap hardware. HDFS uses a block size of 64 MB that can be extended up to 128 MB depending upon the need and type of application. Normal file systems use a block size of 4 KB; HDFS by default uses 64 MB. The reason for using 64 MB blocks is that with 4 KB blocks the metadata for huge files would grow enormously. For example, if we want to store 200 MB of data, the data will be split into 4 blocks: three blocks of 64 MB and a single block of 8 MB.
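The 200 MB example above is just ceiling division by the block size; a small sketch (the class and method names are hypothetical):

```java
// Checking the example above: splitting a 200 MB file into 64 MB HDFS blocks.
public class BlockSplit {
    static int blocksNeeded(long fileMB, long blockMB) {
        return (int) ((fileMB + blockMB - 1) / blockMB); // ceiling division
    }

    public static void main(String[] args) {
        long file = 200, block = 64;
        int full = (int) (file / block);   // 3 full 64 MB blocks
        long last = file - full * block;   // 8 MB left for the final block
        System.out.println(blocksNeeded(file, block) + " blocks: "
                + full + " of " + block + " MB and one of " + last + " MB");
        // prints "4 blocks: 3 of 64 MB and one of 8 MB"
    }
}
```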

HDFS uses five types of services-
1                    Name Node
2                    Secondary Name Node
3                    Job Tracker
4                    Data Node
5                    Task Tracker

Name Node, Secondary Name Node and Job Tracker are also called Master Services (or Master Daemons or Master Nodes), and Data Node and Task Tracker are called Slave Services (or Slave Nodes or Slave Daemons).

Every Master Service can talk to the others, and similarly every Slave Service can talk to the others.
The Name Node talks to the Data Nodes and the Job Tracker talks to the Task Trackers; no other combinations of communication between them are possible.

A Data Node runs on commodity hardware, i.e. cheap hardware; we need not implement a Data Node on high-quality hardware, because HDFS by default makes 3 replicas of each file, so there is no need to worry about file loss. The Name Node, however, runs on highly reliable hardware, as it acts as the master and handles all the data nodes.


When a client needs to store data in HDFS, it approaches the Name Node and asks for space. The Name Node maintains metadata which contains all the information about the data: the space allotted to the client for storage, which replica is stored in which data node, file sizes and so on. This metadata has a wide role to play in HDFS. The Name Node then assigns data nodes for storage, maintains by default 3 replicas, and stores the complete information in the metadata file. Each data node sends a block report and a heartbeat to the name node to confirm that it is alive and working properly. If a data node sends no block report to the name node, it is considered dead; its data is then maintained at another data node and the related information is stored in the metadata. If the Name Node fails, the whole system is damaged; that is why highly reliable hardware is used for the name node, and it is called a single point of failure. 


HOW ARE WE GETTING BIG DATA?

There are different data generators like sensors, CCTV, online shopping, airlines, hospitality data, social networks like Facebook, Twitter and LinkedIn, bloggers and so on.
There are many real examples of such huge data.
Social media such as Facebook generates more than 500 TB of data in a single day, and the New York Stock Exchange generates more than 1 TB of data per day. These are some examples of Big Data.
Of all the data in the world today, roughly 90 percent has been generated in the last two years, and the remaining 10 percent was generated over the decades before, when these systems were first being introduced.

In fact, big data is about more than just the "bigness" of the data. Its key characteristics, coined by industry analysts as the "Three V's," are volume (size), velocity (speed) and variety (type). As we generate so much big data, we must be in a position to process that huge data in less time. Over time the data has grown, but processing speed has not grown enough to keep pace with it.
So our processing power must match our big data; in that sense Hadoop has been introduced as the best solution for big data. Hadoop knows very well how to store and process huge data in less time.



Monday 14 November 2016

HISTORY OF HADOOP

We are all aware of Google, a great web search engine in the web world. As Google grew through the 1990s, they had to cope with more and more data, and they started thinking about how to store huge data and how to process it. Getting a proper solution took years: in the year 2003 they described a way to store the data called GFS (Google File System), a technique for storing data, and in the year 2004 they came up with one more technique called MapReduce. While GFS is a technique for storing huge data, MapReduce is a technique for processing that huge data. The problem with Google was that they only described these techniques in white papers and never released an implementation. Later Yahoo, one of the largest search engines in the web world, introduced a technique called HDFS (Hadoop Distributed File System) using the concepts of the Google File System in the year 2006, and MapReduce in the year 2007.

Before understanding Hadoop and its core concepts (HDFS and Map-Reduce), we need to have some knowledge about Big Data.

Big Data :

Right now we are living in a data world; everywhere we look there is data, so the important thing is how to store that data and how to process it. So what exactly is Big Data?
We can define big data as data which is beyond the storage capacity and beyond the processing power of a single conventional system. In other words, Big Data is an assortment of such complex and huge data that it becomes tedious to capture, store, process, retrieve and analyse it with the help of traditional database management techniques.

What is Hadoop?

Hadoop is an open source technology or framework written in Java by the Apache Software Foundation. This framework is used to write software applications which require processing vast amounts of data (typically terabytes of data). This framework functions in parallel on large clusters, and each cluster may have thousands of nodes. Hadoop processes the data very reliably and in a fault-tolerant manner using simple programming models. Hadoop is designed to scale up from a single server to thousands of machines, each offering local computation and storage.

There are two core concepts in Hadoop, i.e., HDFS (Hadoop Distributed File System) and Map-Reduce. HDFS is provided as a file system capable of storing huge amounts of data. The Map-Reduce technology was introduced for processing such huge data. So Hadoop is a combination of HDFS and Map-Reduce. HDFS can also be defined as a specially designed file system for storing huge data sets on a cluster of commodity hardware with streaming access patterns.
As Java uses the slogan "Write once, run anywhere", meaning a program written in Java can be executed on any platform provided there is a Java environment on that platform, HDFS uses the slogan "streaming access patterns", which means write once, read any number of times, and don't try to change the contents of a file once you have stored data in HDFS.

This technology of Hadoop was introduced by Doug Cutting. Hadoop is not an acronym and has no expanded form. The charming elephant logo is named after Doug Cutting's son's toy elephant.

 
                                             Hadoop Logo


Hadoop operates on massive datasets by horizontally scaling (aka scaling out) the processing across very large numbers of servers through an approach called MapReduce. Vertical scaling (aka scaling up), i.e., running on the most powerful single server available, is both very expensive and limiting. There is no single server available today or in the foreseeable future that has the necessary power to process so much data in a timely manner.
Map-Reduce is a framework for processing such a vast amount of data by assigning the data to a number of different processors which work in parallel and give the result in a timely manner.

Sunday 13 November 2016

CREATING TOPOLOGY AND MARKING FLOWS IN NS2

We use the following code in order to have some control over the layout of the network which is to be drawn in network simulator.
Its syntax is as:
$ns duplex-link-op $n0 $n2 orient right-down
$ns duplex-link-op $n1 $n2 orient right-up

$ns duplex-link-op $n2 $n3 orient right




Assume we have four nodes n0, n1, n2, n3 which are created using the syntax stated earlier.
Also assume that the nodes are connected as n0–n2, n1–n2 and n3–n2 using duplex links, which can also be written using the syntax stated earlier.

MARKING FLOWS:

In order to mark the flows coming from different agents, i.e. to recognise which packets are coming from which nodes, we use the following code (assuming two UDP agents udp0 and udp1):
$udp0 set class_ 1
$udp1 set class_ 2
We assume that these udp agents are applied to nodes in our network.
In order to assign colors to packets coming from different nodes we use the following code:
$ns color 1 blue
$ns color 2 red


SENDING AND RECEIVING DATA BETWEEN NODES IN NS2

The following procedure shows how to send and receive data between nodes in NS2:

1 The first step is to communicate data between the nodes n0 and n1. Data always flows from one agent to another, thus we have to create agent objects to send and receive data. The following code attaches a UDP sending agent to node n0:
set udp0 [new Agent/UDP]
$ns attach-agent $n0 $udp0

2 The next step is to attach a traffic generator to the agent. Examples of traffic generators are FTP (file transfer protocol), CBR (constant bit rate), Poisson etc.
It can be written as:
set cbr0 [new Application/Traffic/CBR]
1      $cbr0 set packetSize_ 500
2      $cbr0 set interval_ 0.005
3      $cbr0 attach-agent $udp0
The above code attaches a CBR traffic generator to the UDP agent.
Line 1 sets the packet size to 500 bytes
Line 2 means that a packet will be sent every 0.005 seconds
Line 3 attaches the agent cbr0 to udp0

3 The next step is to create a null agent, which acts as a traffic sink, and attach it to node n1. Its syntax is:
set null0 [new Agent/Null]
$ns attach-agent $n1 $null0
4 The next step is to connect the two agents to each other, which is done as:
$ns connect $udp0 $null0
5 The next step is to tell the CBR agent when to start and when to stop sending data.
It can be written as:
$ns at 0.5 "$cbr0 start"
$ns at 4.5 "$cbr0 stop"



NOTE: We should put the above lines just before the line '$ns at 5.0 "finish"'

CREATING NODES AND LINKS IN NETWORK SIMULATOR WITH TCL SCRIPTING

The code we are going to describe further is to be written in the .tcl file described above.
1 A new node object is created with the command $ns node and it is written as:
set n0 [$ns node]
set n1 [$ns node]
The above written code creates two nodes and assigns them to handles n0 and n1.
2 The next step is to create links between the nodes, which is done using the following code; since we have created the nodes n0 and n1 above, we connect them as:
$ns duplex-link $n0 $n1 1Mb 10ms DropTail
This line tells the simulator object to connect the two nodes n0 and n1 with a duplex link having a bandwidth of 1 Megabit, a delay of 10 ms and a DropTail queue.


                                               


The figure above shows the two nodes with the link between them.

The next step is how to send and receive data .


                                           

Friday 11 November 2016

Network Simulator

Network Simulator (ns) is the name of a series of discrete event network simulators. The versions of ns are ns-1, ns-2 and ns-3. These discrete event computer network simulators are used in research and teaching. ns is open source software, publicly available under the GNU GPL v2 license for research, development and use. The aim of the ns project is to create an open simulation environment for computer networking research.

Tool Command Language (TCL) scripting: 

We can write TCL scripts in any text editor like Notepad++, the vi editor, etc.

WORKING WITH TCL SCRIPTS:

1 To begin with, we first need to create an object of simulator class.
Syntax is as:
set ns [new Simulator]
2 Next step is to open a file for writing nam trace data.
Its syntax is as:
a   set nf [open out.nam w]
b   $ns namtrace-all $nf
Line (a) opens the file out.nam for writing and gives it the file handle 'nf'
Line (b) tells the simulator object 'ns' to write all simulation data to the nam file 'out.nam' which is referenced by 'nf'
3 The next step in TCL scripting is to add a ‘finish’ procedure that is used to close the trace file and start the nam file.
Syntax is as:
1                      proc finish { } {
2                      global ns nf
3                     $ns flush-trace
4                      close $nf
5                     exec nam out.nam &
6                  exit 0
}
finish{} is the name of the procedure whose code is shown above.
Line (2) uses the keyword global to specify that the objects 'ns' and 'nf' defined outside can be used inside the finish{} procedure.
Line (3) flushes all pending trace data to the trace file through the object 'ns'
Line (4) closes the trace file referenced by 'nf'
Line (5) runs the nam animator on out.nam using the 'exec' command
Line (6) exits the script.

4 Next line in the TCL script is as:
$ns at 5.0 “finish”
This line tells the simulator object 'ns' to execute the finish procedure after 5.0 seconds of simulation time.
5 Finally the last line which actually starts the simulation is written as:
$ns run
After writing the script, we have to save the file with the .tcl extension.

NS2

Welcome to Network Simulator

Introduction to java Programming

Java is one of the programming languages or technologies used for developing distributed applications that make use of the client/server architecture. The Java language was developed at Sun Microsystems in 1990 under the guidance of James Gosling and others.
Originally Sun Microsystems is one of the academic

GIS

Welcome to Remote Sensing and GIS Tutorials

Gate for CSE

Welcome to Gate Help Forum

Java

Welcome to Java Tutorials

C Sharp

Welcome to C Sharp Tutorials

Wednesday 9 November 2016

Variables in JavaScript

Like other programming languages, JavaScript also supports variables. A variable is simply a placeholder for a value; it can also be called a container in which we can place our values. Thus in order to store any value we need to reserve a space in memory, that is, declare a variable, and then initialise it with some value. It is necessary to declare a variable before using it.
Variables are declared in JavaScript with the keyword var as follows:

<html>
<head><title>Java Script Variables</title>

<script type="text/javascript">
var a;
var b;
</script>

</head>
</html>
The process of assigning values to variables is called initialisation. A variable can be initialised at any point in the program before it is used; it can also be initialised at the time of its creation.
Now we assign the value 10 to variable a and the value 20 to variable b as follows:

<script type="text/javascript">
var a=10;
var b=20;
</script>

In JavaScript it is not required to specify a data type for a variable. A variable declared with the keyword 'var' accepts all values, whether character, decimal point or integer.
That is why JavaScript is also called an untyped language.

In JavaScript, variables are of two types:
local variables and global variables

Variables which are declared within a function block are called local variables, and the scope of these variables lies only within that function block.

Variables declared outside of any function block are called global variables. These variables can be accessed anywhere within the program.

Keywords in JavaScript

Keywords are reserved words that cannot be used as variable names in our code; they are special words that were reserved when the language was designed.
Some of the reserved words in JavaScript are as follows:
abstract
boolean
byte
char
class
const
debugger
double
enum
export
extends
final
float
goto
implements
import
int
interface
long
native
package
private
protected
public
short
static
super
synchronized
throws
transient
volatile



Tuesday 8 November 2016

JavaScript Statements and Comments

Like other programming languages, in JavaScript the code consists of a number of statements, and a statement may be defined as an instruction or a command given to the system to do something; this is how a programming language works. We give the system a set of instructions or statements, which in turn perform the required task.
Comments may be defined as small understandable notes placed in the code. It is good practice to keep plenty of comments in your programs, as they improve the readability of the code and make it easy for other programmers to understand it. It is also possible to disable some statements by turning them into comments (commenting them out).

In JavaScript there are two kinds of comments:
single-line comments and
multi-line comments
A single-line comment acts as a small note on one line of code, while a multi-line comment can disable a complete block of code in a program; that is, the instructions written inside /*......*/ will not be executed at all.

A single-line comment begins with:

// ...............CODE ..............

while a multi-line comment is written as:
/* ...............CODE ............... */

The following program shows the use of a single-line comment and a multi-line comment in a program

<html>
<head>
<title>JavaScript</title>
<script type ="text/javascript">
// Prints a message (single-line comment)
/* A multi-line comment
   can span several lines */
document.write("welcome to JavaScript");
</script>
</head>
</html>

Monday 7 November 2016

JavaScript




What is JavaScript?

JavaScript is the most popular client-side scripting language at the moment; it was developed at Netscape in 1995. By client-side scripting language we mean that all the code gets executed on the client's computer, i.e. the user's computer.






This is how we use the Internet every day: we, being the client, request pages from the server, which in turn gives back the response desired by the client or user.

Basically there are two different types of scripting languages:
          Client-side scripting languages, which get executed on the client's system, like JavaScript
          Server-side scripting languages, which get executed on the server system, like PHP, ASP etc.

JavaScript is quite different from the server-side technologies, because server-side scripting is the way we actually connect to databases and build shopping carts, web-based mail programs and so on. The two have completely different purposes, but they are often used together.

The main use of JavaScript is to change the appearance of web pages, that is, to design highly interactive web pages that cannot be achieved with plain HTML.
If we are using JavaScript, we can have animations, sliding images and other interactive features.
JavaScript can also be used for form validation, which is very important.
Another application of JavaScript is that we can create simple applications like calculators and even games.

How to include JavaScript in our code?

Basically, JavaScript code is included between script tags, which are then placed in the head section of the HTML as:
<html>
<head>
<title>Tutorial on JavaScript</title>
<script type ="text/javascript">
    document.write("Welcome to Java Script");
</script>
</head>
<body>
</body>
</html>


Even though we have not included anything in the body section, when we execute this code it will show the output:
Welcome to Java Script






Sunday 6 November 2016

Introduction to Software

What is Software?

As per the industry standard, a digitalised automated process is called software.
When we convert a manual process into an automated system, it is called software.
Here digitalised means the process is completed without human intervention or interaction.
Software can also be defined as a set of programs or instructions that are designed for a specific task. Several programs combined together like a single unit form a software component.
Types of Software:
Software can be broadly classified into two types:
System software
Application software

System Software:

Software which is designed for general purposes and does not have task-specific limitations is called system software.
System software is further classified into three types:
Operating systems like Linux, Windows, Unix, DOS
Translators like compilers and interpreters
Packages like linkers, loaders, editors

Application Software:

 Software which is designed for a specific task only and has the corresponding limitations is called application software.
All client-specific projects are application software, because we develop each project for a specific purpose.
Application software can be:
Application packages such as MS Office, Oracle
Special-purpose software like Tally
Microsoft Office is a Microsoft product which maintains information in the form of documents.
Oracle is a database which maintains important information in the form of tables.
Tally is an application software which maintains account-related information.