Oct 30, 2015

Tree data structure Interview questions - Detailed explanation and sample code in Java

Oct 25, 2015

Binary trees - Different types of binary trees and their properties

A tree is a non-linear data structure (in contrast to linked lists, queues, and stacks, which are linear data structures) that forms a hierarchical structure in which the order of elements is not important. In a tree, one node is the parent node and all other nodes inserted under it are child nodes. Each node of a tree can have zero or more child nodes.
A tree is termed a binary tree if each node can have zero, one, or two child nodes.
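To make the structure concrete, below is a minimal sketch of a binary tree node in Java (the class and field names are illustrative, not from the original post):
// A binary tree node holds a value and references to at most two children.
public class TreeNode {
 int data;       // value stored in this node
 TreeNode left;  // reference to left child, or null
 TreeNode right; // reference to right child, or null

 TreeNode(int data) {
  this.data = data;
 }
}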

Oct 22, 2015

Print even and odd numbers using two different threads and the Thread interrupt() mechanism

Printing even and odd numbers using two different threads is a very basic Java interview question on thread handling. We have discussed it here - Print Even and Odd number using two different thread. The main agenda of this post is to display even and odd numbers using the Thread interrupt() mechanism. In other words, two threads sleep for an infinite time, an interrupt is sent periodically to both threads, and on interruption they wake up and complete their task. The sample code below demonstrates this.
package com.devinline.thread;

public class EvenOddUsingInterrupt {
 // volatile ensures counter updates by the main thread are visible to both worker threads
 public static volatile int counter;

 public static void main(String[] args) throws Exception {
  Thread even = new Thread(new EvenProducer(), "Even");
  Thread odd = new Thread(new OddProducer(), "Odd");
  even.start();
  odd.start();
  // driver loop: increment the shared counter and interrupt both workers every second
  while (true) {
   counter++;
   even.interrupt();
   odd.interrupt();
   Thread.sleep(1000L);
  }
 }

 private static class EvenProducer implements Runnable {
  public void run() {
   int oldNum = 0;
   while (true) {
     try {
      // sleep "forever" - this thread wakes up only when main interrupts it
      Thread.sleep(Long.MAX_VALUE);
     } catch (InterruptedException e) {
      // interrupt received - fall through and check the counter
     }
    int num = counter;
    if (num != oldNum && num % 2 == 0) {
     System.out.println(Thread.currentThread().getName()
       + " thread produced  " + num);
     oldNum = num;
    }
   }
  }
 }

 private static class OddProducer implements Runnable {
  public void run() {
   int oldNum = 0;
   while (true) {
     try {
      // sleep "forever" - this thread wakes up only when main interrupts it
      Thread.sleep(Long.MAX_VALUE);
     } catch (InterruptedException e) {
      // interrupt received - fall through and check the counter
     }
    int num = counter;
    if (oldNum != num && num % 2 == 1) {
     System.out.println(Thread.currentThread().getName()
       + " thread produced  " + num);
     oldNum = num;
    }
   }
  }
 }
}
=====Sample output=======
Odd thread produced  1
Even thread produced  2
Odd thread produced  3
Even thread produced  4
Odd thread produced  5
Even thread produced  6
Odd thread produced  7
Even thread produced  8
.......
=====================

Oct 19, 2015


Mapreduce program in eclipse - Generate hadoop2x-eclipse-plugin and configure with eclipse

In order to write map-reduce programs in Eclipse, we need to place the hadoop2x-eclipse-plugin jar inside the plugins directory of the Eclipse installation. The main agenda of this post is to generate hadoop2x-eclipse-plugin and run a sample Hadoop program in Eclipse. This post has been divided into three parts: install Eclipse on Ubuntu 13.04, generate the hadoop2x-eclipse-plugin jar, and finally run a sample map-reduce program in Eclipse. Part 2 of this post may be skipped, as I have already generated the hadoop2x-eclipse-plugin jar: Download hadoop2x-eclipse-plugin jar.

Install eclipse in Ubuntu 13.04

1. First check whether you require the 64-bit or 32-bit distribution and download the Eclipse distribution accordingly. Place the downloaded Eclipse distribution wherever convenient. To check whether the machine is 32-bit or 64-bit, run either of the following commands (uname -m or getconf LONG_BIT):
zytham@ubuntu:~$ uname -m
x86_64
zytham@ubuntu:~$ getconf LONG_BIT
64
2. Now extract the downloaded distribution (eclipse-jee-juno-SR2-linux-gtk-x86_64.tar.gz) using the following command. It creates a directory eclipse in the current directory.
zytham@ubuntu:~$ tar -zxvf eclipse-jee-juno-SR2-linux-gtk-x86_64.tar.gz
3. Move the extracted folder "eclipse" to /opt using the following command. It will create a new directory /opt/eclipse.
zytham@ubuntu:~$ sudo mv eclipse /opt/
4. We have now set up Eclipse on our machine, and it can be launched from the shell using the following command:
zytham@ubuntu:/opt$ /opt/eclipse/eclipse -clean &

5. For adding Eclipse to the Unity launcher, refer to this.

Generate hadoop2x-eclipse-plugin jar

1. Download the hadoop2x-eclipse-plugin project and extract it at some convenient location; say hadoop2x-eclipse-plugin-master is the name of the extracted directory.
zytham@ubuntu:~/Downloads$ tar -zxvf hadoop2x-eclipse-plugin-master.tar
2. Now, using the "ant" build tool, we build the downloaded project and generate the Hadoop plugin jar for Eclipse.
zytham@ubuntu:~/Downloads$ cd hadoop2x-eclipse-plugin-master/
zytham@ubuntu:~/Downloads/hadoop2x-eclipse-plugin-master$ cd src/contrib/eclipse-plugin
zytham@ubuntu:~/Downloads/hadoop2x-eclipse-plugin-master/src/contrib/eclipse-plugin$ ant jar -Dversion=2.6.1 -Dhadoop.version=2.6.1 -Declipse.home=/opt/eclipse -Dhadoop.home=/usr/local/hadoop2.6.1
It will take some time, and once the build process succeeds, the final jar will be generated at the following location: hadoop2x-eclipse-plugin-master/build/contrib/eclipse-plugin/hadoop-eclipse-plugin-2.6.1.jar
Note:- If you do not want to build the Eclipse plugin jar, or your build failed, Download hadoop-eclipse-plugin-2.6.1.jar.

Run sample map reduce program in eclipse

  1. Add hadoop-eclipse-plugin-2.6.1.jar to the plugins directory of the Eclipse installation (/opt/eclipse/plugins). Now start Eclipse using this command:-   /opt/eclipse/eclipse -clean &
  2. If you have added hadoop-eclipse-plugin correctly, right after opening Eclipse you should see the "DFS Locations" node in the project explorer section (shown in the following diagram). 
  3. Create a Map/Reduce project in Eclipse. Go to File -> New -> Project and select the Map/Reduce project type from the wizard, as shown in the above diagram (right side).
    Give a valid project name and configure the Hadoop installation directory. Click Next, and on the Java settings page check the box "Allow output folders for source folders" (as highlighted in the following diagram). Click Finish and we will have a map-reduce project in the project explorer.
  4. Here we are going to run the word count example. Create a class (say WordCountSampleExample.java) in the given project and copy the following word count example.

import java.io.IOException;
import java.util.*;
        
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
        
public class WordCountSampleExample {
/*Map class which the job will use, executing its map method*/      
 public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
        
    public void map(LongWritable key, Text value, Context context)
                   throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one);
        }
    }
 }
 
 /*Reduce class which the job will use, executing its reduce method*/         
 public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
 }
        
 public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    
  /*Created a job with name wordCountExample*/
    Job job = new Job(conf, "wordCountExample");
    
  /*Handle String and int the Hadoop way: for String, Hadoop uses the
   Text class and for int it uses IntWritable*/
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
  
    /*Configure mapper and reducer classes, whose map and reduce methods the job will use*/
    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);
    
  /*Input format set as TextInputFormat and output format as TextOutputFormat*/    
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    
  /*addInputPath - passes input file path to job - here passed as program parameter */
    FileInputFormat.addInputPath(job, new Path(args[0]));
  /*setOutputPath - passes output path to job - here passed as program parameter*/
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    
  /*Submit the job to the cluster and wait for it to finish.*/
    job.waitForCompletion(true);
 }
        
}

Input to this map-reduce program is input.txt (download from here and place it in the project home directory), and output is stored in the output directory configured next.

Passing input and output as program arguments:-
 Right click on the project, go to Run As -> Run Configurations. Click on the Arguments tab and add input.txt output (separated by a space) in it (as shown in the following diagram).
Read in detail how to pass program arguments and VM arguments in Eclipse.

Run the map-reduce program:- Right click on the class and choose Run As -> Run on Hadoop.
After successful execution, an output directory will be created and the word counts are stored in the file part-r-00000. Below are the input and output file contents: "Key" appears 3 times in the input, so "Key 3" is displayed in the output; similarly, "=" appears 6 times in the input file, which the output indicates.
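The files themselves were shown as a screenshot in the original post; as a rough illustration of the part-r-00000 format (tab-separated word and count, values taken from the description above):
=====part-r-00000 (illustrative)=====
=	6
Key	3
.......
=====================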

HDFS location access via eclipse plugin:- 

1. Open 'Map/Reduce' perspective.
    Goto Window --> Open Perspective --> Other and select 'Map/Reduce' perspective.
2. Right click on Map/Reduce Locations tab and create New Hadoop location.
3. Configure DFS location in following window as follows:-
  • Location name - Give any valid name.
  • Map/Reduce(V2) master : Address of the Map/Reduce master node (where the JobTracker is running).
    Host name - Find the IP address of the node (machine) where the Hadoop service is running, using ifconfig.
    hduser1@ubuntu:/usr/local/hadoop2.6.1/bin$ ifconfig
    Or, if Hadoop is installed locally, use localhost for the host.
    Port:- To find the port associated with the JobTracker, hit the URL http://192.168.213.133:8088/conf or http://localhost:8088/conf in a browser and search for the property name "mapreduce.jobtracker.http.address"; the value associated with it gives the port. For me it appears like this; the port number is 50030.
    <property>
     <name>mapreduce.jobtracker.http.address</name>
     <value>0.0.0.0:50030</value>
     <source>mapred-default.xml</source>
    </property>
    
  • DFS master:- Address of the Distributed FileSystem master node (where the NameNode is running).
    Host name:- By default, it will take the same address as the Map/Reduce(V2) master host name; change it accordingly if the file system is running on a different node.
    Port:- To find the port number, search for the property name "fs.defaultFS" in http://192.168.213.133:8088/conf or http://localhost:8088/conf; the value associated with it gives the DFS master port. For me it appears like this; the port number is 54310.
    <property>
     <name>fs.defaultFS</name>
     <value>hdfs://hostname:54310</value>
     <source>core-site.xml</source>
    </property>
    
Refer to the following diagram and configure accordingly. Once we have configured the location, we are connected with the DFS and can view the files/tree structure of stored files.

Oct 15, 2015


Pros and Cons of Lock (java.util.concurrent.locks) over synchronized methods and statements

An explicit lock (Lock) and an implicit lock (synchronized methods & statements) are both tools for controlling access to a shared resource by multiple threads. Every object in Java has an implicit monitor lock; synchronized methods and statements/blocks take advantage of that, and the JVM provides automatic lock management. Explicit locking, however, is achieved using Lock/ReadWriteLock, introduced in Java 1.5 with the intention of giving the programmer more flexibility and control over locking. The main agenda of this post is to discuss the pros and cons of explicit locking over synchronized methods/statements.

Pros and Cons of explicit locking (Lock/ReentrantLock) over implicit locking (synchronized methods and statements):- 

1. High flexibility and programmatic control  -
Pros:- With synchronized methods and statements, the JVM does automatic management of the lock (release of the lock) and the developer does not have any control over it. The Lock interface gives programmatic control and flexibility by providing an API for non-blocking attempts to acquire a lock (tryLock(), tryLock(long, TimeUnit)), acquiring the lock (lockRef.lock()), releasing the lock (lockRef.unlock()), etc.
Cons:- Lock is not as easy to use as synchronized methods and statements (where the JVM does automatic release of the lock, and optimization too - just acquire the implicit lock and forget). When dealing with an explicit lock, it is the developer's responsibility to acquire the lock and release it appropriately.
Note:- Brian Goetz, author of "Java Concurrency in Practice", states that if you forget to wrap the unlock() call in a finally block, your code will probably appear to run properly, but you've created a time bomb that may well hurt innocent bystanders. So, handle explicit locks with caution.
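As an illustration of that flexibility, here is a minimal sketch of a timed lock acquisition with tryLock (the timeout value and messages are illustrative):
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
 private static final Lock lock = new ReentrantLock();

 public static void main(String[] args) throws InterruptedException {
  // Timed attempt: give up if the lock is not acquired within 2 seconds.
  if (lock.tryLock(2, TimeUnit.SECONDS)) {
   try {
    System.out.println("Lock acquired, accessing shared resource");
   } finally {
    lock.unlock(); // always release in finally
   }
  } else {
   System.out.println("Could not acquire lock, doing something else");
  }
 }
}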

2. Chained locking supported -
Pros:- Synchronized methods and statements allow multiple locks to be held, but do not allow chained locking. In other words, all locks are released by the JVM in the same lexical scope in which they were acquired, and in the opposite order. Refer to the following diagram for better understanding: locks acquired in critical section 3 must be released in that scope before control returns to critical section 2.
Implicit monitor lock acquire and release cycle for multiple locks
Lock, however, supports chained locking: it is not mandatory to release a lock in the same lexical scope in which it was acquired, which allows multiple locks to be acquired and released in any order. So a lock acquired in critical section 2 can be released in critical section 3, and the critical section 3 lock can be released somewhere else.
Cons:- The scoping mechanism for synchronized methods and statements makes it much easier to program with monitor locks and helps avoid many common programming errors involving locks. With Lock, however, the programmer's responsibility increases ("with great power comes great responsibility" - here, with the increased flexibility comes additional responsibility). When locking and unlocking occur in different scopes, all code that executes while the lock is held must be protected by try-finally or try-catch to ensure that the lock is released when necessary; a chained-locking sketch follows the basic lock/unlock idiom below.
Lock lockObj = new ReentrantLock(); // or another Lock implementation
lockObj.lock();
 try {
  // access the resource protected by this lock
 } finally {
  lockObj.unlock();
 }
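To illustrate chained (hand-over-hand) locking, here is a minimal sketch that traverses a linked structure while holding at most two node locks at a time; the Node class is hypothetical:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class HandOverHandDemo {
 static class Node { // hypothetical lockable list node
  final Lock lock = new ReentrantLock();
  int value;
  Node next;
 }

 // Acquire the next node's lock before releasing the current one, so a lock
 // acquired in one scope is released in another - legal only with explicit Lock.
 static void traverse(Node head) {
  Node current = head;
  current.lock.lock();
  try {
   while (current.next != null) {
    Node next = current.next;
    next.lock.lock();      // acquire next before releasing current
    current.lock.unlock(); // released outside its acquiring scope
    current = next;
   }
  } finally {
   current.lock.unlock(); // release the last node's lock
  }
 }
}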

3. Performance improvement:- ReentrantLock (a concrete implementation of the Lock interface) offered better performance than intrinsic locking in Java 1.5; in Java 1.6 this performance gap was minimized. Refer to the following diagram showing the advantage of explicit locking over intrinsic locking - ReentrantLock performance on Java 5.0 and Java 6.
Note:- Even though the performance of explicit locking is better than that of implicit locking, performance alone should not be the criterion for selecting it over implicit locking.
Intrinsic Locking Versus ReentrantLock Performance on Java 5 and Java 6 (Diagram reference :Java Concurrency in Practice by Brian Goetz)

4. Fairness locking support:-
The ReentrantLock constructor provides the flexibility to create two types of lock: a fair lock and a non-fair lock (the default). Threads acquire a fair lock in the order in which they requested it, whereas a non-fair lock permits barging (forceful lock acquisition). By default, implicit locking provides non-fair locking (fair queuing can be achieved using a semaphore). A construction sketch follows the diagram below.
Note:- 
  • When a non-fair lock is created, any thread can jump the queue and acquire the lock on the given object if, at that moment, the lock is free and available to be acquired.
  • With a fair lock, a newly requesting thread is queued if the lock is held by another thread or if threads are queued waiting for the lock. However, with a non-fair lock, the thread is queued only if the lock is currently held.
  • Performance of a non-fair lock is better than that of a fair lock. With a fair lock, pausing one thread and starting another causes substantial overhead, and performance degrades as the thread count increases. Refer to the following diagram, which shows the throughput of fair and non-fair locks. 
Fair Versus Non-fair Lock Performance (Diagram reference  :Java Concurrency in Practice by Brian Goetz)
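A minimal construction sketch of fair versus non-fair ReentrantLock (the boolean fairness parameter is part of the ReentrantLock constructor API):
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
 // Non-fair (default): a requesting thread may barge ahead of queued threads.
 static final ReentrantLock nonFairLock = new ReentrantLock();

 // Fair: threads acquire the lock in FIFO request order.
 static final ReentrantLock fairLock = new ReentrantLock(true);

 public static void main(String[] args) {
  System.out.println("nonFairLock fair? " + nonFairLock.isFair()); // false
  System.out.println("fairLock fair?    " + fairLock.isFair());    // true
 }
}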

What should we choose - implicit lock or explicit lock?

An implicit lock is very easy to use, and the JVM does the lock management for it. However, ReentrantLock has an edge over the implicit lock in terms of performance.
It is recommended to prefer implicit locking (synchronized methods and statements) over explicit locking unless we need explicit locking's advanced features: timed, polled, or interruptible lock acquisition, fair queuing, or non-block-structured locking.
Note:-
  • If we are using Java 5, threading problems (like deadlocked threads) cannot be debugged when using ReentrantLock, because the JVM does not have any information about which threads hold ReentrantLocks (this issue was addressed in Java 1.6). With implicit locks, however, thread dumps can detect and identify deadlocked threads, because thread dumps show the mapping between call frames and locks.
  • As JVM performance improves, synchronized is likely to benefit more than ReentrantLock, because synchronized is built into the JVM and the JVM manages its locks internally. So it is not a good idea to choose ReentrantLock over synchronized for performance reasons.

Oct 13, 2015


Java I/O - Internal details of the Java input and output classes (java.io.*)

Java supports input and output (I/O) operations with character/text and binary streams. The Java I/O model is highly flexible, so it can accommodate data sources like files, arrays, piped streams, etc. Since ASCII (8-bit) character encoding was not sufficient to cover all possible character sets, Java uses 16-bit Unicode to represent characters.

Unicode and UTF-8 encoding in Java:- 

In Java, text (a collection of characters) is represented as two-byte Unicode characters. Unicode uses two bytes to represent characters from various character sets throughout the world (Japanese, Chinese, etc.). Java is a platform-independent language, while each platform has its own native character set (which usually has some mapping to the Unicode standard), so Java needs a way to map the native character set to Unicode. It is Java's I/O classes that translate native characters to and from Unicode. In each platform-dependent JDK, there is a "default mapping" that is used for these translations.
Java internally uses the UTF-8 encoding scheme to store strings in class files. UTF-8 is a simple encoding of Unicode characters and strings that is optimized for ASCII characters.
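As a small illustration of encodings at work, the following sketch (the sample string is assumed for illustration) shows how the same text maps to different byte counts under UTF-8 and UTF-16:
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
 public static void main(String[] args) {
  String text = "añ"; // 'a' is ASCII; 'ñ' is not

  // UTF-8 is optimized for ASCII: 1 byte for 'a', 2 bytes for 'ñ'.
  byte[] utf8 = text.getBytes(StandardCharsets.UTF_8);
  System.out.println("UTF-8 bytes: " + utf8.length);   // 3

  // UTF-16 uses two bytes per char, plus a two-byte byte-order mark.
  byte[] utf16 = text.getBytes(StandardCharsets.UTF_16);
  System.out.println("UTF-16 bytes: " + utf16.length); // 6
 }
}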

Along with character data, Java provides various classes (java.io.*) and APIs to deal with binary streams. The Java I/O model can be broadly classified into two categories:-
1. Character/text input and output - native encoding to two-byte Unicode mapping
2. Binary input and output - no mapping required; binary data I/O in the form of streams.
The following diagram depicts a holistic view of Java I/O and the classes involved in supporting both text I/O and binary I/O.
Block diagram of Java I/O Classes interdependency
Java I/O classification, class hierarchy and its interdependency 
  • For handling character I/O, Java provides the abstract classes Reader and Writer, and for byte streams InputStream and OutputStream - these provide flexible read() and write() methods and give an abstraction capability to subclasses.
  • For file-related operations (character and byte stream read/write), Java provides concrete classes like FileReader, FileWriter, FileInputStream and FileOutputStream.
  • For handling byte arrays/character arrays - ByteArrayInputStream, ByteArrayOutputStream / CharArrayReader, CharArrayWriter.
  • For String objects, read and write operations are carried out with StringReader and StringWriter.
  • String representations of primitive and object values are produced using PrintStream and PrintWriter. (Not shown in the above diagram.)
Note:-
  1. Interconversion of character and byte streams is achieved using the bridge classes InputStreamReader (binary to character) and OutputStreamWriter (character to binary). 
  2. Java I/O provides buffered classes (BufferedInputStream, BufferedOutputStream, BufferedReader, BufferedWriter) which minimize read and write time. Instead of reading/writing one byte at a time, the buffered classes buffer large chunks of bytes and allow reading/writing a large chunk at a time; see the sketch below. Refer to why buffering is recommended for all character/byte stream I/O - notice the difference in write-operation execution time with and without a buffer. 
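Here is a minimal sketch combining a bridge class with buffering to read a text file line by line (the file name and charset are illustrative):
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

public class BufferedBridgeDemo {
 public static void main(String[] args) throws IOException {
  // InputStreamReader bridges bytes to characters; BufferedReader adds buffering.
  try (BufferedReader reader = new BufferedReader(
    new InputStreamReader(new FileInputStream("input.txt"), "UTF-8"))) {
   String line;
   while ((line = reader.readLine()) != null) {
    System.out.println(line);
   }
  } // try-with-resources closes the streams and releases the file descriptor
 }
}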

Inner details of each of the above classes, the methods they support, and sample code showing how to use them: 

InputStream and OutputStream:- 
  1. InputStream (IS) is an abstract class (the superclass of all classes that represent an input stream of bytes). It gives flexibility to concrete classes (classes that extend IS) by providing overloaded read() methods (abstract and concrete implementations), so that concrete classes can define their own version of the read() method and can also use the read() methods of IS. The signatures of the abstract and concrete read methods of InputStream are as follows:-
    //Reads the next byte of data from the input stream; each concrete class must implement this method to return the next byte of the stream
    public abstract int read() throws IOException;
    
    public int read(byte b[]) throws IOException {
     // b is byte buffer into which maximum b.length bytes can be read.
    }
    
    public int read(byte b[], int off, int len) throws IOException {
     // reads bytes from the stream, storing them starting at index off;
     // the maximum number of bytes read is len
    }
    
  2. Note:- Internally, all read methods (the concrete implementations provided in IS) execute the abstract read() method, so it is the responsibility of the concrete class to implement the abstract read() method; see the sketch just after this list.
  3. OutputStream (OS) is an abstract class (the superclass of all classes that represent an output stream of bytes). It provides an abstract write() method and overloaded write() methods. The signatures of the write() methods are as follows:-  
    // Writes the specified byte (the low-order eight bits of b) to this output stream.
    public abstract void write(int b) throws IOException;
    
    public void write(byte b[]) throws IOException {
     //Writes b.length bytes from the specified byte buffer(array b) to the output stream.
    }
    
    public void write(byte b[], int off, int len) throws IOException {
     // Writes len bytes from the specified byte array, starting from offset off, to the output stream.
     // Internally it calls the single-byte write() method for each of the len bytes. 
    }
    
  4. Note:- Internally, all write methods (the concrete implementations provided in OS) execute the abstract write() method, so it is the responsibility of the concrete class to implement the abstract write() method.
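To see this contract in action, here is a minimal sketch of a custom InputStream that serves bytes from an in-memory array; implementing the single abstract read() is enough, because the inherited read(byte[]) and read(byte[], int, int) are built on top of it (class and field names are illustrative):
import java.io.IOException;
import java.io.InputStream;

public class FixedBytesInputStream extends InputStream {
 private final byte[] data;
 private int pos = 0;

 public FixedBytesInputStream(byte[] data) {
  this.data = data;
 }

 @Override
 public int read() throws IOException {
  // Return the next byte as an int in the range 0-255, or -1 at end of stream.
  return (pos < data.length) ? (data[pos++] & 0xFF) : -1;
 }
}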
FileInputStream, FileReader , FileOutputStream and FileWriter:-
  1. FileInputStream (FIS) reads input bytes from a file and is commonly used for reading streams of raw bytes such as image data. The constructor of FIS is responsible for opening a connection with the abstract file name (binding the stream with the file). The signatures of the FileInputStream constructors, which create a file descriptor and a connection with the abstract file, are as follows:-
    public FileInputStream(String name) throws FileNotFoundException {
     // Creates an InputStream and opens a connection with the abstract file name.
     // FileNotFoundException is thrown if the file does not exist in the file system.
    }
    public FileInputStream(File file) throws FileNotFoundException {
     // Creates an InputStream and opens a connection with the File object.
    }
    
    public FileInputStream(FileDescriptor fdObj) {
     // Creates an InputStream that uses an existing connection (file descriptor object fdObj).
     // SecurityException is thrown if read access is not allowed for the given file descriptor.
    }
     
    
    In order to manage this connection, the operating system kernel maintains a list of integers, one per opened file, termed "file descriptors". File descriptors are critical system resources, so every time we open a file we should close it using the close() method. Once the close() method is called on the given FIS, it releases the descriptor and associated system resources back to the OS kernel.
    FIS extends the abstract class InputStream and inherits read() methods from it. Since InputStream has an abstract read() method, FIS provides the definition of that method in the form of a native method implementation. FIS also provides native implementations for the open(), close(), and readBytes() methods. The signatures of the native methods are as follows:-
    //Open a connection with specified abstract file name, else throw exception if file does not exist
    private native void open(String name) throws FileNotFoundException;
    
    //It is being called by close() method and releases all system resources to OS kernel.
    private native void close0() throws IOException;
    
    //The native modifier indicates that this method is implemented in another language (typically C), not in Java.
    //read() returns the value of the byte read, in the range 0 to 255, or -1 if end of file is reached.
    public native int read() throws IOException;
    private native int readBytes(byte b[], int off, int len) throws IOException;
    Note:- In Java, performance-critical and hardware-interaction code is implemented as native methods in C, and JNI (Java Native Interface) provides the communication channel between them.
  2. FileOutputStream (FOS) writes bytes to a file. FOS is commonly used for writing streams of raw bytes such as image data. Similar to FileInputStream, FOS creates an output stream connection with the abstract file, and the OS kernel manages a file descriptor for it. FOS also has a close() method for releasing system resources. The signatures of the FileOutputStream constructors are as follows:-
    public FileOutputStream(String name) throws FileNotFoundException {
     //Creates an output file stream to write to the file; bytes are written at the start of the file.
     //A new file descriptor object is created to represent this connection.
    }
    
    public FileOutputStream(String name, boolean append){
     //Creates an output file stream to write to the file;
     //if the append boolean is true, bytes are written at the end of the file, not at the beginning.
    }
    
    public FileOutputStream(FileDescriptor fdObj) {
        //Creates an output file stream to write to the specified file 
        //descriptor, which represents an existing connection to an actual 
        //file in the file system.
    }
    

    FOS extends OutputStream and inherits its write methods. OutputStream provides an abstract write() method, and FOS provides the definition of the write method in native form, as we saw for the read() method in FIS. FOS also provides native implementations for open() and close(). The signatures of the native methods are as follows:- 
    // Open file with specified abstract name 
    private native void open(String name) throws FileNotFoundException;
    
    //Open file with specified abstract name in append mode, write bytes at end of file  
    private native void openAppend(String name) throws FileNotFoundException;
    
    //Close FileOutputStream and releases system resources
    private native void close0() throws IOException;
    
    //Writes the specified byte to this file output stream
    public native void write(int b) throws IOException;
    
    //Write bytes from buffer b from index off and up to length len.
    private native void writeBytes(byte b[], int off, int len) throws IOException;
    
    public void write(byte b[]) throws IOException {
    //Writes bytes from buffer b, starting from index 0, up to the length of b.
    }
    
    
  3. FileReader:- FileReader is used for reading characters/text from a file. FileReader extends the InputStreamReader class (the bridge between character and byte streams - it reads bytes and decodes them into characters) and provides read() methods. The FileReader constructors assume that the default encoding and byte-buffer size are appropriate for the read operation. FileReader uses the read methods of the InputStreamReader class. The constructor of FileReader internally calls the InputStreamReader constructor and creates an instance of FileInputStream. The constructor signatures are as follows:- 
    //Calls the InputStreamReader constructor and creates a connection with the
    //specified file name or File object
    public FileReader(String fileName) throws FileNotFoundException {
     super(new FileInputStream(fileName));
    }
    public FileReader(File file) throws FileNotFoundException {
     super(new FileInputStream(file));
    }
    
    On each read() invocation, the InputStreamReader read() method is executed and one byte or a buffer of bytes is read. InputStreamReader uses a character decoder (java.nio.charset.Charset) to convert the bytes read into characters. In order to increase the efficiency of read operations, a buffered stream should be used. The signatures of the read methods are as follows:-
    public int read() throws IOException {
     //Returns the character read as an int, or -1 if the end of the stream is reached.
    }
    public int read(char cbuf[], int offset, int length) throws IOException {
     //Returns the number of characters read into buffer cbuf, storing them starting
     //at index cbuf[offset] and reading at most length characters.
    }
    
  4. FileWriter:- FileWriter is used for writing characters to a file. It extends another bridge class, OutputStreamWriter - the bridge from character streams to byte streams. The FileWriter constructors assume that the default character encoding and the default byte-buffer size are acceptable. Find below the constructor signatures:-
    //Calls the OutputStreamWriter constructor and creates a connection with the
    //specified file name or File object
    public FileWriter(String fileName) throws IOException {
     super(new FileOutputStream(fileName));
    }
    public FileWriter(String fileName, boolean append) throws IOException {
     //Connection created in append mode so that bytes are written at the end of the file
     super(new FileOutputStream(fileName, append));
    }
    public FileWriter(File file) throws IOException {
     super(new FileOutputStream(file));
    }
    

    FileWriter uses the write methods of OutputStreamWriter, and characters written to it are encoded into bytes using java.nio.charset.Charset. Below are the signatures of the write() methods. 
    public void write(int c) throws IOException {
     // Writes a single character.
    }
    public void write(char cbuf[], int off, int len) throws IOException {
     // Writes len characters from cbuf, starting at cbuf[off].
    }
    public void write(String str, int off, int len) throws IOException {
     // Writes a portion of a string, starting from off and of length len
    }
    
    Refer to the following example for read and write operations in Java:
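Here is a minimal sketch, assuming a file input.txt exists in the working directory, that copies characters from one file to another using FileReader and FileWriter (the file names are illustrative):
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class FileCopyDemo {
 public static void main(String[] args) throws IOException {
  FileReader reader = new FileReader("input.txt");  // decodes bytes to characters
  FileWriter writer = new FileWriter("output.txt"); // encodes characters to bytes
  try {
   int c;
   while ((c = reader.read()) != -1) { // -1 signals end of stream
    writer.write(c);
   }
  } finally {
   reader.close(); // release file descriptors back to the OS
   writer.close();
  }
 }
}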
In the next post we will discuss the other classes: for arrays of bytes (ByteArrayInputStream and ByteArrayOutputStream), arrays of characters (CharArrayReader and CharArrayWriter), and strings (StringReader and StringWriter).


Oct 11, 2015


File operations (Create, Delete, List files, Filter, Read and Write) in Java

The File class (package java.io) is an abstract representation of file and directory pathnames. Each operating system uses system-dependent pathname strings to name files and directories (Windows uses the form "C:\\DIR_NAME\\File_NAME" and Linux uses "/usr/local/file_name"). In order to deal with each file in a system-independent manner, this class presents an abstract, system-independent view of hierarchical pathnames.
How Java makes file handling and associated operations platform independent?
In Java "File class" maintains FileSystem object representing the platform's local file system and also maintains various static fields like separator,pathSeparator which got initialized automatically when File is loaded in JVM. FileSystem object in File Class:- static private FileSystem fs = FileSystem.getFileSystem();

The separator character on UNIX systems is '/'; on Microsoft Windows systems it is '\\':
public static final char separatorChar = fs.getSeparator();
public static final String separator = "" + separatorChar; // String separator for convenience

The path-separator character on UNIX systems is ':'; on Microsoft Windows systems it is ';':
public static final char pathSeparatorChar = fs.getPathSeparator();
public static final String pathSeparator = "" + pathSeparatorChar; // String path separator for convenience
Java ships with very rich I/O classes to support input and output through byte streams and the file system. In the Java File I/O posts we will discuss various operations associated with the File class and visit code for the same. Before moving ahead with individual examples (creating a file, read/write operations, etc.) and understanding the various I/O classes available in Java, one thing needs special mention: a File object is created for handling both files and directories, and all operations are performed on this File object. When we say File object, we refer to either a file (.txt, .xml) or a directory. The File class provides an API for distinguishing files and directories; internally, however, both are the same (a File object).

Below are sample code links for various File operations. Refer to each link for a detailed explanation and a sample Java program; a small combined sketch follows the list.
  1. Create, Rename, Delete a file in Java
  2. Get Meta data of file
  3. Read and write operation in Java
  4. Display all files/directories (File Object) 
  5. Filter files with specific extension (.txt, .doc)
  6. Get and Set File permission
  7. Create temporary file and automatic clean-up
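As a starting point, here is a minimal sketch touching a few of these operations - create a file, check whether a path is a file or a directory, and list directory contents (the path names are illustrative):
import java.io.File;
import java.io.IOException;

public class FileOperationsDemo {
 public static void main(String[] args) throws IOException {
  // Create a new empty file (createNewFile returns false if it already exists).
  File file = new File("sample.txt");
  System.out.println("Created: " + file.createNewFile());

  // The same File API distinguishes files from directories.
  System.out.println("Is file? " + file.isFile());
  System.out.println("Is directory? " + file.isDirectory());

  // List the contents of the current directory.
  File dir = new File(".");
  for (String name : dir.list()) {
   System.out.println(name);
  }
 }
}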

Oct 9, 2015


Passing program arguments and VM arguments in Eclipse, NetBeans and the command line

Sometimes when we execute our Java program/project, we need to pass arguments (program and VM). In this post we discuss how to pass these arguments on the command line, in Eclipse, and in NetBeans. At the end of this post we will see how to pass VM arguments via pom.xml.

Pass program and VM arguments in eclipse:- 

The program unit below expects two program arguments and two VM arguments (addProp and multProp).
/**
 * @author devinline
 *
 */
public class ProgramAndVMParaametersPassing {
 
 /**
  *  program arguments are collected in string array referenced by args.
  *  VM arguments can be accessed by System.getProperty("ARGUMENT_NAME")
  */
 public static void main(String[] args) {
  System.out.println("VM arguments addProp: "+ System.getProperty("addProp") 
     +" and multProp: "+ System.getProperty("multProp"));
  System.out.println("Program arguments are " + args[0] +  " and"+ args[1]);
  if(Boolean.parseBoolean(System.getProperty("addProp"))){ 
   System.out.println("Sum of two program arguments are: " +
      (Integer.parseInt(args[0])+Integer.parseInt(args[1])));
  }
  else if(Boolean.parseBoolean(System.getProperty("multProp"))){
   System.out.println("Multiplication of two program arguments are: " + 
      (Integer.parseInt(args[0])*Integer.parseInt(args[1])));
  }
 }

}
In Eclipse, program arguments and VM arguments can be passed from Run Configurations. Follow these steps:-
Right click on the project -> Go to Run As -> Run Configurations... and click on the Arguments tab.
Program arguments are added separated by spaces, as shown in the following diagram (here 14 and 12, separated by a space).
Similarly, VM arguments can be passed as -Dkey=value pairs; we need to prefix each key with -D, and the value can be anything. For the program unit above we passed two VM arguments: -DaddProp=true and -DmultProp=false.
Program and VM arguments passing in eclipse 
 Note:-  "Error Could not find or load main class true"
No space is allowed between "key=value". If we give space between key and value like
-DaddProp = true  or 
DaddProp=  true , while running project or program we witness above error. Correct way of providing VM argument is :  -DKey=Value

Pass program and VM arguments in NetBeans:- 

Right click on the project and click Properties. Click on Run under Categories.
In Arguments, add the program arguments 14 12 (separated by a space), and in VM Options add the VM arguments (-DaddProp=true -DmultProp=false). Refer to the following diagram.
Program and VM arguments in NetBeans

Pass program and VM arguments in Command line:- 

When we are executing a Java program from the command line, we can pass VM arguments and program arguments as follows:-
 java <vm_arguments> <java_class_name> <program_arguments>
The above program unit can be executed with VM and program arguments as follows:-
java -DaddProp=true -DmultProp=false ProgramAndVMParaametersPassing 12 23
Program and VM arguments in command line

Pass VM arguments in pom.xml:- 

Sometimes we need to pass VM arguments via pom.xml. It can be done using the "maven-surefire-plugin". The following plugin entry can be added to pass VM arguments via pom.xml. Under configuration we are passing two VM arguments, -DaddProp=true and -DmultProp=false, in a single argLine (surefire reads only one argLine element, so both properties go in it together).
<project>
.....
.......

<build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.9</version>
                <configuration>
                    <argLine>-DaddProp=true -DmultProp=false</argLine>
                </configuration>
            </plugin>
        </plugins>
    </build>
 
........
..........
</project>