Saturday, July 25, 2015

Bangalore Java User Group (JUG)- July Meet-up

Today, we had very interesting sessions on the Concurrent Garbage Collector and CompletableFuture by Deepak and Srinivasan.

Deepak, while talking about the Azul garbage collector, covered:
- The basics of GC
- How we can move towards a concurrent GC
- The challenges in achieving concurrency in GC

Srinivasan, while talking about CompletableFuture, covered:
- How the new CompletableFuture in JDK 8 is better than the older calls
- How real-life applications such as banking and online booking systems can use this feature
- Which old bottlenecks it addresses

If you are also interested in joining the Bangalore Java User Group, follow us on:

Our meetup page - http://www.meetup.com/BangaloreOpenJUG/
Our Facebook page - https://www.facebook.com/groups/1018617618156273/



I will provide the links to the presentations soon.

Thursday, July 23, 2015

Java User Group Bangalore - Let's Rock !!

Guys, if you are a Java developer and you are in Bangalore, it's time to meet the biggest Java User Group in the city. Do join us at JUG-Bangalore.

Here are the talks of the month:

Azul JVM - Concurrent Garbage Collection 
Harish Babu 60 mins

Java 8's new JavaScript engine, Nashorn
Shekhar Gupta 45 mins 

Completable Future of JDK8
Srinivasan Raghavan 45 mins

It's free to join, no fee at all. We will provide snacks and Java :-). Keep rocking. Do join the Facebook event: https://www.facebook.com/events/730993017030025/

Tuesday, July 21, 2015

VM options for optimization (C1 and C2 compilers)

Many Java developers ask which flag options are available for the C1 and C2 compilers, or for the JIT compilers in general. Most of the time our slides cover some of the important VM options (-XX), but we certainly can't fit the complete list into slides. Getting the full list yourself is actually quite a trivial job.
Here is how:
1. Dump the complete list of VM global flags (redirecting the output to a file named out):
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintFlagsFinal > out
wc -l < out
764  // total available options (measured on JDK 7)
2. If you check without -XX:+UnlockDiagnosticVMOptions, the number is a bit smaller:
java  -XX:+PrintFlagsFinal > out
wc -l < out
672 
3. Each entry in this output is tagged with where it applies, such as product, C2 diagnostic, C1 product and many more. So just grep the out file for "C2" to see which options are available for the C2 compiler, and which of them are product options versus diagnostic or logging options.
cat out | grep "C2"   (on Linux/Solaris/Mac; on Windows, findstr can be used instead of grep)
You will get a list something like this:
     intx AliasLevel                                = 3               {C2 product}
     bool AlignVector                               = true            {C2 product}
     intx AutoBoxCacheMax                           = 128             {C2 product}
     bool BlockLayoutByFrequency                    = true            {C2 product}
     intx BlockLayoutMinDiamondPercentage           = 20              {C2 product}
     bool BlockLayoutRotateLoops                    = true            {C2 product}
     bool BranchOnRegister                          = false           {C2 product}
     intx ConditionalMoveLimit                      = 3               {C2 pd product}
     bool DebugInlinedCalls                         = true            {C2 diagnostic}
ccstrlist DisableIntrinsic                          =                 {C2 diagnostic}
     bool DoEscapeAnalysis                          = true            {C2 product}
     intx DominatorSearchLimit                      = 1000            {C2 diagnostic}
     intx EliminateAllocationArraySizeLimit         = 64              {C2 product}
     bool EliminateAllocations                      = true            {C2 product}
     bool EliminateAutoBox                          = false           {C2 diagnostic}
     bool EliminateLocks                            = true            {C2 product}
     bool EliminateNestedLocks                      = true            {C2 product}
     bool IncrementalInline                         = true            {C2 product}
     bool InsertMemBarAfterArraycopy                = true            {C2 product}
     intx InteriorEntryAlignment                    = 16              {C2 pd product}
     intx LiveNodeCountInliningCutoff               = 20000           {C2 product}
4. Run the same grep for C1:
cat out | grep "C1"
We can see:
     bool C1OptimizeVirtualCallProfiling            = true            {C1 product}
     bool C1ProfileBranches                         = true            {C1 product}
     bool C1ProfileCalls                            = true            {C1 product}
     bool C1ProfileCheckcasts                       = true            {C1 product}
     bool C1ProfileInlinedCalls                     = true            {C1 product}
     bool C1ProfileVirtualCalls                     = true            {C1 product}
     bool C1UpdateMethodData                        = true            {C1 product}
     intx CompilationRepeat                         = 0               {C1 product}
     bool LIRFillDelaySlots                         = false           {C1 pd product}
     intx SafepointPollOffset                       = 256             {C1 pd product}
     bool TimeLinearScan                            = false           {C1 product}
     intx ValueMapInitialSize                       = 11              {C1 product}
     intx ValueMapMaxLoopSize                       = 8               {C1 product}
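To get a feel for what one of these flags actually does, take AutoBoxCacheMax from the C2 list above: it controls the upper bound of the Integer autobox cache. A minimal sketch (the class name is just for illustration):

public class AutoBoxCacheDemo {
    public static void main(String[] args) {
        Integer a = 100, b = 100;     // inside the default cache range (-128..127)
        Integer c = 1000, d = 1000;   // outside the default cache range
        System.out.println(a == b);   // true  - both refer to the same cached object
        System.out.println(c == d);   // false - two distinct boxed objects
    }
}

Run it again with java -XX:AutoBoxCacheMax=2000 AutoBoxCacheDemo on the server VM and the second comparison should also print true, because the cache is extended to cover 1000.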
 Enjoy Optimization, Enjoy JIT'ing.

Sunday, July 19, 2015

Just-In-Time Compiler Optimizations (Know your JVM)

The JIT comes in these flavors:
 C1 (client compiler) - the -client option
 C2 (server compiler) - the -server option
 -XX:+TieredCompilation - combines both for better compilation decisions
Common optimizations done by the Just-In-Time (JIT) compiler:
1. Dead code elimination and expression optimization:
int someCalculation(int x1, int x2, int x3) {
    int res1 = x1 + x2;
    int res2 = x1 - x2;
    int res3 = x1 + x3;
    return (res1 + res2) / 2;
}
will be converted to (res3 is never used, and (res1 + res2) / 2 simplifies to (2 * x1) / 2 = x1):
int someCalculation(int x1, int x2, int x3) {
    return x1;
}
2. Method inlining:
- Substitutes the body of a small method (< 35 bytes of JVM bytecode) at the call site
- This provides the best optimization wins in the JIT - a better inliner than C++
For example:
int compute(int var) {
    int result;
    if (var > 5) {
        result = computeFurther(var);
    } else {
        result = 100;
    }
    return result;
}
If you call myVal = compute(3); it can effectively get converted into myVal = 100;
3. Caching technique:
Point findMid(Point p1, Point p2) {
    Point p = new Point();
    p.x = (p1.x + p2.x) / 2;
    p.y = (p1.y + p2.y) / 2;
    return p;
}
The field loads p1.x and p2.x can be converted into temporaries (temp1, temp2) and cached in registers.
4. Monomorphic dispatch:
public class Birds {
    private String color;
    public String getColor() { return color; }
}
myColor = birds.getColor();
If there is no other override of this method, it is effectively converted into:
public class Birds { String color; }
myColor = birds.color;
5. Null check removal:
x = point.x;
y = point.y;
At the JVM level this is equivalent to:
if (point == null) {
    throw new NullPointerException();
} else {
    x = point.x;
    y = point.y;
}
But if the code does not throw a NullPointerException for more than a threshold number of dereferences, the JIT removes the explicit if check.
6. Threading optimizations:
- Eliminate locks if the monitor is not reachable from other threads
- Join adjacent synchronized blocks on the same object (see the sketch after this list)
7. Loop optimizations:
- Combining loops - two adjacent loops over the same range can be combined into one
- Loop inversion - change a while loop into a do-while guarded by an if (why? just run javap -c on both forms)
- Loop tiling - reorganize a loop so that its working set fits in the cache
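Here is the promised sketch of the threading optimizations; the method names and the use of StringBuffer are illustrative assumptions, not JDK code:

void logSteps(StringBuffer sb) {
    // Lock coarsening: two adjacent synchronized blocks on the same object
    // can be merged by the JIT into a single lock/unlock pair.
    synchronized (sb) { sb.append("step 1; "); }
    synchronized (sb) { sb.append("step 2; "); }
}

void localWork() {
    // Lock elision: sb never escapes this method, so escape analysis can prove
    // that no other thread can reach its monitor, and the locking done inside
    // StringBuffer's synchronized append() can be removed entirely.
    StringBuffer sb = new StringBuffer();
    sb.append("local work");
    sb.append(" done");
}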
VM args:
-Xint - interpreter-only mode
-Xcomp - compiled-only mode
-Xmixed - interpreter + compiler (the default)
-server - C2 compiler
-client - C1 compiler
-XX:+TieredCompilation - C1 + C2 together (available in both 32-bit and 64-bit modes)
Logging and tuning options:
-XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation -XX:LogFile=<path to file>
-XX:MaxInlineSize=<size> -XX:FreqInlineSize=<size>
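To watch the compiler in action, here is a minimal sketch (the class name HotLoop is made up for this example); run it with -XX:+PrintCompilation, or with the logging flags above to produce a log file you can inspect:

// java -XX:+PrintCompilation HotLoop
// java -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation -XX:LogFile=hotspot.log HotLoop
public class HotLoop {
    static int square(int x) {
        return x * x;             // tiny, hot method - a prime candidate for inlining
    }
    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += square(i);     // repeated calls push square() past the compile threshold
        }
        System.out.println(sum);
    }
}

The PrintCompilation output should show square() and main() getting compiled, and the LogCompilation XML records the inlining decisions.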

Monday, February 10, 2014

Best Practices Java - StringBuffer Part 2

It's good to use StringBuffer instead of String for most common use cases (refer to Part 1). We will now see how StringBuffer enlarges itself, since it is mutable.

If you just call the default StringBuffer constructor, the following code gets called (default capacity of 16 characters):
public StringBuffer() {
    super(16);
}



StringBuffer inherits its data structure from its parent class, AbstractStringBuilder, which looks something like:


abstract class AbstractStringBuilder implements Appendable, CharSequence {
    char value[]; // the actual character storage
    int count;    // the number of characters used
    ...
}



This is how expandCapacity() is written in the JDK:


void expandCapacity(int minimumCapacity) {
    int newCapacity = (value.length + 1) * 2;
    if (newCapacity < 0) {
        newCapacity = Integer.MAX_VALUE;
    } else if (minimumCapacity > newCapacity) {
        newCapacity = minimumCapacity;
    }
    value = Arrays.copyOf(value, newCapacity);
}




This expandCapacity() is called from the append() method of StringBuffer whenever the new content would not fit in the current array. Most of the methods of StringBuffer are synchronized, as expected.
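You can watch this growth from the outside with the capacity() method. A minimal sketch; the numbers in the comments assume the JDK 6/7 growth policy shown above:

public class CapacityDemo {
    public static void main(String[] args) {
        StringBuffer sb = new StringBuffer();   // backing array of 16 chars
        System.out.println(sb.capacity());      // 16
        sb.append("12345678901234567");         // 17 chars no longer fit in 16
        System.out.println(sb.capacity());      // 34 = (16 + 1) * 2
        sb.append(new char[100]);               // needs 117, far beyond (34 + 1) * 2
        System.out.println(sb.capacity());      // 117, i.e. minimumCapacity wins
    }
}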

For a deeper understanding, you can read the OpenJDK source code.

 


Saturday, February 08, 2014

Best Practices Java - StringBuffer

It's been 3 years since I last blogged here. A few days ago, one of my friends asked me about StringBuffer, and his point was that I had no justification for why Sun created StringBuffer.

I am writing this blog from a very rural village in Bihar, India. The common problem I found is that people do not put their time to the best use. Many of the kids go to the market to bring back one item at a time. Alright, do we engineers also follow the same trend?

We use String by default and then keep appending things to it. Something like:

String dontUse = "This";
dontUse +="is not right";

Alright, here is a small piece of code I have written to estimate the time taken.

public class StringBufferExample {

    public static void main(String[] args) {

        String[] dontUse = new String[10000];
        // StringBuffer[] dontUse = new StringBuffer[10000];

        for (int i = 0; i < 10000; i++) { }
        long startTime = System.nanoTime();
        for (int i = 0; i < 10000; i++) {
            dontUse[i] = new String("this");
            // dontUse[i] = new StringBuffer("this");
        }
        for (int i = 0; i < 10000; i++) {
            dontUse[i] += "is wrong";
            // dontUse[i].append("is wrong");
        }
        long endTime = System.nanoTime();
        System.out.println(endTime - startTime);
    }
}

Approximate time taken by this code: 5501435 ns, whereas the commented StringBuffer version takes about 2258812 ns.
So, although not obvious at first sight, the plain String operation for "simply" adding two strings is roughly twice as costly as StringBuffer.

Running javap -c -classpath . StringBufferExample (only the costly lines are copied below) clearly tells you why the String operation is a costly affair: it was never really a String operation at all; the compiler converts it to a StringBuilder and then converts the result back to a String with toString().


   64:  if_icmpge       97
   67:  new     #6; //class java/lang/StringBuilder
   70:  dup
   71:  invokespecial   #7; //Method java/lang/StringBuilder."<init>":()V
   74:  aload_1
   75:  iload   4
   77:  dup2_x1
   78:  aaload
   79:  invokevirtual   #8; //Method java/lang/StringBuilder.append:(Ljava/lang/
String;)Ljava/lang/StringBuilder;
   82:  ldc     #9; //String is wrong
   84:  invokevirtual   #8; //Method java/lang/StringBuilder.append:(Ljava/lang/
String;)Ljava/lang/StringBuilder;
   87:  invokevirtual   #10; //Method java/lang/StringBuilder.toString:()Ljava/l
ang/String;
   90:  aastore
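
In source form, each iteration of that second loop is effectively turned by javac into something like the following (a sketch of the generated code, not literal JDK source):

dontUse[i] = new StringBuilder().append(dontUse[i]).append("is wrong").toString();

So every += allocates a fresh StringBuilder, copies the existing characters into it, and then builds yet another String with toString(); the StringBuffer version simply appends into the same buffer.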


In the next blog, I will cover how StringBuffer handles its capacity: how it enlarges it and when. It's pretty simple code in the JDK.


Wednesday, October 06, 2010

JDK7 is on the way ...

Since becoming a part of Oracle, I have not written any blog here. Anyway, Java doesn't belong to a company; it belongs to the hearts of a billion people. There is a lot coming in JDK 7. Most downloads are now going to JDK 6, which is good news; people have shifted from JDK 1.5 and 1.4.2 to JDK 6.

I will write some technical blogs in the coming days.