Sunday, November 18, 2012

How to delete records using a join query in Oracle



DELETE
      from role_resource_privilege t
      where exists (select 1
      FROM role_resource_privilege rrp
      LEFT JOIN role r
      ON r.role_id = rrp.role_id
      LEFT JOIN resource_privilege rp
      ON rp.resource_id     = rrp.resource_id
      AND rp.privilege_id   = rrp.privilege_id
      where (rp.resource_id is null
      OR r.role_id         IS NULL)
      AND rrp.rowid = t.rowid);
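
The same cleanup can also be written without the outer joins, which reads a little more directly. A rough sketch (it assumes the join columns above are the complete matching keys):

      DELETE
      from role_resource_privilege rrp
      where not exists (select 1 from role r
                        where r.role_id = rrp.role_id)
      or not exists (select 1 from resource_privilege rp
                     where rp.resource_id  = rrp.resource_id
                     and   rp.privilege_id = rrp.privilege_id);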




Oracle issue :
------------------
ORA-01779: cannot modify a column which maps to a non key-preserved table

Reason : 


      update (
      select ur.sysadmin FROM user_role ur
      INNER JOIN user_profile up
      ON up.user_id = ur.user_id
      INNER JOIN role r
      ON r.role_id      = ur.role_id
      WHERE ur.sysadmin = 1
      and ( up.sysadmin = 0
      or r.sysadmin     = 0 )) t
      SET t.sysadmin = 0;
   
   Because user_role is not key-preserved in the inline view, Oracle cannot tell which underlying user_role row each joined row maps back to, so it refuses to update through the view.
   
Solution :
  
   update user_role usr set usr.sysadmin = 0
      where exists (
      select 1 FROM user_role ur
      INNER JOIN user_profile up
      ON up.user_id = ur.user_id
      INNER JOIN role r
      ON r.role_id      = ur.role_id
      WHERE ur.sysadmin = 1
      and ( up.sysadmin = 0
      or r.sysadmin     = 0 )
      and ur.rowid      = usr.rowid);
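
As an aside, the inline-view form is rejected only because Oracle cannot prove that user_role is key-preserved in the join. If the other tables carry unique constraints on the join columns, the original update form is accepted. A rough sketch (the constraint names, and the assumption that user_id and role_id are the natural keys, are mine):

      ALTER TABLE user_profile ADD CONSTRAINT user_profile_pk PRIMARY KEY (user_id);
      ALTER TABLE role         ADD CONSTRAINT role_pk         PRIMARY KEY (role_id);
      -- with these in place user_role is key-preserved, and the
      -- update (select ... join ...) set ... form no longer raises ORA-01779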
   
We can also use a MERGE statement to update rows through a join:

      merge INTO resource_master c
      USING resource_master p
      ON (p.resource_id = c.parent_id)
      WHEN MATCHED THEN
        UPDATE SET c.active = 0;
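
The same technique can be applied to the user_role example above. A rough sketch (it pins each target row by ROWID and assumes the same tables and columns as before):

      merge INTO user_role ur
      USING (select ur2.rowid AS rid
             FROM user_role ur2
             INNER JOIN user_profile up ON up.user_id = ur2.user_id
             INNER JOIN role r ON r.role_id = ur2.role_id
             WHERE ur2.sysadmin = 1
             and (up.sysadmin = 0 or r.sysadmin = 0)) t
      ON (ur.rowid = t.rid)
      WHEN MATCHED THEN
        UPDATE SET ur.sysadmin = 0;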

Why do we need a 64-bit JVM


Why do we need a 64-bit JVM:
----------------------------
The primary reason would be if you wanted to write an app capable of using a large amount of memory (e.g. over 4GB, or whatever the per-process limit on your operating system is).
1. When we need to handle more memory, potentially more than 4 GB.
2. Note, however, that because addresses are larger (4 bytes on 32-bit, 8 bytes on 64-bit), a 64-bit JVM will require more memory than a 32-bit JVM for the same task.
3. The Java compiler produces the same bytecode whether you use a 32-bit or 64-bit JDK, and that bytecode runs unchanged on a 32-bit or 64-bit JRE.
4. One way to use a 64-bit JVM efficiently is the -XX:+UseCompressedOops option, which uses 32-bit references while still addressing up to 32 GB of heap. It can do this because every object in the Sun/Oracle JVM is allocated on an 8-byte boundary (i.e. the lower 3 bits of the address are 000). By shifting the bits of the 32-bit reference, it can address 4 GB * 8, or 32 GB in total. It has been suggested that this should be the default for Java 7.
Support for 32-bit programs
Programs written and compiled for a 32-bit JVM will work without re-compilation. However, any native libraries used will not: a 64-bit JVM can only load 64-bit native libraries.
http://software.intel.com/en-us/blogs/2011/07/07/all-about-64-bit-programming-in-one-place/
http://java.dzone.com/articles/java-all-about-64-bit

Thursday, April 19, 2012

Load balancer

Load Balancer

A load balancer in front of the cluster makes sure that all servers receive a fair share of user requests. A hardware load balancer is usually the best option as it provides maximum performance. Companies such as F5 (BIG-IP) and Cisco are known for good hardware load balancers. If your budget cannot accommodate a hardware load balancer, an Apache server running a combination of mod_proxy, mod_rewrite and mod_redundancy can be another option.

Tangosol vs Terracotta

Terracotta DSO uses a TCP/IP-based client/server architecture that consists of client-side instrumentation (byte code changes for "transparent clustering"), combined with a central server ("hub") for sharing state between application servers. In a Coherence cluster, the analogous components would be our free Coherence Data Client on the application servers, combined with any of our server-side editions (e.g. Coherence Caching Edition) as a fault-tolerant scale-out solution for state and data management.

Terracotta explains their clustering as follows:


Terracotta servers can be deployed as an active-Primary plus a passive-Secondary (i.e. 1 or numerous hot-standby(s)) - the hot-standby shares disk with the primary Terracotta Server. On failure of the primary, client-JVMs transparently connect to the standby.


And my response:

For reference, this is one of the primary differences between Terracotta clustering and Tangosol clustering. Terracotta can cluster two servers (one hot, one standby) while Tangosol can run a couple thousand servers as "hot" (active + active + active + etc) in an n-way fully-connected mesh (virtual channels). Our server throughput in a 100-server system is 50x that of a hot+hot 2-server system, and (in a fully switched fabric) our throughput in a 1000-server system is 10x that of a 100-server system. And failover time (automatic, without data loss or interruption of application flow) is still typically sub-second.

Regarding the connections between the application servers and the Terracotta server, it is a TCP/IP client/server connection (no fundamental differences at the wire level from a Telnet session, JDBC connection or RMI). It is analogous to our free Data Client or our low-cost Real-Time Client.

Speaking for me personally, I would evaluate plugging Terracotta in through the TCP/IP connectivity (ours, theirs, whatever), because Terracotta has focused on the programming model (AOP, Spring, etc.), and it could be a good addition for working with data in a Data Grid.

--From link : http://www.infoq.com/news/2006/12/terracotta-jvm-clustering

Monday, February 27, 2012

Low latency architecture

LMAX - How to Do 100K TPS at Less than 1ms Latency

Important links:
http://martinfowler.com/articles/lmax.html
http://ftalphaville.ft.com/blog/2009/07/08/60761/the-cold-war-in-high-frequency-trading/
http://ftalphaville.ft.com/blog/tag/high-frequency-trading/