Dell XPS 13 flickering issue with Ubuntu 16.04

Maybe a bit off topic, but this config solved the flickering issues on my Dell XPS 13 (9350) with Intel HD Graphics 520.

Add the following to /usr/share/X11/xorg.conf.d/20-intel.conf:

Section "Device"
    Identifier "Intel Graphics"
    Driver "intel"
    Option "AccelMethod" "sna"
    Option "TearFree" "true"
    Option "DRI" "3"
EndSection

Restart your X server:

systemctl restart lightdm

And any flickering should be gone.
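
To check that the options were actually picked up after the restart, it may help to grep the Xorg log (the log path can differ depending on the setup):

grep -i "tearfree" /var/log/Xorg.0.log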

Tomcat – Deployment Freezes

I decided to write about this issue since it has already happened to me twice.

Situation:
Suddenly your Tomcat freezes on deployment. After deleting the content of your webapps directory and restarting Tomcat, it freezes again, but probably in a different place. I spent some time finding the issue.

Solution:
Some months ago I had configured a Log4j SocketAppender (in this case for logstash). But logstash had some issues after an Elasticsearch upgrade.
log4j.properties

# the appender name "logstash" is a placeholder; the original name was lost
log4j.appender.logstash=org.apache.log4j.net.SocketAppender
log4j.appender.logstash.port=9922
log4j.appender.logstash.remoteHost=localhost

After disabling the Log4j SocketAppender, the application deployed without problems. Fortunately it was just a test setup, but it reminded me to be very careful about this dependency if it is ever used on a production system.
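
Disabling it can be as simple as removing the appender from the root logger again, roughly like this (the appender names here are just placeholders):

# before: log4j.rootLogger=INFO, file, logstash
log4j.rootLogger=INFO, file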

Heat Map on Oracle CDB – ORA-38342

Even though Heat Map (an ADO feature) is not supported (yet) on CDBs, it is possible to set the heat_map parameter to ON.
But trying to add an ILM policy fails with error ORA-38342.
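
For reference, the parameter can be enabled dynamically, roughly like this:

SQL> alter system set heat_map = on scope=both;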

SQL> show parameter heat

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
heat_map                             string      ON
SQL> alter table test ilm add policy row store compress advanced segment after 1 day of no access;
alter table test ilm add policy row store compress advanced segment after 1 day of no access
*
ERROR at line 1:
ORA-38342: heat map not enabled

http://docs.oracle.com/database/121/VLDBG/GUID-85EF8DD3-B372-4D5A-8941-FD7A0AF9C364.htm#VLDBG14152

ADO and Heat Map are not supported with a multitenant container database (CDB).

How to use Apache Thrift with Google App Engine

I’m on holiday and played around with some stuff I wanted to learn. So I came across the question: “Can I use Thrift with the Google App Engine?” I could not find clear answers or useful examples. That’s why I decided to test it.

Short answer: yes, you can!

The full story:

The Google App Engine is great for running simple servlets. But it has many limitations. One limitation is that you cannot open sockets on arbitrary ports (more options are available on the experimental Managed VMs). But most Thrift tutorials use a socket-based server (e.g. TServerSocket) listening on a specific port. This does not seem to work on the GAE.

But… a Thrift server can be implemented very easily as a servlet.

import org.apache.thrift.server.TServlet;
import org.apache.thrift.protocol.TCompactProtocol;

public class SearchServlet extends TServlet {

    public SearchServlet() {
        super(new SearchEanService.Processor<>(new SearchEanHandler()), new TCompactProtocol.Factory());
    }
}
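
The servlet still has to be mapped to a URL in web.xml. A minimal sketch, assuming the servlet lives in the ch.rcms.gameex package (the package name is an assumption) and answers on the /search path used by the client below:

<!-- hypothetical mapping; adjust class/package to your project -->
<servlet>
    <servlet-name>search</servlet-name>
    <servlet-class>ch.rcms.gameex.SearchServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>search</servlet-name>
    <url-pattern>/search</url-pattern>
</servlet-mapping>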

My example interface has just one real function, “string search(string abc)”, plus a trivial ping().
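
For completeness, a rough sketch of the Thrift IDL behind it (the ping() method is inferred from the handler; the field numbering is an assumption):

service SearchEanService {
  void ping(),
  string search(1: string abc)
}

The handler implementing the generated Iface: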

import org.apache.thrift.TException;

public class SearchEanHandler implements SearchEanService.Iface {
    public void ping() throws TException {
    }
    public String search(String ean) throws TException {
        return "found the fish";
    }
}

The client uses THttpClient and looks like this.

import org.apache.thrift.TException;
import org.apache.thrift.protocol.TCompactProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TTransportException;
import org.apache.thrift.transport.THttpClient;

public class GameExClient  {
    private void invoke() {
        THttpClient transport;
        try {
            transport  = new THttpClient("http://localhost:8080/search");
            TProtocol protocol = new TCompactProtocol(transport);

            SearchEanService.Client client = new SearchEanService.Client(protocol);
            transport.open();

            String result = client.search("search it");
            System.out.println("Add result: " + result);

            transport.close();
        } catch (TTransportException e) {
            e.printStackTrace();
        } catch (TException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        GameExClient c = new GameExClient();
        c.invoke();

    }
}

So we need no separate port. The Thrift client just connects over the regular HTTP port to the servlet path.

Let’s try it.
Start the local Google App Engine

mvn3 appengine:devserver

And start the client (yes, I know it looks a bit ugly, but the demo was just created with Maven and Vim):

java -cp ".:../lib/libthrift-0.9.3.jar:../lib/slf4j-api-1.7.12.jar:../lib/httpclient-4.4.1.jar:../lib/httpcore-4.4.4.jar" ch/rcms/gameex/GameExClient
Add result: found the fish

Voilà, it works.
So let’s forget about heavy REST/JSON implementations and do it with Thrift :-)

RMAN Error with HP Data Protector (Timeout)

On multi-channel backups with RMAN using HP Data Protector, the channels are closed by HP DP after 30 seconds of inactivity by default.
If you back up all archivelogs and then try to back up the controlfile, this timeout is often hit.
As a consequence the backups are marked as failed in RMAN but as successful in DP.
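
In RMAN terms the affected sequence looks roughly like the sketch below; the channel allocation is normally generated by the DP Oracle integration, so the ENV values and names here are purely illustrative:

run {
  allocate channel 'dev_0' type 'sbt_tape'
    parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=MYDB,OB2BARLIST=mybarlist)';
  backup archivelog all;
  # if DP closes the idle channel before this step, the backup fails on the RMAN side
  backup current controlfile;
}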

Solution:
Set the HP DP timeout to a higher value (e.g. 15 minutes or more) in the global options file (/etc/opt/omni/server/options/global):

SmWaitForNewBackupClient=900

and restart HP DP (omnistat first shows whether any sessions are still running):

omnistat
omnisv -stop
omnisv -start

HBase Client Issues after Upgrade

The Problem:
After upgrading the application libraries (Spring 3.2 to 4.2, Mahout, HBase 0.98 to 1.1, iBATIS 2 to MyBatis, ...) the application was not able to connect to HBase anymore. After adding and excluding some of the missing and conflicting dependencies, it was still stuck.

The Debugging:
To narrow down the problem I decided to use a simple test application.
Set up a test table:

bin/hbase shell
hbase(main):001:0> create 'test', 'cf'
0 row(s) in 3.8890 seconds

hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.1840 seconds

Demo Application:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseClient {

    public static void main(String[] arg) throws IOException {
        Configuration config = HBaseConfiguration.create();
        config.set("hbase.zookeeper.quorum", "127.0.0.1");
        config.set("hbase.zookeeper.property.clientPort", "2182");
        config.set("hbase.hconnection.threads.core","1");
        config.set("hbase.hconnection.threads.max","10");
        config.set("hbase.client.retries.number","3");
        config.set("hbase.rootdir","hdfs://127.0.0.1:9000/hbase");
        //config.set("","");


        // connect to the test table and measure how long it takes
        long startTime0 = System.currentTimeMillis();
        HTable testTable = new HTable(config, "test");
        long stopTime0 = System.currentTimeMillis();
        System.out.println("connect in " + (stopTime0 - startTime0) + " ms");

        // scan the single column cf:a and print the values
        long startTime = System.currentTimeMillis();
        for (int i = 0; i < 1; i++) {
            byte[] family = Bytes.toBytes("cf");
            byte[] qual = Bytes.toBytes("a");

            Scan scan = new Scan();
            scan.addColumn(family, qual);
            scan.setMaxResultsPerColumnFamily(5);
            ResultScanner scanner = testTable.getScanner(scan);
            for (Result r = scanner.next(); r != null; r = scanner.next()) {
                byte[] valueObj = r.getValue(family, qual);
                String value = new String(valueObj);
                System.out.println(value);
            }
            scanner.close();
        }
        long stopTime = System.currentTimeMillis();
        System.out.println("Scan finished in " + (stopTime - startTime) + " ms");

        // single Get on row1
        long startTime2 = System.currentTimeMillis();
        Get g = new Get("row1".getBytes());
        Result rs = testTable.get(g);
        long stopTime2 = System.currentTimeMillis();
        System.out.println("Get finished in " + (stopTime2 - startTime2) + " ms");

        testTable.close();
    }
}

(Sorry about the unnecessary speed counters – I’m a bit of a performance freak.)

Now I just added the libraries needed until it worked. Then I added more until it didn’t work anymore.

The Solution:

Exception in thread "main" java.lang.NoSuchFieldError: IBM_JAVA
at org.apache.hadoop.security.UserGroupInformation.getOSLoginModuleName(UserGroupInformation.java:339)
at org.apache.hadoop.security.UserGroupInformation.<clinit>(UserGroupInformation.java:384)

hadoop-core-1.2.1.jar and hadoop-auth-2.5.1.jar are conflicting; remove hadoop-core from your classpath.
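
One way to spot such duplicates is to look at the resolved dependency tree, for example:

mvn dependency:tree | grep -i hadoop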


Example to exclude it in Maven from mahout:

 <dependency>
    <groupId>org.apache.mahout</groupId>
    <artifactId>mahout</artifactId>
    <version>0.8</version>
    <exclusions>
       <exclusion>
           <groupId>org.apache.hadoop</groupId>
           <artifactId>hadoop-core</artifactId>
       </exclusion>
    </exclusions>
</dependency>

XEN 4.4 with Oracle Linux Guest

The first post is not directly about data. It is about XEN and how to prepare it for an Oracle Linux guest. Installing XEN itself is an easy and well-documented task.

To install the Oracle Linux 7.1 guest VM you need one VM configuration for installing and one for running the guest.

The configuration for installing:

name = "oralin01"
kernel = "/media/cdrom/images/pxeboot/vmlinuz"
ramdisk = "/media/cdrom/images/pxeboot/initrd.img"
extra = "inst.vnc inst.stage2=hd:LABEL=OL-7.1\\x20Server.x86_64 earlyprintk=xen console=hvc0"
memory = 4096
vcpus = 2
vif = [ 'bridge=xenbr0' ]
disk = [ '/dev/vg0/oralin01,raw,xvda,rw','file:/pathtoiso/oracle_linux_7_1_V74844-01.iso,xvdc:cdrom,r' ]
vfb = [ 'type=vnc, vncdisplay=1, vnclisten=127.0.0.1' ]
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'

Connect with the console or VNC:

xl console oralin01

Set up Oracle Linux 7.1. Then change the configuration, destroy the domU (your guest VM) and recreate it with the run configuration:

name = "oralin01"
bootloader = "pygrub"
memory = 4096
vcpus = 2
vif = [ 'mac=00:16:3e:02:0a:79,bridge=xenbr0' , 'mac=00:16:3e:02:0a:80,bridge=xenbr0' ]
disk = [ '/dev/vg0/oralin01,raw,xvda,rw','file:/pathtoiso/oracle/oracle_linux_7_1_V74844-01.iso,xvdc:cdrom,r' ]
vfb = [ 'type=vnc, vncdisplay=1, vnclisten=127.0.0.1' ]
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
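
The destroy/recreate step might look like this (again, the config path is an assumption):

xl destroy oralin01
xl create /etc/xen/oralin01.cfg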

Notice that we have two network interfaces (eth0, eth1), both on xenbr0, which should allow us to configure the public and private networks if we install Oracle RAC on XEN later.