Oracle 12c deferred_segment_creation workaround for partitioned tables

In many Oracle 12c versions the deferred_segment_creation feature does not work on table partitions, table subpartitions and LOB partitions when you do a split partition (ref. BUG 20307186).
This is very annoying if you have many empty segments.
There is an easy workaround.
Use DBMS_SPACE_ADMIN.DROP_EMPTY_SEGMENTS to drop the empty segments again after the split partition action.

Actually this does not give any speed benefits while splitting partitions, but it may save you from space problems with empty segments.
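A minimal sketch of the workaround (schema, table and partition names are made up for illustration; DBMS_SPACE_ADMIN.DROP_EMPTY_SEGMENTS is the documented procedure):

```shell
# hypothetical example: split a range partition, then drop the empty
# segments the split created despite deferred_segment_creation=TRUE
sqlplus -s / as sysdba <<'EOF'
ALTER TABLE scott.sales SPLIT PARTITION p_max AT (DATE '2016-01-01')
  INTO (PARTITION p_2015, PARTITION p_max);

-- drop the empty segments again
EXEC DBMS_SPACE_ADMIN.DROP_EMPTY_SEGMENTS(schema_name => 'SCOTT', table_name => 'SALES');
EOF
```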

Vertica installation on Ubuntu 16.04 LTS

Vertica is currently supported on Ubuntu 14.04 LTS, but not yet on 16.04.

But with a few hacks it will install on Ubuntu 16.04 LTS as well.

Install required packages

apt-get install mcelog dialog

Fake the Debian version:

cp /etc/debian_version /etc/debian_version.orig
echo "jessie/sid" > /etc/debian_version

Install Vertica

/opt/vertica/sbin/install_vertica --hosts --failure-threshold NONE
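After the installer finishes, you may want to put the real version string back (assuming you kept a backup copy of /etc/debian_version before faking it):

```shell
# restore the original Ubuntu version string after the Vertica install
cp /etc/debian_version.orig /etc/debian_version
```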

Configure Vertica

sudo su - dbadmin

Have fun with Vertica Community Edition on Ubuntu 16.04

java.lang.ArrayIndexOutOfBoundsException without stack trace

Recently I came across a java.lang.ArrayIndexOutOfBoundsException, but could not find any stack trace, so it was difficult to find where the exception was thrown.
The “problem” was the OmitStackTraceInFastThrow JVM option which is set by default for performance reasons.

It can be disabled with this JVM option:

-XX:-OmitStackTraceInFastThrow
After that you get a regular stack trace and can find the problem.
The best solution, though, is to avoid the ArrayIndexOutOfBoundsException by proper coding; if the exception is not thrown at all, there is no performance impact in the first place.

Hortonworks Sandbox Tez: java.lang.OutOfMemoryError: Java heap space

Since the tutorial on Microsoft Azure about flight delays was running a bit slow, I tried to run it on a Hortonworks Sandbox (on a notebook with 16GB memory).
I used the data from 2013, the full year.

Raw Row Count:

select count(*) from delays;

But running this statement caused an OutOfMemory error:

create table delays_weather
as SELECT regexp_replace(origin_city_name, '''', ''),
    avg(weather_delay)
    FROM delays
    WHERE weather_delay IS NOT NULL GROUP BY origin_city_name;

java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1476381439440_0001_1_00, diagnostics=[Task failed, taskId=task_1476381439440_0001_1_00_000005, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor( at

Changing some memory options prevented the OutOfMemoryError, but the statement would never finish.

After some research (where this link was very helpful) I got the statement running in less than 16 seconds.
Some memory parameters seem to be a bit strange on the Hortonworks Sandbox 2.5 (Docker version).

How I changed the configuration:

  • Memory allocated for all YARN containers = 4G
  • Maximum Container Size (Memory) = 2G
  • Minimum Container Size (Memory) = 250M


  • = 800M # if too low it will be slow, but needs to be lower than the YARN max. container size
  • tez.task.resource.memory.mb = 1G
  • = 270M
  • tez.runtime.unordered.output.buffer.size-mb = 76M
  • tez.task.launch.cmd-opts = -Xmx624m # 80% of tez.task.resource.memory.mb


  • HiveServer2 Heap Size = 6G # it was 96G by default
  • Metastore Heap Size = 2G # it was 32G by default
  • For Map Join, per Map memory threshold = 270M # just limit to a reasonable amount
  • Tez Container Size = 1G
  • For Map Join, per Map memory threshold = 800M # 200M is too low, 80% of Tez Container

That made the notebook (Dell XPS i7, 16GB RAM, Ubuntu 16.04) beat the Azure cloud solution, which needed 41 seconds.
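The tez.task.launch.cmd-opts comment above uses a percent-of-container rule of thumb; here is a tiny sketch of that arithmetic (the 80% ratio is the only input taken from the list above):

```shell
# compute a Tez task -Xmx as ~80% of the task container size, leaving the
# remaining ~20% as headroom for non-heap JVM memory
container_mb=1024            # tez.task.resource.memory.mb = 1G
xmx_mb=$(( container_mb * 80 / 100 ))
echo "-Xmx${xmx_mb}m"        # prints -Xmx819m
```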

Any suggestions and explanations about the memory settings are welcome.

Dell XPS 13 flickering issue with Ubuntu 16.04

Maybe a bit off topic, but this config solved the flickering issues on my Dell XPS 13 (9350) with the Intel HD Graphics 520.

Add /usr/share/X11/xorg.conf.d/20-intel.conf:

Section "Device"
    Identifier "Intel Graphics"
    Driver "intel"
    Option "AccelMethod" "sna"
    Option "TearFree" "true"
    Option "DRI" "3"
EndSection

Restart your X

systemctl restart lightdm

And any flickering should be gone.
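To verify the config was picked up (the log path is the usual Xorg default; adjust if your setup differs):

```shell
# after restarting X, the option should show up in the Xorg log
grep -i 'TearFree' /var/log/Xorg.0.log
```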

Update: With kernel 4.4.0-42-generic, screen and touchscreen are working after suspend.

Tomcat – Deployment Freezes

I decided to write about this issue since it has already happened to me twice.

Suddenly your Tomcat freezes on deployment. After deleting the content of your webapp directory and restarting Tomcat, it freezes again, but probably at a different place. I spent some time finding the issue.

Some months ago I had configured a Log4j SocketAppender (in this case for Logstash). But Logstash had some issues after an Elasticsearch upgrade.

After disabling the Log4j SocketAppender the application deployed without problems. Fortunately it was just a test setup, but it reminded me to be very careful about this dependency if it should be used on a production system.

Heat Map on Oracle CDB – ORA-38342

Even though heat maps (an ADO feature) are not supported (yet) on CDBs, it is possible to set the heat_map parameter to ON.
But trying to add an ILM policy fails with error ORA-38342.

SQL> show parameter heat

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
heat_map                             string      ON
SQL> alter table test ilm add policy row store compress advanced segment after 1 day of no access;
alter table test ilm add policy row store compress advanced segment after 1 day of no access
ERROR at line 1:
ORA-38342: heat map not enabled

ADO and Heat Map are not supported with a multitenant container database (CDB).

How to use Apache Thrift with Google App Engine

I’m on holiday and played around with some stuff I want to learn. So I came across the question: “Can I use Thrift with the Google App Engine?” I could not find clear answers nor useful examples. That’s why I decided to test it.

Short answer: yes, you can!

The full story:

The Google App Engine is great for running simple Servlets, but it has many limitations. One limitation is that you cannot open outbound connections on arbitrary ports (on the experimental Managed VMs more options are available). Most Thrift tutorials, however, use a TSocketServer with a specific port. This does not seem to work on the GAE.

But… a Thrift server can be implemented very easily as a Servlet.

import org.apache.thrift.server.TServlet;
import org.apache.thrift.protocol.TCompactProtocol;

public class SearchServlet extends TServlet {

    public SearchServlet() {
        super(new SearchEanService.Processor<>(new SearchEanHandler()), new TCompactProtocol.Factory());
    }
}

My example interface has just one function “string search(string abc)”.
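For completeness, the matching Thrift IDL would look roughly like this (service and method names are inferred from the Java code, so treat it as a sketch):

```thrift
// hypothetical IDL matching the handler below; field ids are assumed
service SearchEanService {
    void ping(),
    string search(1: string ean)
}
```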

import org.apache.thrift.TException;

public class SearchEanHandler implements SearchEanService.Iface {
    public void ping() throws TException {
    }

    public String search(String ean) throws TException {
        return "found the fish";
    }
}

The client uses THttpClient and looks like this.

import org.apache.thrift.TException;
import org.apache.thrift.protocol.TCompactProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TTransportException;
import org.apache.thrift.transport.THttpClient;

public class GameExClient {
    private void invoke() {
        THttpClient transport;
        try {
            transport = new THttpClient("http://localhost:8080/search");
            TProtocol protocol = new TCompactProtocol(transport);

            SearchEanService.Client client = new SearchEanService.Client(protocol);

            String result = client.search("search it");
            System.out.println("Add result: " + result);

        } catch (TTransportException e) {
            e.printStackTrace();
        } catch (TException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        GameExClient c = new GameExClient();
        c.invoke();
    }
}

So we don’t need a separate port. The Thrift client just connects over the regular port to the servlet context path.

Let’s try it.
Start the local Google App Engine

mvn3 appengine:devserver

And start the client (yes, I know it looks a bit ugly, but the demo was just created with Maven and Vim):

java -cp ".:../lib/libthrift-0.9.3.jar:../lib/slf4j-api-1.7.12.jar:../lib/httpclient-4.4.1.jar:../lib/httpcore-4.4.4.jar" ch/rcms/gameex/GameExClient
Add result: found the fish

Voilà, it works.
So let’s forget about heavy REST / JSON implementations and do it with Thrift :-)

RMAN Error with HP Data Protector (Timeout)

On multi-channel backups with RMAN using HP DP, the channels are closed by HP DP after 30s by default.
If you back up all archive logs and then try to back up the controlfile, the timeout is often hit.
As a consequence the backups are marked as failed in RMAN but as successful in DP.

Set the HP DP timeout to a higher value (like 15 minutes or more) in the global config file (/etc/opt/omni/server/options/global)


and restart HP DP

omnisv -stop
omnisv -start