ORA-00600 [krccckp_scn_low] – from CTWR Process

Since Oracle RDBMS 11g it is possible to use block change tracking on a standby database.
After using a snapshot standby database, we got instance crashes with:

ORA-00600 [krccckp_scn_low] - from CTWR Process

I am not entirely sure whether this is a bug or just missing documentation, but it seems that standby block change tracking is not handled correctly.
So if a 12.1 snapshot standby database is converted back to a physical standby, I recommend disabling and re-enabling block change tracking.
This way the ORA-00600 [krccckp_scn_low] can be avoided.
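The workaround can be sketched like this on the standby (a sketch; the change tracking file path is just an example, adjust it to your environment, or omit the USING FILE clause if db_create_file_dest is set):

```sql
-- after converting the snapshot standby back to a physical standby:
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

-- re-enable it with a fresh change tracking file (path is an example)
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '+DATA/mydb/changetracking/bct.f';
```

The state can be verified afterwards in v$block_change_tracking (column STATUS should show ENABLED).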

Indexing Oracle table columns of collection type (nested table)

With Oracle’s Object-Relational features you can store collections of user-defined datatypes (UDTs) in a table column. This collection can be stored as an IOT or a heap table.

If it is stored as a heap table (normal table), you may want to index the nested table so that there is no FTS (full table scan) on every access.

| Id  | Operation                           | Name                          | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                    |                               |     9 |  1818 | 32469   (1)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL                  | TABLE                         |     1 |    81 |   309   (1)| 00:00:01 |

The nested table has a hidden column “NESTED_TABLE_ID” which can be indexed:

create index <my_index> on <my_nested_table> (NESTED_TABLE_ID);

| Id  | Operation                           | Name                          | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                    |                               |     9 |  1818 |    56   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| NESTED_TABXX_TAB2             |     1 |    27 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | NESTED_TAB_IX1                |     1 |       |     1   (0)| 00:00:01 |

* Don’t forget to gather statistics after creating the indexes
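Gathering the statistics can be done with DBMS_STATS, for example like this (a sketch; owner and table names are placeholders):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'MY_SCHEMA',        -- placeholder: schema owner
    tabname => 'MY_NESTED_TABLE',  -- placeholder: nested table storage table
    cascade => TRUE                -- also gather stats on the new index
  );
END;
/
```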

How to Purge an OLS Group from the Library Cache with dbms_shared_pool (Oracle Label Security)

With the dbms_shared_pool.purge() procedure, specific objects can be purged out of the library cache.
But according to the Oracle documentation of dbms_shared_pool, only a few types of objects can be purged.

Here I give a short example of how other objects can be purged. In my case an Oracle OLS group, because of a corruption with OLS:

ORA-12403: invalid internal label

The procedure to use is:
exec dbms_shared_pool.purge('<full_hash_value>', <namespace_id>, <heap, mostly 1>);

  1. Look up the hash:
    select SHARABLE_MEM,NAMESPACE,FULL_HASH_VALUE from v$db_object_cache where namespace like '%LABEL%';

    You have to identify the needed object’s hash from the output.

  2. Identify the namespace:
select kglsttyp,kglstdsc,kglstidn from x$kglst where kglstdsc like '%LABEL%' order by 3,1;
    KGLSTTYP            KGLSTDSC                                         KGLSTIDN
    ------------------- ---------------------------------------------- ----------
    NAMESPACE           LABEL SECURITY ILABELS                                 96
    NAMESPACE           LABEL SECURITY GROUP NO->SHORT NAME                    97
    NAMESPACE           LABEL SECURITY GROUP SHORT NAME->NO                    98
    NAMESPACE           LABEL SECURITY LEVEL NO->SHORT NAME                   101

    In my case I need to purge a “LABEL SECURITY GROUP NO->SHORT NAME” with the id 97.

  3. Purge the object:
    SQL>  exec dbms_shared_pool.purge('2d86c4246f3c0eb516628bf324d6b9a3',97,1);
    PL/SQL procedure successfully completed.
    SQL> select SHARABLE_MEM,NAMESPACE  from v$db_object_cache where namespace like '%LABEL%' and FULL_HASH_VALUE='2d86c4246f3c0eb516628bf324d6b9a3';

    Here we see that the object no longer occupies any memory, so it’s gone.

Whether this method works for all library cache objects is not clear, but for most it does. Nevertheless, this method needs to be used with care, since it is not documented and it may not be clear whether the RDBMS reloads the purged object correctly when needed.

VIM incrementing number by one command

VIM is my favorite editor for nearly everything. Even though I have been using it since 1996, there is still much to learn about it.
Today I want to share my latest VIM knowledge gain 🙂

The challenge is: how do I put an incrementing number in front of each line?

The solution is:

let c=0|,$ g/^/ let c=c+1|s/^/\=c/


  1. define a variable “c” and initialize it with 0
  2. go from the current line to the last line of the file: “,$” (of course you can choose whatever range you need)
  3. “g/^/” runs the following commands on every matching line; here that is every line, because every line has a beginning 🙂
  4. increment the variable c by 1: “let c=c+1”
  5. replace the beginning of each line with the variable; to use a variable in the substitution you have to reference it like this: “\=c”
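If you want a separator between the number and the original line content, the same command works with string concatenation in the substitute expression (a variant of the command above, using VIM’s “.” operator):

```vim
:let c=0|,$ g/^/let c=c+1|s/^/\=c . '. '/
```

This prepends “1. ”, “2. ”, and so on instead of the bare number.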

Quote: “Share knowledge, gain wisdom” (Thomas Rieder)

ODA X6-2 HA, factory/initial release setup

I’m gonna share this small piece of information, because it took me some time to find it.

If you are going to set up an ODA X6-2 HA and you wonder which release it will arrive with, see the readme of 20815644: “ODA X6-2 HA, the initial release is”

If you need another version, it is always possible to re-image the whole ODA. But if you just plan to upgrade, install Bundle Patch 25064836 and deploy the ODA with the End-User Bundle 12978712.

Simple and easy, but not as quick as expected. The server upgrade takes about 90 minutes and the End-User Bundle deployment more than 30 minutes, without database creation.

Oracle 12c deferred_segment_creation workaround for partitioned tables

In many Oracle 12c versions the deferred_segment_creation feature does not work for table partitions, table subpartitions and LOB partitions when you do a split partition (ref. bug 20307186).
This is very annoying if you end up with many empty segments.
There is an easy workaround:
use DBMS_SPACE_ADMIN.DROP_EMPTY_SEGMENTS to drop the empty segments again after the split partition operation.

Admittedly this does not give any speed benefit while splitting partitions, but it may save you from space problems caused by empty segments.
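A minimal sketch of the workaround call (owner and table names are placeholders; a partition name can optionally be passed as a third parameter):

```sql
BEGIN
  DBMS_SPACE_ADMIN.DROP_EMPTY_SEGMENTS(
    schema_name => 'MY_SCHEMA',      -- placeholder: table owner
    table_name  => 'MY_PART_TABLE'   -- placeholder: the table that was just split
  );
END;
/
```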

Vertica installation on Ubuntu 16.04 LTS

Vertica is currently supported on Ubuntu 14.04 LTS, but not yet on 16.04.

But with a few hacks it will install on Ubuntu 16.04 LTS as well.

Install required packages

apt-get install mcelog dialog

Fake the Debian version:

cp /etc/debian_version /etc/debian_version.org
echo "jessie/sid" > /etc/debian_version

Install Vertica

/opt/vertica/sbin/install_vertica --hosts <host_list> --failure-threshold NONE

Configure Vertica

sudo su - dbadmin

Have fun with the Vertica Community Edition on Ubuntu 16.04!

java.lang.ArrayIndexOutOfBoundsException without stack trace

Recently I got across a java.lang.ArrayIndexOutOfBoundsException, but could not find any stack trace. So it was difficult to find where the Exception was thrown.
The “problem” was the OmitStackTraceInFastThrow JVM option, which is enabled by default for performance reasons.

It can be disabled with this JVM option:

-XX:-OmitStackTraceInFastThrow

After that you get a regular stack trace and can find the problem.
The best solution, however, is to avoid the ArrayIndexOutOfBoundsException by proper coding; if the exception is not thrown at all, there is no performance impact either.
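The “proper coding” approach can be as simple as an explicit bounds check before the array access; a minimal sketch (class and method names are made up for illustration):

```java
public class SafeAccess {

    // Returns a[i] when i is a valid index, otherwise the fallback value.
    // The explicit bounds check means ArrayIndexOutOfBoundsException is
    // never thrown, so OmitStackTraceInFastThrow never becomes an issue.
    static int elementOrDefault(int[] a, int i, int fallback) {
        return (i >= 0 && i < a.length) ? a[i] : fallback;
    }

    public static void main(String[] args) {
        int[] data = {10, 20, 30};
        System.out.println(elementOrDefault(data, 1, -1));  // valid index
        System.out.println(elementOrDefault(data, 5, -1));  // out of bounds
    }
}
```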

Hortonworks Sandbox Tez: java.lang.OutOfMemoryError: Java heap space

Since the tutorial on Microsoft Azure about the flight delays was running a bit slowly, I tried to run it on a Hortonworks Sandbox (on a notebook with 16 GB of memory).
I used the data from 2013, the full year.

Raw Row Count:

select count(*) from delays;

But running this statement caused an OutOfMemory error:

create table delays_weather
as SELECT regexp_replace(origin_city_name, '''', '')
    FROM delays
    WHERE weather_delay IS NOT NULL GROUP BY origin_city_name;

java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1476381439440_0001_1_00, diagnostics=[Task failed, taskId=task_1476381439440_0001_1_00_000005, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:159) at

Changing some memory options prevented the OutOfMemory error, but the statement would never finish.

After some research (where this link was very helpful) I got the statement running in less than 16 seconds.
Some memory parameters seem to be a bit strange on the Hortonworks Sandbox 2.5 (Docker version).

How I changed the configuration:

YARN:
  • Memory allocated for all YARN containers = 4G
  • Maximum Container Size (Memory) = 2G
  • Minimum Container Size (Memory) = 250M


Tez:
  • tez.am.resource.memory.mb = 800M # if too low it will be slow, but it needs to be lower than the YARN max. container size
  • tez.task.resource.memory.mb = 1G
  • tez.runtime.io.sort.mb = 270M
  • tez.runtime.unordered.output.buffer.size-mb = 76M
  • tez.task.launch.cmd-opts = -Xmx624m # 80% of tez.task.resource.memory.mb


Hive:
  • HiveServer2 Heap Size = 6G # it was set to 96G by default
  • Metastore Heap Size = 2G # it was 32G by default
  • For Map Join, per Map memory threshold = 270M # just limit to a reasonable amount
  • Tez Container Size = 1G
  • For Map Join, per Map memory threshold = 800M # 200M is too low, 80% of Tez Container
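Before changing the Tez values for good in Ambari, they can also be tried out per session; a sketch of the equivalent Hive session settings (values mirror the list above, memory sizes are in MB):

```sql
set tez.am.resource.memory.mb=800;
set tez.task.resource.memory.mb=1024;
set tez.runtime.io.sort.mb=270;
set tez.runtime.unordered.output.buffer.size-mb=76;
set tez.task.launch.cmd-opts=-Xmx624m;
```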

That made the notebook (Dell XPS, i7, 16 GB RAM, Ubuntu 16.04) beat the Azure cloud solution, which needed 41 seconds.

Any suggestions and explanations about the memory settings are welcome.

Dell XPS 13 flickering issue with Ubuntu 16.04

Maybe a bit off topic, but this config solved the flickering issues on my Dell XPS 13 (9350) with the Intel HD Graphics 520.

Add /usr/share/X11/xorg.conf.d/20-intel.conf:

Section "Device"
    Identifier "Intel Graphics"
    Driver "intel"
    Option "AccelMethod" "sna"
    Option "TearFree" "true"
    Option "DRI" "3"

Restart your X server:

systemctl restart lightdm

And any flickering should be gone.

Update: with kernel 4.4.0-42-generic, screen and touchscreen now work after suspend.