Scale a Docker application with HAProxy

To be able to scale a Docker service, an instance cannot have a static published port.
HAProxy is a simple way to load balance traffic to different Docker containers.

 
The concept is that every container exposes a port (it can be the same for all), but does not publish any ports.
HAProxy publishes one port and distributes the traffic to all containers.

 
The application has to set the environment variable SERVICE_PORTS (for Swarm). That's all.

 
See this docker-compose file (for a stack):

  
version: '3'
services:
  app:
    build: .
    image: rit_app
    environment:
      - SERVICE_PORTS=80
    expose:
      - 80
    networks:
      - nw-rit
  ha:
    image: dockercloud/haproxy
    depends_on:
      - app
    environment:
      - BALANCE=roundrobin
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8102:80"
    networks:
      - nw-rit
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  nw-rit:
    driver: overlay
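
To deploy and scale the stack, something like the following should work (a sketch; the stack name rit and the resulting service name rit_app are assumptions based on the compose file above):

docker stack deploy -c docker-compose.yml rit
docker service scale rit_app=4

The dockercloud/haproxy container watches the Docker socket, picks up the new replicas and keeps balancing the published port 8102 across all of them.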

How to check the JDBC client version in your database

On the Java side you can check the JDBC client version very easily.

java -jar ./mwhome/.../ojdbc7.jar -getversion
Oracle 12.1.0.2.0 JDBC 4.1 compiled with JDK7 on Mon_Jun_30_11:30:34_PDT_2014
#Default Connection Properties Resource
#Tue Apr 17 14:13:51 CEST 2018

But sometimes you may not have access to the application server (here WebLogic), or there are many versions installed on the application side.
So the best way is to check in your Oracle database who your clients are.

SQL> select SID,NETWORK_SERVICE_BANNER,CLIENT_CHARSET,CLIENT_CONNECTION,CLIENT_VERSION,CLIENT_DRIVER from v$session_connect_info;
       SID NETWORK_SERVICE_BANNER                                                   CLIENT_CHARSET  CLIENT_CONNECTION  CLIENT_VERSION  CLIENT_DRIVER
       396 Crypto-checksumming service for Linux: Version 12.2.0.1.0 - Production   Unknown         Heterogeneous      12.1.0.2.0      jdbcthin

This is a reliable way to find out how (jdbcthin, oci, ..) and with what JDBC version the clients are connecting.
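
To get a quick overview per driver and version instead of a per-session list, the same view can be aggregated (a sketch, not from the original post):

select client_driver, client_version, count(distinct sid) as sessions
from v$session_connect_info
where client_driver is not null
group by client_driver, client_version
order by sessions desc;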

Oracle: update an XML value in place with SQL

Oracle SQL provides a simple way to update values in an XML document.
The "updatexml" function can update any field addressed by an XPath expression.

update SCOTT.DEMO" d
set
myval=updatexml(d.myval,'/myroot/mynode/mytime/text()',to_char(sysdate,'YYYY-MM-DD"T"HH24:Mi:SS.FF3"Z"'))
where id=1234;

to update multiple values at once:

updatexml(xmltype, xpath1, rep1, xpath2, rep2, ...)
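
For example, to change two nodes in one statement (a sketch based on the table above; the mystatus node is made up for illustration):

update SCOTT.DEMO d
set myval = updatexml(d.myval,
                      '/myroot/mynode/mytime/text()', to_char(systimestamp,'YYYY-MM-DD"T"HH24:MI:SS.FF3"Z"'),
                      '/myroot/mynode/mystatus/text()', 'PROCESSED')
where id=1234;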

Very handy to update large documents.

Login to Oracle Database without password / Autologin

Here I try to explain the most simple and short way to log in with an Oracle wallet / without a password.

create wallet:

mkstore -wrl <wallet_dir> -createCredential mydbservice.example.ch <db_user> <db_password>
mkstore -wrl <wallet_dir> -listCredential

configure client side sqlnet.ora:

SQLNET.WALLET_OVERRIDE = TRUE
WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
   (METHOD_DATA = (DIRECTORY = <wallet_dir>)))

configure tns alias in tnsnames.ora:

mydbservice_al.example.ch =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server1.example.ch)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = mydbservice.achat.example.ch)
    )
  )

If the Oracle Instant Client is used, the location of sqlnet.ora and tnsnames.ora needs to be defined with the TNS_ADMIN environment variable.
Example:

LD_LIBRARY_PATH=/usr/lib/oracle/12.2/client64/lib
TNS_ADMIN=/etc/oracle/tnsnames
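
With the wallet in place, the connection itself no longer needs a password. A minimal sketch, assuming the connect identifier matches the credential key stored in the wallet (here mydbservice.example.ch):

export TNS_ADMIN=/etc/oracle/tnsnames
sqlplus /@mydbservice.example.ch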

Oracle on Docker (I): Backup / Restore Oracle RDBMS on Docker

Running Oracle on Docker is a great way for testing features or development environments, especially for offline development on notebooks that require an Oracle database.

This assumes you have set up an Oracle image according to the Oracle RDBMS Docker build scripts, chosen to create your database on Docker volumes, and mounted the backup1 volume at /backup.

docker volume create oracledb1
docker volume create backup1

Create an offline backup:

docker exec -u 0 -it ora122 bash
bash-4.2# chown oracle /backup

docker exec -ti ora122 sqlplus / as sysdba
SQL> alter database close;

Database altered.

docker exec -ti ora122 rman target /
RMAN> backup database format '/backup/%U' tag=bk1;
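
To verify that the backup pieces were written to /backup, a quick check from the same container (a sketch, not in the original post):

docker exec -ti ora122 rman target /
RMAN> list backup summary;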

Restore Database from Backup

docker exec -ti ora122 rman target /
RMAN> restore database from tag=BK1;

RMAN> shutdown immediate;

RMAN> startup mount;

RMAN> alter database open resetlogs;
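
As a quick sanity check after the restore (not part of the original steps), the database state can be queried:

docker exec -ti ora122 sqlplus / as sysdba
SQL> select name, open_mode, resetlogs_time from v$database;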

PL/SQL Unit Test for a SQL generator

If PL/SQL is unit tested, utPLSQL (http://utplsql.org/utPLSQL/) is probably the most mature framework to implement it.
Here I suggest a solution to check whether generated SQL is executable.

We have a package P_SEARCH with a function generate_sql that generates dynamic SQL.
Since there is no native exception checking in utPLSQL, we execute the generated SQL in a PL/SQL block.
On an error we do a failing comparison ut.expect('ok').to_equal('error').
If the statement executes without an error we do a passing comparison ut.expect('ok').to_equal('ok').

To lower the impact of the SQL execution, the statement is wrapped in a count select.
If the statement execution is still inappropriate for performance reasons, this could be replaced with a DBMS_SQL parse.

  c := DBMS_SQL.OPEN_CURSOR;
  -- parse only, without executing; an invalid statement raises an exception here
  DBMS_SQL.PARSE(c, 'select * from multi_tab order by 1', DBMS_SQL.NATIVE);
  DBMS_SQL.CLOSE_CURSOR(c);

Package Header and Body of the unit test:

create or replace package ut3_P_SEARCH as
  -- %suite(p_search)
  -- %suitepath(all.online)

  --%beforeall
  procedure global_setup;

  --%afterall
  procedure global_cleanup;

  --%beforeeach
  procedure test_setup;

  --%aftereach
  procedure test_cleanup;
 
  -- %test
  -- %displayname(generate solr sql)
  procedure gensql_starship;

end ut3_P_SEARCH;
/


create or replace package body ut3_P_SEARCH as
  procedure global_setup is
  begin
    null;
  end;

  procedure test_setup is
  begin
    null;
  end test_setup;

  procedure global_cleanup is
  begin
    null;
  end;

  procedure test_cleanup is
  begin
    null;
  end test_cleanup;


  procedure gensql_starship is
    v_sql                 varchar2(4000);
  begin
    -- retrieve generated sql for the STARSHIP class
    select scott.p_search.generate_sql('STARSHIP') into v_sql from dual;
    -- check if executable
    begin
      execute immediate 'select count(*) from ('||v_sql||')';
      ut.expect('ok').to_equal('ok');
    exception  when others then
      ut.expect('ok').to_equal('error');
    end;
    -- check tables 
    ut.expect(v_sql).to_be_like(a_mask=>'%'|| p_metadata_util.f_class_relational_name('STARSHIP') ||'%');
    -- check static attributes
    ut.expect(v_sql).to_be_like(a_mask=>'%"id"%');
  END gensql_starship;


end ut3_P_SEARCH;
/

Test execution:
SQL> begin ut.run('scott.ut3_p_search.gensql_starship'); end;
  2  /
all
online
p_search
generate solr sql
Finished in .014174 seconds
1 tests, 0 failed, 0 errored, 0 disabled, 0 warning(s)

PL/SQL procedure successfully completed.

A good source for PL/SQL testing and error handling is Jacek Gebal's blog (http://www.oraclethoughts.com/); he is also one of the main developers of utPLSQL (version 3).

Find Oracle SQL Profiles causing dynamic_sampling

Recently I had memory lock issues with statements using dynamic sampling.
But after disabling dynamic_sampling at the system level, there were still some statements causing issues with dynamic sampling.
The execution plan showed that the problematic statements had SQL Profiles. Thanks to a blog post from
Christian Antognini (https://antognini.ch/2008/08/sql-profiles-in-data-dictionary/), all SQL Profiles with a dynamic sampling hint could be found.

Demo: set up a statement with a SQL Profile

exec dbms_sqltune.drop_sql_profile('SQLPROFILE42');
DECLARE
  l_sql clob;
BEGIN
  l_sql := q'!select id from t42 where id=:x!';

  dbms_sqltune.import_sql_profile(sql_text    => l_sql,
                                  name        => 'SQLPROFILE42',
                                  profile     => sqlprof_attr(q'!OPT_PARAM('optimizer_dynamic_sampling' 2)!'
                                                 -- ,q'!FULL(@"SEL$1" "T42"@"SEL$1")!'
                                                 ,q'!INDEX(@"SEL$1" "T42"@"SEL$1" "I42_3")!'
                                                 ),
                                  force_match => true);
END;
/


var x number;
exec :x := 2167658022;

explain plan for select id from t42 where id=:x;
select * from table(dbms_xplan.display(null,null,'ADVANCED'));

Finding the SQL Profiles with dynamic sampling

 
SELECT so.name,extractValue(value(h),'.') AS hint
FROM sys.sqlobj$data od, sys.sqlobj$ so,
table(xmlsequence(extract(xmltype(od.comp_data),'/outline_data/hint'))) h
WHERE 1=1 -- so.name = '&sqlprof'
  AND so.signature = od.signature
  AND so.category = od.category
  AND so.obj_type = od.obj_type
  AND so.plan_id = od.plan_id
  and extractValue(value(h),'.') like '%dynamic%'
;
HINT
----------------------------------------------------------------------------------------------------
OPT_PARAM('optimizer_dynamic_sampling' 2)
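
Once an offending profile is identified, it can be disabled (or dropped, as in the demo setup above) with DBMS_SQLTUNE; a sketch using the demo profile name:

exec dbms_sqltune.alter_sql_profile('SQLPROFILE42', 'STATUS', 'DISABLED');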

Adjust column high and low value on a column with hybrid histograms

On tables where time series data gets loaded, it's not uncommon to get out-of-range conditions for the CBO.
This means that the statistics are not up to date, and until they are gathered again the optimizer may think that there is no data within a date range, since the query's range may be higher than the column's high value.
As a consequence you may end up with suboptimal plans, which could include cartesian joins and unwanted nested loops.

An even worse condition is if you have a few outliers in a date range and bind variables are used. If all your data is from the past 10 months but you have a record from the year 0001, the optimizer may guess without a histogram that your data is evenly distributed. If the last months are queried, the guessed cardinality would be too low by orders of magnitude. This usually ends with bad plans as well.

A common solution to this issue is to set the high and/or low value explicitly with DBMS_STATS.set_column_stats (Doc ID 1276174.1).

This works well if there are no histograms on the column. It may even work if you have histograms (even though they may get destroyed). On time series data there is usually a height-balanced histogram, but on 12.1 or 12.2 this often changes to a hybrid histogram.

Using the old method may end up with:
ERROR at line 1:
ORA-20001: Invalid or inconsistent input values
ORA-06512: at "SYS.DBMS_STATS", line 13570
ORA-06512: at line 23

This is mostly because the bkvals array is incorrect.

Even though the details are documented in the description of DBMS_STATS, it may not be so obvious how to set the high value in such a case.
Here I share my example, which works with a hybrid histogram.

declare
  srec             DBMS_STATS.STATREC;
  v_distcnt        NUMBER;
  v_density        NUMBER;
  v_nullcnt        NUMBER;
  v_avgclen        NUMBER;
  v_datevals       DBMS_STATS.DATEARRAY;
  v_histogram_type user_tab_columns.histogram%type;
BEGIN
  DBMS_STATS.get_column_stats (ownname => USER,
                               tabname => 'MYTAB',
                               colname => 'DCOL',
                               distcnt => v_distcnt,
                               density => v_density,
                               nullcnt => v_nullcnt,
                               srec    => srec,
                               avgclen => v_avgclen);
  -- init a datearray with the correct number of values (buckets in the histogram)
  v_datevals := DBMS_STATS.DATEARRAY();
  v_datevals.extend(srec.novals.count);
  -- check if there is a hybrid histogram
  select histogram into v_histogram_type
    from user_tab_columns
   where table_name = 'MYTAB' and column_name = 'DCOL';
  IF v_histogram_type = 'HYBRID' then
    -- copy the values from the histogram to the datearray
    -- (! this is an example for date/timestamp columns only, Julian calendar !)
    for i in 1 .. srec.novals.count loop
      v_datevals(i) := to_date(srec.novals(i), 'J');
    end loop;
    -- reset first and last (low/high value) !! according to your needs !!
    v_datevals(1) := to_date('01.01.2017', 'DD.MM.YYYY');
    v_datevals(v_datevals.last) := trunc(sysdate) + 14;
  ELSE
    null; -- .. do what you do in other cases, this is only an example, think before you run ..
  END IF;
  -- srec.bkvals := null;
  -- if you want a freq or hybrid histogram ... else null
  -- srec.bkvals := dbms_stats.numarray(srec.bkvals(1), srec.bkvals(srec.bkvals.last));

  DBMS_STATS.prepare_column_values (srec, v_datevals);
  DBMS_STATS.set_column_stats (ownname => USER,
                               tabname => 'MYTAB',
                               colname => 'DCOL',
                               distcnt => v_distcnt,
                               density => v_density,
                               nullcnt => v_nullcnt,
                               srec    => srec,
                               avgclen => v_avgclen,
                               force   => true);
end;
/
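
To verify the result, a quick look at the adjusted column statistics can help (a sketch; decoding the RAW LOW_VALUE/HIGH_VALUE is left out here):

select column_name, histogram, num_buckets, num_distinct, last_analyzed
  from user_tab_col_statistics
 where table_name = 'MYTAB'
   and column_name = 'DCOL';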

A hybrid histogram on skewed time series can be really wonderful, and the out-of-range condition can be prevented.

Please leave a comment on how you handle out-of-range issues.

Clone Oracle Home 12c on the same Host

If you have to test some patches and you need a new Oracle Home on the same host, cloning the existing home to the same host is an easy and quick way to do that.

Here is a short step-by-step guide:

  1. check diskspace
    df -h .
  2. copy the oracle home to its new location
    cp -rp /opt/app/oracle/product/12.1.0/rdbms6/ /opt/app/oracle/product/12.1.0/rdbms7
  3. export the new home path and clone. This also creates all necessary inventory entries. Be careful to use a free ORACLE_HOME_NAME.
    export ORACLE_HOME=/opt/app/oracle/product/12.1.0/rdbms7

    /opt/app/oracle/product/12.1.0 $/opt/app/oracle/product/12.1.0/rdbms6/oui/bin/runInstaller -clone -silent ORACLE_HOME=/opt/app/oracle/product/12.1.0/rdbms7 ORACLE_HOME_NAME="OraDB12Home4" ORACLE_BASE="/opt/app/oracle"

    Starting Oracle Universal Installer...

    Checking swap space: must be greater than 500 MB. Actual 9835 MB Passed
    Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-11-15_01-14-43PM. Please wait ...
    Copyright (C) 1999, 2014, Oracle. All rights reserved.

    You can find the log of this install session at:
    /opt/app/oraInventory/logs/cloneActions2017-11-15_01-14-43PM.log
    .................................................................................................... 100% Done.

    Setup in progress (Wednesday, November 15, 2017 1:16:34 PM CET)
    .......... 100% Done.
    Setup successful

    Saving inventory (Wednesday, November 15, 2017 1:16:34 PM CET)
    Saving inventory complete
    Configuration complete

    End of install phases.(Wednesday, November 15, 2017 1:17:00 PM CET)
    WARNING:
    The following configuration scripts need to be executed as the "root" user.
    /opt/app/oracle/product/12.1.0/rdbms7/root.sh
    To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts

    The cloning of OraDB12Home4 was successful.
    Please check '/opt/app/oraInventory/logs/cloneActions2017-11-15_01-14-43PM.log' for more details.

  4. run the root.sh script as instructed.
    /opt/app/oracle/product/12.1.0/rdbms7/root.sh
  5. you can check the inventory to see if everything looks as expected:
    cat /opt/app/oraInventory/ContentsXML/inventory.xml|grep OraDB12Home4

  6. update your oratab and the OPatch directories of the database you want to switch to the new home (a quick verification sketch follows after this list)
    /etc/oratab
    ..
    rdbms7:/opt/app/oracle/product/12.1.0/rdbms7:N

    sid rdbms7
    $ORACLE_HOME/OPatch/opatch lsinventory

    create or replace directory OPATCH_INST_DIR as '/opt/app/oracle/product/12.1.0/rdbms7/OPatch';
    create or replace directory OPATCH_SCRIPT_DIR as '/opt/app/oracle/product/12.1.0/rdbms7/QOpatch';
    create or replace directory OPATCH_LOG_DIR as '/opt/app/oracle/product/12.1.0/rdbms7/QOpatch';
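
The OPATCH_* directory objects above are what DBMS_QOPATCH uses to read the patch inventory from inside the database. As a quick verification after switching the home, something like this should work (a sketch, not part of the original steps):

SQL> set long 200000 pages 0
SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;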