Index Oracle RDBMS in Solr using Apache Spark

After writing a multiprocessing Python script that can be scaled across multiple Docker containers to load billions of records from Oracle RDBMS into Solr Cloud, I started thinking about simpler solutions.

The basic idea is to let Spark handle all multiprocessing and scaling aspects. Since Spark can partition JDBC data sources out of the box, all that remained was to prove that the data can be saved to Solr. Spark-Solr provides the needed functionality.

Environment Preparation:

  • Set up Solr Cloud on Docker (one node in cloud mode is sufficient; create a collection “test”)
  • Run Oracle 18c on Docker
  • Run a Spark Cluster on Docker (I used these Spark images)

Copy the JDBC driver and the spark-solr library into the Spark master and worker containers:

docker cp ojdbc8.jar spark-master:/
docker cp spark-solr-3.6.4-shaded.jar  spark-master:/
docker cp ojdbc8.jar spark-worker-1:/
docker cp spark-solr-3.6.4-shaded.jar  spark-worker-1:/

Start the Spark shell (I use pyspark) and load the data into a DataFrame:

docker exec -it spark-master /spark/bin/pyspark --jars /ojdbc8.jar,/spark-solr-3.6.4-shaded.jar

empDF = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:oracle:thin:test/test@//192.168.1.12:1521/demo01") \
    .option("dbtable", "test.emp") \
    .option("user", "test") \
    .option("password", "****") \
    .option("driver", "oracle.jdbc.driver.OracleDriver") \
    .load()
empDF.printSchema()
empDF.show()
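
Since Spark partitions JDBC data sources out of the box, the read can be parallelized by adding the standard JDBC partitioning options. A minimal sketch; the partition column empno and the bounds are assumptions about the table:

empDF = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:oracle:thin:@//192.168.1.12:1521/demo01") \
    .option("dbtable", "test.emp") \
    .option("user", "test") \
    .option("password", "****") \
    .option("driver", "oracle.jdbc.driver.OracleDriver") \
    .option("partitionColumn", "empno") \
    .option("lowerBound", "1") \
    .option("upperBound", "1000000") \
    .option("numPartitions", "8") \
    .load()

Each of the 8 partitions is read by its own task, so the extraction scales with the cluster.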

Now save the data to Solr Cloud:

empDF.write.format("solr") \
    .option("zkhost", "192.168.1.12:9983") \
    .option("collection", "test") \
    .mode("append") \
    .save()

And the data is in Solr Cloud.

In a test with more data I had to add the option .option("commit_within", "5000"),
as I could not find an explicit commit in spark-solr.
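
With that option added, the complete write call looks like this:

empDF.write.format("solr") \
    .option("zkhost", "192.168.1.12:9983") \
    .option("collection", "test") \
    .option("commit_within", "5000") \
    .mode("append") \
    .save()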

This solution should scale well on a Spark cluster by partitioning the data on the JDBC side. Transformations can be added in Spark. And such a solution is much less complex and easier to maintain than a Python multiprocessing solution (whoever has used Python multiprocessing may know what I mean).
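
Transformations slot in naturally between the read and the write; a tiny sketch, assuming the EMP table has an ename column (a hypothetical name for illustration):

from pyspark.sql.functions import col, lower

# Add a lowercase copy of a column before indexing it in Solr
empDF = empDF.withColumn("ename_lower", lower(col("ename")))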

If you have used a similar setup, please share your experiences.

Oracle on Docker (I): Backup / Restore Oracle RDBMS on Docker

Running Oracle on Docker is a great way to test features or to set up development environments, especially for offline development on notebooks that require an Oracle database.

This assumes you have set up an Oracle image according to oracle rdbms docker, chosen to create your database on Docker volumes, and mounted the backup1 volume at /backup:

docker volume create oracledb1
docker volume create backup1
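
For completeness, a run command could look like the following sketch; the image name and the data-file path depend on how you built the image from the Oracle Docker build scripts:

docker run -d --name ora122 \
    -v oracledb1:/opt/oracle/oradata \
    -v backup1:/backup \
    oracle/database:12.2.0.1-ee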

Create an offline backup:

docker exec -u 0 -it ora122 bash
bash-4.2# chown oracle /backup

docker exec -ti ora122 sqlplus / as sysdba
SQL> alter database close;

Database altered.

docker exec -ti ora122 rman target /
RMAN> backup database format '/backup/%U' tag=bk1;

Restore Database from Backup

docker exec -ti ora122 rman target /
RMAN> restore database from tag=BK1;

RMAN> shutdown immediate

RMAN> startup mount

RMAN> alter database open resetlogs;
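
If you are not sure which backups are available before restoring, the standard RMAN listing shows them:

RMAN> list backup summary;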

Find Oracle SQL Profiles causing dynamic_sampling

Recently I had memory lock issues with statements using dynamic sampling.
But after disabling dynamic_sampling at the system level, some statements still caused issues with dynamic sampling.
The execution plans showed that the problematic statements had SQL Profiles. Thanks to a blog post by
Christian Antognini (https://antognini.ch/2008/08/sql-profiles-in-data-dictionary/), all SQL Profiles with a dynamic sampling hint could be found.

Demo: Set up a statement with a SQL Profile

exec dbms_sqltune.drop_sql_profile('SQLPROFILE42');

DECLARE
    l_sql clob;
BEGIN
    l_sql := q'!select id from t42 where id=:x!';

    dbms_sqltune.import_sql_profile(
        sql_text    => l_sql,
        name        => 'SQLPROFILE42',
        profile     => sqlprof_attr(
                           q'!OPT_PARAM('optimizer_dynamic_sampling' 2)!'
                           -- ,q'!FULL(@"SEL$1" "T42"@"SEL$1")!'
                           ,q'!INDEX(@"SEL$1" "T42"@"SEL$1" "I42_3")!'
                       ),
        force_match => true
    );
END;
/


var x number;
exec :x := 2167658022;

explain plan for select id from t42 where id=:x;
select * from table(dbms_xplan.display(null,null,'ADVANCED'));

Finding the SQL Profiles with dynamic sampling

SELECT so.name, extractValue(value(h), '.') AS hint
FROM sys.sqlobj$data od,
     sys.sqlobj$ so,
     table(xmlsequence(extract(xmltype(od.comp_data), '/outline_data/hint'))) h
WHERE 1=1 -- so.name = '&sqlprof'
  AND so.signature = od.signature
  AND so.category = od.category
  AND so.obj_type = od.obj_type
  AND so.plan_id = od.plan_id
  AND extractValue(value(h), '.') LIKE '%dynamic%';
HINT
----------------------------------------------------------------------------------------------------
OPT_PARAM('optimizer_dynamic_sampling' 2)
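
Once a problematic profile is identified, it can be disabled instead of dropped; a short sketch using the standard dbms_sqltune API, with the profile name from the demo above:

exec dbms_sqltune.alter_sql_profile(name => 'SQLPROFILE42', attribute_name => 'STATUS', value => 'DISABLED');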

Clone Oracle Home 12c on the same Host

If you have to test some patches and need a new Oracle Home on the same host, cloning the existing home is an easy and quick way to do that.

Here is a short step-by-step guide:

  1. Check disk space
    df -h .
  2. Copy the Oracle Home to its new location
    cp -rp /opt/app/oracle/product/12.1.0/rdbms6/ /opt/app/oracle/product/12.1.0/rdbms7
  3. Export the new home path and clone. This also creates all necessary inventory entries. Be careful to use a free ORACLE_HOME_NAME.
    export ORACLE_HOME=/opt/app/oracle/product/12.1.0/rdbms7

    /opt/app/oracle/product/12.1.0 $/opt/app/oracle/product/12.1.0/rdbms6/oui/bin/runInstaller -clone -silent ORACLE_HOME=/opt/app/oracle/product/12.1.0/rdbms7 ORACLE_HOME_NAME="OraDB12Home4" ORACLE_BASE="/opt/app/oracle"

    Starting Oracle Universal Installer...

    Checking swap space: must be greater than 500 MB. Actual 9835 MB Passed
    Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-11-15_01-14-43PM. Please wait ...
    Copyright (C) 1999, 2014, Oracle. All rights reserved.

    You can find the log of this install session at:
    /opt/app/oraInventory/logs/cloneActions2017-11-15_01-14-43PM.log
    .................................................................................................... 100% Done.

    Setup in progress (Wednesday, November 15, 2017 1:16:34 PM CET)
    .......... 100% Done.
    Setup successful

    Saving inventory (Wednesday, November 15, 2017 1:16:34 PM CET)
    Saving inventory complete
    Configuration complete

    End of install phases.(Wednesday, November 15, 2017 1:17:00 PM CET)
    WARNING:
    The following configuration scripts need to be executed as the "root" user.
    /opt/app/oracle/product/12.1.0/rdbms7/root.sh
    To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts

    The cloning of OraDB12Home4 was successful.
    Please check '/opt/app/oraInventory/logs/cloneActions2017-11-15_01-14-43PM.log' for more details.

  4. Run the root.sh script as instructed.
    /opt/app/oracle/product/12.1.0/rdbms7/root.sh
  5. Check the inventory to verify that everything looks as expected:
    cat /opt/app/oraInventory/ContentsXML/inventory.xml | grep OraDB12Home4

  6. Update your oratab and the OPATCH directories of the database you want to switch to the new home:
    /etc/oratab
    ..
    rdbms7:/opt/app/oracle/product/12.1.0/rdbms7:N

    sid rdbms7
    $ORACLE_HOME/OPatch/opatch lsinventory

    create or replace directory OPATCH_INST_DIR as '/opt/app/oracle/product/12.1.0/rdbms7/OPatch';
    create or replace directory OPATCH_SCRIPT_DIR as '/opt/app/oracle/product/12.1.0/rdbms7/QOpatch';
    create or replace directory OPATCH_LOG_DIR as '/opt/app/oracle/product/12.1.0/rdbms7/QOpatch';
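
To verify that the queryable patch inventory picks up the new home, you can query dbms_qopatch (available since 12c; returns XML):

SELECT dbms_qopatch.get_opatch_install_info FROM dual;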

Indexing Oracle table columns of collection type (nested table)

With Oracle’s object-relational features you can store a collection of user-defined datatypes (UDTs) in a table column. The collection can be stored as an IOT or as a heap table.

If it is stored as a heap table (a normal table), you may want to index the nested table so that there is no full table scan (FTS) on every access.
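
For reference, a minimal setup could look like the following sketch; the type and table names are assumptions, loosely matching the storage table NESTED_TABXX_TAB2 in the plans below:

create type num_list_t as table of number;
/
create table nested_tabxx (
    id   number,
    tab2 num_list_t
) nested table tab2 store as nested_tabxx_tab2;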

---------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name                          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                               |     9 |  1818 | 32469   (1)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL                  | TABLE                         |     1 |    81 |   309   (1)| 00:00:01 |

The nested table has a hidden column “NESTED_TABLE_ID” which can be indexed:

create index <my_index> on <my_nested_table> (NESTED_TABLE_ID);
---------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name                          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                               |     9 |  1818 |    56   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| NESTED_TABXX_TAB2             |     1 |    27 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | NESTED_TAB_IX1                |     1 |       |     1   (0)| 00:00:01 |

* Don’t forget to gather statistics after creating the indexes.
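
A quick sketch of the statistics gathering; owner and table name are placeholders:

exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'NESTED_TABXX', cascade => true);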