Saturday, July 26, 2014

Oracle 12.1.0.2 Inmemory option

Now it's official! The Oracle In-Memory option, a pure column-store region in memory:

In-Memory Column Store

Starting in Oracle Database 12c Release 1 (12.1.0.2), the In-Memory Column Store (IM column store) is an optional, static SGA pool that stores copies of tables and partitions in a special columnar format optimized for rapid scans. The IM column store does not replace the buffer cache, but acts as a supplement so that both memory areas can store the same data in different formats. By default, only objects specified as INMEMORY using DDL are candidates to be populated in the IM column store.
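For example, a table can be marked as a candidate with the INMEMORY clause (SALES here is a hypothetical table name; the clause and its options are documented 12.1.0.2 DDL):

```sql
-- Mark a hypothetical SALES table for population into the IM column store
ALTER TABLE sales INMEMORY;

-- Optionally choose a compression level and a population priority
ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;

-- Remove the table from the IM column store again
ALTER TABLE sales NO INMEMORY;
```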

The Oracle documentation claims the In-Memory option is suitable for the following:
  • Scanning many rows and applying filters that use operators such as =, <, >, and IN
  • Querying a subset of columns in a table, for example, selecting 5 of 100 columns
  • Accelerating joins by converting predicates on small dimension tables into filters on a large fact table
It's not designed for:
  • Queries with complex predicates.
  • Queries that return large numbers of rows.
  • Queries that return a large number of columns.

OK, enough introduction. I'd like to show you how fast this feature is compared with traditional buffer cache access, based on my benchmark under the following conditions:

-Big fact table: 500M records
-Three small dimension tables (fewer than 2K records each)
-No analytic indexes
-Warmed up both the buffer cache and the in-memory area once each
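The @mystat script used in the transcripts below is not shown in the post; a minimal sketch of such a script, assuming it simply reads this session's In-Memory scan counters, could be:

```sql
-- Hypothetical mystat.sql: report this session's IM scan statistics.
-- v$mystat and v$statname are standard dynamic performance views.
SELECT sn.name, ms.value
FROM   v$mystat   ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name IN ('IM scan rows', 'IM scan segments disk');
```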



###Traditional buffer cache access: (inmemory_size = 0, memory_target=16G)
[oracle@db1202 im_test]$ conora

SQL*Plus: Release 12.1.0.2.0 Production on Sat Jul 26 13:47:35 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Sat Jul 26 2014 13:40:19 +07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

12c_im> @mystat

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
IM scan rows                                                              0
IM scan segments disk                                                     0

12c_im> @big_analytic

----output hidden from here-----------

71 rows selected.

Elapsed: 00:00:45.07
12c_im>
12c_im> @mystat

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
IM scan rows                                                              0
IM scan segments disk                                                     0

Elapsed: 00:00:00.00

###In-memory access: (inmemory_size = 8G, memory_target=16G)
[oracle@db1202 im_test]$ conora

SQL*Plus: Release 12.1.0.2.0 Production on Sat Jul 26 14:11:35 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Sat Jul 26 2014 14:10:30 +07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

12c_im> @mystat

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
IM scan rows                                                              0
IM scan segments disk                                                     0

12c_im> @big_analytic

----output hidden from here-----------

71 rows selected.

Elapsed: 00:00:05.11
12c_im> @mystat

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
IM scan rows                                                      521458787
IM scan segments disk                                                     1


##Test summary
-Based on my testing, I got a 9x performance improvement without changing any application code or adding any hardware resources. I just flipped the switch and ran it. Wow! Exciting, if you are considering applying this option to your own environment.

##Conclusion

Oracle Database In-Memory transparently accelerates analytic queries by orders of magnitude, enabling real-time business decisions. It dramatically accelerates data warehouses and mixed workload OLTP environments. The unique "dual-format" approach automatically maintains data in both the existing Oracle row format for OLTP operations, and in a new purely in-memory column format optimized for analytical processing. Both formats are simultaneously active and transactionally consistent. Embedding the column store into Oracle Database ensures it is fully compatible with ALL existing features, and requires absolutely no changes in the application layer. This means you can start taking full advantage of it on day one, regardless of the application.


Cheer,
Chatchai

Saturday, January 12, 2013

How do I startup PDB & CDB ?


How do I start up a Pluggable database ?

When connected to the current PDB:
SQL> alter pluggable database open;

When connected to the root:
SQL> alter pluggable database pdb6 open;

How do I shut down / close a pluggable database?

When connected to the current PDB:
SQL> alter pluggable database close;

When connected to the root:
SQL> alter pluggable database pdb6 close;


How do I shut down / start up the Container Database?

Use the STARTUP / SHUTDOWN commands, similar to the startup/shutdown of a non-CDB.
When the container database is shut down, no PDB is accessible.
In a CDB, the root and all of the PDBs share a single instance, or, when using Oracle RAC, multiple concurrent database instances. You start up and shut down an entire CDB, not individual PDBs. However, when the CDB is open, you can change the open mode of an individual PDB by using the ALTER PLUGGABLE DATABASE statement.
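A minimal sketch of the full sequence, assuming you want every PDB open after the CDB comes up (the ALL keyword is standard 12c syntax):

```sql
-- Start the CDB instance and open the root container
STARTUP

-- Then open all pluggable databases in one statement
ALTER PLUGGABLE DATABASE ALL OPEN;
```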

Enjoy 12c! The official release is coming soon, very exciting!

Tuesday, January 8, 2013

Oracle Database 12c: What is a pluggable database?


Pluggable Databases (PDBs) are new in Oracle Database 12c Release 1 (12.1). You can have many pluggable databases inside a single Oracle Database occurrence. Pluggable databases are fully backward compatible with an ordinary pre-12.1 database.


Here are some examples of basic CDB / PDB operations.

How do I know if my database is a container database or a non-container database?
SQL> select NAME, DECODE(CDB, 'YES', 'CONTAINER DATABASE', 'NON CONTAINER DATABASE') "Container DB ?", OPEN_MODE, CON_ID from V$DATABASE;

NAME      Container DB ?           OPEN_MODE        CON_ID
--------- ------------------------ ------------ ----------
CDB2      CONTAINER DATABASE       MOUNTED               0


What pluggable databases do we have in this container database?
SQL> select CON_ID, NAME, OPEN_MODE from V$PDBS;

    CON_ID NAME       OPEN_MODE
---------- ---------- ----------
         2 PDB$SEED   READ ONLY
         3 PDB1       MOUNTED
         4 PDB2       MOUNTED
         5 PDB3       MOUNTED
         6 PDB4       MOUNTED
         7 PDB5       MOUNTED
         8 PDB6       MOUNTED
         9 PDB7       MOUNTED
 ...

How do I connect to a pluggable database, say, PDB5?
SQL> alter session set container = pdb5;

How do I switch back to the root container database?
SQL> ALTER SESSION SET CONTAINER = CDB$ROOT;

How do I determine which PDB or CDB I am currently connected to?
SQL> show con_name
CON_NAME
------------------------------
PDB5

Enjoy Oracle Database 12C !!! 
Chatchai

Friday, January 4, 2013

Oracle Database 12c Release 1 (12.1) Upgrade New Features


The DBUA interface has changed completely. It is now panel based (the same as OUI). A DBUA upgrade is divided into the following phases:
  • Pre-upgrade steps
  • Database upgrade steps
  • Post-upgrade steps

1. Enhanced Upgrade Automation with New Pre-Upgrade Information Tool

2. Parallel Processing for Database Upgrade
  • Upgrade parallelism is a new feature introduced in Oracle Database 12c R1, where upgrade scripts and processes can run in parallel.
3. New Upgrade Utility catctl.pl
  • The upgrade utility (catctl.pl) was introduced in Oracle Database 12c.
  • It loads the data dictionary and components in parallel, thus reducing the overall upgrade time.
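A sketch of running the utility by hand with a parallel degree of 4 (the paths and the -n value are illustrative; DBUA normally drives this for you, and the exact options should be checked against the upgrade guide for your release):

```shell
# Run the 12c catalog upgrade with 4 parallel worker processes
cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl catctl.pl -n 4 catupgrd.sql
```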

Oracle XML Database is Mandatory with Installation
  • Oracle XML Database (Oracle XML DB) is now a mandatory component of Oracle Database. If, during the upgrade, Oracle XML DB is not found to be installed, then it is automatically loaded into the SYSAUX tablespace. The upgraded Oracle Database includes Oracle XML DB.
Enjoy the new version, Oracle Database 12c! Oracle will officially announce it soon, in the first quarter of 2013.
Regards,
Chatchai

Friday, November 9, 2012

Oracle 12c New Feature Temporary Undo

Oracle 12c announces a new feature called Temporary Undo for temporary tables, which:
   -Reduces the amount of UNDO in the UNDO tablespace
   -Allows better retention periods for "real" data
   -Reduces the size of the redo generated
   -Allows DML on temporary tables in Active Data Guard

Using temporary undo is very simple; you can enable or disable it with:

alter system/session set temp_undo_enabled=true|false;

Let me demonstrate how temporary undo reduces redo generation (this demo is from Tom Kyte's presentation at Oracle OpenWorld 2012):

###Create global temp demo###

c##tkyte%CDB1> create global temporary table gtt
2 on commit delete rows
3 as
4 select * from stage where 1=0;
Table created.

###Running with default undo###

c##tkyte%CDB1> alter session set temp_undo_enabled = false;
Session altered.
c##tkyte%CDB1> insert into gtt
2 select *
3 from stage;
87310 rows created.
Statistics
----------------------------------------------------------

566304 redo size

c##tkyte%CDB1> update gtt
2 set object_name = lower(object_name);
87310 rows updated.
Statistics
----------------------------------------------------------

8243680 redo size


###Running with Temporary Undo###


c##tkyte%CDB1> alter session set temp_undo_enabled = true;
Session altered.
c##tkyte%CDB1> insert into gtt
2 select *
3 from stage;
87310 rows created.
Statistics
----------------------------------------------------------

280 redo size

c##tkyte%CDB1> update gtt
2 set object_name = lower(object_name);
87310 rows updated.
Statistics
----------------------------------------------------------

0 redo size


Wow!! A 99% reduction in redo generation from temporary undo! Now imagine what that does for your database performance, ha ha ha.

Credit : Tom Kyte

Hope this helps!
Chatchai Komrangded


Wednesday, October 31, 2012

Oracle Database 12c: New Features Pluggable Database


Oracle has leapfrogged its middleware technologies, especially the database technology, into the cloud. So far Oracle has been the traditional RDBMS suitable for private enterprise data centers within corporate walls. At Oracle OpenWorld 2012, held in San Francisco, Larry Ellison announced that the Oracle database will also be cloud enabled by introducing pluggable databases for multi-tenancy and easy database movement between systems, platforms, or releases. When a database is cloud ready, it should be hardware agnostic, platform agnostic, and release agnostic, so that it has all the characteristics to be part of Platform as a Service (PaaS) for middleware. It is a brand new capability inside a single container database. So the DBA and developer community should become familiar with "Container Databases" (CDBs) and "Pluggable Databases" (PDBs). I will use CDB and PDB to refer to container databases and pluggable databases respectively.

Before we delve into deeper details about CDBs and PDBs, let us cover some basics about multi-tenancy. Most organizations implement multi-tenancy with application-level logic, i.e., data for multiple customers or different entities within the same database. They can be set up with many different schemas or even within a single schema. But managing the security has a lot of caveats, including auditing, as Larry Ellison and Andrew Mendelsohn mentioned in their respective keynote addresses. It has also been an administrative nightmare when multiple databases run on one machine: backups need to be run separately, and each database has its own memory footprint and its own background processes, which limits the load the server can handle. Consolidating into one container database while keeping all of them as separate databases is a great thing from a consolidation, performance, capacity, and operational perspective. This is going to help consolidation in a very big way for many enterprises, and at the same time it will reduce the server footprint significantly, giving the maximum return on investment (ROI) on the database technology. I really think this new feature has a lot of advantages for customers from small shops to large-scale enterprises, no matter how we look at it. It is going to make enterprises smarter about utilizing the compute capacity they have.

CDB is an acronym for “Container Database” and PDB is an acronym for “Pluggable Database”. I think it will be easier to explain with a metaphor for DBAs. Think of a freight train with up to 250 cars. Each car may carry different contents, with delivery targets for different customers, and is packed and sealed independently to customer requirements, but the entire freight is pulled by a single engine at the front. It would be wasteful to run 250 separate freight trucks; it is far more efficient to consolidate them into a single freight train. Running them independently, we would spend more on fuel and drivers’ expenses, and they would be much more complicated to manage. The freight train is basically the CDB, and each car is a PDB.
A PDB is fully backward compatible with pre-12.1 database releases. There is nothing different from a developer or application connectivity perspective. Everything stays the same, but the PDB will belong to a single CDB. When an application connects to a PDB, it specifies the destination PDB via a database service. Home-grown and third-party applications typically define connectivity outside the application, so it is easy to change just the service name outside the application code. All database connectivity should therefore use a database service rather than the legacy approach of ORACLE_SID-based connectivity; ORACLE_SID ties the application to a specific database instance and does not give scalability or high availability. In 12.1 you can have up to 250 pluggable databases (PDBs) within one container database (CDB), so there is clearly a one-to-many relationship between a CDB and its PDBs.
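As a sketch of the service-based connectivity described above, a tnsnames.ora entry pointing at a PDB's service could look like this (host and service names are hypothetical):

```text
PDB1_SVC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = pdb1_svc))
  )
```

An application would then connect with sqlplus app_user@PDB1_SVC instead of relying on an ORACLE_SID.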

I'll show some features of pluggable databases. The first is: how do we convert a non-CDB to a PDB?

Convert Non-CDB to PDB

We can convert a non-CDB database into a PDB inside a container. There are a few steps involved, as listed below:
  • Ensure the non-CDB can be opened in the 12.1 release
  • Close the non-CDB
  • Open the non-CDB in restricted mode
  • Set the container to itself (as the initial container would be itself)
  • Run an Oracle script to convert

SQL> alter pluggable database noncdbp open;

Pluggable database altered.

SQL> alter pluggable database noncdbp close;

Pluggable database altered.

SQL> alter pluggable database noncdbp open restricted;

Pluggable database altered.  

SQL> alter session set container = noncdbp;

Session altered.

SQL> @?/rdbms/admin/noncdb_to_pdb.sql

    STATUS   COUNT(*)
---------- ----------
         5       9968
         6      39793

  COUNT(*)
----------
      6416

  COUNT(*)
----------
      6271

  COUNT(*)
----------
      3821

  COUNT(*)
----------
      3785

  COUNT(*)
----------
        12

  COUNT(*)
----------
         6

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2012-10-03 19:27:56

Credit : Michael Rajendran

Hope this help :D
Chatchai

Compressing Data for Direct Path Loading


You’re working with a decision support system (DSS)-type database and you want to improve the
performance of an associated reporting application. This environment contains large tables that are
loaded once and then frequently subjected to full table scans. You want to compress data as it is loaded
because this will compact the data into fewer database blocks and thus will require less I/O for
subsequent reads from the table. Because fewer blocks need to be read for compressed data, this will
improve data retrieval performance.

How?

Use Oracle’s basic compression feature to compress direct path-loaded data into a heap-organized
table. Basic compression is enabled as follows:
1. Use the COMPRESS clause to enable compression when creating, altering,
or moving an existing table.
2. Load data via a direct path mechanism such as CREATE TABLE…AS SELECT or
INSERT /*+ APPEND */.

For example, if you're working with a table that has already been created, you can alter its basic
compression characteristics with the ALTER TABLE statement:
SQL> alter table regs_dss compress;

When you alter a table to enable basic compression, this does not affect any data currently in
the table; rather, it only compresses subsequent direct path data load operations.
If you want to enable basic compression for the data already in an existing table, use the MOVE COMPRESS clause:
SQL> alter table regs_dss move compress;
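The two direct path mechanisms from step 2 above can be sketched as follows (regs_dss is the table from the example; regs_staging is a hypothetical source table):

```sql
-- Direct path CTAS: create a compressed copy in one step
CREATE TABLE regs_dss COMPRESS AS
  SELECT * FROM regs_staging;

-- Direct path insert into an existing compressed table
INSERT /*+ APPEND */ INTO regs_dss
  SELECT * FROM regs_staging;
COMMIT;
```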


Thanks for the good information from the excellent Oracle book "Oracle Database 11g Performance Tuning Recipes".

Hope this helps you understand better :D
Chatchai Komrangded