Friday, November 9, 2012

Oracle 12c New Feature Temporary Undo

Oracle 12c introduces a new feature called Temporary Undo for temporary tables. It:
   - Reduces the amount of undo stored in the UNDO tablespace
   - Improves retention periods for “real” data
   - Reduces the size of the redo generated
   - Allows DML on temporary tables in Active Data Guard

Using temporary undo is very simple; you enable or disable it with:

alter system/session set temp_undo_enabled=true|false;

Let me demonstrate how temporary undo reduces redo generation (this demo comes from Tom Kyte's Oracle OpenWorld 2012 presentation).

###Create global temp demo###

c##tkyte%CDB1> create global temporary table gtt
2 on commit delete rows
3 as
4 select * from stage where 1=0;
Table created.

###Running with default undo###

c##tkyte%CDB1> alter session set temp_undo_enabled = false;
Session altered.
c##tkyte%CDB1> insert into gtt
2 select *
3 from stage;
87310 rows created.
Statistics
----------------------------------------------------------

566304 redo size

c##tkyte%CDB1> update gtt
2 set object_name = lower(object_name);
87310 rows updated.
Statistics
----------------------------------------------------------

8243680 redo size


###Running with Temporary Undo###


c##tkyte%CDB1> alter session set temp_undo_enabled = true;
Session altered.
c##tkyte%CDB1> insert into gtt
2 select *
3 from stage;
87310 rows created.
Statistics
----------------------------------------------------------

280 redo size

c##tkyte%CDB1> update gtt
2 set object_name = lower(object_name);
87310 rows updated.
Statistics
----------------------------------------------------------

0 redo size


Wow!! More than 99% of redo generation eliminated by temporary undo! Now imagine what that can do for your database performance.

Credit : Tom Kyte

Hope this helps!
Chatchai Komrangded


Wednesday, October 31, 2012

Oracle Database 12c: New Features Pluggable Database


Oracle has pushed its middleware technologies, especially the database, into the cloud. So far Oracle has been the traditional RDBMS suited to private enterprise data centers within corporate walls. At Oracle OpenWorld 2012 in San Francisco, Larry Ellison announced that the Oracle database will also be cloud-enabled, introducing pluggable databases for multi-tenancy and easy database movement between systems, platforms, or releases. For a database to be cloud-ready it should be hardware-agnostic, platform-agnostic, and release-agnostic, which gives it all the characteristics needed for Platform as a Service (PaaS). This is a brand-new capability inside a single container database, so DBAs and the developer community should become familiar with “Container Databases” (CDB) and “Pluggable Databases” (PDB). I will use CDB and PDB to refer to container databases and pluggable databases respectively.

Before we delve deeper into CDBs and PDBs, let us cover some basics of multi-tenancy. Most organizations implement multi-tenancy with application-level logic, i.e. data for multiple customers or different entities within the same database. This can be set up across many schemas or even within a single schema, but managing security that way has a lot of caveats, including auditing, as Larry Ellison and Andrew Mendelsohn mentioned in their respective keynote addresses. Running multiple databases on one machine has also been an administrative nightmare: backups must run separately, and each database has its own memory footprint and background processes, which limits the load the server can handle. Consolidating into one container database while keeping the databases logically separate is a great thing from a consolidation, performance, capacity, and operational perspective. This will help consolidation in a very big way for many enterprises while reducing the server footprint significantly, giving the maximum return on investment (ROI) in database technology. I really think this new feature has advantages for everyone from small customers to large-scale enterprises, no matter how you look at it, and it will make enterprises smarter about utilizing the compute capacity they already have.

CDB is an acronym for “Container Database” and PDB for “Pluggable Database”. A metaphor may make this easier for DBAs: think of a freight train with up to 250 cars. Each car may carry different contents, be destined for a different customer, and be packed and sealed independently, yet the entire freight is pulled by a single engine at the front. It would be foolish to run 250 separate freight trucks; it is far more efficient to consolidate them into a single freight train. Run independently, we would spend more on fuel and drivers, and the whole operation would be much harder to manage. The freight train is the CDB, and each car is a PDB.
A PDB is fully backward compatible with pre-12.1 database releases. Nothing changes from a developer or application-connectivity perspective; everything stays the same except that the PDB belongs to a single CDB. When an application connects to a PDB, it specifies the destination PDB via a database service. Home-grown and third-party applications typically define connectivity outside the application code, so it is easy to change just the service name. All database connectivity should therefore use database services rather than the legacy ORACLE_SID-based approach, which ties the application to a specific database instance and provides neither scalability nor high availability. In 12.1 you can have up to 250 pluggable databases (PDBs) within one container database (CDB), so there is a one-to-many relationship between a CDB and its PDBs.
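As a quick sketch of what this looks like in practice (the PDB name pdb1 is hypothetical), a common user connected to the root container can list the PDBs in a CDB and switch the session into one of them:

```sql
-- Show which container the session is currently in
show con_name

-- List the PDBs plugged into this CDB
select con_id, name, open_mode from v$pdbs;

-- Switch this session into a specific PDB (pdb1 is a hypothetical name)
alter session set container = pdb1;
```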

I'll show some pluggable database features. First: how do we convert a non-CDB into a PDB?

Convert Non-CDB to PDB

We can convert a non-CDB database into a PDB inside a container. The few steps involved are listed below.
·         Ensure the non-CDB can be opened in the 12.1 release
·         Close the non-CDB
·         Open the non-CDB in restricted mode
·         Set the container to itself (the initial container is itself)
·         Run the Oracle conversion script

SQL> alter pluggable database noncdbp open;

Pluggable database altered.

SQL> alter pluggable database noncdbp close;

Pluggable database altered.

SQL> alter pluggable database noncdbp open restricted;

Pluggable database altered.  

SQL> alter session set container = noncdbp;

Session altered.

SQL> @?/rdbms/admin/noncdb_to_pdb.sql

    STATUS   COUNT(*)
---------- ----------
         5       9968
         6      39793

  COUNT(*)
----------
      6416

  COUNT(*)
----------
      6271

  COUNT(*)
----------
      3821

  COUNT(*)
----------
      3785

  COUNT(*)
----------
        12

  COUNT(*)
----------
         6

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2012-10-03 19:27:56

Credit : Michael Rajendran

Hope this helps :D
Chatchai

Compressing Data for Direct Path Loading


You’re working with a decision support system (DSS)-type database and you want to improve the
performance of an associated reporting application. This environment contains large tables that are
loaded once and then frequently subjected to full table scans. You want to compress data as it is loaded
because this will compact the data into fewer database blocks and thus will require less I/O for
subsequent reads from the table. Because fewer blocks need to be read for compressed data, this will
improve data retrieval performance.

How?

Use Oracle’s basic compression feature to compress direct path–loaded data into a heap-organized
table. Basic compression is enabled as follows:
1. Use the COMPRESS clause to enable compression when creating, altering,
or moving a table.
2. Load data via a direct path mechanism such as CREATE TABLE…AS SELECT or
INSERT /*+ APPEND */.

For example:

If you’re working with a table that has already been created, you can alter its basic compression
characteristics with the ALTER TABLE statement, for example:
SQL> alter table regs_dss compress;

When you alter a table to enable basic compression, this does not affect any data already existing
in the table; rather, it only compresses subsequent direct path data load operations.
If you want to enable basic compression for data in an existing table, use the MOVE COMPRESS clause:
SQL> alter table regs_dss move compress;
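Putting the two steps together, here is a minimal sketch (the compressed table name regs_dss_c is hypothetical; regs_dss is the table from the example above):

```sql
-- Direct-path load into a compressed table in one step via CTAS
create table regs_dss_c compress
as select * from regs_dss;

-- Or a direct-path insert into a table already marked COMPRESS;
-- a conventional insert (without APPEND) would not be compressed
insert /*+ APPEND */ into regs_dss_c
select * from regs_dss;
commit;

-- Confirm the compression attribute
select table_name, compression
from   user_tables
where  table_name = 'REGS_DSS_C';
```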


Thanks for the good information from the Oracle book "Oracle Database 11g Performance Tuning Recipes".

Hope this helps you understand :D
Chatchai Komrangded

Thursday, October 11, 2012

Cloud Overview and Considerations


According to the U.S. National Institute of Standards and Technology, cloud computing
consists of five essential characteristics, three distinct service models, and four deployment
models.

Essential Characteristics
On-Demand Self-Service
Resource Pooling
Rapid Elasticity
Measured Service
Broad Network Access

Service Models
Software-as-a-Service (SaaS)
Platform-as-a-Service (PaaS)
Infrastructure-as-a-Service (IaaS)

Deployment Models
Public Cloud
Private Cloud
Community Cloud
Hybrid Cloud

Service Models

Software-as-a-Service
This term generally refers to applications that are delivered to end users over the Internet or
broadband access. There are hundreds of SaaS providers covering a wide variety of applications.
Oracle CRM On Demand, Salesforce.com, and Google Apps are examples of the SaaS model.

Infrastructure-as-a-Service
Refers to computing hardware (servers, storage, and network) delivered as a service. This
typically includes the associated software as well, including operating systems, virtualization,
clustering, and so on. Amazon Web Services, for example, offers their Elastic Compute Cloud
(EC2) for compute servers, SimpleDB for database, and Simple Storage Service (S3) for storage.

Platform-as-a-Service
This model conveys how an application development and deployment platform can be delivered
as a service to developers, allowing them to quickly build and deploy an SaaS application for end
users. These platforms typically include database and middleware, and are often specific to a
language or API. For example, Google AppEngine is based on Java and Python, EngineYard is
based on Ruby on Rails, and Force.com uses a proprietary variation of Java.


Hope this help,
Chatchai

Tuesday, October 9, 2012

Top 12 features of Oracle Database 12C

1) "With" clause can define PL/SQL functions

2) Improved defaults, including defaulting a column to a sequence, "default if (on) null", identity columns ("generated as identity", with optional sequence definition info), and metadata-only defaults (default on an added column).

3) Bigger varchar2, nvarchar2, and raw: up to 32K. But they follow LOB-like rules; values over 4K are stored out of line. (MAX_STRING_SIZE init parameter)

4) Top-N and pagination queries using the 'OFFSET' clause plus an optional 'FETCH NEXT n ROWS' in SELECT. E.g.: SELECT ... FROM t ORDER BY y FETCH FIRST 5 ROWS ONLY

5) Row pattern matching using the "MATCH_RECOGNIZE" clause. This one will take a while to master.

6) Partitioning improvements, including ASYNC global index maintenance (with new jobs to do the work 'later'), cascading truncate and exchange, multiple operations in a single DDL, online partition moves (no DBMS_REDEFINITION needed), and "interval + reference" partitioning.

7) Adaptive execution plans, which set thresholds and allow the execution plan to switch if a threshold is exceeded. (See also the 'gather_plan_statistics' hint.) Shown by 'Statistics Collector' steps in trace/tkprof.

8) Enhanced statistics. Dynamic sampling goes to 'eleven', becoming persistent. New histograms: hybrid (for more than 254 distinct values, replacing height-balanced) and top-frequency. Stats are gathered automatically on data loads. (By the way, don't regather stats if not needed.) Session-private statistics for GTTs.

9) Undo for temporary objects, managed in TEMP, which eliminates the redo generated for permanent undo. (ALTER SESSION/SYSTEM SET TEMP_UNDO_ENABLED=TRUE/FALSE)

10) Data optimization, or Information Lifecycle Management, which detects block usage (hot, medium, dormant) and allows policies in the table definition (new ILM clause) to compress or archive data over time.

11) "Transaction Guard" to preserve commit state, which includes TAF read/write transfer and restart for some types of transactions.

12) Pluggable databases! Implications too numerous to list right now. Suffice it to say: huge resource improvements, huge consolidation possibilities. Looking forward to the reality.
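As a sketch of item 4's new row-limiting clause (the table and column names here are hypothetical):

```sql
-- Top-N: the five highest-paid employees
select employee_id, salary
from   employees
order  by salary desc
fetch first 5 rows only;

-- Pagination: skip the first 10 rows, return the next 5
select employee_id, salary
from   employees
order  by salary desc
offset 10 rows fetch next 5 rows only;
```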








Hope this helps,


Chatchai :D

Monday, October 8, 2012

#OOW 2012 : IaaS, Private Cloud, Multitenant Database, and X3H2M2


The title of this post summarizes the four announcements Larry Ellison made on 30-09-2012 during the opening session of Oracle OpenWorld 2012. To learn what is behind X3H2M2 you will have to wait a little, as I will go in order, beginning with the IaaS (Infrastructure as a Service) announcement.
Oracle IaaS goes Public... and Private...
Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications based on standards-based development, but also the underlying PaaS required to build customizations and the interconnections between applications, in and outside of the cloud. Still, to cover the end-to-end cloud services spectrum, we had to provide Infrastructure as a Service, leveraging our server, storage, OS, and virtualization technologies, all "Engineered Together".
This cloud infrastructure was already available for our customers to rapidly build their own private cloud, on either SPARC/Solaris or x86/Linux. The second announcement made today takes that proposition a big step further: for cautious customers (such as banks or other sensitive industries) who want the "as a Service" value of the cloud but don't want their data out in the public cloud, we propose to operate the same systems that provide our public cloud infrastructure (Exadata, Exalogic, and SuperCluster) behind their firewall, in a private cloud model.
Oracle 12c Multitenant Database
This is also a major announcement made today about what's coming in Oracle Database 12c: the ability to consolidate multiple databases with no extra cost, especially in terms of memory needed on the server node, which is often THE limiting factor for consolidation. The principle is comparable to Solaris Zones: you have a database container that "owns" the memory and database background processes, and "pluggable" databases inside that container. This feature is a strong compelling reason to evaluate Oracle Database 12c as soon as it is available, as it is a major step toward true database consolidation with multitenancy on a shared, optimized infrastructure.
X3H2M2, enabling the new Exadata X3 in-Memory Database
Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all of the data in the memory cache hierarchy. This is the major software enhancement of the new X3 Exadata machine, but because it is software, current customers will be able to benefit from it on their existing systems by upgrading to the new release. And that is not the only thing that changed with X3; at the same time everything was upgraded:
  • the CPUs, with more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge),
  • the memory, now 512GB per node as well,
  • and the new Flash Fire cards, bringing up to 22 TB of flash cache.
All of this (4TB of RAM plus 22TB of flash) is used cleverly by the X3H2M2 algorithm, not only for reads but also for writes, making a very big difference compared to a traditional flash storage extension.
But what do these extra performance gains bring to an already very efficient system? Double the performance of the fastest storage array on the market today (including flash) while dividing your storage price by ten at the same time. Something to consider closely these days, especially since we also announced the availability of a new Exadata X3-2 eighth-rack: a good starting point.
As you have seen, a major opening for this year again, with true innovation. But that was not the only thing we saw today. Before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor they are co-developing with us, and Andrew Mendelsohn, Senior Vice President of Database Server Technologies, came on stage to explain that the next step after I/O optimization for the database with Exadata is to accelerate the database at the execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data: the big theme of the day, and of the Oracle user group conferences also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data (one in finance and fraud profiling) and on practical deployment of Oracle Exalytics for data analytics.

Monday, October 1, 2012

Oracle Announces Oracle Exadata X3 Database In-Memory Machine


Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point


ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012

News Facts

During his opening keynote address at Oracle OpenWorld, Oracle CEO, Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines.
The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud.
Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing.
In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks.
With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications.

Next-Generation Technologies Deliver Dramatic Performance Improvements

Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database Workloads.
Oracle Exadata X3 Database In-Memory Machine systems leverage next-generation technologies to deliver significant performance enhancements, including:
Four times the Flash memory capacity of the previous generation; with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata’s unique Hybrid Columnar Compression capabilities, hundreds of Terabytes of user data can now be managed entirely within Flash;
20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous generation Exadata systems, increasing their capacity for writes tenfold;
33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors;
Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2 provides 40 10Gb network ports per rack for connecting users and moving data;
Up to 30 percent reduction in power and cooling.

Configured for Your Business, Available Today

Oracle Exadata X3-2 Database In-Memory Machine systems are available in a Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configuration to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability.
Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations and existing systems can also be upgraded with Oracle Exadata X3-2 servers.
Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle’s PeopleSoft, Oracle’s Siebel CRM, the Oracle E-Business Suite, and thousands of other applications.

Supporting Quotes

“Forward-looking enterprises are moving towards Cloud Computing architectures,” said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. “Oracle Exadata’s unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today.”

Monday, September 24, 2012

The Life of an Oracle Shadow Process


Here is a breakdown of the life of an Oracle shadow process and where it might wait along
the way. This comes from Oracle Doc ID 61998.1 and shows what happens in less than one
second inside Oracle.

State    Notes
IDLE     Waits for “SQL*Net message from client” (waiting for the user). Receives the SQL*Net packet requesting “parse/execute” of a statement.

ON CPU   Decodes the SQL*Net packet.

WAITING  Waits for “latch free” to obtain the “library cache” latch. Gets the library cache latch.

ON CPU   Scans the shared pool for the SQL statement, finds a match, frees the latch, sets up links to the shared cursor, etc., and begins to execute.

WAITING  Waits for “db file sequential read”; we need a block that is not in the cache (wait for I/O).

ON CPU   Block read from disk completes. Execution continues. Constructs a SQL*Net packet with the first row of data and sends it to the client.

WAITING  Waits on “SQL*Net message from client” for the acknowledgment packet.

IDLE     Waits for the next SQL*Net message from the client.
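You can watch these state transitions yourself from another session. A sketch (the SID value 123 is hypothetical; substitute a real session ID):

```sql
-- Current wait state of a session: roughly the WAITING / ON CPU states above
select sid, state, event, seconds_in_wait
from   v$session_wait
where  sid = 123;

-- Cumulative waits the session has accumulated over its lifetime
select event, total_waits, time_waited
from   v$session_event
where  sid = 123
order  by time_waited desc;
```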


Thanks to Richard Niemiec for the good information from the Oracle book "Oracle Database 11g Performance Tuning Tips and Techniques".

Hope this helps you :D
Chatchai Komrangded


Friday, September 21, 2012

Thursday, September 20, 2012

ORA-29702: error occurred in Cluster Group Service operation Tip

When you use Oracle RAC, you have Grid Infrastructure and database components on each node. Someday you may want to remove them (perhaps because of application requirements, budget, and so on). You might disable the Grid services by commenting them out in /etc/inittab, or by uninstalling. Oracle then starts up and runs fine as a single instance, but sometimes you will see ORA-29702: error occurred in Cluster Group Service operation. This is caused by the LMON process still being active, so the instance is assumed to be a RAC instance.

In this situation, if you are sure you no longer need RAC, you can delink it from the Oracle library as follows:

1. Shutdown the database completely.

2. Relink with RAC OFF :

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk rac_off
$ make -f ins_rdbms.mk ioracle

3. Startup the database.

Hope this helps you :D
Chatchai Komrangded

RFS-process on physical standby database fails with Ora-00600:[Kcrrrfswda.9]

In an Oracle Data Guard environment, you may sometimes see the error messages below at the standby site:


Physical standby database

The RFS process fails on the physical standby database with the following errors:

Errors in file d:\oracle\app\administrator\diag\rdbms\pihist\pihist3\trace\pihist3_rfs_5364.trc:
ORA-00272: error writing archive log L:\ARCHIVE\PIHIST\ARC00089_0731084629.001
ORA-00600: internal error code, arguments: [kcrrrfswda.9], [4], [368], [], [], [], [], []
RFS[29]: Possible network disconnect with primary database


On the primary database, the following errors may be reported:
Errors in file d:\oracle\app\administrator\diag\rdbms\pdonds\pdonds\trace\pdonds_arc7_3572.trc:
ORA-12582: TNS:invalid operation
FAL[server, ARC7]: FAL archive failed, see trace file.
Errors in file d:\oracle\app\administrator\diag\rdbms\pdonds\pdonds\trace\pdonds_arc7_3572.trc:
ORA-16055: FAL request rejected


These errors are network related, e.g. a firewall is enabled between the primary and standby databases.

This ORA-600 is a network-related issue:

1. A firewall feature is most likely the culprit for this ORA-600. Please ensure there is no feature enabled that modifies TCP packets, such as 'fixup' or 'inspect' on Cisco firewalls, for example. These features modify the TCP packets containing redo data, causing the RFS process on the standby database to refuse those packets. This can then end in the ORA-600.

Please make sure the following firewall features are disabled:

- SQLNet fixup protocol
- Deep Packet Inspection (DPI)
- SQLNet packet inspection
- SQL Fixup
- SQL ALG (Juniper firewall)

2. There is also a known issue reported in Bug 8605213: ORA-00600: [KCRRRFSWDA.9].
The ORA-600 no longer occurred after setting dead connection detection in Oracle Net (SQLNET.EXPIRE_TIME) to a value below the firewall timeout threshold.
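A sketch of that workaround in sqlnet.ora (the value of 10 minutes is just an example; pick something below your firewall's idle-timeout):

```
# sqlnet.ora on the database server side
# Send a dead-connection-detection probe every 10 minutes
# so the firewall does not consider the connection idle
SQLNET.EXPIRE_TIME = 10
```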

3. There are various non-public bugs; most of them are closed as not reproducible, or the customer was not willing to disable the firewall feature to verify whether the error still happened afterwards.


Reference:
RFS-process on physical standby database fails with Ora-00600:[Kcrrrfswda.9], [4], [368], [], [], [], [], [] [ID 1269749.1]


Hope this helps you :D
Chatchai Komrangded




Wednesday, September 19, 2012

Private Database Cloud


Oracle Database Cloud


Private Database Clouds are a proven model for the delivery of database services. They enable customers to consolidate servers, storage, and database workloads onto a shared hardware and software infrastructure. By providing on-demand access to database services in a self-service, elastically scalable, and metered manner, Private Database Clouds offer compelling advantages in cost, quality of service, and agility. Deployment can be accelerated and optimized by leveraging Oracle Database, RAC, Exadata, and Oracle servers and storage.
For more information, see this link.

Hope this helps you :D
Chatchai Komrangded


Monday, September 17, 2012


Webcast: Three Compelling Reasons to Upgrade to Oracle Database 11g


Date: Tuesday, September 18, 2012 
Time: 10 a.m. PT/1 p.m. ET 

If you or your organization is still working with Oracle Database 10g or an even older version, now is the time to upgrade. Oracle Database 11g offers a wide variety of advantages to enhance your operation. Join us for this live Webcast and learn about what you’re missing: the business, operational, and technical benefits. 

With Oracle Database 11g, you can:
  • Upgrade with zero downtime
  • Improve application performance and database security
  • Reduce the amount of storage required
  • Save time and money

If you are interested, you can join this webcast via the link below:


Hope this helps you :D
Chatchai Komrangded


Warning "aiowait timed out x times"


When the above message appears in the alert.log, the database may also
appear to hang.

The message indicates that Oracle has encountered problems while trying
to perform asynchronous I/O.

When the Oracle instance is configured to run with asynchronous I/O
(disk_asynch_io = true), it will use the aiowait system call to obtain the
completion status of such I/Os.

When you use truss (on Solaris), you will see:

28463/1:        open("/etc/passwd", O_RDONLY)                   = 3
28463/1:        kaio(5, 0xFFFFFFE8, 0xFFFFFFFF, 0x00000000, 0x00000000, 0x000000
00, 0xEF7F4298) = 0
28463/1:        kaio(AIOREAD, 3, 0x00020FF0, 1024, 0, 0xEFFFF9FC) Err#48
28463/1:        lwp_create(0xEFFFF628, 0, 0xEF781F44)           = 3
28463/3:        lwp_create(0x00000000, 0, 0x00000000)           = -277341360
28463/3:        lwp_self()                                      = 3
28463/1:        lwp_create(0xEFFFF628, 0, 0xEF778F44)           = 4
28463/4:        lwp_create(0x00000000, 0, 0x00000000)           = -277378224
28463/4:        lwp_self()                                      = 4
28463/1:        lwp_create(0xEFFFF628, 0, 0xEF6BAF44)           = 5
28463/5:        lwp_create(0x00000000, 0, 0x00000000)           = -278156464
28463/5:        lwp_self()                                      = 5
28463/1:        lwp_create(0xEFFFF628, 0, 0xEF6B1F44)           = 6
28463/6:        lwp_create(0x00000000, 0, 0x00000000)           = -278193328
28463/6:        lwp_self()                                      = 6          
28463/3:        pread(3, " r o o t : x : 0 : 1 : 0".., 1024, 0) = 1024
28463/4:        pread(3, " r y h a : x : 2 1 6 : 1".., 1024, 1024) = 1024
28463/5:        pread(3, " s 1 / j c a h i l l : /".., 1024, 2048) = 1024
28463/6:        pread(3, " s h\n s f a x : x : 5 0".., 1024, 3072) = 115

This is the clever bit. The KAIO kernel driver cannot operate on a filesystem file, so when the first kaio(AIOREAD) fails, the aioread() function
spawns four lightweight processes (threads). Each thread reads a different section of the file, in parallel, thereby SIMULATING asynchronous I/O.

There are a number of actions that can be taken to deal with this problem:

#1) Restart the database without asynchronous I/O

This is not an ideal solution and will only serve as a workaround until the underlying problem with asynchronous I/O is identified and fixed at the O/S level.

To operate without asynchronous I/O, set the init.ora parameter:

disk_asynch_io = false

At the same time you should also set the parameter dbwr_io_slaves to a value other than 0. This allows Oracle to simulate asynchronous I/O using multiple I/O slave processes, each performing its I/O synchronously.
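A sketch of the two init.ora settings together (the slave count of 4 is just an example value; tune it for your system):

```
# init.ora workaround: disable native async I/O and let DBWR
# simulate it with synchronous I/O slave processes
disk_asynch_io = false
dbwr_io_slaves = 4
```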

#2) Reduce I/O load on the system from this and other databases

Again this is only an attempt to avoid the underlying condition that causes asynchronous I/O to fail in cases where this happens under heavy load.

This is only a temporary measure until the problem is found and fixed e.g. by applying O/S patches.

Possible actions are:
- avoid running heavy batch jobs known to generate lots of I/O
- allocate more memory to the Buffer Cache and Sort Areas to reduce I/O from/to datafiles and temporary tablespaces
(relevant parameters: db_block_buffers, db_cache_size,
pga_aggregate_target, sort_area_size, hash_area_size etc.)
- tune checkpoints to reduce frequency of write I/O by DBWR

#3) Check all Oracle prerequisites for O/S and I/O subsystem configuration

You will need to refer to Oracle platform-specific Installation manuals and Administrator's references and make sure all required O/S patches and kernel parameters are in place and that Asynchronous I/O is correctly configured.

Note 169706.1: Oracle® Database on AIX®, HP-UX®, Linux®, Mac OS® X

#4) Check for known O/S issues with Asynchronous I/O operation

#5) Check the filesystem's disks and try a FULL export to /dev/null to identify possible corruption.


Hope this helps you understand :D
Chatchai Komrangded

Sunday, September 16, 2012

Matching Table Types to Business Requirements

Oracle provides a wide variety of table types. The default table type is heap-organized. For most
applications, a heap-organized table is an effective structure for storing and retrieving data. However,
there are other table types that you should be aware of, and you should know the situations under which
each table type should be implemented. Table 1-1 describes each table type and its appropriate use.





In most scenarios, a heap-organized table is sufficient to meet your requirements. This Oracle table type
is a proven structure used in a wide variety of database environments. If you properly design your
database (normalized structure) and combine that with the appropriate indexes and constraints, the
result should be a well-performing and maintainable system.
Normally most of your tables will be heap-organized. However, if you need to take advantage of a
non-heap feature (and are certain of its benefits), then certainly do so. For example, Oracle partitioning
is a scalable way to build very large tables and indexes. Materialized views are a solid feature for
aggregating and replicating data. Index-organized tables are efficient structures when most of the
columns are part of the primary key (like an intersection table in a many-to-many relationship). And
so forth.

Thanks for the good information from the Oracle book "Oracle Database 11g Performance Tuning Recipes".

Hope this helps you understand :D
Chatchai Komrangded

Keep in Mind About the Oracle NOLOGGING Option

We often use Oracle's NOLOGGING option to avoid redo generation and speed up bulk loads, CTAS, and so on. That is the pro of NOLOGGING; the con is that in some cases you cannot recover objects that were loaded with NOLOGGING. The demonstration below illustrates this:

***** You must have a backup set for the database before following the steps below *****

1. Prepare 
create table t nologging as select * from all_objects;

2. Query Test
select count(*) from t;

3. Flush Buffer Cache
alter system flush buffer_cache;

4. Check block
select min(dbms_rowid.rowid_relative_fno(rowid)) "File_Number",
       min(dbms_rowid.rowid_block_number(rowid)) "Block number"
from t;

5. Corrupt the data blocks (overwrites two blocks in the datafile; assumes an 8K block size, with $FILE and $BLOCKNO taken from step 4)
dd if=/dev/zero of=$FILE bs=8192 conv=notrunc seek=$BLOCKNO count=1
dd if=/dev/zero of=$FILE bs=8192 conv=notrunc seek=`expr $BLOCKNO + 1` count=1

6. Query test; you will see the error
select count(*) from t;

7. Restore + Recover
rman target /
sql 'alter database datafile 11 offline';
restore datafile 11;
recover datafile 11;
sql 'alter database datafile 11 online';

8. Query test; you will find the NOLOGGING problem: your object cannot be recovered
select count(*) from t;

If you follow these steps through step 8, you will see:

ORA-26040: Data block was loaded using the NOLOGGING option

Cause: Trying to access data in block that was loaded without redo generation using the NOLOGGING/UNRECOVERABLE option
Action: Drop the object containing the block.


Be very careful when using this option; it can leave your objects in an unrecoverable state.
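To see whether any of your datafiles contain unrecoverable (NOLOGGING) changes made since your last backup, a sketch:

```sql
-- Datafiles with NOLOGGING changes; if unrecoverable_time is newer
-- than your last backup of that file, take a fresh backup now
select file#, unrecoverable_change#, unrecoverable_time
from   v$datafile
where  unrecoverable_time is not null;
```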


Hope this helps you understand :D
Chatchai Komrangded