Troubleshooting long-running queries in Oracle Database


In general, application teams come to the DBAs complaining about one application query: last time (or previous times) it finished in a few seconds, and now it takes a minute or more. There is no single concrete answer for this kind of issue; every DBA has their own approach to solving it.

Step 1:
Find the long-running query:
If the long-running session is currently active, we can use v$session to find the sql_id of the long-running query, and then fetch the SQL text from the v$sqltext view using that sql_id.
SQL> select sql_id from v$session where sid = 1234;

SQL_ID
-------------
btdzd9ktsa55n

SQL> select sql_text from v$sqltext where sql_id = 'btdzd9ktsa55n' order by piece;

SQL_TEXT
--------------------------------------------------------------------------------
select name, accno, htown, amt, transactions from abd.transact where mobile_no='9933445599';
If the query has already completed, we can find the sql_id from AWR/ASH data, for example by querying the DBA_HIST_ACTIVE_SESS_HISTORY view over the specific time interval.
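For example, a quick way to see which SQL was most active during a window is to sample-count DBA_HIST_ACTIVE_SESS_HISTORY (the timestamps below are placeholders for the interval in question):

-- Count ASH samples per sql_id in the suspect window (times are placeholders).
select sql_id, count(*) as samples
from dba_hist_active_sess_history
where sample_time between timestamp '2024-01-15 09:00:00'
                      and timestamp '2024-01-15 10:00:00'
group by sql_id
order by samples desc;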

Step 2:
1.     Run the SQL Tuning Advisor for the sql_id:
SET SERVEROUTPUT ON
DECLARE
  l_sql_tune_task_id  VARCHAR2(200);
BEGIN
  l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
                          sql_id      => 'btdzd9ktsa55n',
                          scope       => DBMS_SQLTUNE.scope_comprehensive,
                          time_limit  => 60,
                          task_name   => 'btdzd9ktsa55n_tuning',
                          description => 'Tuning for btdzd9ktsa55n');
  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/
2.     Execute the tuning task using the task name:
EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => 'btdzd9ktsa55n_tuning');
3.     Once the tuning task completes, generate the tuning report:

SELECT DBMS_SQLTUNE.report_tuning_task('btdzd9ktsa55n_tuning') FROM dual;

The report gives us a few recommendations, and we have to take action accordingly.
Recommendations could be gathering statistics, creating indexes, dropping indexes, accepting a SQL profile, and so on.
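If the advisor recommends a SQL profile, it can be accepted with DBMS_SQLTUNE.accept_sql_profile. A minimal sketch (the profile name below is illustrative, not one the advisor necessarily generates):

DECLARE
  l_profile_name VARCHAR2(128);
BEGIN
  -- Accept the profile recommended by the tuning task created above;
  -- the NAME parameter here is a hypothetical example.
  l_profile_name := DBMS_SQLTUNE.accept_sql_profile(
                      task_name => 'btdzd9ktsa55n_tuning',
                      name      => 'profile_btdzd9ktsa55n');
END;
/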

Step 3:
Sometimes we get no recommendations from the SQL Tuning Advisor. In that case we have to verify whether the execution plan for the query has changed, using the DBA_HIST_SQLSTAT, DBA_HIST_SNAPSHOT and DBA_HIST_SQL_PLAN views.

select distinct b.begin_interval_time as snap,
       a.plan_hash_value as plan,
       a.executions_delta as executions,
       a.elapsed_time_delta/1000000 as elapsed_sec,
       a.rows_processed_delta as "ROWS",
       a.rows_processed_delta /
         case when a.executions_delta = 0 then -1 else a.executions_delta end "Avg Rows",
       a.elapsed_time_delta/1000000 /
         case when a.executions_delta = 0 then -1 else a.executions_delta end "Avg Elapsed",
       a.optimizer_cost,
       a.sql_profile
from   dba_hist_sqlstat a,
       dba_hist_snapshot b
where  a.sql_id = '&sqlid'
and    a.snap_id = b.snap_id
order  by b.begin_interval_time;

or

SELECT DISTINCT sql_id, plan_hash_value
FROM dba_hist_sqlstat q,
    (
    SELECT /*+ NO_MERGE */ MIN(snap_id) min_snap, MAX(snap_id) max_snap
    FROM dba_hist_snapshot ss
    WHERE ss.begin_interval_time BETWEEN (SYSDATE - &No_Days) AND SYSDATE
    ) s
WHERE q.snap_id BETWEEN s.min_snap AND s.max_snap
  AND q.sql_id IN ( '&SQLID')
/

The queries above give us the SQL plans used over a number of days; from the list we can choose the best plan and pin it.
How to pin the SQL plan:
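One way to pin a chosen historical plan is to load it from AWR into a SQL tuning set and then create a fixed SQL plan baseline from it with DBMS_SPM. A minimal sketch, assuming the snap range (1234-1240) and plan_hash_value (987654321) are placeholders taken from the plan-history query above:

EXEC DBMS_SQLTUNE.create_sqlset(sqlset_name => 'sts_btdzd9ktsa55n');

DECLARE
  l_cursor DBMS_SQLTUNE.sqlset_cursor;
BEGIN
  -- Pull the chosen plan for this sql_id out of AWR
  -- (snap IDs and plan_hash_value are placeholders).
  OPEN l_cursor FOR
    SELECT VALUE(p)
    FROM TABLE(DBMS_SQLTUNE.select_workload_repository(
                 begin_snap     => 1234,
                 end_snap       => 1240,
                 basic_filter   => 'sql_id = ''btdzd9ktsa55n'' and plan_hash_value = 987654321',
                 attribute_list => 'ALL')) p;
  DBMS_SQLTUNE.load_sqlset(sqlset_name     => 'sts_btdzd9ktsa55n',
                           populate_cursor => l_cursor);
END;
/

DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- fixed => 'YES' pins the loaded plan as a fixed baseline.
  l_plans := DBMS_SPM.load_plans_from_sqlset(
               sqlset_name => 'sts_btdzd9ktsa55n',
               fixed       => 'YES');
END;
/

Afterwards, dba_sql_plan_baselines can be queried to verify that the baseline is ENABLED and FIXED.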

Pre- and post-migration overview of database migration to cloud (Azure, AWS)

As you prepare for migrating to the cloud, verify that your source environment is supported and that you have addressed any prerequisites.

1. Pre-Migration Overview:

  • Find the database size; it helps us estimate how long the data copy will take.
  • How many schemas and tables are going to be migrated, and check for large objects.
  • Database users, roles and privileges.
  • How busy/critical the source database is.
  • Understanding the network, firewalls and security.
  • How much downtime can be afforded.
  • Limitations of the source database.

2. Planning migration methods and execution phase:

  • Do we have any additional resources available to assist with the various data migration scenarios?
  • Distinguish between relatively small databases that allow for downtime during migration and relatively large databases that can't afford downtime during migration.

3. Post-Migration Overview:

  • Develop validation tests for both the source and target databases using the same queries.
  • Set up a test environment.
  • Run the validation tests against the source and the target, then analyze the results.
  • Run performance tests against the source and the target, then analyze and compare the results.

[FATAL] [DBT-11101] The block size (0MB) for the tablespace SYSAUX does not match the configured block size (0MB)


Creating a database through OEM with a block size of 32K fails at the prerequisite step with the error below.

[FATAL] [DBT-11101] The block size (0MB) for the tablespace SYSAUX does not match the configured block size (0MB).
ACTION: To use block size of (0MB), set initialization parameter db_8k_cache_size.

On checking the template file (QC_12201_32k_DB_TEMPLATE.dbt), the default tablespace block sizes are explicitly set to "8192", which does not work because the database block size is set to "32K". Per Oracle, if we want to create tablespaces with a non-standard block size (other than the database block size), we need to set the corresponding non-standard block size buffer parameter, db_8k_cache_size, to a certain amount of memory (per requirement). Legitimate block sizes are from 2K to 32K, so that Oracle can cache the contents of datafiles with an "8K" block size in their own 8K buffer pool alongside the standard "32K" cache.
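As a sketch, the workaround would look like the following (the cache size and datafile path are illustrative values, not recommendations):

-- Allocate a separate buffer cache for 8K blocks (size is illustrative).
ALTER SYSTEM SET db_8k_cache_size = 128M SCOPE=BOTH;

-- A tablespace with a non-standard 8K block size can then be created
-- in a 32K database (datafile path is a placeholder).
CREATE TABLESPACE ts_8k
  DATAFILE '/u01/oradata/ORCL/ts_8k01.dbf' SIZE 100M
  BLOCKSIZE 8K;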

For now, the database was created with DBCA, and the tablespaces were created with a block size of 32K (no issue). We would face the issue only when a mismatch happens.

TABLESPACE_NAME                BLOCK_SIZE
------------------------------ ----------
SYSTEM                              32768
SYSAUX                              32768
UNDOTBS1                            32768
TEMP                                32768
USERS                               32768
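The listing above can be reproduced by querying the data dictionary:

select tablespace_name, block_size from dba_tablespaces;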

In our case, since the database block size and the tablespace block size mismatched, we assume we need to either change the "blockSize" entries in the template to 32K or set the db_nk_cache_size parameter with a certain amount of memory.

Content of template:

The tablespace blockSize entries in the template are hard-coded to 8192.

Refer to the link below for an explanation (navigate to "Setting the Buffer Cache Initialization Parameters"):
https://docs.oracle.com/cd/B28359_01/server.111/b28310/memory004.htm#i1014186

Oracle 12c Database mandatory background processes, new processes


Process: An Oracle database has several types of processes: client processes and Oracle processes, where Oracle processes comprise server processes and background processes.

Oracle introduced new mandatory background processes in the 12c release and increased the number of processes within some of the existing mandatory background process types.

Mandatory background processes in Oracle 12c:

PMON (Process Monitor process): The process monitor (PMON) monitors the other background processes and performs process recovery when a server process or dispatcher process terminates abnormally.

PMON is responsible for cleaning up the database buffer cache and freeing the resources that the client process was using.

For example, PMON resets the entries in the active transaction table, releases locks that are no longer required, and removes the process ID from the list of active processes.

Transaction table: the data structure within an undo segment that holds the transaction identifiers of the transactions using the undo segment.

LREG (Listener registration process): The listener registration process registers information about the database instance and dispatcher processes with the Oracle Net listener.

LREG provides the listener with information about the following:

1.       Names of the database services provided by the database.
2.       Name of the database instance associated with the services and its current and maximum load.
3.       Service handlers (Dispatchers and dedicated servers) available for the instance, including their type, protocol addresses and current and maximum load.

SMON (System Monitor Process):
DBW (Database writer process):
LGWR (Log writer process):
CKPT (Check point process):
MMON and MMNL (Manageability Monitor process):
RECO (Recoverer process):

Newly added background processes:

BWnn (Database writer process):
FENC (Fence Monitor Process):
IPCO (IPC service background process):
LDDn (Global enqueue service Daemon helper slave):
LGnn (Log writer worker):
LREG (Listener registration process):
OFSD (Oracle File server background process):
RPOP (Instant recovery repopulation daemon):
SAnn (SGA Allocator):
SCRB (ASM Disk Scrubbing Master process):
SCRn (ASM disk scrubbing slave repair process):
SCVn (ASM disk scrubbing slave verify process):
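To see which of these background processes are actually running in a given instance, one option is to query v$bgprocess (a PADDR other than '00' indicates a running process):

-- List the background processes currently running in this instance.
select name, description
from v$bgprocess
where paddr <> '00'
order by name;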

Oracle Data Guard background processes, log apply service internals

Data Guard background processes:
1.       Log writer process (LGWR)
2.       Archiver process (ARCH)
3.       Log writer network server (LNS)
4.       Fetch archive log (FAL), a client/server mechanism
5.       Remote file server (RFS)
6.       Managed recovery process (MRP) for physical standby
7.       Logical standby process (LSP) for logical standby
8.       Data Guard broker monitor (DMON), if the DG broker is enabled

Apply services at a high level: Data Guard apply services comprise two parts:
o   Redo Apply for physical standby
o   SQL Apply for logical standby

-        Physical standby and logical standby databases use the same redo transport and role management services; only the apply service differs.

Redo Apply: A physical standby database applies redo data received from the primary database using the Oracle media recovery mechanism.
  1.    Redo Apply uses a specialized process called the managed recovery process (MRP).
  2.    As the RFS process writes redo data to the standby redo logs (SRLs), the MRP reads the redo data and applies it to the physical standby database.
  3.    MRP is started on the physical standby database, after mounting the database, using the command below.
-          SQL> alter database recover managed standby database using current logfile disconnect from session;
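To confirm that the MRP is running and applying redo, a common check on the standby queries v$managed_standby:

-- Show the managed recovery process and the redo it is currently applying.
select process, status, thread#, sequence#, block#
from v$managed_standby
where process like 'MRP%';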

SQL Apply: SQL Apply uses a collection of background processes that perform the task of applying changes from the primary database to the logical standby database.
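SQL Apply is started on the logical standby database with a similar command:

SQL> alter database start logical standby apply immediate;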

 Logical standby database:
  •  Contains the same logical information as the primary database.
  •  The physical organization and structure of the data can be different.
  •  Stays synchronized with the primary database through SQL Apply.
  •  Can be used for queries and reporting at any time.

-         Services involved for a logical standby: log transport services and log apply services.

Log transport services: transmit the redo data from the primary database to the standby database and enforce the data protection modes.

Log apply services: automatically apply the archived redo logs on the standby database, maintain transactional synchronization with the primary database, and allow transactionally consistent read-only access to the data.