PERFORMANCE TUNING INTERVIEW QUESTIONS AND ANSWERS
Page – 4
31..How can one do hardware tuning?
Hardware cannot be tuned in the same manner as the RDBMS and the OS. After analysing the system, you may discover that it is necessary to add hardware resources such as more disks, disk controllers, RAM, cache, CPUs, and so on. Adding hardware resources is part of tuning and requires the same analysis and testing as RDBMS and OS tuning.
32..How do you get the info about FTS?
Using v$sysstat — look at the 'table scans (long tables)' and 'table scans (short tables)' statistics.
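As a sketch, the full table scan counters can be pulled from v$sysstat like this (the statistic names are standard, but column widths and formatting are up to you):

```sql
-- Full table scan counts since instance startup.
-- 'long tables' are scans of tables above the small-table threshold,
-- which are usually the ones worth investigating.
select name, value
  from v$sysstat
 where name in ('table scans (short tables)',
                'table scans (long tables)');
```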
33..Why library cache pins are needed?
Library cache pins manage concurrent access to dependent objects. For example, an underlying object should not be modified while a session is executing or accessing a dependent object. So before parse locks on a library cache object can be broken, the library cache pin must be acquired in exclusive mode, and only then can the library cache object be dropped. If a session is executing a stored object, the library cache pin will not be available and there will be waits for library cache pins. Typically this happens when a long-running statement is executing a stored object.
34..Where do you get the info about library cache?
v$librarycache
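A minimal query against v$librarycache (real view and columns; the output interpretation is the usual rule of thumb that low pin/get hit ratios or high reloads suggest shared pool pressure):

```sql
-- Per-namespace library cache activity: hit ratios close to 1 are good;
-- high RELOADS or INVALIDATIONS point at aging-out or DDL on hot objects.
select namespace, gets, gethitratio, pins, pinhitratio,
       reloads, invalidations
  from v$librarycache;
```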
35..Where should the tuning effort be directed?
Consider the following areas for tuning. The order in which steps are listed needs to be maintained to prevent tuning side effects. For example, it is no good increasing the buffer cache if you can reduce I/O by rewriting a SQL statement.
Database Design (if it’s not too late):
Poor system performance usually results from a poor database design. One should generally normalize to the 3NF. Selective denormalization can provide valuable performance improvements. When designing, always keep the “data access path” in mind. Also look at proper data partitioning, data replication, aggregation tables for decision support systems, etc.
Application Tuning:
Experience shows that approximately 80% of all Oracle system performance problems are resolved by coding optimal SQL. Also consider proper scheduling of batch tasks after peak working hours.
Memory Tuning:
Properly size your database buffers (shared pool, buffer cache, log buffer, etc) by looking at your buffer hit ratios. Pin large objects into memory to prevent frequent reloads.
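The buffer cache hit ratio mentioned above can be computed from v$sysstat with the classic formula (the statistic names are real; treat the ratio as a rough indicator, not a target in itself):

```sql
-- Buffer cache hit ratio = 1 - physical reads / (db block gets + consistent gets)
select 1 - (phy.value / (cur.value + con.value)) "Buffer Cache Hit Ratio"
  from v$sysstat cur, v$sysstat con, v$sysstat phy
 where cur.name = 'db block gets'
   and con.name = 'consistent gets'
   and phy.name = 'physical reads';
</imports>
```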
Disk I/O Tuning:
Database files need to be properly sized and placed to provide maximum disk subsystem throughput. Also look for frequent disk sorts, full table scans, missing indexes, row chaining, data fragmentation, etc.
Eliminate Database Contention:
Study database locks, latches and wait events carefully and eliminate where possible.
Tune the Operating System:
Monitor and tune operating system CPU, I/O and memory utilization. For more information, read the related Oracle FAQ dealing with your specific operating system.
36..How do you create, execute and display an explain plan?
A..@?/rdbms/admin/utlxplan.sql (To create the plan table)
B..explain plan set statement_id='1' for select * from emp;
C..@?/rdbms/admin/utlxpls.sql (To display the plan)
37..What is the difference between DBFile sequential and scattered reads?
Both “db file sequential read” and “db file scattered read” events signify time waited for I/O read requests to complete. Time is reported in hundredths of a second for Oracle 8i releases and below, and thousandths of a second for Oracle 9i and above. Most people confuse these events with each other because they think of how data is read from disk; instead they should think of how data is read into the SGA buffer cache.
db file sequential read:
A sequential read operation reads data into contiguous memory (usually a single-block read with p3=1, but can be multiple blocks). Single block I/Os are usually the result of using indexes. This event is also used for rebuilding the controlfile and reading datafile headers (P2=1). In general, this event is indicative of disk contention on index reads.
db file scattered read:
Similar to db file sequential reads, except that the session is reading multiple data blocks and scattering them into different discontinuous buffers in the SGA. This statistic normally indicates disk contention on full table scans. In rare cases, data from a full table scan fits into a contiguous buffer area; those waits then show up as sequential reads instead of scattered reads.
The following query shows average wait time for sequential versus scattered reads:
prompt "AVERAGE WAIT TIME FOR READ REQUESTS"
select a.average_wait "SEQ READ", b.average_wait "SCAT READ"
from sys.v_$system_event a, sys.v_$system_event b
where a.event = 'db file sequential read'
and b.event = 'db file scattered read';
38..Which init parameter do you have to set to bypass parsing?
cursor_sharing=force
39..How to check and generate the TKPROF report?
TKPROF is a tuning tool used to determine CPU and execution times for SQL statements. Use it by first setting timed_statistics to true in the initialization file, then turning on tracing, either for the entire database via the sql_trace parameter or for an individual session, and finally running TKPROF against the resulting trace file and examining its output. TKPROF can also be used to generate explain plan output.
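A sketch of the workflow described above, assuming a scott/tiger demo account and an illustrative trace file name (the actual file appears under user_dump_dest with an instance-specific name):

```shell
# Trace one session: enable timing and SQL trace, run the workload.
sqlplus scott/tiger <<'EOF'
alter session set timed_statistics = true;
alter session set sql_trace = true;
select * from emp;
exit
EOF

# Format the trace file with TKPROF: suppress SYS recursive SQL,
# sort by elapsed execution time, and include explain plan output.
tkprof ora_12345.trc tkprof_report.txt sys=no sort=exeela explain=scott/tiger
```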
40..When is cost based optimization triggered?
It’s important to have statistics on all tables for the CBO (Cost Based Optimizer) to work correctly. If one table involved in a statement does not have statistics, Oracle has to revert to rule-based optimization for that statement. So you really want all tables to have statistics right away; it won’t help much to have just the larger tables analyzed.
Generally, the CBO can change the execution plan when you:
A. Change statistics of objects by doing an ANALYZE;
B. Change some initialization parameters (for example: hash_join_enabled, sort_area_size, db_file_multiblock_read_count).
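For point A, gathering statistics can be sketched as follows (SCOTT.EMP is the usual demo table; ANALYZE matches the era of this document, while DBMS_STATS is the supported method in later releases):

```sql
-- Classic approach: compute statistics with ANALYZE
analyze table emp compute statistics;

-- Preferred in later releases: DBMS_STATS (run from SQL*Plus)
exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP');
```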