Looking for a BI 7 Performance & Tuning book...
Hi
I am looking for a BI 7 Performance & Tuning book. I looked on the SAP Press site and found the book "SAP BW Performance Optimization Guide", but it is based on BW 3.5 and I want one for BI 7.
Please suggest a good book on this topic.
Thanks
Edited by: Harpal Singh on Oct 5, 2010 9:45 AM
I am also a BI developer...
"SAP NetWeaver BW: Administration and Monitoring"
offers a lot of insight into what happens during loading and also discusses at length how to tune the system using RSADMIN parameters, etc.
I found this book very useful; in fact, it helped me a lot in preparing for the BI Professional exam in data modeling.
To answer your question: the BW performance tuning book also covers memory allocation and consumption, which is material usually handled by Basis. Since the concepts are the same, you could go with the 3.5 book to get to know the options you have.
Edited by: Arun Varadarajan on Oct 8, 2010 2:33 AM
Similar Messages
-
LDAP Performance Tuning In Large Deployments - dir_chkcredentialsonreadonly parameter
Calendar users are experiencing long delays when logging in or updating a meeting
with many attendees or dates. This is especially noticeable after migrating
from Calendar Server 1.0x to Calendar Server 3.x.
At this time, calendar performance can be improved by up to 30% by reconfiguring
the calendar server to bind to its directory server either anonymously or as
a specific user. The default is to bind as the user requesting directory
information.
This parameter can be added to the server configuration file by
editing the /users/unison/misc/unison.ini file. For anonymous binds,
add the dir_chkcredentialsonreadonly line to the [DAS] section:
[DAS]
dir_updcalonly = TRUE
dir_connection = persistent
dir_service = LDAP,v2,NSCP,1
enable = TRUE
dir_chkcredentialsonreadonly = FALSE
For binding as a specific user, also add the following to the [LDAP,...]
section:
[LDAP...]
binddn = dn
bindpwd = password
Going forward, we are working on other changes in the next versions of the
Calendar Client and Calendar Server to improve performance. Please check with
your Netscape Sales contact for announcements on the availability of these
versions.
Thank you very much. I am now looking for a good performance tuning book written by Jonathan Lewis. I don't think Jonathan can come to Spain and give lessons... Anyway, I will email him...
But could you please clarify two points for me:
1- Should I manually modify memory parameters like the buffer cache, shared pool, large pool, etc., if those areas are flagged as too small and identified as causes of performance problems in the AWR, ADDM, or ASH reports, even though memory is automatically managed?
If yes, why did Oracle name it "Automatic Memory Management" if I have to set some memory values manually?
2- When an ADDM report suggests increasing the SGA size, where does ADDM get this recommendation? I mean, is the recommendation based on statistics collected from both Oracle and the OS? I am asking because, in a report I ran 3 weeks ago, ADDM suggested increasing the SGA to 10 GB (the total memory of the server is 16 GB). I made the change, and from that moment the server has been swapping... and now the ADDM report suggests increasing the SGA again, to 12 GB.
Best regards -
LDAP Performance Tuning In Large Deployments - numconnect parameter
Tuning the LDAP connections
(numconnect parameter)
This parameter translates directly into the number of unidas processes launched
when Calendar Server is started. Each process takes time to load, uses
RAM and, when active, CPU cycles. Also, each unidas process maintains an LDAP client
connection to a Directory Server, which can only support a fixed number of these
connections. Since a calendar client does not require constant directory access,
configuring one unidas process per uniengd "client" process is not a good
configuration.
Basically, a calendar client makes many requests for LDAP information, even
if the event information being retrieved is not currently viewable. For example,
if the calendar client is displaying a week view with 20 events and each event
has 5 attendees, that translates into at least 100 separate LDAP search
requests for the given name and surname of each attendee. This means
that an "active" calendar user requires the services of a calendar server
unidas connection quite often.
The recommendation is to increase the number of unidas connections
to match the number of "active" calendar users. Our experience is that
at least 20% of the configured users (lck_users in the
/users/unison/misc/unison.ini file) are actually logged in, and 10% of
those calendar users are active. For example, if you have 3000 configured
calendar users, then 600 of them are logged in and 10% of the logged-in users
are active, which translates into at least 60 unidas connections.
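The sizing rule of thumb above can be sketched as a small helper (the ratios and the function name are illustrative, not part of any Calendar Server tooling):

```python
# Hypothetical sizing helper for the rule of thumb above:
# ~20% of configured users are logged in, ~10% of those are active,
# and one unidas connection is needed per active user.
def unidas_connections(configured_users,
                       logged_in_ratio=0.20,
                       active_ratio=0.10):
    logged_in = configured_users * logged_in_ratio
    active = logged_in * active_ratio
    return int(active)

# 3000 configured users -> 600 logged in -> 60 active
print(unidas_connections(3000))  # -> 60
```

Adjust the two ratios to match the observed logged-in and active populations at your site.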
Keep in mind that the configured vs. logged-in vs. active ratios may differ at each
customer site, so adjust your number of unidas connections
accordingly. To set this up, edit the /users/unison/log/unison.ini file and add
the numconnect parameter to the section noted (where "hostname" is the name of
your local host):
[LCK]
lck_users = 600
[hostname,unidas]
numconnect = 60
The calendar server must be restarted after changes are made
to the /users/unison/log/unison.ini file before those changes
take effect.
Note: Due to architectural changes in Calendar Server 4.x, the total
number of DAS connections should never be set higher than 250.
The recommendation for numconnect is a maximum of 5% of logged-on users.
However, keep in mind that 250 DAS connections is a very high number.
Example:
[LCK]
lck_users = 5000
[hostname,unidas]
numconnect = 250
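The 4.x guidance (5% of logged-on users, hard cap of 250) can be sketched the same way; the function name is hypothetical:

```python
# Hypothetical helper for the Calendar Server 4.x guidance above:
# numconnect ~= 5% of logged-on users, never more than 250.
def numconnect_4x(logged_on_users, cap=250):
    return min(int(logged_on_users * 0.05), cap)

print(numconnect_4x(5000))  # -> 250, matching the example config
```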
Looking for a book about performance tuning the 11g database
Hi,
I am looking for books about performance tuning for Oracle 11g. The OTN documents (the 2-day 11g performance tuning course and the performance tuning guide) are not really study material.
Does anyone have an idea?
Greetings,
Max
Hi,
http://astore.amazon.com/oraclebooks-20/detail/1590599683 {Part IV Database Tuning }
Greetings,
Sim -
Good book for Oracle 9i Performance Tuning
Hi, can anybody suggest a good book on Oracle 9i performance tuning (all the tuning methods: I/O tuning, memory tuning, ...)?
I have done my OCP 9i and worked as a junior DBA, and now I want to concentrate only on tuning.
Thanks
Venkataragavan.S
If you are looking for something generalized, not exactly 9i performance, but good in terms of Oracle tuning, I would suggest the following in addition to the above:
Optimizing Oracle Performance by Cary Millsap and Jeff Holt
Practical Oracle 8i by Jonathan Lewis (don't go by the 8i name)
SQL Tuning by Dan Tow
Jaffar -
We have to investigate a reporting solution where things are getting slow (it may be hardware, database design, or network issues).
I have read a lot on MSDN and in some books about performance tuning on SQL Server 2008 R2 (and others), but frankly, I feel a little lost in all that material.
I am looking for practical steps for doing the tuning. Does someone have a recipe for this, a success story?
My (brainstormed) methodology would follow these steps:
Resource bottlenecks: CPU, memory, and I/O bottlenecks
tempdb bottlenecks
A slow-running user query: missing indexes, statistics, ...
Use performance counters: there are many; can someone give us a list of the most important ones?
How to fine-tune the SQL Server configuration
SSRS and SSIS configuration?
And make the recommendations.
Thanks
"there is no Royal Road to Mathematics, in other words, that I have only a very small head and must live with it..."
Edsger W. Dijkstra
Hello,
There is no clearly defined, step-by-step procedure for performance tuning. Your first goal is to find the cause, drilling down to the factor making SQL Server slow: it can be a poorly written query, missing indexes, outdated statistics, a RAM crunch, a CPU crunch, and so on.
I generally refer to the document below for SQL Server tuning:
http://technet.microsoft.com/en-us/library/dd672789(v=sql.100).aspx
For SSIS tuning I refer to the documents below:
http://technet.microsoft.com/library/Cc966529#ECAA
http://msdn.microsoft.com/en-us/library/ms137622(v=sql.105).aspx
When I face an issue, I generally look at wait stats; they give you an idea of which resource a query was waiting on.
-- By Jonathan Kehayias
SELECT TOP 10
wait_type ,
wait_time_ms ,
signal_wait_time_ms ,
wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms ,
100.0 * wait_time_ms / SUM(wait_time_ms) OVER ( )
AS percent_total_waits ,
100.0 * signal_wait_time_ms / SUM(signal_wait_time_ms) OVER ( )
AS percent_total_signal_waits ,
100.0 * ( wait_time_ms - signal_wait_time_ms )
/ SUM(wait_time_ms) OVER ( ) AS percent_total_resource_waits
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0 -- remove zero wait_time
AND wait_type NOT IN -- filter out additional irrelevant waits
( 'SLEEP_TASK', 'BROKER_TASK_STOP', 'BROKER_TO_FLUSH',
'SQLTRACE_BUFFER_FLUSH','CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT',
'LAZYWRITER_SLEEP', 'SLEEP_SYSTEMTASK', 'SLEEP_BPOOL_FLUSH',
'BROKER_EVENTHANDLER', 'XE_DISPATCHER_WAIT', 'FT_IFTSHC_MUTEX',
'CHECKPOINT_QUEUE', 'FT_IFTS_SCHEDULER_IDLE_WAIT',
'BROKER_TRANSMITTER', 'KSOURCE_WAKEUP',
'LOGMGR_QUEUE', 'ONDEMAND_TASK_QUEUE',
'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BAD_PAGE_PROCESS',
'DBMIRROR_EVENTS_QUEUE', 'BROKER_RECEIVE_WAITFOR',
'PREEMPTIVE_OS_GETPROCADDRESS', 'PREEMPTIVE_OS_AUTHENTICATIONOPS',
'WAITFOR', 'DISPATCHER_QUEUE_SEMAPHORE', 'XE_DISPATCHER_JOIN',
'RESOURCE_QUEUE' )
ORDER BY wait_time_ms DESC
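The arithmetic the query derives per wait type can be illustrated with hypothetical numbers (the wait types and timings below are made up for the example): resource wait time is total wait time minus signal (runnable-queue) time.

```python
# Illustrative recomputation of the wait-stats query's derived columns,
# using made-up sample figures rather than sys.dm_os_wait_stats output.
waits = {  # wait_type -> (wait_time_ms, signal_wait_time_ms)
    "PAGEIOLATCH_SH": (8000, 500),
    "CXPACKET": (2000, 1500),
}
total_wait = sum(w for w, _ in waits.values())
for wait_type, (wait_ms, signal_ms) in waits.items():
    resource_ms = wait_ms - signal_ms          # time actually waiting on the resource
    pct_total = 100.0 * wait_ms / total_wait   # share of all wait time
    print(wait_type, resource_ms, round(pct_total, 1))
```

A high signal share relative to total wait time points at CPU pressure rather than the named resource.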
Use the link below to analyze wait stats:
http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/
HTH
PS: for Reporting Services questions you can post in the SSRS forum.
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers -
I am looking for a book-cataloging database app to keep track of my physical books. I have used a Windows application called BookBag Plus, but it looks like there won't be an iPhone version. I'm looking for an app that offers barcode scanning and a database that works on my iPhone, iPad, and PC (via the web) at the same time. I've looked at iBookshelf and Book Crawler, and they seem OK, but they don't seem to work on multiple devices. From what I can tell, I will be able to transfer my existing database via a CSV file.
Any recommendations will be appreciated.
Thanks,
Mike
Hi,
http://astore.amazon.com/oraclebooks-20/detail/1590599683 {Part IV Database Tuning }
Greetings,
Sim -
ORACLE OBJECTS FOR OLE(OO4O) PERFORMANCE TUNING
Product: ORACLE SERVER
Date written: 1997-10-10
Whereas ODBC queries data block by block, OLE fetches the entire result set at once and places it in temporary storage space.
To tune this, modify the parameters in c:/windows/oraole.ini on Windows 3.1,
or under HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\OO4O on Windows 95.
If the file above does not exist, all variables are set to their defaults; read the installed Help carefully before applying any changes.
FetchLimit is the parameter with the greatest impact; in general, the larger its value, the faster the performance. The related material follows.
Tuning and Customization
A number of working parameters of Oracle Objects for OLE can be
customized. Access to these parameters is provided through the Oracle
initialization file, by default named ORAOLE.INI.
Each entry currently available in that file is described below. The location
of the ORAOLE.INI file is specified by the ORAOLE environment variable.
Note that this variable should specify a full pathname to the Oracle
initialization file, which is not necessarily named ORAOLE.INI. If this
environment variable is not set, or does not specify a valid file entry, then
Oracle Objects for OLE looks for a file named ORAOLE.INI in the Windows
directory. If this file does not exist, all of the default values
listed will apply.
You can customize the following sections of the ORAOLE.INI file:
[Cache Parameters]
A cache consisting of temporary data files is created to manage amounts
of data too large to be maintained exclusively in memory. This cache
is needed primarily for dynaset objects, where, for example, a single
LONG RAW column can contain more data than exists in physical
(and virtual) memory.
The default values have been chosen for simple test cases, running on a machine
with limited Windows resources. Tuning with respect to your machine and
applications is recommended.
Note that the values specified below are for a single cache, and that a separate
cache is allocated for each object that requires one. For example, if
your application contains three dynaset objects, three independent data
caches are constructed, each using resources as described below.
SliceSize = 256 (default)
This entry specifies the minimum number of bytes used to store a piece
of data in the cache. Items smaller than this value are allocated the
full SliceSize bytes for storage; items larger than this value are
allocated an integral multiple of this space value. An example of an
item to be stored is a field value of a dynaset.
PerBlock = 16 (default)
This entry specifies the number of Slices (described in the preceding
entry) that are stored in a single block. A block is the minimum unit
of memory or disk allocation used within the cache. Blocks are read
from and written to the disk cache temporary file in their entirety.
Assuming a SliceSize of 256 and a PerBlock value of 16, the block
size is 256 * 16 = 4096 bytes.
CacheBlocks = 20 (default)
This entry specifies the maximum number of blocks held in memory at any
one time. As data is added to the cache, the number of used blocks
grows until the value of CacheBlocks is reached. Previous blocks are
swapped from memory to the cache temporary disk file to make room for
more blocks. The blocks are swapped based upon recent usage. The total
amount of memory used by the cache is calculated as the product of
(SliceSize * PerBlock * CacheBlocks).
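The cache-sizing arithmetic above, evaluated with the documented default values:

```python
# Sketch of the OO4O cache arithmetic described above, using the defaults.
SLICE_SIZE = 256    # minimum bytes per stored item (SliceSize)
PER_BLOCK = 16      # slices per block (PerBlock)
CACHE_BLOCKS = 20   # blocks held in memory at once (CacheBlocks)

block_size = SLICE_SIZE * PER_BLOCK        # 4096 bytes per block
cache_memory = block_size * CACHE_BLOCKS   # 81920 bytes (~80 KB) per cache
print(block_size, cache_memory)
```

Remember this is per cache, so three dynaset objects would use roughly three times `cache_memory`.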
Recommended Values: You may need to experiment to find optimal cache parameter
values for your applications and machine environment. Here are some guidelines
to keep in mind when selecting different values:
The larger the (SliceSize * PerBlock) value, the more disk I/O is required
for swapping individual blocks. The smaller the (SliceSize * PerBlock) value,
the more likely it is that blocks will need to be swapped to or from disk.
The larger the CacheBlocks value, the more memory is required, but the
less likely it is that swapping will be required.
A reasonable experiment for determining optimal performance might
proceed as follows:
Keep the SliceSize >= 128 and vary PerBlock to give a range of block
sizes from 1K through 8K.
Vary the CacheBlocks value based upon available memory. Set it high
enough to avoid disk I/O, but not so high that Windows begins swapping
memory to disk.
Gradually decrease the CacheBlocks value until performance degrades or
you are satisfied with the memory usage. If performance drops off,
increase the CacheBlocks value once again as needed to restore
performance.
[Fetch Parameters]
FetchLimit = 20 (default)
This entry specifies the number of elements of the array into which data
is fetched from Oracle. If you change this value, all fetched values
are immediately placed into the cache, and all data is retrieved from
the cache. Therefore, you should create cache parameters such that all
of the data in the fetch arrays can fit into cache memory. Otherwise,
inefficiencies may result.
Increasing the FetchLimit value reduces the number of fetch calls
to the database and possibly the amount of network traffic.
However, with each fetch, more rows must be processed before user
operations can be performed. Increasing the FetchLimit increases
memory requirements as well.
FetchSize = 4096 (default)
This entry specifies the size, in bytes, of the buffer (string) used for
retrieved data. This buffer is used whenever a long or long raw column
is initially retrieved.
[General]
TempFileDirectory = [Path]
This entry provides one method for specifying disk drive and directory
location for the temporary cache files. The files are created in the
first legal directory path given by:
1. The drive and directory specified by the TMP environment variable
(this method takes precedence over all others);
2. The drive and directory specified by this entry (TempFileDirectory)
in the [general] section of the ORAOLE.INI file;
3. The drive and directory specified by the TEMP environment variable; or
4. The current working drive and directory.
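The documented search order can be sketched as a resolver function (a hypothetical illustration of the precedence, not OO4O code):

```python
import os

# Sketch of the documented lookup order for the temporary-cache directory:
# TMP env var, then the TempFileDirectory entry, then TEMP, then the
# current working directory.
def resolve_temp_dir(temp_file_directory=None):
    for candidate in (os.environ.get("TMP"),
                      temp_file_directory,
                      os.environ.get("TEMP")):
        if candidate and os.path.isdir(candidate):
            return candidate
    return os.getcwd()

print(resolve_temp_dir())
```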
HelpFile = [Path and File Name]
This entry specifies the full path (drive/path/filename) of the Oracle Objects
for OLE help file as needed by the Oracle Data Control. If this entry cannot
be located, the file ORACLEO.HLP is assumed to be in the directory where
ORADC.VBX is located (normally \WINDOWS\SYSTEM). -
Steps for performance Tuning....!!!!
Hi all,
I need your help in Performance tuning.
While we do tuning in Oracle, apart from indexes and the WHERE and ORDER BY clauses, what other points do we need to check? I mean explain plans, etc.
I am working as an Informatica developer, but I need to produce a document that lays out the steps to check while performance tuning SQL queries.
Thanks in advance for your help.
Hi,
Have a look at these links; they may be helpful to you.
When your query takes too long ...
HOW TO: Post a SQL statement tuning request - template posting
Edited by: Ravi291283 on Jul 28, 2009 4:00 AM
Performance Tuning for BAM 11G
Hi All
Can anyone point me to any documents or tips related to performance tuning for BAM 11g on Linux?
It would help to know if you have a specific issue. There are a number of tweaks all the way from the DB to the browser.
A few key things to follow:
1. Make sure you create indexes on the DO. If there is too much old, unneeded data in the DO, delete it periodically. Similar to relational database indexes, defining indexes in Oracle BAM creates and maintains an ordered list of data object elements for fast retrieval.
2. Ensure that IE is set up to do automatic caching. This will help reduce server round trips.
3. Tune DB performance. This typically requires a DBA. Identify the SQL statements most likely to be causing the waits by looking at the drill-down Top SQL Statements Ordered by Wait Time. Use SQL Analyze, EXPLAIN PLAN, or the tkprof utility to tune the queries that were identified.
Check the data object tables involved in the query for missing indexes.
4. Use batching (this is on by default in most cases).
5. Use a fast network.
6. Use profilers to look at machine load/CPU usage, and distribute components onto different boxes if needed.
7. Use better server and client hardware. BAM dashboards are heavy users of Ajax/JavaScript logic on the client. -
IIR 2.8.0.7 performance tuning suggestions / documents for Oracle 11?
Do we have any hints or white papers that would help a customer tune the IIR matching server for initial-load performance,
beyond the Siebel-specific
Performance Tuning Guidelines for Siebel CRM Application on Oracle Database (Doc ID 781927.1),
which does NOT generate any statistics on the Informatica schema?
The customer is starting a production data load into Siebel UCM of over 5 million customer records. Their current bottleneck seems to be IIR queries and high resource usage on the IIR host.
This is for Siebel 8.1.1.4 [21225] on Oracle 11.1.0.6; I currently do not know whether ACR 475 with its EBC access is used. I am looking for performance tuning suggestions on the Informatica and database side; I have not found anything helpful in the Informatica knowledge base, and in the 2.8.0.7 docs the only performance tuning suggestions seem to be about IDT loads.
Obviously, performance can be influenced by the matching fields used, but this customer will be looking for tuning suggestions that keep the matching fields and thresholds they got approved by the business side.
Any documents we could share?
Hi Varad,
Well, Oracle Metalink actually got it fixed for me! Got to give them credit.
I ran the first bit of SQL they sent me and it fixed the issue, so I think it was only cosmetic. I've enclosed their entire response in case others find themselves in a similar situation.
Thanks again for YOUR help too.
Perry
==========================================================
1. Do you see any invalid objects in the database?
2. Please update the SR with the output of running the commands below:
1. Connect as SYS.
set serveroutput on (this will let you see if any check fails)
exec validate_apex; (if your version is 2.0 or above)
If all are fine, the registry should be changed to VALID for APEX and you should see the following
message:
PL/SQL procedure successfully completed.
Note : The procedure validate_apex should always complete successfully.
With serveroutput on, it should show why Application Express is listed as invalid. It should show output like:
FAILED CHECK FOR <<object_name>>
Or
FAILED EXISTENCE CHECK FOR <<object_name>> -
Looking for the latest Performance documents
Hello,
Lately we installed SP14.
I am looking for SAP's latest documents covering performance
issues: tuning, fine-tuning, troubleshooting guides, etc.
My current documents are relevant to SP2, and I would like to read the latest ones on this subject.
Can someone please post the relevant links here? Is there a central place where I can download these documents?
Roy.
Hi Roy,
> Lately we installed SP14
Congrats
> Can someone please attach here the relevant links
The tuning guides can be found under http://service.sap.com/nw04 -> How-to Guides -> Portal, KM and Collaboration; for example:
TREX: How To Configure Efficient Indexing
EP: Finetuning Performance of Portal Platform
KM: Tuning the Performance of Knowledge Management
Hope it helps
Detlev -
Privileges for performance tuning
Can we assign any specific privileges to a DBA for performance tuning?
I don't want to give the SYSDBA and SYSOPER privileges to the DBA.
The question should be why there is a need to tune anything in the Oracle database in the first place. What is the problem; have you defined it in a very clear manner? Is it really a problem, or something else? Which performance tuning tool are you going to use, and why?
Performance tuning is only two words, but it is a very big topic; a topic on which a hundred books and blogs have been written, and more are still being written.
What are you trying to tune, why, and what ORA error did you get? There is no ready-made role in Oracle called something like "PT_ROLE". We just issue SQL against the dictionary objects, and when we get a privilege-related error, we grant SELECT on those SYS objects to the user, if he really needs it.
Regards
Girish Sharma -
Need OES Performance tuning steps for 10g
Hi Guys,
I am looking for a performance tuning document for OES 10g?
can someone provide me that or can someone illustrate the steps please?
Regards,
Naveen
Hi Gopi,
Following are the different tools SAP provides for performance analysis of an ABAP object:
Runtime analysis: transaction SE30
This transaction gives a full analysis of an ABAP program with respect to database and non-database processing.
SQL Trace: transaction ST05
The trace list has many lines that are not related to the SELECT statements in the ABAP program, because executing any ABAP program requires additional administrative SQL calls. To restrict the list output, use the filter when producing the trace list.
Check these links:
http://www.sapgenie.com/abap/performance.htm
http://www.sapdevelopment.co.uk/perform/performhome.htm
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_Introduction.asp
regards,
keerthi. -
What are the best practices for Database management and performance tuning?
Hello,
I want to ensure that I am using the best practices for managing and maintaining our Database.
Is there any documentation out there that outlines how to maintain and ensure top performance out of our database?
Thank you!
John Sefton
I appreciate the responses; however, this is not the information I am looking for.
I am specifically looking for best practices involving management and performance tuning.
Example: are there tools I can install that will monitor the size and response time of the database and alert me if there is degradation in performance?
Are there specific periodic activities I should be doing to guarantee that my database will continue to function the way it is supposed to?
Or is this a fire-and-forget solution that does not need this attention?