Cost Based Optimizer (CBO)
Not sure if this is a daft question or what, but I am trying to find out where exactly it exists.
I know that when performing ST05 and viewing the execution plan, we see what the CBO has used, but is the CBO purely performed at the database server, and not in the SAP application?
When updating the statistics, are these passed to the database server, where the CBO then uses them for the execution plan, or do the database statistics actually reside on the database server in the first place?
Finally, when viewing the execution plan, there is the statement "execution costs = xxx" (xxx being a numeric value). What exactly is xxx? Maybe an internal index used to compare execution plans, or maybe the number of blocks required to read the estimated number of rows?
anyone ??
thanks
glen
Hello Glen,
As far as my knowledge is concerned, the statistics are actually located on the database server. That appears to be the more logical design too: what would be the use of maintaining the access paths on the application server? Most modern database servers are equipped with CBO functionality, and cost-based optimizing is dependent on the database.
Here's what the documentation says:
<i>You can update statistics on the Oracle database using the Computing Center Management System (CCMS). The transactions to be used are DB20 and DB21.
By running update statistics regularly, you make sure that the database statistics are up-to-date, so improving database performance. The Oracle cost-based optimizer (CBO) uses the statistics to optimize access paths when retrieving data for queries. If the statistics are out-of-date, the CBO might generate inappropriate access paths (such as using the wrong index), resulting in poor performance.
From Release 4.0, the CBO is a standard part of the SAP System. If statistics are available for a table, the database system uses the cost-based optimizer. Otherwise, it uses the rule-based optimizer.</i>
Regards,
Anand Mandalika.
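To see for yourself that the cost lives on the database side, here is a sketch of viewing it directly in Oracle (the table and predicate are just examples; the COST column is the optimizer's internal, unitless figure for comparing candidate plans, not a block count or a time):

```sql
-- Generate a plan for a statement without executing it
EXPLAIN PLAN FOR
  SELECT * FROM mara WHERE matnr = '000000000000000023';

-- The COST column is what ST05 surfaces as "execution costs = xxx"
SELECT id, operation, options, object_name, cost
FROM   plan_table
ORDER  BY id;
```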
Similar Messages
-
Top Link Special Considerations in moving to Cost Based Optimizer....
Our current application architecture consists of running a Java based application with Oracle 9i as the database and toplink as the object relational mapping tool. This is a hosted application about 5 years old with stringent SLA requirements and high availability needs. We are currently using Rule Based Optimizer (RBO) mode and do not collect statistics for the schemas. We are planning a move to Cost Based Optimizer (CBO)
What are the special considerations we need to be aware of when moving from RBO to CBO, from a TopLink perspective? Is TopLink code optimized for one mode over the other? What special parameter settings are needed? Any of your experience in moving TopLink-based applications to the CBO, and best practices, would be very much appreciated.
-Thanks
Ganesan Maha

Ganesan,
Over the 10 years we have been delivering TopLink, I do not recall any issues with customizing TopLink for either approach. You do have the ability to customize how the SQL is generated, and even replace the generated SQL with custom queries should you need to. This will not require application changes, but simply modifications to the TopLink metadata.
As of 9.0.4 you can also provide hints in the TopLink query and expression framework that will be generated into the SQL to assist the optimizer.
Doug -
Rule based & Cost based optimizer
Hi,
What is the difference between the rule-based and cost-based optimizers?
Thanks

Without an optimizer, all SQL statements would simply do block-by-block, row-by-row table scans and table updates.
The optimizer attempts to find a faster way of accessing rows by looking at alternatives, such as indexes.
Joins add a level of complexity - the simplest join is "take an appropriate row in the first table, scan the second table for a match". However, deciding which is the first (or driving) table is also an optimization decision.
As technology improves, a lot of different techniques for accessing the rows or joining the tables have been devised, each with its own optimum data-size:performance:cost curve.
Rule-Based Optimizer:
The optimization process follows specific defined rules, and will always follow those rules. The rules are easily documented and cover things like 'when are indexes used', 'which table is the first to be used in a join' and so on. A number of the rules are based on the form of the SQL statement, such as order of table names in the FROM clause.
In the hands of an expert Oracle SQL tuner, the RBO is a wonderful tool - except that it does not support such advanced features as query rewrite and bitmap indexes. In the hands of the typical developer, the RBO is a surefire recipe for slow SQL.
Cost-Based Optimizer:
The optimization process internally sets up multiple execution proposals and extrapolates the cost of each proposal using statistics and knowledge of the disk, CPU and memory usage of each of the proposals. It is not unusual for the optimizer to analyze hundreds, or even thousands, of proposals - remember, something as simple as a different order of table names is a proposal. The proposal with the least cost is generally selected to be executed.
The CBO requires accurate statistics to make reasonable decisions.
Even with good statistics, the complexity of the SQL statement may cause the CBO to make a wrong decision, or ignore a specific proposal. To compensate for this, the developer may provide 'hints' or recommendations to the optimizer. (See the 10g SQL Reference manual for a list of hints.)
The CBO has been constantly improving with every release since its inception in Oracle 7.0.12, but early missteps have given it a bad reputation. Even in Oracle8i and 9i Release 1, there were countless 'opportunities for improvement'. As of Oracle 10g, the CBO is quite decent - sufficiently so that the RBO has been officially deprecated. -
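The "enumerate proposals, pick the least cost" idea above can be sketched in a few lines of toy code. The costs and the join-cost model here are entirely made up for illustration; the point is only the shape of the algorithm:

```python
from itertools import permutations

# Toy per-table scan costs; driving a small table first should win,
# mirroring the optimizer's reasoning described above.
scan_cost = {"emp": 100, "dept": 4, "bonus": 20}

def plan_cost(order):
    """Cost of joining tables in the given order: scan the driving
    table once, then pay a fake nested-loop probe cost per later table."""
    cost = rows = scan_cost[order[0]]
    for t in order[1:]:
        cost += rows * 2 + scan_cost[t]   # invented probe-cost formula
        rows = max(1, rows // 2)          # pretend each join filters rows
    return cost

# Every join order is a "proposal"; keep the cheapest one.
proposals = {order: plan_cost(order) for order in permutations(scan_cost)}
best = min(proposals, key=proposals.get)
print(best)  # the least-cost proposal drives from the smallest table
```

A real optimizer prunes this search space aggressively rather than costing every permutation, but the selection principle is the same.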
Rule based optimizer vs Cost based optimizer - 9i
Is the rule-based optimizer not used any more, or can it still be used depending on the application, etc.?
I think Rule based optimizer still has some advantages. Please give your input if you think otherwise.
Thx

I think the Rule based optimizer still has some advantages. Please give your input if you think otherwise.

You are absolutely correct. There are a few advantages to RBO.
RBO is better for any application that meets the following criteria:
- designed for Oracle version 7;
- has not been updated since Oracle 7;
- was hand tuned in Oracle 7;
- will not be upgraded to Oracle Database 10g (where RBO is obsolete);
- will not use Bitmap Indexes, Materialized Views, Query Rewrite, or virtually anything that was introduced in Oracle8 and beyond.
CBO, while not perfect, will allow new features to be used. And it is improving with every release. -
How to use Cost Based Optimizer
Hi,
I'm looking for documentation about the CBO. I found some information through Google and here, but does anyone know where I can find more information about the CBO: how to use it, how it increases performance, and more?
Thank You

See the Oracle® Database Performance Tuning Guide
-
Cost Based Optimizer Statistics
Hi,
I just wanted to check on how to do this activity.
In my production system (R/3 on HP-UX 11.23, Oracle 9i database, ECC 5.0), when I go to DB02 -> Checks -> Date of Table Analysis, the output shows as below:
Date of last analysis SAPDAT SYSTEM others
never analyzed 0 129 195
older one year 0 0 40,871
31 - 365 days 0 0 782
8 - 30 days 0 0 2,942
0 - 7 days 0 0 337
Total 0 129 45,127
How do I go about doing the analysis for all the tables so that there is an up-to-date status?
Thanks in advance.
Alfred

You can force the creation of new statistics by using the "-f collect" force option:
brconnect -u / -c -f stats -t all -f collect
Nevertheless I usually would not recommend this because you can expect a high runtime of the statistics creation and "old" statistics are not "bad" statistics. Instead it is normal that statistics of static tables are months or years old. See note 825653 (7) for more information.
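If you want to see which tables actually carry old or missing statistics before forcing a refresh, a sketch at the Oracle level (run from SQL*Plus as a DBA user; the schema name here is taken from the DB02 output above and may differ on your system):

```sql
-- Tables whose statistics are missing or older than a year
SELECT owner, table_name, last_analyzed
FROM   dba_tables
WHERE  owner = 'SAPDAT'
AND   (last_analyzed IS NULL
       OR last_analyzed < ADD_MONTHS(SYSDATE, -12))
ORDER  BY last_analyzed NULLS FIRST;
```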
Regards
Martin -
Re: Oracle 8i (8.1.7.4) Rule based v/s Cost based
Hi,
I would like to know the advantages/disadvantages of using RULE based optimizer v/s COST based optimizer in Oracle 8i. We have a production RULE based database and are experiencing performance issues on some queries sporadically.
TKPROF revealed:
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 3 94.67 2699.16 1020421 5692711 51404 0
Fetch 13 140.93 4204.41 688482 4073366 0 26896
total 16 235.60 6903.57 1708903 9766077 51404 26896
Please post your expert suggestions as soon as possible.
Thanks and Regards,
A

I think the answer you are looking for is that the rule-based optimizer is predictable, but cost-based optimizer results may vary depending on the statistics of rows, indexes, etc. But at the same time, you can typically get better speed for OLTP relational databases with the CBO, assuming you have correct statistics and correct optimizer settings.
-
Hi,
Rule-based optimization is a deprecated feature in Oracle 10g. We are in the process of migrating from Oracle 9i to 10g. I have never heard of this rule-based optimization before. I have googled for it, but got confused by the results.
Can anybody shed some light on the below things...
Is this Optimization done by Oracle or as a developer do we need to take care of the rules while writing SQL statements?
There is another thing called Cost Based Optimization...
Who instructs Oracle whether to use rule-based or cost-based optimization?
Thanks & Regards,
user569598

Hope the following explanation is helpful.
Whenever a statement is fired, Oracle goes through the following stages:
Parse -> Execute -> Fetch (fetch only for select statement).
During the parse, Oracle first performs syntactic checking (SELECT, FROM, WHERE, ORDER BY, GROUP BY, etc.) and then semantic checking (column names, table names, user permissions on the objects, etc.). Once these two stages pass, it has to decide whether to do a soft parse or a hard parse. If a similar cursor (statement) doesn't exist in the shared pool, Oracle goes for a hard parse, where the optimizer comes into the picture to generate the query plan.
Oracle then has to decide on either the RBO or the CBO. This depends on the OPTIMIZER_MODE parameter value. If the RULE hint is used, the RBO will be used; if there are no statistics for the tables involved in the query, Oracle chooses the RBO (conditions apply). If statistics are available, or dynamic sampling is defined, then Oracle uses the CBO to prepare the optimal execution plan.
The RBO simply relies on a set of rules, whereas the CBO relies on statistical information.
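As a small sketch of the hint mechanism mentioned above (table and column names are hypothetical):

```sql
-- Force the rule-based optimizer for one statement (pre-10g):
SELECT /*+ RULE */ empno, ename
FROM   emp
WHERE  deptno = 10;

-- Force cost-based optimization even without stored statistics,
-- letting the optimizer sample the table at parse time (9i onward):
SELECT /*+ DYNAMIC_SAMPLING(emp 2) */ empno, ename
FROM   emp
WHERE  deptno = 10;
```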
Jaffar -
I have the following Select Statement:
SELECT FGBTRND_SUBMISSION_NUMBER, FGBTRND_TRANS_AMT, FGBTRND_COAS_CODE, FGBTRND_FUND_CODE, FGBTRND_ORGN_CODE,
FGBTRND_ACCT_CODE, FGBTRND_PROG_CODE, FGBTRND_ACTV_CODE, FGBTRND_LOCN_CODE, FGBTRND_RUCL_CODE
FROM FGBTRND
WHERE FGBTRND_DOC_CODE = 'F0022513'
AND FGBTRND_RUCL_CODE IN ( SELECT FGBTRNH_RUCL_CODE FROM FGBTRNH
WHERE FGBTRNH_DOC_CODE = 'F0022513' )
AND FGBTRND_LEDGER_IND='O'
AND FGBTRND_FIELD_CODE='03' --:B4 01 02 03
AND DECODE('Y','Y',BWFKPROC.F_SECURITY_FOR_WEB_FNC(FGBTRND_COAS_CODE, FGBTRND_FUND_CODE, FGBTRND_ORGN_CODE, 'PBEED'),'Y' ) = 'Y'
AND ((FGBTRND_SUBMISSION_NUMBER IS NULL AND '0' IS NULL) OR (FGBTRND_SUBMISSION_NUMBER='0' ))
This statement is ok without the following:
AND DECODE('Y','Y',BWFKPROC.F_SECURITY_FOR_WEB_FNC(FGBTRND_COAS_CODE, FGBTRND_FUND_CODE, FGBTRND_ORGN_CODE, 'PBEED'),'Y' ) = 'Y'
The call is to a security package which has to evaluate to 'Y' in order for the user to see the result. This statement as a whole would work fine provided the DECODE in the WHERE clause is evaluated last. However, the cost-based optimizer is determining that it needs to evaluate it first.
Question is:
How do I get the cost based optimizer to evaluate the decode last and not first?
I am on 10.2.0.3
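One common workaround, just a sketch and not guaranteed on every version or patch level, is to materialize the cheap predicates in an inline view with ROWNUM so the expensive function call only sees the surviving rows (predicates abbreviated from the statement above):

```sql
SELECT *
FROM (
    -- ROWNUM (plus NO_MERGE) stops Oracle from merging the view
    -- and pushing the outer predicate down into it
    SELECT /*+ NO_MERGE */ t.*, ROWNUM rn
    FROM   FGBTRND t
    WHERE  t.FGBTRND_DOC_CODE   = 'F0022513'
    AND    t.FGBTRND_LEDGER_IND = 'O'
    AND    t.FGBTRND_FIELD_CODE = '03'
)
WHERE DECODE('Y', 'Y',
        BWFKPROC.F_SECURITY_FOR_WEB_FNC(FGBTRND_COAS_CODE,
            FGBTRND_FUND_CODE, FGBTRND_ORGN_CODE, 'PBEED'), 'Y') = 'Y';
```

The ORDERED_PREDICATES hint is another avenue to look at on 10g, though it only orders predicates within a WHERE clause and does not prevent view merging.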
Patrick Churchill

user3390467 wrote:
" Consider setting your optimizer_index_caching parameter to assist the cost-based optimizer. Set the value of optimizer_index_caching to the average percentage of index segments in the data buffer at any time, which you can estimate from the v$bh view.
Can someone give me the query to use to estimate from v$bh view mentioned above?
What are other considerations for setting this parameter for optimization?

This post, and the flood of your other posts, appear to be quoting sections of a Statspack Analyzer report. Why are you posting this material here?
If you want to set the optimizer_index_caching initialization parameter, first determine the purpose of the parameter. Next, determine if the current value of the parameter is causing performance problems. Next, determine if there are any unwanted side-effects. Finally, test the changed parameter, possibly at the session level or through an OPT_PARAM hint in affected queries.
Here is a link to the starting point. http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/initparams159.htm
Blindly changing parameters in response to vague advice is likely to lead to problems.
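For what it is worth, a rough sketch of the kind of query people run to estimate the fraction of cached buffers that belong to indexes (the exact join is an assumption; v$bh only tells you which objects own the cached blocks, so this is an approximation at best):

```sql
-- Approximate % of cached buffers belonging to indexes (run as a DBA)
SELECT ROUND(100 * SUM(CASE WHEN o.object_type LIKE 'INDEX%'
                            THEN 1 ELSE 0 END) / COUNT(*), 1)
       AS pct_index_buffers
FROM   v$bh b
JOIN   dba_objects o ON o.data_object_id = b.objd;
```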
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Adding HINTS produce a cost based plan ?
I have an SQL with Oracle Hints. If I do an explain plan report on this SQL, there is data under Rows, Bytes and Cost. If I remove the hints from the SQL, the explain plan has no data under rows, bytes, cost and a note: rule based optimization.
If I compute statistics on one of the tables used by the SQL, using ANALYZE TABLE as recommended, then I have a third explain plan, with data under rows, bytes and cost.
So how, in the absence of statistics, can hints help produce a cost-based plan?

When you provide hints in a SQL statement you are typically controlling the execution path and the nature of the join that the SQL statement is choosing. This can give you good results, or it can slow down the performance of your query as time passes and the database is subjected to changes.
If, on the other hand, you choose cost-based optimization and collect statistics as recommended by Oracle, then you let the optimizer do the thinking instead of doing it yourself, which yields competitive performance when you let the optimizer engine decide the execution plan. So if I were you, I would think of performing the following tasks:
1) Collect the statistics for all the tables and indexes referenced in the SQL statement.
2) Set the optimizer goal to CHOOSE.
3) Vary the optimizer sampling size while collecting the statistics using the ANALYZE command. In the past I have noticed that optimizer behavior will change as per the sampling, so you might have to adjust your stats while using the ANALYZE command to fine-tune the behavior of the SQL statement.
4) This should improve the performance of your query. -
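A sketch of steps 1 and 3 (the table name is hypothetical; on 8i and later, DBMS_STATS is the recommended interface over ANALYZE):

```sql
-- Full statistics on a table
ANALYZE TABLE fgbtrnd COMPUTE STATISTICS;

-- Or estimate from a sample, varying the sampling size
ANALYZE TABLE fgbtrnd ESTIMATE STATISTICS SAMPLE 20 PERCENT;

-- Preferred from 8i onward; cascade covers the indexes too
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, -
       tabname => 'FGBTRND', estimate_percent => 20, cascade => TRUE);
```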
Partitioning on Oracle 8i (Rule Based vs. Cost Based)
At my current engagement, we are using Oracle Financials 11.0.3 on Oracle 8.0.6. The application uses rule-based optimizer. The client wants to implement Oracle partitioning. With this in mind, we are concerned about possible performance issues that the implementation of partitioning may cause since RBO does not recognize it.
We agree that the RBO will see a non-partitioned table the same as a partitioned one. In this scenario, where you gain the most is with backup/recoverability and general maintenance of the partitioned table.
Nevertheless, we have a few questions:
When implementing partitions, will the optimizer choose to go with Cost base vs. Rule base for these partitioned tables?
Is it possible that the optimizer might get confused with this?
Could it degrade performance at the SQL level?
If this change from RBO to CBO does occur, the application could potentially perform poorly because of the way it has been written.
Please provide any feedback.
Thanks in advance.

If the CBO is invoked when accessing these tables, you may run into problems.
- You'll have to analyze your tables & ensure that the statistics are kept up to date.
- It's possible that any SQL statements which invoke the CBO rather than the RBO will have different performance characteristics. The SYSTEM data dictionary tables, for example, must use the RBO or their performance suffers dramatically. Most of the time, the CBO beats the RBO, but applications which have been heavily tuned with the RBO may have problems with the CBO.
- Check your init.ora to see what optimizer mode you're in. If you're set to CHOOSE, the CBO will be invoked whenever statistics are available on the table(s) involved. If you choose RULE, you'll only invoke the CBO when the RBO encounters situations it doesn't have rules for.
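A quick sketch of checking that setting without digging through init.ora (run in SQL*Plus):

```sql
-- Instance-wide setting
SHOW PARAMETER optimizer_mode

-- Or query it directly; it can also be overridden per session
-- with ALTER SESSION SET optimizer_mode = ...
SELECT value FROM v$parameter WHERE name = 'optimizer_mode';
```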
Justin -
Cases in which a SQL statement switches from RULE to COST-BASED
Product: ORACLE SERVER
Written: 2004-05-28
Cases in which a SQL statement switches from RULE to COST-BASED
==============================================
PURPOSE
Let us look at the cases in which a SQL statement is automatically switched to cost-based mode.
Explanation
Even when a SQL statement is executed in rule-based mode, there are cases where the optimizer switches it to cost-based mode.
This can happen when the SQL involves any of the following:
- Partitioned tables
- Index-organized tables
- Reverse key indexes
- Function-based indexes
- SAMPLE clauses in a SELECT statement
- Parallel execution and parallel DML
- Star transformations
- Star joins
- Extensible optimizer
- Query rewrite (materialized views)
- Progress meter
- Hash joins
- Bitmap indexes
- Partition views (release 7.3)
- Hints (when any hint other than RULE or DRIVING_SITE is present)
- FIRST_ROWS or ALL_ROWS optimizer mode, which uses the CBO even without statistics
- A parallel degree set on a table or index, or INSTANCES set (including DEFAULT)
- A domain index (such as a Text index) created on the table -
Improving performance for a Rule Based Optimizer DB
Hi,
I am looking for information on improving the current performance of an ancient 35 GB Oracle 7.3.4 database using RULE-based optimizer mode. It has a 160 MB SGA and the physical memory on the system is 512 MB RAM.
As of now, all the major tasks which take time are run after peak hours so that the 130 user sessions are not affected significantly.
But recently I am told some procedures take too long to execute (the procedures have to do with truncating tables and re-populating data into them), and I do see that 54% of the pie chart for waits is "sequential reads", followed by "scattered reads" at 36%. There are a couple of large tables of around 4 GB in this DB.
Autotrace doesn't help me much in terms of getting an explain plan for slow queries, since the COST column doesn't show up, and I am trying to find ways of improving the performance of the DB in general.
Apart from the "redo log space requests" which I run into frequently (which, by the way, is something I am trying to resolve, thanks to some of you), I don't see much info on exactly how to proceed.
Is there any info that I can look towards in terms of improving performance on this rule-based optimizer DB? Or is identifying the top SQLs in terms of buffer gets the only way to tune?
Thank you for any suggestions provided.

Thanks Hemant.
This is for a 15-minute interval under moderate load early this morning.
Statistic Total Per Transact Per Logon Per Second
CR blocks created 275 .95 5.19 .29
Current blocks converted fo 10 .03 .19 .01
DBWR buffers scanned 74600 258.13 1407.55 78.44
DBWR free buffers found 74251 256.92 1400.96 78.08
DBWR lru scans 607 2.1 11.45 .64
DBWR make free requests 607 2.1 11.45 .64
DBWR summed scan depth 74600 258.13 1407.55 78.44
DBWR timeouts 273 .94 5.15 .29
OS Integral shared text siz 1362952204 4716097.59 25716079.32 1433177.92
OS Integral unshared data s 308759380 1068371.56 5825648.68 324668.12
OS Involuntary context swit 310493 1074.37 5858.36 326.49
OS Maximum resident set siz 339968 1176.36 6414.49 357.48
OS Page faults 3434 11.88 64.79 3.61
OS Page reclaims 6272 21.7 118.34 6.6
OS System time used 19157 66.29 361.45 20.14
OS User time used 195036 674.87 3679.92 205.09
OS Voluntary context switch 21586 74.69 407.28 22.7
SQL*Net roundtrips to/from 16250 56.23 306.6 17.09
SQL*Net roundtrips to/from 424 1.47 8 .45
background timeouts 646 2.24 12.19 .68
bytes received via SQL*Net 814224 2817.38 15362.72 856.18
bytes received via SQL*Net 24470 84.67 461.7 25.73
bytes sent via SQL*Net to c 832836 2881.79 15713.89 875.75
bytes sent via SQL*Net to d 42713 147.8 805.91 44.91
calls to get snapshot scn: 17103 59.18 322.7 17.98
calls to kcmgas 381 1.32 7.19 .4
calls to kcmgcs 228 .79 4.3 .24
calls to kcmgrs 20845 72.13 393.3 21.92
cleanouts and rollbacks - c 86 .3 1.62 .09
cleanouts only - consistent 40 .14 .75 .04
cluster key scan block gets 1051 3.64 19.83 1.11
cluster key scans 376 1.3 7.09 .4
commit cleanout failures: c 18 .06 .34 .02
commit cleanout number succ 2406 8.33 45.4 2.53
consistent changes 588 2.03 11.09 .62
consistent gets 929408 3215.94 17536 977.3
cursor authentications 1746 6.04 32.94 1.84
data blocks consistent read 588 2.03 11.09 .62
db block changes 20613 71.33 388.92 21.68
db block gets 40646 140.64 766.91 42.74
deferred (CURRENT) block cl 668 2.31 12.6 .7
dirty buffers inspected 3 .01 .06 0
enqueue conversions 424 1.47 8 .45
enqueue releases 1981 6.85 37.38 2.08
enqueue requests 1977 6.84 37.3 2.08
execute count 20691 71.6 390.4 21.76
free buffer inspected 2264 7.83 42.72 2.38
free buffer requested 490899 1698.61 9262.25 516.19
immediate (CR) block cleano 126 .44 2.38 .13
immediate (CURRENT) block c 658 2.28 12.42 .69
logons cumulative 53 .18 1 .06
logons current 1 0 .02 0
messages received 963 3.33 18.17 1.01
messages sent 963 3.33 18.17 1.01
no work - consistent read g 905734 3134.03 17089.32 952.4
opened cursors cumulative 2701 9.35 50.96 2.84
opened cursors current 147 .51 2.77 .15
parse count 2733 9.46 51.57 2.87
physical reads 490258 1696.39 9250.15 515.52
physical writes 2265 7.84 42.74 2.38
recursive calls 37296 129.05 703.7 39.22
redo blocks written 5222 18.07 98.53 5.49
redo entries 10575 36.59 199.53 11.12
redo size 2498156 8644.14 47135.02 2626.87
redo small copies 10575 36.59 199.53 11.12
redo synch writes 238 .82 4.49 .25
redo wastage 104974 363.23 1980.64 110.38
redo writes 422 1.46 7.96 .44
rollback changes - undo rec 1 0 .02 0
rollbacks only - consistent 200 .69 3.77 .21
session logical reads 969453 3354.51 18291.57 1019.4
session pga memory 35597936 123176.25 671659.17 37432.11
session pga memory max 35579576 123112.72 671312.75 37412.8
session uga memory 2729196 9443.58 51494.26 2869.82
session uga memory max 20580712 71213.54 388315.32 21641.13
sorts (memory) 1091 3.78 20.58 1.15
sorts (rows) 12249 42.38 231.11 12.88
table fetch by rowid 57246 198.08 1080.11 60.2
table fetch continued row 111 .38 2.09 .12
table scan blocks gotten 763421 2641.6 14404.17 802.76
table scan rows gotten 13740187 47543.9 259248.81 14448.15
table scans (long tables) 902 3.12 17.02 .95
table scans (short tables) 4614 15.97 87.06 4.85
total number commit cleanou 2489 8.61 46.96 2.62
transaction rollbacks 1 0 .02 0
user calls 15266 52.82 288.04 16.05
user commits 289 1 5.45 .3
user rollbacks 23 .08 .43 .02
write requests 331 1.15 6.25 .35

Wait Events:
Event Name Count Total Time Avg Time
SQL*Net break/reset to client 7 0 0
SQL*Net message from client 16383 0 0
SQL*Net message from dblink 424 0 0
SQL*Net message to client 16380 0 0
SQL*Net message to dblink 424 0 0
SQL*Net more data from client 1 0 0
SQL*Net more data to client 24 0 0
buffer busy waits 169 0 0
control file sequential read 55 0 0
db file scattered read 74788 0 0
db file sequential read 176241 0 0
latch free 6134 0 0
log file sync 225 0 0
rdbms ipc message 10 0 0
write complete waits 4 0 0

I did enable timed statistics for the session but don't know why the times are all 0's. Since I can't bounce the instance until the weekend, I can't enable the parameter in init.ora either. -
hi,
my database is 10.2.0.1...by default optimizer_mode=ALL_ROWS..
for some sessions..i need rule based optimizer...
so can i use
alter session set optimizer_mode=rule;
will it affect that session only, or the entire database?
and following also.i want to make them at session level...
ALTER SESSION SET "_HASH_JOIN_ENABLED" = FALSE;
ALTER SESSION SET "_OPTIMIZER_SORTMERGE_JOIN_ENABLED" = FALSE ;
ALTER SESSION SET "_OPTIMIZER_JOIN_SEL_SANITY_CHECK" = TRUE;
will those affect only the session, or the entire database... please suggest

< CBO outperforms RBO ALWAYS! >

I disagree - mildly. When I tune SQL, the first thing I try is a RULE hint, and in very simple databases, the RBO still does a good job.
Of course, you should not use RULE hints in production (that's Oracle's job).
When Oracle eBusiness suite migrated to the CBO, they placed gobs of RULE hints into their own SQL!!
Anyway, always adjust your CBO stats to replicate an RBO execution plan . . . .
specifically CAST() conversions from collections and pipelined functions.

Interesting. Have you tried dynamic sampling for that?
Hope this helps. . .
Don Burleson
Oracle Press author
Author of “Oracle Tuning: The Definitive Reference”
http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm -
Hi all,
On one of the production servers we are using the RULE-BASED OPTIMIZER (it's an application requirement).
I have to tune this database as users are complaining about the performance.
Any tips on how I can tune a RULE-based optimizer database?
Does the tuning strategy remain the same, like checking execution plans for missing indexes and instance parameters, except that you can't generate stats?
Regards
Umair

Hi!
There is one thing about the RBO: you must check all long-running queries for their execution plans, try to find better plans, and then force the RBO to use them.
You can use different hints for changing execution plans. But for tuning an RBO database you must spend a very long time - you must be a CBO yourself ;)