Performance Plus Vs Viewer
Hi,
I've got a report built in Discoverer Desktop and I want to deliver it to the end users through the web.
I know that Discoverer Plus is for creating reports, like Desktop, and Discoverer Viewer is for viewing them, but Viewer doesn't give me response times as good as Plus does.
Is it normal that Plus is faster than Viewer by far? (Plus takes about 6 seconds, while the same drill-down in Viewer takes about 45 seconds; I've noticed that running the query in Viewer takes about 5 seconds and building the sheets about 40 seconds.) What can I do to improve that?
I have also tuned the report: I created a fact table, all calculations are done when the table is loaded, it has indexes, and I checked the SQL Inspector, which confirms the indexes are being used.
Any tip will be really helpful.
Regards
Mario A Villamizar
Systems Analyst
Bogota - Colombia
Hi Mario
No, it is not right that Viewer should be that many times slower than Plus. As I found out today, there is a bug in the 10.1.2 Viewer code that causes drills or crosstabs to take a really long time compared to Desktop. As this poor performance was also present in 9.0.4, I can only assume that Oracle copied the offending code from one release to the other.
At this moment there is nothing you can tweak or set behind the scenes that will help performance. Keeping your workbooks efficient, with minimal use of crosstabs and page items, is all you can do until Oracle fixes Viewer.
Oracle tell me that they have identified the issue in 10.1.2 and are in the process of putting a patch together. They also tell me that the patch will be released in a couple of months, so I guess they still have some issues to resolve.
I have no idea at this stage whether a similar patch will be released for 9.0.4, although I cannot imagine that they will not do so.
Hope this information helps
Regards
Michael
Similar Messages
-
Performing searches and scrolling within a report using Plus and Viewer
Hi All:
In Discoverer Plus and Viewer, is there a way to do searches within a report?
Also, if I have a report that has multiple pages, as much as I can adjust the row number, is there a quick way to get from the beginning to the end of the report and vice versa without using the Down/Up option?
thanks .... cm
Hi cm
In Plus, you can click the Search button, which has a torch icon. This brings up a dialog box that allows you to search through the report. Because Viewer is an HTML page, it has no equivalent feature, but if the whole report is on screen you can use the browser's own search facility by pressing CTRL-F.
As for getting to the last page in Viewer, that functionality was added as an enhancement in the 10.1.2.2 cumulative patch 2 release and has been carried forward into 10.1.2.3.
Here are the notes from CP8:
Enabling Bug 5639863 - ENH: NEED OPTION TO NAVIGATE TO LAST PAGE IN VIEWER
Introduced a UI for page navigation, with which a user can navigate to the first/previous/next/last/specific page of the worksheet in Viewer. A new configuration parameter named 'pageNavigation' has been added to configuration.xml as part of this enhancement.
It is a Boolean parameter: when it is set to true, the page navigation toolbar appears on the Viewer page; otherwise it does not. The caveat is that a non-incremental query has to be run to show the page number information correctly, so the worksheet takes relatively longer to appear the first time it is rendered (depending on how large the worksheet is). It is recommended to leave this parameter set to FALSE, because setting it to true makes the query non-incremental, which is a performance trade-off.
And some additional commentary from CP2:
The default is FALSE. If you wish to use this UI, you may notice a performance trade-off for very large queries when the query initially runs, compared to the default 10.1.2 functionality. The trade-off is that it is much easier and quicker to navigate to specific pages of a large report. An example of the usage of 'pageNavigation' is given below:
<viewer queryRefreshPeriod="3000" queryRequestTimeout="1000"
longRequestRefreshPeriod="6000" longRequestTimeout="10000"
userDefinedConnections="true" logLevel="error" laf="dc_blaf"
switchWorksheetBehavior="prompt" defaultLocale="en" disableBrowserCaching="false"
enableAppsSSOConnection="false"
pageNavigation="true">
Best wishes
Michael -
Different Performance for a view/table
Hi,
I have a view called "Myview" which performs poorly on one database (DBTEST) but performs well on another database (DBDEV).
I checked the indexes on both, and all of them are in place on both databases.
DBTEST and DBDEV are both installed on the same Unix machine (they share the same resources).
Since both databases are configured similarly, I'm wondering why querying the Myview view takes twice as long to return records.
How can I identify where the problem is? The "consistent gets" and "physical reads" statistics are about two times higher on DBTEST when I query the view. I believe this is why I have poor performance on DBTEST.
Could someone advise me which DB parameters I should check to identify the problem?
DBTEST> select status from Myview where id =100;
Elapsed time: 40 seconds
DBDEV> select status from Myview where id =100;
Elapsed time: 22 seconds
DBTEST> select count(*) from Myview;
5123 rows selected
DBDEV> select count(*) from Myview;
4022 rows selected
Thanks,
Amir
There are 13 tables plus one view underlying Myview.
The tables that are not listed are lookup tables and contain an equal number of rows on both DBs:
DBDEV
TableName No of Rows
user_role 3023
project 2059
project_year 647
doc_tab 3091
user 3155
org 2639
region 125
application 3353
DBTEST
TableName No of Rows
user_role 6362
project 5058
project_year 1516
doc_tab 8659
user 6936
org 6320
region 176
application 7325
Since Myview uses a UNION clause, I picked out part of the execution plan:
DBDEV:
11 rows selected.
Elapsed: 00:00:16.01
Execution Plan
SELECT STATEMENT Optimizer=CHOOSE (Cost=525 Card=3 Bytes=111)
VIEW OF 'Myview' (Cost=525 Card=3 Bytes=111)
SORT (UNIQUE) (Cost=560 Card=3 Bytes=1103)
UNION-ALL
HASH JOIN (ANTI) (Cost=138 Card=1 Bytes=369)
HASH JOIN (Cost=135 Card=1 Bytes=356)
NESTED LOOPS (Cost=132 Card=1 Bytes=348)
NESTED LOOPS (OUTER) (Cost=131 Card=1 Bytes=330)
NESTED LOOPS (OUTER) (Cost=130 Card=1 Bytes=308)
NESTED LOOPS (OUTER) (Cost=129 Card=1 Bytes=295)
FILTER
NESTED LOOPS (OUTER)
HASH JOIN (Cost=128 Card=1 Bytes=175)
VIEW OF 'Myview_PROJ_ALL_YEAR' (Cost=123 Card=15 Bytes=2295)
MERGE JOIN (Cost=123 Card=15 Bytes=1935)
SORT (JOIN) (Cost=119 Card=529 Bytes=61893)
HASH JOIN (Cost=107 Card=529 Bytes=61893)
VIEW OF 'Myview_PROJECT' (Cost=100 Card=529 Bytes=44436)
SORT (UNIQUE) (Cost=100 Card=529 Bytes=40998)
UNION-ALL
HASH JOIN (Cost=9 Card=51 Bytes=2703)
TABLE ACCESS (FULL) OF 'GRANT_PROGRAM' (Cost=2 Card=15 Bytes=135)
TABLE ACCESS (FULL) OF 'PROJECT' (Cost=6 Card=51 Bytes=2244)
HASH JOIN (Cost=48 Card=129 Bytes=11610)
HASH JOIN (Cost=41 Card=127 Bytes=9779)
HASH JOIN (Cost=29 Card=94 Bytes=5922)
HASH JOIN (Cost=9 Card=51 Bytes=2703)
TABLE ACCESS (FULL) OF 'GRANT_PROGRAM' (Cost=2 Card=15 Bytes=135)
TABLE ACCESS (FULL) OF 'PROJECT' (Cost=6 Card=51 Bytes=2244)
TABLE ACCESS (FULL) OF 'APPLICATION' (Cost=19 Card=3353 Bytes=33530)
INDEX (FAST FULL SCAN) OF 'UK_user_INVOLVE' (UNIQUE) (Cost=11 Card=4527 Bytes=63378)
TABLE ACCESS (FULL) OF 'user_role' (Cost=6 Card=3023 Bytes=39299)
HASH JOIN (Cost=12 Card=298 Bytes=22350)
HASH JOIN (Cost=9 Card=51 Bytes=2907)
TABLE ACCESS (FULL) OF 'GRANT_PROGRAM' (Cost=2 Card=15 Bytes=135)
TABLE ACCESS (FULL) OF 'PROJECT' (Cost=6 Card=51 Bytes=2448)
TABLE ACCESS (FULL) OF 'region' (Cost=2 Card=69 Bytes=1242)
HASH JOIN (Cost=19 Card=51 Bytes=4335)
HASH JOIN (Cost=12 Card=51 Bytes=3366)
HASH JOIN (Cost=9 Card=51 Bytes=2907)
DBTEST:
9 rows selected.
Elapsed: 00:00:34.03
Execution Plan
SELECT STATEMENT Optimizer=CHOOSE (Cost=941 Card=3 Bytes=111)
VIEW OF 'Myview' (Cost=941 Card=3 Bytes=111)
SORT (UNIQUE) (Cost=976 Card=3 Bytes=1106)
UNION-ALL
HASH JOIN (ANTI) (Cost=253 Card=1 Bytes=370)
NESTED LOOPS (OUTER) (Cost=250 Card=1 Bytes=357)
NESTED LOOPS (OUTER) (Cost=250 Card=1 Bytes=341)
NESTED LOOPS (OUTER) (Cost=249 Card=1 Bytes=318)
HASH JOIN (Cost=248 Card=1 Bytes=304)
NESTED LOOPS (Cost=245 Card=1 Bytes=296)
HASH JOIN (Cost=243 Card=2 Bytes=556)
FILTER
HASH JOIN (OUTER)
VIEW OF 'Myview_PROJ_ALL_YEAR' (Cost=229 Card=35 Bytes=5355)
MERGE JOIN (Cost=229 Card=35 Bytes=4550)
SORT (JOIN) (Cost=226 Card=1262 Bytes=148916)
HASH JOIN (Cost=198 Card=1262 Bytes=148916)
VIEW OF 'Myview_PROJECT' (Cost=183 Card=1262 Bytes=106008)
SORT (UNIQUE) (Cost=183 Card=1262 Bytes=100528)
UNION-ALL
HASH JOIN (Cost=15 Card=126 Bytes=6678)
TABLE ACCESS (FULL) OF 'GRANT_PROGRAM' (Cost=2 Card=28 Bytes=252)
TABLE ACCESS (FULL) OF 'PROJECT' (Cost=12 Card=126 Bytes=5544)
HASH JOIN (Cost=98 Card=454 Bytes=41314)
HASH JOIN (Cost=88 Card=448 Bytes=34496)
HASH JOIN (Cost=48 Card=206 Bytes=12978)
HASH JOIN (Cost=15 Card=126 Bytes=6678)
TABLE ACCESS (FULL) OF 'GRANT_PROGRAM' (Cost=2 Card=28 Bytes=252)
TABLE ACCESS (FULL) OF 'PROJECT' (Cost=12 Card=126 Bytes=5544)
TABLE ACCESS (FULL) OF 'APPLICATION' (Cost=32 Card=7325 Bytes=73250)
INDEX (FAST FULL SCAN) OF 'UK_user_INVOLVE' (UNIQUE) (Cost=39 Card=15889 Bytes=222446)
TABLE ACCESS (FULL) OF 'user_role' (Cost=9 Card=6362 Bytes=89068)
HASH JOIN (Cost=18 Card=556 Bytes=41700)
TABLE ACCESS (FULL) OF 'region' (Cost=2 Card=88 Bytes=1584)
HASH JOIN (Cost=15 Card=126 Bytes=7182)
TABLE ACCESS (FULL) OF 'GRANT_PROGRAM' (Cost=2 Card=28 Bytes=252)
TABLE ACCESS (FULL) OF 'PROJECT' (Cost=12 Card=126 Bytes=6048)
HASH JOIN (Cost=28 Card=126 Bytes=10836)
HASH JOIN (Cost=18 Card=126 Bytes=8316)
As you can see, the elapsed time for querying DBTEST is sometimes two times more than DBDEV. BTW, I checked that all indexes are in place on both databases.
Based on the information provided can you tell me what the problem is?
Thanks, -
Hi everyone,
I am new to XI. I have a small doubt about mappings.
In XI we have 4 types of mappings:
1. Graphical mapping
2. XSLT mapping
3. Java mapping
4. ABAP mapping
From a performance point of view, which one is best?
Please explain in detail.
I will give full points for correct answers.
Thanks and Regards,
P. Naganjana Reddy
Hi,
refer to these links:
Re: Why xslt mapping?
http://searchsap.techtarget.com/tip/0,289483,sid21_gci1217018,00.html
/people/r.eijpe/blog/2006/02/20/xml-dom-processing-in-abap-part-iiib150-xml-dom-within-sap-xi-abap-mapping
/people/sravya.talanki2/blog/2006/12/27/aspirant-to-learn-sap-xiyou-won-the-jackpot-if-you-read-this-part-iii
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/8a57d190-0201-0010-9e87-d8f327e1dba7
Regards,
Nithiyanandam -
Discoverer 10g Plus and Viewer
I have installed Discoverer Admin and Desktop. Can someone please point me to the links to download Discoverer Plus and Viewer?
Hi,
Oracle Discoverer Plus and Viewer are part of the Oracle Application Server software, so no separate installation is required to access them.
Oracle Discoverer Software
http://www.oracle.com/technology/software/products/discoverer/index.html
For details about accessing Discoverer Plus/Viewer, please refer to:
Oracle Discoverer Documentation
http://www.oracle.com/technology/documentation/discoverer.html
Regards,
Hussein -
Starting Discoverer Web Plus or viewer
I installed Oracle 9iAS on a Windows 2000 box and it seems to work fine. I have created a new EUL (via Oracle 9iDS - Discoverer Admin) but now cannot find where on earth the HTML links to Viewer or Plus are.
I realize it's a simple question, but where are the Plus and Viewer HTML start pages?
Thx.
Hi,
Why do you think it should be optional?
When I look at the seeded report the parameters are mandatory as well. -
Which design is best from a performance point of view?
Hello
I'm writing a small system that needs to track changes to certain columns on 4 different tables. I'm using triggers on those columns to write the changes to a single "change register" table, which has 12 columns. Because the majority of the tracked data is not shared between the tables, most of the columns will have null values. From a design point of view it is apparent that having 4 separate change-register tables (one for each main table being tracked) would be better in terms of avoiding lots of null columns in each row, but I was trying to trade this off against having a single table in which to see all changes made across the tracked tables.
From a performance point of view, though, will there be any real difference between 4 separate tables and 1 single register table? I'm only ever going to be inserting into the register table, and then reading back from it at a later date, and there won't be any indexes on it. Someone I work with suggested that there would be more overhead on the redo logs if a single table were used rather than 4 separate tables.
Any help would be appreciated.
David
The volumes of data are going to be pretty small, maybe a couple of thousand records each day; it's an OLTP environment with 150 concurrent users max.
Consider also the growth of the data, and whether you will regularly move data to a historical DB or whether the same tables will hold an ever-increasing number of records.
The point that my colleague raised was that multiple inserts into a single table across multiple transactions could cause a lot of redo contention, but I can't see how inserting into one table from multiple triggers would result in more redo contention than inserting into multiple tables. The updates that will fire the triggers are only ever going to be single-row updates, and won't normally cause more than one trigger to fire within a single transaction. Is this a fair assumption to make?
David
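As a sketch of the single-register design being debated, a trigger per tracked table can write into one shared audit table. All table, column, and sequence names below are invented for illustration; this is not the poster's actual schema:

```sql
-- Hypothetical shared change-register table; most value columns are
-- generic, which is the null-heavy trade-off discussed above
CREATE TABLE change_register (
  change_id    NUMBER         NOT NULL,
  source_table VARCHAR2(30)   NOT NULL,   -- which tracked table fired the trigger
  source_pk    NUMBER         NOT NULL,
  column_name  VARCHAR2(30)   NOT NULL,
  old_value    VARCHAR2(4000),
  new_value    VARCHAR2(4000),
  changed_by   VARCHAR2(30)   DEFAULT USER,
  changed_on   DATE           DEFAULT SYSDATE
);

CREATE SEQUENCE change_seq;

-- One trigger per tracked table inserts into the shared register
CREATE OR REPLACE TRIGGER trg_orders_track
AFTER UPDATE OF status ON orders
FOR EACH ROW
WHEN (OLD.status <> NEW.status)
BEGIN
  INSERT INTO change_register
    (change_id, source_table, source_pk, column_name, old_value, new_value)
  VALUES
    (change_seq.NEXTVAL, 'ORDERS', :NEW.order_id, 'STATUS',
     :OLD.status, :NEW.status);
END;
/
```

For insert-only auditing like this, redo volume is driven by the number and size of rows written, not by how many tables they land in, which supports the view that one register table versus four makes little practical difference.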
I agree with you. The only thing I would consider, rather than redo, is the locking that could occur on the single table when logs from different tables have to be written to it; I mean, if after inserting a log record you need to update it, and 2 or more users have to update the same log record, you could have problems. -
Please validate my logic performance point of view:
Please validate my logic performance point of view:
logic I wrote :
LOOP AT i_mara INTO wa_mara.
*-----For material description, go to makt table.
  SELECT SINGLE maktx
    FROM makt
    INTO l_maktx
    WHERE matnr = wa_mara-matnr
      AND spras = 'E'.
  IF sy-subrc = 0.
    wa_mara-maktx = l_maktx.
  ENDIF.
*-----For recurring inspection, go to marc table.
  SELECT prfrq
    FROM marc
    INTO l_prfrq
    UP TO 1 ROWS
    WHERE matnr = wa_mara-matnr.
  ENDSELECT.
  IF sy-subrc = 0.
    wa_mara-prfrq = l_prfrq.
  ENDIF.
  MODIFY i_mara FROM wa_mara INDEX sy-tabix
    TRANSPORTING maktx prfrq.
  CLEAR wa_mara.
ENDLOOP.
Or is it better to do the following: SELECT all the maktx values from makt and all the prfrq values from marc into two internal tables, and then
LOOP AT i_mara.
  read the corresponding maktx value into the i_mara row
  and the corresponding prfrq value into the i_mara row.
ENDLOOP.
OR
is there any better performance logic you suggest ?
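The second approach, reading makt and marc once and merging in memory, is usually the better performer because it avoids one database round trip per loop pass. A rough sketch follows; the declarations and the standard-table-plus-BINARY SEARCH idiom are assumptions for illustration:

```abap
* Fetch all descriptions and inspection intervals in two array SELECTs
DATA: it_makt TYPE STANDARD TABLE OF makt,
      it_marc TYPE STANDARD TABLE OF marc,
      wa_makt TYPE makt,
      wa_marc TYPE marc,
      l_tabix TYPE sy-tabix.

IF i_mara IS NOT INITIAL.
  SELECT matnr maktx FROM makt
    INTO CORRESPONDING FIELDS OF TABLE it_makt
    FOR ALL ENTRIES IN i_mara
    WHERE matnr = i_mara-matnr
      AND spras = 'E'.

  SELECT matnr prfrq FROM marc
    INTO CORRESPONDING FIELDS OF TABLE it_marc
    FOR ALL ENTRIES IN i_mara
    WHERE matnr = i_mara-matnr.
ENDIF.

SORT: it_makt BY matnr, it_marc BY matnr.

LOOP AT i_mara INTO wa_mara.
* Save the loop index: READ TABLE below overwrites sy-tabix
  l_tabix = sy-tabix.
  READ TABLE it_makt INTO wa_makt
       WITH KEY matnr = wa_mara-matnr BINARY SEARCH.
  IF sy-subrc = 0.
    wa_mara-maktx = wa_makt-maktx.
  ENDIF.
  READ TABLE it_marc INTO wa_marc
       WITH KEY matnr = wa_mara-matnr BINARY SEARCH.
  IF sy-subrc = 0.
    wa_mara-prfrq = wa_marc-prfrq.
  ENDIF.
  MODIFY i_mara FROM wa_mara INDEX l_tabix
    TRANSPORTING maktx prfrq.
ENDLOOP.
```

FOR ALL ENTRIES must only run when the driver table is non-empty (an empty driver table would select everything), hence the IF guard.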
THANKS IN ADVANCE.
OK, this is very funny. So if someone comes up with a good way to code, he should wait until he has 1198 points before he can write a performance wiki? So that means only people with high SDN points can write a wiki?
For your information, the definition of a wiki is here:
[http://en.wikipedia.org/wiki/Wiki|http://en.wikipedia.org/wiki/Wiki]
It's all about contribution and sharing.
Did you try that code on a production or quality server? If you had, you wouldn't say that, because the results I showed in that blog are what I myself tested on a quality system of our client.
And for your information, I did my internship at an SAP AFS consultancy firm and created the account at that time. I have since joined that company and now work as a developer there.
If you have worked on client system development in SD and MM, you will know that most of the time we use header and item tables like
likp, lips
vbak, vbap
vbrk, vbrp
and most of the time we come across nested loops with similar kinds of conditions.
In this question he has MATNR as the reference. If you look at it properly, you can see that both tables are sorted and the SELECT statement uses FOR ALL ENTRIES.
For your information, there cannot be a delivery document item without a header; if you are aware of DB concepts, you know there would be a foreign-key error in that case. But let's consider a situation like that anyway: even then, if there isn't any header data, the client simply won't request that record (you would know this if you have worked with clients).
Last but not least, I don't care about my points rating on SDN; I just wanted to share what I know, because I have a very good job here anyway. Don't try to put down people just because they are new.
Thomas Zloch: I never said it was my code. I saw it somewhere, checked it, and blogged it so I could find it again when I wanted it. And I saw it in SE30 (not SE38), but I know most ABAP developers don't check it much, so I just wanted to help.
Rui Pedro Dantas: yes, you're correct, we don't need it most of the time, since a sorted table is easy, but there are programs that work with bulky data loads where we can use it. Thanks for telling the truth.
Nafran
Sorry if I said anything to hurt anyone. -
I have a 7th-generation iPod nano and I would like to buy a Polar WearLink+ Transmitter Nike+ to view heart beats. Is this product compatible with the nano?
I have the same issue! I purchased a 6th-generation nano and a Polar HRM so I could run and receive spoken feedback when my heart rate was outside my preset zone. My Suunto t3 used to beep when outside the preset zones. Automatic, periodic beeps or spoken feedback is the most important information to receive during exercise. I hope Apple can update the software to add this simple yet critical feature.
-
Performance delta between Viewer & Plus?
All,
Relative to the middle tier, and in terms of the effect on CPU/memory etc., can anyone articulate which of the two methods affects the app/web server more than the other?
I've been successfully deploying LoadRunner scripts against and through "Viewer"; however, not through "Plus", where playback is failing (presumably sessions, the applet, and a host of other parameters I need to look at).
The above stated, and assuming we cannot get "Plus" scripts to play back effectively, I'd like to be sure that it's possible to load the middle tier as effectively via "Viewer", and to definitively state that there are either no, minimal, or extraordinary considerations to be aware of between the two, again as they affect the app and web servers.
Thanks very much.
In addition to the already great information provided:
You can get a good overall "picture" with:
Capacity Planning Guide 10g (9.0.4) (mostly still applies to 10.1)
http://www.oracle.com/technology/products/discoverer/index.html
Capacity Planning Sizing Calculator 10g (10.1.2.0.2)
This is on OTN at:
http://www.oracle.com/technology/products/discoverer/files/discoverer10_1_2_0_2_sizing_calculator.xls
It really is highly dependent upon the query and type of worksheet (crosstab/tabular).
Also, you can monitor memory usage via Application Server Control: Discoverer link >> Performance tab (obviously after the fact :) )
To use LoadRunner with Plus (it is not trivial), you need:
· discoverer5.jar
· disco5i.jar
· xmlparserv2.jar
· source.jar
discoverer5.jar can be found on your BI mid tier under $ORACLE_HOME/discoverer/lib
disco5i.jar can be found on your BI mid tier under $ORACLE_HOME/discoverer/plus_files
xmlparserv2.jar can be found on your BI mid tier under $ORACLE_HOME/lib
I believe source.jar may have to come from Mercury
Hope that helps. Comments welcomed.
~Steve. -
Hello everyone, we are having a performance problem with Oracle optimizing nested views, some with outer joins. One of the guys just sent me the following statement and I'm wondering if anyone can tell me if it sounds correct. The query is about 300 lines long, so I won't post it here, but hopefully someone can still shed some light on the statement below.
"When Oracle executes a view it optimizes the query plan only for the columns of the view which are used, and eliminates unnecessary joins, function calls, etc." In this case the join is a LEFT OUTER, i.e. optional, and since the columns are not used I would hope Oracle would eliminate it from the plan, but it didn't.
Thanks for any help,
Michael Cunningham
Depending on the version of Oracle (join elimination was introduced in 10.2), it is possible for Oracle to eliminate a join. The Oracle Optimizer group has a nice blog post that discusses the requirements for join elimination. Basically, Oracle has to be sure not only that the additional columns aren't being used, but also that the join does not change the number of rows that might be returned.
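A minimal sketch of those conditions, with invented table and view names: a left outer join to a primary key cannot change the row count, so when a query references only columns from the driving table, the optimizer (10.2 and later) can drop the joined table from the plan entirely:

```sql
CREATE TABLE departments (
  dept_id   NUMBER PRIMARY KEY,
  dept_name VARCHAR2(40)
);

CREATE TABLE employees (
  emp_id   NUMBER PRIMARY KEY,
  emp_name VARCHAR2(40),
  dept_id  NUMBER REFERENCES departments (dept_id)
);

CREATE VIEW emp_dept_v AS
SELECT e.emp_id, e.emp_name, e.dept_id, d.dept_name
FROM   employees e
LEFT OUTER JOIN departments d ON d.dept_id = e.dept_id;

-- Only EMPLOYEES columns are referenced, and the outer join is to a
-- unique key, so DEPARTMENTS can be eliminated from the plan
SELECT emp_id, emp_name FROM emp_dept_v;
```

If the join were on a non-unique column, or if dept_name were selected, elimination could not happen, which may explain the 300-line query keeping its outer joins.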
Justin -
Regarding performance of normal view and materialised view.
I have a task to load records from tables in a remote database (DB2) into an Oracle database. I created a private dblink in Oracle to access the tables in the DB2 database. Then I created a normal view in Oracle over a table, say employee, using the dblink, and created a synonym for the view. When I do a select using the synonym emp_sy from Oracle with a where clause emp_id = 10, the time to fetch the records is very poor. My question is how to improve the performance. Can I create an index on column emp_id using the synonym, i.e. create index emp_idx on emp_sy(emp_id)? If created, will it improve the performance? Or should I go for a materialized view?
thanks,
Vinodh
If you want to load the data into your Oracle database, then a select is not enough, at least for the normal-view solution; you must also use an insert. If you use a materialized view, then whenever the view is refreshed it will select the data from DB2 and insert it into Oracle (into the materialized view's container table).
As soon as the data is loaded into Oracle you can start to put indexes on it, etc.
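A hedged sketch of the materialized-view route described above; the dblink name db2_link, the table name, and the daily refresh interval are all assumptions for illustration:

```sql
-- Pull the remote DB2 table across the dblink into a local snapshot
CREATE MATERIALIZED VIEW emp_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH SYSDATE NEXT SYSDATE + 1   -- refresh once a day; adjust as needed
AS
SELECT * FROM employee@db2_link;

-- Once the data is local, ordinary indexes apply to the container table
CREATE INDEX emp_mv_idx ON emp_mv (emp_id);

SELECT status FROM emp_mv WHERE emp_id = 10;
```

This avoids the per-query round trip to DB2 at the cost of data freshness: queries see the snapshot as of the last refresh.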
Plus to Viewer format problems
I have some Discoverer reports with quite wide column headings. In Plus I am able to wrap them by manually adjusting the column width. This works fine: each run of the report uses the previously saved column widths.
However, when I try to execute the same report in Viewer, it "forgets" the column settings and displays the column width based on the width of the column heading.
I can finagle the result in Viewer using the "page settings" option, but this only works for the current run.
So, how in Viewer do you set column widths to something other than the (apparently) default option of auto-adjust to the widest column?
thanks
Hi speedypete,
Is this happening with every document? I don't think that should be the case.
What is the source of the PDF? A PDF document could be created by a third-party source, and we don't have control over third-party applications.
This issue, I think, is document-specific.
Regards,
Ajlan Huda. -
Can you increase the number of parameter values fetched in Viewer to be the same as in Plus? The same report in Plus fetches 100 (as set on the Item Categories in the EUL), while only 25 are displayed in Viewer.
Is there somewhere this can be changed on the middle tier, perhaps?
Hi Rod,
I assume you are referring to the number of rows returned, as the default setting in Viewer is 25.
A temporary way to set this in Viewer is to click on 'Options' when logged into Viewer itself. You'll have to open a responsibility or report before this option becomes available. Within here you'll be able to change the number of rows per HTML page. However, this method only makes the change for the user you logged in with.
To make the change for all users, it's best to edit the PREF.TXT file on the Discoverer server. Please see the documentation below for more information on how to configure Viewer. For the Windows platform click here: http://download-uk.oracle.com/docs/html/A90287_01/toc.htm
For the Unix platform click here: http://download-uk.oracle.com/docs/html/A90288_01/toc.htm. Once you've run the applypreferences script, these settings will be forced down to all users of Discoverer.
Hopefully this will help ;-)
Lance
Message was edited by:
Lance Botha -
HANA Studio rev.74 - performance in calc view with union of many sources
Hi,
Windows 7 32 bit, 4GB system memory, Intel Core Duo 2.26Ghz, HANA studio rev 74.
I am maintaining a graphical calculation view whose first node is a union of 30 calculation views, each with 150 columns. Maintenance of the view is terribly slow, almost to the point of being unusable, as I add sources to the union, leading me to consider remodelling the requirement as a scripted view (or breaking it down into smaller sets of graphical calculation views) to work around the sluggish response.
As well as building the view for development purposes, I'm considering longer-term maintenance options where I cannot guarantee the system specs of the supporting developer.
Does anyone else have experience of such sluggish Studio performance with a graphical calculation view using many large view sources?
Is there any theoretical limit to the number of sources in a union?
Cheers,
JP.
Hi Jon-Paul,
Sorry for going off topic, but...
If there's still an issue with the appliance (on CAL?), doesn't it carry over to the client? I have the SP80 client installed, but until I get confirmation that the server I'm connecting to is 'production' ready, I can't really take full advantage of the upgraded client, as I don't see any significant point in using client 80 against server 74; it sounds like you would want to make it work nevertheless.
To me, yellow is still yellow even though it doesn't really stand for anything.
Good luck, and let us know your test results,
greg