BW Analysis Process - Extended Settings in the Query
Hello,
I am trying to find out more about the effect of the different parameter settings in an APD. In the properties of a query there is an Extended Settings section with a number of parameters: Partitioning Characteristic, Package Size and Server Group.
When running the query in parallel as part of a process chain, I am finding that the process starts a large number of work processes, and as a result we run into a lot of memory problems. I think what is needed is for the query to run fewer work processes at any given time, and to keep the size of those work processes down as well.
Is there a clear write up anywhere on how to achieve this?
Thanks.
Cote,
it is a while ago now, and I don't have access to a BW 7 system anymore, but I did figure out what needed to be done in the end.
There is a transaction called RZ12. Here you can define server groups, and also restrict how much resource they consume.
In the APD you define which server group you want to use when running the process in parallel.
I would advise experimenting with this in a test environment, and finding out which settings restrict the number of processes the APD starts up to the optimum level.
Similar Messages
-
Performance tuning for the query
CURSOR c_exercise_list IS
SELECT
DECODE(v_mfd_mask_id ,'Y',' ',o.opt_id) opt_id,
DECODE(v_mfd_mask_id ,'Y',' ',o.soc_sec) soc_sec,
P.plan_id plan_id, E.exer_id exer_id, E.exer_num,
DECODE(G.sar_flag, 0, DECODE(G.plan_type, 0, '1', 1, '2', 2, '3', 3, ' ', 4,'5', 5, '6', 6, '7', 7, '8', 8, '9', '0'), ' ') option_type,
TO_CHAR(G.grant_dt, 'YYYYMMDD') grant_dt, TO_CHAR(E.exer_dt, 'YYYYMMDD') exer_dt,
E.opts_exer opts_exer,
E.mkt_prc mkt_prc,
E.swap_prc swap_prc,
E.shrs_swap shrs_swap, decode(e.exer_type,2,decode(xe.cash_partial,'Y','A','2'),TO_CHAR(E.exer_type)) exer_type,
E.sar_shrs sar_shrs,
NVL(ROUND(((xe.sar_shrs_withld_optcost - (e.opts_exer * g.opt_prc) / e.mkt_prc) * e.mkt_prc),2),0)+e.sar_cash sar_cash,
NVL(f.fixed_fee1,0) fixed_fee1,
NVL(f.fixed_fee2,0) fixed_fee2,
NVL(f.fixed_fee3,0) fixed_fee3,
NVL(f.commission,0) commission,
NVL(f.sec_fee,0) sec_fee,
NVL(f.fees_paid,0) fees_paid,
NVL(ct.amount,0) cash_tend,
E.shrs_tend shrs_tend, G.grant_id grant_id, NVL(G.grant_cd, ' ') grant_cd,
NVL(xg.child_symbol,' ') child_symbol,
NVL(xg.opt_gain_deferred_flag,'N') defer_flag,
o.opt_num opt_num,
--XO.new_ssn,
DECODE(v_mfd_mask_id ,'Y',' ',xo.new_ssn) new_ssn,
xo.use_new_ssn
,xo.tax_verification_eligible tax_verification_eligible
,(SELECT TO_CHAR(MIN(settle_dt),'YYYYMMDD') FROM tb_ml_exer_upload WHERE exer_num = E.exer_num AND user_id=E.user_id AND NVL(settle_dt,TO_DATE('19000101','YYYYMMDD'))>=E.exer_dt) AS settle_dt
,xe.rsu_type AS rsu_type
,xe.trfbl_det_name AS trfbl_det_name
,o.user_txt1,o.user_txt2,xo.user_txt3,xo.user_txt4,xo.user_txt5,xo.user_txt6,xo.user_txt7
,xo.user_txt8,xo.user_txt9,xo.user_txt10,xo.user_txt11,
xo.user_txt12,
xo.user_txt13,
xo.user_txt14,
xo.user_txt15,
xo.user_txt16,
xo.user_txt17,
xo.user_txt18,
xo.user_txt19,
xo.user_txt20,
xo.user_txt21,
xo.user_txt22,
xo.user_txt23,
xo.user_dt2,
xo.adj_dt_hire_vt_svc,
xo.adj_dt_hire_vt_svc_or,
xo.adj_dt_hire_vt_svc_or_dt,
xo.severance_plan_code,
xo.severance_begin_dt,
xo.severance_end_dt,
xo.retirement_bridging_dt
,NVL(xg.pu_var_price ,0) v_pu_var_price
,NVL(xe.ficamed_override,'N') v_ficmd_ovrride
,NVL(xe.vest_shrs,0) v_vest_shrs
,NVL(xe.client_exer_id,' ') v_client_exer_id
,(CASE WHEN xg.re_tax_flag = 'Y' THEN pk_xop_reg_outbound.Fn_GetRETaxesWithheld(g.grant_num, E.exer_num, g.plan_type)
ELSE 'N'
END) re_tax_indicator -- 1.5V
,xe.je_bypass_flag
,xe.sar_shrs_withld_taxes --Added for SAR july 2010 release
,xe.sar_shrs_withld_optcost --Added for SAR july 2010 release
FROM
(SELECT exer.* FROM exercise exer WHERE NOT EXISTS (SELECT s.exer_num FROM suspense s
WHERE s.exer_num = exer.exer_num AND s.user_id = exer.user_id AND exer.mkt_prc = 0))E,
grantz G, xop_grantz xg, optionee o, xop_optionee xo, feeschgd f, cashtendered ct, planz P,xop_exercise xe
WHERE
E.grant_num = G.grant_num
AND E.user_id = G.user_id
AND E.opt_num = o.opt_num
AND E.user_id = o.user_id
AND (G.grant_num = xg.grant_num(+) AND G.user_id=xg.user_id(+))
AND (o.opt_num = xo.opt_num(+) AND o.user_id=xo.user_id(+))
AND E.plan_num = P.plan_num
AND E.user_id = P.user_id
AND E.exer_num = f.exer_num(+)
AND E.user_id = ct.user_id(+)
AND E.exer_num = ct.exer_num(+)
AND E.exer_num=xe.exer_num(+)
AND E.user_id=xe.user_id(+)
AND G.user_id = USER
AND NOT EXISTS (
SELECT tv.exer_num
FROM tb_xop_tax_verification tv--,exercise ex
WHERE tv.exer_num = e.exer_num
AND tv.user_id = e.user_id
AND tv.user_id = v_cms_user
AND tv.status_flag IN (0,1,3,4, 5)) -- Not Processed
;
How can I tune the query to improve the performance? Can anyone help me? Thanks in advance.
Edited by: BluShadow on 21-Feb-2013 08:14
corrected {noformat}{noformat} tags. Please read {message:id=9360002} and learn how to post code correctly.
I got CPU cost: 458.50 and elapsed time: 1542.90, so is there anything I can tune to improve the performance? There is no full table scan on any of the mentioned tables, and most of the columns are accessed via unique index scans. Can anybody help me find a solution?
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
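Since the question is about where the time goes, a generic first step (a sketch, not advice specific to this cursor) is to pull the optimizer's plan with DBMS_XPLAN, which is available on 10.2. The trivial SELECT below is only a stand-in; substitute the full cursor query:

```sql
-- Hedged sketch: display the optimizer plan for a query.
-- The SELECT here is a placeholder for the full cursor SELECT above.
EXPLAIN PLAN FOR
SELECT e.exer_num
FROM   exercise e
WHERE  e.user_id = USER;

-- Show the plan just explained, including access paths (index scans etc.).
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Comparing the estimated rows in the plan against the actual row counts is usually more informative than the headline cost figure alone.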
Edited by: 956684 on Feb 22, 2013 4:09 AM -
Invalid read mode while executing the query
Hi,
I have developed a query based on a cube in MM module.
When I try to execute the query, the following error is being displayed:
Invalid read mode
Message no. BRAIN017
Diagnosis
If queries (often unintentionally) are assigned the read mode "Query to Read All Data at Once", this often results in performance problems. To eliminate this problem, this warning is displayed when you execute a query with this read mode when the query is generated.
Procedure for System Administration
Before you execute the query, repair the read mode:
• You can repair the read mode for this query in the Query Monitor (transaction RSRT1). You can set the read mode for the selected query by choosing Query -> Read Mode.
• If this warning is displayed when you check a query, you need to change the default read mode for new queries in the InfoCube in InfoCube maintenance. In InfoProvider maintenance, choose Environment -> InfoProvider Properties -> Change and make the relevant settings on the Query/Cache tab page.
Please let me know what are the exact setting that I need to do and where.
Thanks in advance.
Will assign points for the solution.
Thanks & Regards.
Hi Ganesh,
You can change the read mode by going to transaction RSRT, entering the query name and then clicking the properties button. A dialog box will then be displayed that allows you to change the read mode.
The question of what's the best read mode for your query depends on the requirements of the query:
If you choose "read all data at once", it will take quite a while before the first view of the query is shown. This is because it basically reads all the data needed for your query (e.g. data for hierarchy nodes etc.). However, after the initial view, successive navigations might be a lot faster.
The read mode "Read data during navigation and expansion of hierarchies" is usually the best mode. The initial view will show up in the least amount of time in this mode. However, when the users start to navigate, data will then be read from the database (aggregates, or the main cube tables) or the cache.
The read mode "Read data during navigation" is like the "Read data during navigation and expansion of hierarchies" mode. However, it will also read the data for the hierarchy nodes. Expansions of the hierarchy will therefore be faster in this mode. Of course, there's additional overhead for the initial view.
Hope this helps. Cheers. -
I have a process that runs a SQL query and returns the results as XML. When I test the query in the Process Properties tab in Workbench it appears to execute just fine. I can also test the XML information and see that the results are coming back correctly. But when I invoke the process I get an empty XML tag with no results. Recording the invocation and playing back the recording doesn't tell me anything useful. Has anyone ever seen this issue before? I don't understand why everything within the process seems to bring back results just fine, but invoking it returns nothing.
Unfortunately I am not the admin for our LiveCycle instance and do not have access to the server logs (long story). I also am not authorized to share any LCA files for this project. Thanks though.
-
Populating cascading listboxes without processing the query in HIR 8.3
I have 2 listboxes. The first listbox is populated using Group Number. The second listbox should contain all account numbers corresponding to the Group Number selected in the first listbox, and I need to do this without processing the query. I tried this way:
1. Created a query section “Query” and in that created a limit Group Number. Through this limit I am populating the first lisbox.
2. Created another query section “Query2”. In that created Group Number as first limit and Account Numbers as another listbox.
3. After selecting the Group Number from listbox1, I am passing this value to the first limit of “Query2”.
4. I am doing RefreshAvailableValues on the second limit of “Query2”. Now I am expecting the records corresponding to the selected Group Number in the second limit of “Query2”, but I’m getting all the available values.
What is wrong in the above, or if a better solution is available, please suggest.
Edited by: user13386590 on Jul 12, 2010 9:32 PM
Why do you want to do this without processing a query?
Each time you execute the script RefreshAvailableValues you are essentially executing a query...look at the Query log.
As for the issue in # 4 you will want to change the properties on the Data Model. From menu select
DataModel
- Data Model Options
check the Filter/Limit tab and try out different options
--- below is from 11.1.1.1 help
Data Model Options: Filters
Use the Filters tab to specify filter browse level preferences and to select global filter options.
When you use Show Values to set filters, you may sometimes need to sift through a lot of data to find the particular values you need. Filter preferences enable you to dictate the way existing filters reduce the values available through the Show Values command.
For example, you want to retrieve customer information only from selected cities in Ohio. However, the database table of customer addresses is very large. Because Interactive Reporting applies a default filter preference, once you place the initial filter on State, the Show Values set returned for City is automatically narrowed to those cities located in Ohio. This saves you from returning thousands of customers, states, and from all sales regions.
You can adjust this preference so that the initial filter selection has no effect on the potential values returned for the second filter (all cities are returned regardless of state).
Filter Options
Show Minimum Value Set—Displays only values that are applicable given all existing filters. This preference takes into account filters on all tables and related through all joins in the data model (which could be potentially a very large and long running query).
Show Values Within Topic—Displays values applicable given existing filters in the same topic. This preference does not take into account filters associated by joins in the data model.
Show All Values—Displays all values associated with an item, regardless of any established filters.
Tip:
When setting these preferences for metatopics, be sure to display the data model in Original view.
Global Filter Options (Designer only)
Show Values—Globally restricts use of the Show Values command in the Filter dialog box, which is used to retrieve values from the server.
Custom Values—Globally restricts use of the Custom Values command in the Filter dialog box, which is used to access a custom values list saved with the Interactive Reporting document file or in a flat file.
Custom SQL—Enables the user to code a filter directly using SQL.
Note:
The Topic Priority dialog box is displayed only if you first select join in the data model.
Note:
Since most data models do not have the same set of topics, you cannot save changes to the topic priority as default user preferences. (For more information on default user preferences, see Saving Data Model Options as User Preferences.)
Often, if I am trying to load controls for a parameter screen, I will have a separate query and results section for that purpose. Then I use the Filter/Limit on the Results section to control the cascade feature you are trying to accomplish. In your case your results would contain the distinct list of Group Number and Account Number.
Wayne -
How to analyse the performance by using RSRT and by seeing the query results
Hi,
I want to see the performance of the query in each landscape. I have executed my query
using the transaction RSRT. How can we analyse whether the query requires aggregates or not?
I have taken the number of records in the cube. I also saw the number of records in the aggregates.
I did not get a clear picture.
I selected the options Aggregates, Statistics and Do not use cache. The query executed and displayed a report, but I am unable to analyse the performance.
Can anyone please guide me with steps? Which factors do we need to consider from the performance point of view?
Points will be rewarded.
Thanks in advance for all your help.
Vamsi
Hi,
This info may be helpful.
General tips
Use aggregates and compression.
Use fewer and less complex cell definitions if possible.
Use T-codes ST03 or ST03N:
Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day > check query execution time.
Using cache memory will decrease the loading time of the report.
Run the reporting agent at night and send the results by email. This will ensure use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
Also try
1. Use different parameters in ST03 to see the two important figures: the aggregation ratio, and the ratio of records transferred to the frontend versus records selected from the database.
2. Use the program SAP_INFOCUBE_DESIGNS to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
3. The +/- signs are the valuation of the aggregate; for example, -3 is the valuation of the aggregate's design and usage. ++ means that its compression is good and it is accessed often (in effect, performance is good); if you check its compression ratio, it should be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies. The greater the number of minus signs, the worse the evaluation of the aggregate.
A valuation of "-----" means the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Also, your query performance can depend upon the selection criteria, and since you have given a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work. Implement the BW Statistics Business Content: you need to install it, feed it data, and analyse it through the ready-made reports.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
Performance of BW infocubes
Go to SE38
Run the program SAP_INFOCUBE_DESIGNS
It will show the dimension vs. fact table size in percent.
If you mean the speed of queries on a cube as the performance metric of the cube, measure query runtime.
You can go to T-Code DB20 which gives you all the performance related information like
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
Thanks,
JituK -
ORA-20001: The approver group Process MFG Approvals has dynamic query in wrong format
ERROR ORA-20001: The approver group Process MFG Approvals has dynamic query in
wrong format in 11i
We are setting up the approver group 'Process MFG Approvals' using a dynamic query, like:
SELECT PAPF.EMPLOYEE_NUMBER
FROM PER_ALL_PEOPLE_F PAPF,
fnd_lookup_values FLV
WHERE FLV.MEANING=PAPF.EMPLOYEE_NUMBER
AND lookup_type='SUG_SAMPLE_NOTIFICATION'
AND SYSDATE BETWEEN papf.effective_start_date AND papf.effective_end_date
AND FLV.LOOKUP_CODE= (SELECT GME.PLANT_CODE FROM GME_BATCH_HEADER GME WHERE
GME.BATCH_ID=:transactionId)
- The above query passes the validation action from within the setup screen.
- However, when this approver group is invoked via the Sample Creation
workflow, the following error is raised:
ORA-20001:The approver group Process MFG Approvals has dynamic query in
wrong format
Moreover, if the user tries a simpler query like:
select distinct person_id from PER_ALL_PEOPLE_F where full_name = 'Mr.
Oliverking G' we are getting same error
Any ideas, please, would be greatly appreciated.
txs
Peter
Hi,
You need to prefix the value with a text string which indicates what kind of value you are returning.
E.g. if you are returning a user ID, prefix the value with 'user_id:'; if you are returning a person ID, then prefix it with 'person_id:'
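Applying that to the simpler query from the question, a hedged sketch of what the dynamic query might look like with the required prefix (assuming a person ID is wanted; swap the prefix for 'user_id:' if you return user IDs instead):

```sql
SELECT 'person_id:' || papf.person_id
FROM   per_all_people_f papf
WHERE  papf.full_name = 'Mr. Oliverking G'
AND    SYSDATE BETWEEN papf.effective_start_date
                   AND papf.effective_end_date
```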
There is an article on my blog about creating a dynamic approval group in AME as part 5 in the series on AME: http://www.workflowfaq.com/ame-part-five-defining-a-dynamic-approval-group
HTH,
Matt
WorkflowFAQ.com - the ONLY independent resource for Oracle Workflow development
Alpha review chapters from my book "Developing With Oracle Workflow" are available via my website http://www.workflowfaq.com
Have you read the blog at http://www.workflowfaq.com/blog ?
WorkflowFAQ support forum: http://forum.workflowfaq.com -
We have some custom entries in the file 'custom.ini' that are stored in
the Sybase database.
When I start a query for these custom attributes then I get the
following error message:
"Unable to process the query. Ensure that the database connection is up.
For more information, see the error message documentation at
http://www.novell.com/documentation."
The database connection is up and running, because when I start a query
on standard attributes of ZfD I get a result.
I get this error message just on custom entries.
I have enabled the debug logs, as explained in the documentation, but
without positive results.
Any ideas?
Pete
Pete,
It appears that in the past few days you have not received a response to your
posting. That concerns us, and has triggered this automated reply.
Has your problem been resolved? If not, you might try one of the following options:
- Do a search of our knowledgebase at http://support.novell.com/search/kb_index.jsp
- Check all of the other support tools and options available at
http://support.novell.com.
- You could also try posting your message again. Make sure it is posted in the
correct newsgroup. (http://support.novell.com/forums)
Be sure to read the forum FAQ about what to expect in the way of responses:
http://support.novell.com/forums/faq_general.html
If this is a reply to a duplicate posting, please ignore and accept our apologies
and rest assured we will issue a stern reprimand to our posting bot.
Good luck!
Your Novell Product Support Forums Team
http://support.novell.com/forums/ -
hi all,
please tell me the order of execution of the below query.
My doubt is: will the data first be fetched where doc_type and doc_number are null, or will it first fetch the data based on company, reference and parent from activity_details?
SELECT REFERENCE,SERIAL,JOB_NUMBER
FROM PROCESS_DETAIL
WHERE BASED_ON ='TBA'
AND PROCESS_TYPE = 'H'
AND STATUS IN('2','4','6')
AND EXISTS (SELECT 1 FROM JOB_HEADER JH
WHERE COMPANY = JH.COMPANY
AND REFERENCE = JH.REFERENCE
AND JH.STATUS IN('2','4','6'))
AND NOT EXISTS (SELECT 1
FROM ACTIVITY_DETAILS
WHERE COMPANY = COMPANY
AND REFERENCE = REFERENCE
AND SERIAL = PARENT
AND DOC_TYPE IS NOT NULL
AND DOC_NUMBER IS NOT NULL)
Please let me know how to analyse the query.
Use aliases for the tables in the query. Try it:
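The reason the aliases matter: with no alias, both sides of a predicate like COMPANY = COMPANY resolve to the inner table, so the subquery is not correlated at all. A small sketch of the difference, reusing the table names from the question:

```sql
-- Unaliased: both COMPANY references resolve to ACTIVITY_DETAILS itself,
-- so the predicate merely filters out NULL companies; no correlation.
SELECT pd.reference
FROM   process_detail pd
WHERE  NOT EXISTS (SELECT 1
                   FROM   activity_details
                   WHERE  company = company);

-- Aliased: the outer reference is explicit, so the subquery is
-- correlated and evaluated against each PROCESS_DETAIL row.
SELECT pd.reference
FROM   process_detail pd
WHERE  NOT EXISTS (SELECT 1
                   FROM   activity_details ad
                   WHERE  ad.company = pd.company);
```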
SELECT REFERENCE,SERIAL,JOB_NUMBER
FROM PROCESS_DETAIL PR
WHERE BASED_ON ='TBA'
AND PROCESS_TYPE = 'H'
AND STATUS IN('2','4','6')
AND EXISTS (SELECT 1 FROM JOB_HEADER JH
WHERE COMPANY = PR.COMPANY
AND REFERENCE = PR.REFERENCE
AND JH.STATUS IN('2','4','6'))
AND NOT EXISTS (SELECT 1
FROM ACTIVITY_DETAILS
WHERE COMPANY = PR.COMPANY
AND REFERENCE = PR.REFERENCE
AND PR.SERIAL = PARENT -- PR. alias correlates to PROCESS_DETAIL; JH is not in scope in this subquery
AND DOC_TYPE IS NOT NULL
AND DOC_NUMBER IS NOT NULL) -
I have downloaded Yosemite but lost my outgoing mail settings in the process. I can still receive mail from all my accounts but cannot send any...
Open mail
click on mail on top
click preferences
click accounts tab
scroll down to TLS Certificate
click up & down arrow and select (com.apple.idms.appleid )
It worked for me -
Display data of query in Analysis Process Designer mismatch query result
Hi experts,
I am having an issue with the APD. When I display the data of the query in the APD I am getting wrong values for the key figures.
Any solutions?
Thanks
Hi,
after doing a check by restricting the data in the query, I see that although the condition I have set in the query should only give me the data where key figure X <= 2, the DSO is updated with all the values. So it seems that conditions in queries are ignored by the APD. Does anybody know anything more about this? Then I can understand why the job was taking so much time...
Regards,
Øystein -
Bug: the query results dialog extends outside the screen boundaries
Hi, I observed this bug on MacOS 10.5.5 running Eclipse 3.4.1 with FlexBuilder 3.0.2 installed. Steps to repro:
1. make FlexBuilder full screen
2. open an MXML document
3. make the editor full screen (double-click the editor's tab)
4. select in the text editor the name of a Flex SDK class (e.g. Window, Button, ...)
5. scroll the editor window so that the selected text is near the bottom of the screen
6. click the Blueprint button, or type ctrl-B
Result: the Blueprint query results window is only partly visible; most of it is below the bottom of the screen.
Expected: the query results window auto-positions itself to be fully visible.
Thanks for trying out Blueprint and giving us feedback!
We have filed a ticket for this bug and will do our best to address it in the next release!
Mira Dontcheva
Research Scientist
Adobe Systems -
How to manage Locale info in the URL path, but not the query string
We are building an application using Struts 1.1 and Tiles, on Oracle Application Server 10.1.3.3...
I know this is a strange question... but we have a requirement to represent the locale info in the URL string using one of the following options:
option 1: /eng/page.do?id=2 for english.../fra/page.do?id=2
option 2: /page-eng.do?id=2 for english.... and /page-fra.do?id=2 for french
We need to represent the 3-letter ISO language code either in the directory structure, or as a suffix on the page name (in our case, the Struts action name)... we cannot do this using a parameter in the query string. I know this is odd, but that is what we are told to implement.
Is there any robust way of implementing either option in Struts 1.1, JSP, JSTL etc...?
Currently, we are looking at using a servlet filter to intercept the HTTP requests, parse the URL string, and extract the ISO lang value, and set locale and forward on the request.
This poses a few problems... adding additional action mappings (page-eng... page-fra... page) to our struts-config.xml file to handle language permutations... but the biggest issue is all the embedded html:link action values throughout our code...
Because all our public facing URLs must comply with the rule, we need to change the html:link action to point to a different action, based on locale.
Very inefficent, and I'm sure not industry standard best practice... we are using Tiles, and resource bundles for all our labels etc... but fall short in meeting this rule with regards to URLs and locale.
Any advice or tips etc. is greatly appreciated.
The filter option sounds like a good solution. So it can receive the urls and parse them appropriately.
You just need to take it one step further.
Additional actionmappings in your struts-config should not be necessary.
Filter:
- analyses the url and sets the appropriate locale
- adjusts the url such that the next level of the chain does not have to know anything about the locale being encoded in the url string.
Thus your struts classes and mappings can remain unchanged
/eng/page.do or /fra/page.do once through the filter should just look like /page.do to struts.
That should get rid of half of your headache.
Next the issue of generating urls.
There are two approaches I can see here
1 - use the filter approach again, this time with some post processing. Gather the generated HTML in a buffer, and do a find/replace on any urls generated, to put the locale encoding into them.
2 - Customise struts to produce urls in this format. This would involve the html:link tag, and the html:form tag at the least (maybe others?). Get the source code for struts, and grab the html:link tag code. Extend that class to generate urls as you want them to be generated. I think you would need to extend the class org.apache.struts.taglib.html.LinkTag and override the protected method calculateURL. You would then have to edit/modify the struts-html tld to point the link tag at your classes rather than the standard ones.
Option 1 is architecturally good because it gives you a well defined layer/border between having the locale encoded in the url, and not having it there. However it involves doing a find/replace on every html going out. This would catch all urls, whether generated by html:link tag or not.
Option 2 requires customising struts for your own requirements, which may be a bit daunting, but has the advantage of generating the urls correctly without the extra overhead involved with option 1. Of course you would have to ensure that ALL urls are generated with the html:link tag.
On reflection, I think option 1 is preferable, as both easier and quicker to implement, and doing a better separation in the architecture.
Cheers,
evnafets -
Analysis Process Designer and Inventory Management
In the How to guide for Inventory Management it mentions that you can you can intialise stock in the Snap shot Cube using the Analysis Process Designer (APD). Has anyone done this and if so can you explain how or outline some steps? Thanks
Hallo,
the APD is not a useful tool for modeling a performant
data flow. A lot of SAP BW users think so.
The performance problems are caused by the DDIC-internal tables used to upload the extracted data into the work area tables and structures used by the APD.
To get the best performance for your scenario, please use the following approach if possible:
Step 1:
upload the extracted data from the PSA into a data layer,
where you can reduce and harmonize the data by using a transactional ODS
Step 2:
build an InfoSet that joins the data
Step 3:
build one query to reduce the data and make two copies of it
Step 4:
build a useful data mining model
Step 5:
upload the results of the data mining model into a transactional ODS
Step 6:
link the uploaded data into an InfoSet, or write it back into a standard ODS
Step 7:
query the data
If you use this scenario you get a lot of benefits: better performance, better quality of the persistent data, and both current and trained data.
The recommended next step (if you want alerting) is to build a reporting agent report - if you have useful processes in the query.
There is a workshop on data mining and the APD, named BW380.
I hope I helped you.
Otherwise send me a message.
[email protected]
Not able to see the extended VO even if the JPX import imported correctly.
Hi,
I am in the process of customising the Sales Dashboard page in the Sales User responsibility in an R12.1.3 instance.
I also configured the correct setup for JDeveloper by downloading the patch p9879989_R12_GENERIC.
Now I have a requirement to add one column by personalization to an advanced table.
But this new column doesn't exist in my VO CustomerAccessesVO, so I need to extend it.
I extended this VO, created the substitution and imported it to the database with JPXImporter.
After importing I run the query jdr_utils.printdocument('/oracle/apps/asn/common/customer/server/customizations/site/0/CustomerAccessesVO'); which gives me the output:
<?xml version='1.0' encoding='UTF-8'?>
<customization xmlns="http://xmlns.oracle.com/jrad"
xmlns:ui="http://xmlns.oracle.com/uix/ui" xmlns:oa="http://xmlns.oracle.com/oa"
xmlns:user="http://xmlns.oracle.com/user" version="10.1.3_1312" xml:lang="en-US"
customizes="/oracle/apps/asn/common/customer/server/CustomerAccessesVO">
<replace
with="/XX/oracle/apps/asn/common/customer/server/XXCustomerAccessesVO"/>
</customization>
After this I bounce the Apache server:
$ADMIN_SCRIPTS_HOME/adapcctl.sh stop
$ADMIN_SCRIPTS_HOME/adapcctl.sh start
$ADMIN_SCRIPTS_HOME/adoacorectl.sh stop
$ADMIN_SCRIPTS_HOME/adoacorectl.sh start
But when I log in to my application the first time I get the error JBO-26022: Could not find and load the custom class nh.oracle.apps.asn.common.customer.server.NHCustomerAccessesVORowImpl
While extending, JDeveloper generated only Impl.java, RowImpl.java, server.xml and VO.xml. But the RowImpl.java was generated empty.
This error comes only once; the second time the error doesn't come.
And when I click on the "About this page" link I am not able to find my new extended VO, i.e. XXCustomerAccessesVO.
Now when I try to personalize the table and add the new column, it neither gives me an error nor displays the new column.
So can anyone help on the three points below:
1) Why was RowImpl generated empty? Is it because it was generated empty that the .class file is not generated on compilation, which is why I am getting the error?
2) As the JPXImport was successful, why am I not able to see my extended VO XXCustomerAccessesVO instead of the standard CustomerAccessesVO in the "About this page" link?
3) Can you please share the property values that we need to pass while creating a new column in an advanced table by personalization?
Many many thanks in advance
Hi Piyush,
1) Why was RowImpl generated empty? Is it because it was generated empty that the .class file is not generated on compilation, which is why I am getting the error?
--> RowImpl should not be generated empty. It should be generated the same as the standard VO's VORowImpl. You can go to JDeveloper and try checking and unchecking Generate Java Files.
eg.
During extension, if you have not checked Generate Java File inside VO --> Edit --> Java,
it will point to the standard VOImpl and VORowImpl of the VO which you extended:
<ViewObject
Name="xxReturnItemsVO"
Extends="oracle.apps.icx.por.rcv.server.ReturnItemsVO"
BindingStyle="Oracle"
CustomQuery="true"
RowClass="oracle.apps.icx.por.rcv.server.ReturnItemsVORowImpl"
ComponentClass="oracle.apps.icx.por.rcv.server.ReturnItemsVOImpl"
MsgBundleClass="oracle.jbo.common.JboResourceBundle"
UseGlueCode="false" >
During extension, if you have checked Generate Java File inside VO --> Edit --> Java,
it will point to the custom VOImpl and VORowImpl. It will generate two classes, xxReturnItemsVORowImpl and xxReturnItemsVOImpl,
which should not be empty:
<ViewObject
Name="xxReturnItemsVO"
Extends="oracle.apps.icx.por.rcv.server.ReturnItemsVO"
BindingStyle="Oracle"
CustomQuery="true"
RowClass="xx.oracle.apps.icx.por.rcv.server.xxReturnItemsVORowImpl"
ComponentClass="xx.oracle.apps.icx.por.rcv.server.xxReturnItemsVOImpl"
MsgBundleClass="oracle.jbo.common.JboResourceBundle"
UseGlueCode="false" >
2) As the JPXImport was successful, why am I not able to get my extended VO XXCustomerAccessesVO instead of the standard CustomerAccessesVO in the "About this page" link?
If jdr_utils.printdocument is showing the correct substitution, the issue could be with the RowImpl file.
3) Can you please share the properties values that we need to pass while creating new column in Advance table byb personlaization.
-- Not sure